--- abstract: 'We study the hidden symmetries, the symmetries associated with Killing tensors, of the near-horizon geometry of odd-dimensional Kerr-AdS-NUT black holes in two limits: generic extremal and extremal vanishing horizon (EVH) limits. Starting from a Kerr-AdS-NUT black hole in ellipsoidal coordinates which admit integrable geodesic equations, we obtain the near-horizon extremal and EVH geometries and their principal and Killing tensors by taking the near-horizon limit. We explicitly demonstrate that geodesic equations are separable and integrable on these near-horizon geometries. We also compute the constants of motion and read the Killing tensors of these near-horizon geometries from the constants of motion. As we expected, they are the same as the Killing tensors given by taking the near-horizon limit.' author: - 'S. Sadeghian' title: | \ Kerr-AdS-NUT geometries --- 1 cm 0 Introduction ============ The exact symmetries in a general relativity framework are usually known as isometries that are given by Killing vectors. However, symmetries of a given metric can be generated by Killing tensors, as well. In this case, they are called *hidden symmetries*, as they are not manifested in the isometries. In some special cases, hidden symmetries reduce to the isometries when Killing tensors can be trivially written as the product of Killing vectors. The symmetries of a metric are reflected in the motion of probe particles on that background metric such that each of the Killing vectors or tensors gives a constant of motion. If the number of (independent) constants of motion is equal to the degrees of freedom of the probe particle, its equations are integrable. If it has more independent constants of motion, the system is superintegrable. The Killing tensors of a four-dimensional Kerr black hole and its generalization to a d-dimensional Kerr-AdS-NUT metric have been studied in Refs. [@Carter:1968rr; @Kubiznak:2008qp]. 
These symmetric second-rank tensors can be written as a contraction of two (antisymmetric) Killing-Yano tensors. In some sense, a Killing-Yano tensor is the “square-root” of a Killing tensor. The Hodge dual of this Killing-Yano tensor is a closed conformal Killing two-form, called *principal tensor*[@Frolov:2007nt]. Using the eigenvectors of the nondegenerate principal tensor, one can find a coordinate basis in which geodesics, Klein-Gordon, Dirac and Maxwell equations are separable in the probe limit[@Krtous:2006qy; @Kubiznak:2007kh; @Cariglia:2011qb; @Frolov:2006pe; @Lunin:2017drx]. (For a complete review, see Ref.[@Frolov:2017kze].) On the other hand, the near-horizon extremal geometry of a d-dimensional Kerr-AdS-NUT black hole is well understood. Similar to the other near-horizon extremal geometries of stationary black holes that show some universal properties including the attractor mechanism[@Astefanesei:2006dd; @Astefanesei:2007bf] and symmetry enhancement[@NHEG-general; @NHEG-2], its isometry group contains the $SO(2,1)$ group. Using this, one can describe the particle dynamics with conformal mechanics[@conformal-mechanics-BH-1; @conformal-mechanics-BH-2]. Moreover, there are special extremal black holes for which the symmetry enhances more in their near-horizon geometries. They are called extremal vanishing horizon (EVH) black holes [@NHEVH-1] and have been studied in Refs. [@NHEVH-MP; @Sadeghian:2017bpr]. In these cases, we find a common $SO(2,2)$ isometry group in the near-horizon EVH geometries[@NHEVH-three-theorems]. In this paper, we try to answer this question: Are the hidden symmetries enhanced in the near-horizon limit? This has been questioned in Ref. [@Mitsuka:2011bf] in four dimensions. 
Here, we start by studying the near-horizon geometry of odd-dimensional Kerr-AdS-NUT black holes in both extremal and EVH limits.[^1] Since the geodesic equations on the black hole geometry are separable in ellipsoidal coordinates, we take the near-horizon limit in this coordinate system. The Killing tensors of near-horizon geometries, and their reduction to Killing vectors, have been studied in Refs. [@Rasmussen:2010rw; @Chernyavsky-Xu; @Kolar:2017vjl] only in the extremal case. Here, we extend them to the EVH case as well. Also, we find the principal tensor of these near-horizon extremal/EVH geometries. Following the analysis in Refs. [@Hakobyan:2017qee; @Demirchian:2017uvo; @Demirchian:2018xsk], we explicitly study the separability of timelike geodesic equations on near-horizon extremal/EVH geometries of Kerr-AdS-NUT black holes. Finding the constants of motion, we infer that timelike geodesics on the corresponding background metrics are integrable. A brief review of the Kerr-AdS-NUT metric ========================================= The metric of an odd-dimensional ($d=2n+1$) Kerr-AdS-NUT black hole in ellipsoidal coordinates [@general; @kerr] is \[Kerr-AdS-NUT\] ds\^2 &=& \_[=1]{}\^[n]{}( dy\_\^2 + \^2)\ && - (dt - \_[i=1]{}\^n \_i )\^2,\[oddmetric2\] where the metric functions are [^2] \[functions\] &&U\_= [’]{}\_[=1]{}\^n (y\_\^2 - y\_\^2),X\_= \_[k=1]{}\^n (a\_k\^2 - y\_\^2) + 2M\_,W= \_[=1]{}\^n (1-g\^2 y\_\^2),\ &&\_i= \_[=1]{}\^n(a\_i\^2-y\_\^2),\[2n1X\]\_i=a\_i\_i[\_[k=1]{}\^n]{}’(a\_i\^2-a\_k\^2),\_i=1-g\^2a\_i\^2, = \_[i=1]{}\^n \_i. Considering $y_\mu=(x_\a ,ir)$ with $\a=1,\dots,n-1$, one can see that $r$ is the radial direction, and $x_\a$ and ${\phi}_i$ with $i=1,\dots,n$ are related to the angular variables. Note that $M_n$ is just equal to the mass parameter $M$, while the remaining $M_\alpha$’s are NUT parameters, denoted by $L_\alpha$. The $a_i$ denote the rotation parameters. 
Writing the metric in terms of the $r$ coordinate, one can easily see that the horizon location is given by \[horizon\] X\_n(r=r\_h)= \_[k=1]{}\^n (a\_k\^2 +r\_h\^2) - 2M=0. The entropy and temperature of this horizon are \[SandT\] &&S= = (\_[i=1]{}\^n)r\_h,[\ ]{}\[3pt\]&&T==\_[i=1]{}\^n - , in which ${\cal A}_{n}$ is the volume of a unit n-sphere, and $G_N$ is the $d$-dimensional Newton’s constant. We note that $\frac{\partial}{\partial t}$ and $\frac{\partial}{\partial \phi_i}$ are the Killing directions and the horizon is generated by the Killing vector, \_H=-\_[i=1]{}\^n\^i, where the horizon’s angular velocity along each of the $\phi_i$ directions, $\Omega^i$, is given by \^i=. Additionally, this geometry admits $n$ second-rank Killing tensors. Killing tensors are symmetric, and a rank-$r$ Killing tensor $K^{\mu_1\cdots \mu_r}$ satisfies \[K-eq\] $\nabla^{(\mu}K^{\nu_1\cdots \nu_r)}=0$. The Killing tensors of the Kerr-AdS-NUT geometry, in the coordinate system in which that metric is written, are [@Kubiznak:2008qp] \[KT\] K\_[(k)]{} &=& \_[=1]{}\^n{ ()\^2 + \^2}\ && - ( + \_[k=1]{}\^n )\^2,k = 0, …, n - 1,\[oddinv2\] where $c=\prod_{i=1}^n a_i^2$ and \[A’s\] S\_= \_[k=1]{}\^n (a\_k\^2-y\_\^2)\^2, A\_\^[(k)]{}=\_ [c]{} \_1<\_2…<\_k\ [\_1,…,\_k]{} \^[n]{} y\_[\_1]{}\^2 y\_[\_2]{}\^2 …y\_[\_k]{}\^2, A\^[(k)]{}=\_[\_1<\_2…<\_k ]{}\^[n]{} y\_[\_1]{}\^2 y\_[\_2]{}\^2 …y\_[\_k]{}\^2. One can simply check that $K_{(0)}$ is the inverse metric, as it trivially satisfies Eq. .\ After a coordinate transformation, the metric takes a simpler form (see appendix \[app-a\] for the details): \[simpler\] ds\^2= \_[=1]{}\^[[[n]{}]{}]{} -(\_[k=0]{}\^[[[n]{}]{}]{}[A\^[(k)]{}]{}d\_k)\^[2]{}. 
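As a sanity check on the Killing-tensor equation $\nabla^{(\mu}K^{\nu_1\cdots \nu_r)}=0$, the rank-2 case can be verified symbolically in a toy setting. The sketch below uses an assumption purely for illustration, flat 2D space in polar coordinates with the rotational Killing vector $\xi=\partial_\phi$, not the Kerr-AdS-NUT metric itself, and checks that $K_{\mu\nu}=\xi_\mu\xi_\nu$ satisfies $\nabla_{(a}K_{bc)}=0$:

```python
import sympy as sp

# Toy check of the rank-2 Killing-tensor equation nabla_{(a} K_{bc)} = 0.
# Assumption for illustration: flat 2D space in polar coordinates and the
# rotational Killing vector xi = d/dphi -- not the Kerr-AdS-NUT metric itself.
r, phi = sp.symbols('r phi', positive=True)
x = [r, phi]
g = sp.Matrix([[1, 0], [0, r**2]])        # ds^2 = dr^2 + r^2 dphi^2
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbols Gamma^a_{bc} of the metric g
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d])) for d in range(2))

xi_dn = [0, r**2]                          # xi_mu = g_{mu nu} (d/dphi)^nu
K = [[xi_dn[a] * xi_dn[b] for b in range(2)] for a in range(2)]

def nablaK(a, b, c):
    # covariant derivative nabla_a K_{bc}
    return (sp.diff(K[b][c], x[a])
            - sum(Gamma(d, a, b) * K[d][c] for d in range(2))
            - sum(Gamma(d, a, c) * K[b][d] for d in range(2)))

def sym(a, b, c):
    # total symmetrization nabla_{(a} K_{bc)} (up to normalization)
    return sp.simplify(nablaK(a, b, c) + nablaK(b, c, a) + nablaK(c, a, b))

assert all(sym(a, b, c) == 0
           for a in range(2) for b in range(2) for c in range(2))
```

Note that the individual components $\nabla_a K_{bc}$ do not vanish here (e.g. $\nabla_r K_{\phi\phi}=2r^3$); only the symmetrized combination does, which is exactly the content of the Killing-tensor equation.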
The orthonormal vielbeins ${{{{{\boldsymbol{e}}}^{\mu}}},\, {{\hat{{{\boldsymbol{e}}}}^{\mu}}}}$ (${\mu=1,\dots,{{{n}}}}$), and ${{{\hat{{{\boldsymbol{e}}}}^{0}}}}$ are $$\label{Darbouxform} {{{{\boldsymbol{e}}}^{\mu}}} = {\Bigl(\frac{U_\mu}{X_\mu}\Bigr)^{\!\frac12}}\,d y_{\mu}\;,\qquad {{\hat{{{\boldsymbol{e}}}}^{\mu}}} = {\Bigl(\frac{X_\mu}{U_\mu}\Bigr)^{\!\frac12}}\, \sum_{j=0}^{{{{n}}}-1}{A^{\!(j)}}_{\mu}d\psi_j\;,\qquad {{\hat{{{\boldsymbol{e}}}}^{0}}}= {\Bigl(\frac{c}{{A^{\!({{{n}}})}}}\Bigr)^{\frac12}}\,\sum_{k=0}^{{{{n}}}}{A^{\!(k)}}d\psi_k\;,$$ and their dual vectors ${{{{{\boldsymbol{e}}}_{\mu}}},\,{{\hat{{{\boldsymbol{e}}}}_{\mu}}},\,{{\hat{{{\boldsymbol{e}}}}_{0}}}}$ are $$\label{Darbouxvec} {{{{\boldsymbol{e}}}_{\mu}}} = {\Bigl(\frac{X_\mu}{U_\mu}\Bigr)^{\!\frac12}}\,{\partial_{y_\mu}}\;,\qquad {{\hat{{{\boldsymbol{e}}}}_{\mu}}} = {\Bigl(\frac{U_\mu}{X_\mu}\Bigr)^{\!\frac12}}\,\sum_{k=0}^{{{{n}}}-1+{\varepsilon}} {\frac{(-x_{\mu}^2)^{{{{n}}}-1-k}}{U_{\mu}}}\,{\partial_{\psi_{k}}}\;,\qquad {{\hat{{{\boldsymbol{e}}}}_{0}}}= \bigl(c {A^{\!({{{n}}})}}\bigr)^{\!-\frac12}\,{\partial_{\psi_{{{{n}}}}}}\;.$$ In these coordinates, the Killing tensors are also simplified to \[killing 2n+1\] $K_{(k)}=\sum_{\mu=1}^{n}A_\mu^{(k)}\left({{{\boldsymbol{e}}}_{\mu}}{{{\boldsymbol{e}}}_{\mu}}+{\hat{{{\boldsymbol{e}}}}_{\mu}}{\hat{{{\boldsymbol{e}}}}_{\mu}}\right)-A^{(k)}\, {{\hat{{{\boldsymbol{e}}}}_{0}}}{{\hat{{{\boldsymbol{e}}}}_{0}}}.$ Moreover, this geometry has a richer structure: it admits Killing-Yano tensors. We remind the reader that a Killing-Yano tensor of rank $q$, $Y_{\mu_1\cdots \mu_q}$, is an antisymmetric tensor and solves $\nabla_{(\mu}Y_{\nu)\rho_1\cdots \rho_{q-1}}=0$. It is easy to show that the contraction of two Killing-Yano tensors in this way, \[KT-KY\] $K_{\mu\nu}=Y_{\mu \rho_1\cdots \rho_s}Y_{\nu}{}^{\rho_1\cdots \rho_s}$, gives a (symmetric) Killing tensor. The Hodge dual of a $(d-2)$-rank Killing-Yano tensor is a closed conformal Killing-Yano tensor of second rank, called a principal tensor, that satisfies $h=\star Y$, $\nabla_{\rho} h_{\mu\nu}=g_{\rho\mu}\,\xi_{\nu}-g_{\rho\nu}\,\xi_{\mu}$. Here, $\xi$ is a primary Killing vector and is defined by $\xi_{\nu}=\frac{1}{d-1}\nabla^{\mu}h_{\mu\nu}$. 
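The contraction in Eq. (KT-KY) can be illustrated in the simplest possible setting. The sketch below is an assumption chosen only to make the mechanism visible: in flat space with a Euclidean metric, any constant antisymmetric $Y_{\mu\nu}$ is a Killing-Yano tensor, and its square is automatically symmetric, hence a (trivially covariantly constant) Killing tensor:

```python
import numpy as np

# Simplest instance of the contraction K_{mu nu} = Y_{mu rho} Y_nu^rho.
# Assumption for illustration: flat space, indices raised with delta, and a
# *constant* antisymmetric Y (which is then a Killing-Yano tensor trivially).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Y = A - A.T                   # antisymmetric: Y^T = -Y
K = Y @ Y.T                   # K_{mu nu} = Y_{mu rho} Y_{nu rho}

assert np.allclose(Y, -Y.T)   # antisymmetry of the Killing-Yano tensor
assert np.allclose(K, K.T)    # its square is symmetric, as a Killing tensor must be
```

In curved space the same contraction works because the Killing-Yano equation feeds the antisymmetry into the symmetrized derivative of $K$; the flat constant case only exhibits the index structure.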
Since the principal tensor, $h$, is closed, a potential $b$ is locally associated with it: h=db. The importance of the principal tensor is that its orthogonal (nondegenerate) eigenvectors give a coordinate basis in which geodesic equations are separable. The principal tensor of a Kerr-AdS-NUT black hole is $$\label{PCCKY} h = \sum_{\mu=1}^{{{{n}}}} y_\mu\, d y_\mu \wedge \Bigl(\sum_{k=0}^{{{{n}}}-1}{A^{\!(k)}}_\mu d\psi_k\Bigr)=\, \sum_\mu y_\mu\, {{{{\boldsymbol{e}}}^{\mu}}}\wedge{{\hat{{{\boldsymbol{e}}}}^{\mu}}}\;,$$ and its local potential ${b}$ is $$\label{PCCKYpot} b = \frac12 \sum_{k=0}^{{{{n}}}-1}{A^{\!(k+1)}}\,d\psi_k\;.$$ near-horizon extremal geometry {#app-c} ============================== The extremal limit of this black hole is obtained by setting the temperature in Eq.  to zero. In this case, the horizon becomes degenerate and \[extremality\] X’\_n|\_[r=r\_h]{}=0. Note that $r_h$ is the solution to Eq. . The near-horizon transformations are as follows: \[NHtr\] r=r\_h+r\_h ,dt=,d\_i=d\_i+\^idt,=. Applying these transformations and the constraint to the metric in the $\lambda \to 0$ limit, we find the near-horizon extremal geometry, \[nearhormetric2n+1\] && ds\^2= (-\^2 d\^2+ )+ \^2\ && +\_[=1]{}\^[n-1]{}dx\_\^2+\_[=1]{}\^[n-1]{} \^2, where $V=-\frac{1}{2}X''_n|_{r=r_h}$. The tilde over a function implies that it is evaluated at $r=r_h$. Principal and Killing tensors {#KT-extremal} ----------------------------- The Killing vectors of this geometry include the generators of rotation along $\varphi_i$, \[rot\] \_i=,i=1…n, and the generators of $sl(2,R)$, as follows: \[SL2R-generators\] &&\_1=[\_]{},[\ ]{}\[3pt\]&& \_2=[\_]{}-\_,[\ ]{}\[3pt\]&&\_3=(\^2+)[\_]{}-2 [\_]{}-\_[i=1]{}\^n, where m\_i. These $\xi_i$’s satisfy the $sl(2,R)$ algebra: $[\xi_1,\xi_2]=\xi_1,\quad [\xi_1,\xi_3]=2\,\xi_2,\quad [\xi_2,\xi_3]=\xi_3.$ It is clear that $\zeta_i$’s commute with $\xi_i$’s. The Casimir of the $sl(2,R)$ algebra is $\mathcal{I}=\frac{1}{2}\left(\xi_1 \xi_3+\xi_3 \xi_1\right)-(\xi_2)^2.$ 
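The $sl(2,R)$ commutators above can be confirmed with a symbolic Lie-bracket computation. The sketch below is a truncation for illustration: a single angle $\phi$ stands in for all the $\varphi_i$, and the $\phi$-coefficient of $\xi_3$ is taken as $-2/\rho$ as a representative of the sum over $m_i\,\partial_{\varphi_i}$; the $(\tau,\rho)$ parts follow the generators in the text:

```python
import sympy as sp

# Check of the sl(2,R) commutators of the near-horizon generators.
# Assumption: truncation to one angle phi, with the phi-coefficient of xi_3
# taken as -2/rho as a stand-in for the sum over m_i d/dphi_i.
tau, rho, phi = sp.symbols('tau rho phi', positive=True)
X = [tau, rho, phi]

def commutator(u, v):
    # Lie bracket of vector fields: [u,v]^i = u^j d_j v^i - v^j d_j u^i
    return [sp.simplify(sum(u[j] * sp.diff(v[i], X[j])
                            - v[j] * sp.diff(u[i], X[j]) for j in range(3)))
            for i in range(3)]

xi1 = [1, 0, 0]                                # xi_1 = d/dtau
xi2 = [tau, -rho, 0]                           # xi_2 = tau d/dtau - rho d/drho
xi3 = [tau**2 + 1/rho**2, -2*tau*rho, -2/rho]  # xi_3, truncated as above

assert commutator(xi1, xi2) == xi1                 # [xi1, xi2] = xi1
assert commutator(xi1, xi3) == [2*tau, -2*rho, 0]  # [xi1, xi3] = 2 xi2
assert commutator(xi2, xi3) == xi3                 # [xi2, xi3] = xi3
```

Since the bracket closes for an arbitrary constant coefficient of the angular term, the check carries over to the full sum over $i$.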
The nontrivial Killing tensors of this geometry have been studied earlier [@Chernyavsky-Xu] and are given by \[Killingtensors\] && \_[(k)]{}=- ( -\_[i=1]{}\^[n]{} )\^2 + ()\^2\ && +\_[=1]{}\^[n-1]{} \^2\ && +\_[=1]{}\^[n-1]{}()\^2 - \^2, where $k=0,\dots,n-1$, and the functions $\tilde{A}_\mu^{(k)}$, $\tilde{A}^{(k)}$ are related to $A_\mu^{(k)}$, $A^{(k)}$ in Eq. (\[A’s\]) by setting $x_n=\texttt{i}r_h$. The constants $c_i$ are \[b\_i\] c\_i=. It is worth mentioning that these Killing tensors are invariant under the rotations and $sl(2,R)$ generated by Eqs.  and , respectively: \[rot-SL2R-inv\] $\mathcal{L}_{\zeta_i}\tilde{K}_{(k)}=0,\quad \mathcal{L}_{\xi_i}\tilde{K}_{(k)}=0,\qquad \forall\, i,k.$ The principal potential of the four-dimensional near-horizon extremal Kerr-AdS-NUT geometry has been studied in Ref. [@Rasmussen:2010rw]. However, for the $d$-dimensional case, it is not clear from Ref. [@Chernyavsky-Xu] whether this near-horizon geometry admits a principal tensor. At first sight, the answer seems negative, since the principal potential $b$ appears divergent in the near-horizon limit. Here, we will show that $b$ can still be well defined in the near-horizon limit if we use this freedom: $b$ is defined up to a shift $b\rightarrow b+C_\mu\, dx^\mu$, with constant $C_\mu$’s, which does not affect the principal tensor $h$, as $h=db$. We will show that the term which blows up in the near-horizon limit is a constant times $dt$ and can be absorbed using this freedom. To apply the near-horizon transformations to $b$ in Eq. , we should write it in terms of $dt$ and $d\phi_i$ using the coordinate transformation given in appendix \[app-a\]. After a shift like $C\,dt$, it results in \[b\] b=(C+b\_0)dt+\_[i=1]{}\^[n]{}b\_id\_i, in which $b_0$ and $b_i$ are \[b-comp\] b\_0=\_[k=0]{}\^[n-1]{}A\^[(k+1)]{}(-g\^2)\^k,b\_i=\_[k=0]{}\^[n-1]{}A\^[(k+1)]{}(-a\_i\^2)\^[n-k]{}, and $\varepsilon_i$ and $\Xi$ are defined in Eq. . In the near-horizon limit , $b$ takes the form \[NH-expansion\] =+\_[i=1]{}\^[n]{}\_id\_i. 
Here, the prime denotes a derivative with respect to the $r$ coordinate. As a result of the calculations in Appendix \[app-b1-2\], $\left({b}_0+\sum_i {b}_i\,\Omega^i\right)\big|_{r_h}$ is constant and can be absorbed by an appropriate $C$. Therefore, it cancels the divergent term and results in =\_0d+\_[i=1]{}\^[n]{}\_i d \_i, where \_0r\_h\_[k=0]{}\^[n-1]{}\^[(k)]{}\_n. We note that $\tilde{b}_0$ and $\tilde{b}_i$ are functions of $x_\alpha$ through $\tilde{A}^{(k)}_n$ and $\tilde{A}^{(k+1)}$, respectively, and do not depend on $\rho$. Such behavior is illustrated with an explicit example in Appendix \[app-b1-1\]. Integrability of geodesic equations ----------------------------------- The simplest constant of motion for timelike geodesics is $g^{ab}p_ap_b=-m_0^2$. Using the inverse metric of the near-horizon geometry, given by the $k=0$ case of Eq. , and the projection of the Casimir element $\mathcal{I}$ onto the momentum space, we have \[mass-1\] (+\^2)+ \^2-\_[=1]{}\^[n-1]{}=m\_0\^2, where $M_\a^{ij}$ is defined by M\_\^[ij]{} . The angular Hamiltonian, $\mathcal{E}$, which is defined by \[agular-H\] V(+\^2)=(p\_0-\_i p\_i)\^2-V\^2p\_\^2, can be rewritten in terms of angular variables using Eq. . Similar to the analysis of Refs. [@Hakobyan:2017qee; @Demirchian:2018xsk], we see that the Hamilton-Jacobi equations are also separable on the near-horizon extremal geometry of an odd-dimensional Kerr-AdS-NUT black hole. Using the identity Eq. , the equation can be conveniently rewritten as \[H-J-E\] \_[=1]{}\^[n]{}=0, where \[R&W\] &&R\_n-,[\ ]{}&&R\_X\_p\_\^2+\_[i,j=1]{}\^n M\_\^[ij]{}p\_ip\_j, [\ ]{}&&W\_m\_0\^2(-x\_\^2)\^[n-1]{}- \^2. Recalling the identity , we can rewrite the expression in the form \[HJ-1\] \_[=1]{}\^[n]{}=0. Here, the $\nu_k$’s are some arbitrary and independent constants which can be considered as constants of motion. To find the $\nu_k$’s, we should invert the equation: \[eq-1\] R\_+W\_=\_[k=1]{}\^[n-1]{}\_[k]{}(-x\_\^2)\^[n-1-k]{}. 
By multiplying it with $\frac{\tilde{A}_\mu^k}{{\prod_{\nu}}'\,(x_\nu^2-x_\mu^2)}$, summing over $\mu$, and using the identities , and , we have \_k=-+\_[=1]{}\^[n-1]{}- \^2,k=1,…, n-1. In addition to these $(n-1)$ constants of motion, $m_0^2$ is also a constant of motion. This can be considered as $\nu_0$ by the shift $\nu_k\rightarrow \nu_k-m_0^2\,\delta_{k,0}$. We note that the range of $k$ was $[1,n-1]$ initially and did not include $k=0$. However, we extended it to include $k=0$. Recalling Eq. for the definition of $R_\a$ and $\mathcal{E}$ from , we have &&\_k=-(p\_0-\_i p\_i)\^2+\^2p\_\^2 \^[(k)]{}\_n, [\ ]{}\[3pt\]&&+\_[=1]{}\^[n-1]{}- \^2, where $k$ runs over $[0, n-1]$ now. Considering these constants, $\nu_k$, as the contraction of Killing tensors, $K_{(k)}^{\mu \nu}$, with momentum $p_\nu$, $\nu_k=K_{(k)}^{\mu \nu}\,p_\mu p_\nu$, one can readily see that the resultant Killing tensors are the same as the Killing tensors in Eq. that we obtained by taking the near-horizon limit. For instance, the Killing tensor related to $\nu_0$ is the metric itself (since $A^{(0)}=A^{(0)}_\mu=1$). In addition to these $n$ constants of motion made of Killing tensors, there are $n$ constants of motion associated with Killing vectors $\zeta_i$, of the form $\zeta_i^\mu\,p_\mu$, and two others from the Cartan and Casimir elements of $sl(2,R)$. As a result of Eq. , these $2n+2$ constants are Poisson commuting. However, all these constants of motion are *not* independent, and there is a constraint between them: \[reducible-K\] \_[k=0]{}\^[n-1]{}\_[k]{} ([r\_h]{}\^2)\^[n-1-k]{}&=&-+ r\_h\^2\^2,as a combination of the corresponding Killing tensors can be written in terms of Killing vectors[@Rasmussen:2010rw; @Chernyavsky-Xu]. Altogether, geodesic equations on the near-horizon extremal geometry of a ($d=2n+1$)-dimensional Kerr-AdS-NUT black hole have $2n+1$ *independent, commuting* constants of motion; therefore, they are integrable. 
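The separation above relies on partial-fraction (Lagrange-interpolation) identities of the type collected in the appendix. They can be spot-checked numerically for randomly chosen distinct $x_\mu$; the sketch below uses $N$ in the role of $n$ and checks the delta-function identity together with the elementary-symmetric expansion of $\prod_\mu (x_\mu^2+t)$ that defines the $A^{(k)}$:

```python
import numpy as np
from itertools import combinations

# Numerical spot-check of two identities from the appendix (random distinct
# x_mu; N plays the role of n):
#   (i)  sum_mu (-x_mu^2)^(N-1-q) / prod'_nu (x_nu^2 - x_mu^2) = delta_{q,0}
#   (ii) prod_mu (x_mu^2 + t) = sum_k A^(k) t^(N-k),  A^(k) = e_k(x^2)
rng = np.random.default_rng(1)
N = 5
z = rng.uniform(0.5, 3.0, N)**2            # z_mu = x_mu^2, generically distinct

for q in range(N):
    s = sum((-z[m])**(N - 1 - q)
            / np.prod([z[n] - z[m] for n in range(N) if n != m])
            for m in range(N))
    assert abs(s - (1.0 if q == 0 else 0.0)) < 1e-8

t = 0.7
A = [sum(np.prod(c) for c in combinations(z, k)) if k else 1.0
     for k in range(N + 1)]                # elementary symmetric polynomials
lhs = np.prod(z + t)
rhs = sum(A[k] * t**(N - k) for k in range(N + 1))
assert abs(lhs - rhs) < 1e-8
```

Both checks are exact algebraic identities, so the residuals are pure floating-point error.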
near-horizon EVH geometry ========================= In the previous section, it is assumed that the horizon area is nonzero. On the other hand, one can take a limit in which both horizon area and temperature vanish with the same rate. That is called the EVH limit. As one can readily see from the form of entropy in Eq. , the horizon area vanishes once $r_h$ goes to zero. The EVH limit of a Kerr-AdS-NUT black hole in odd dimensions has been studied in Ref. [@Sadeghian:2017bpr] with more details. It is given by a limit \[EVHscaling\] r\_[h]{}= ,    a\_1=\_1 \^2,  M=\_[i=2]{}\^[n]{}a\_[i]{}\^2+ \^2,       0, where the parameter $\m$ is given by \[mtilde\] = (-\_3) \_[i=2]{}\^[n]{}a\_i\^2,\_3-g\^2-\_[i=2]{}\^n . (using the notations of Ref. [@Sadeghian:2017bpr] for $\lambda_3$). To obtain the near-horizon limit of an EVH black hole, we apply the EVH limit and the following transformations to the metric in Eq. \[Kerr-AdS-NUT\]: \[nearhorizon\] t=\_3 ,r=r\_[h]{}+ , \_1=p , \_[i]{}=\_[i]{}+\^i t, 2i n , where &&\_3=\_[k=2]{}\^[n]{}a\_k\^2,p=\_3 ,[\ ]{}&& V\_3=-X\_n”|\_[r=r\_h]{}=(-\_3)\_[i=2]{}\^[n]{}a\_i\^2, and we assume that $\epsilon \ll \gamma$ in the $\epsilon,\gamma\rightarrow 0$ limit. In this case, the near-horizon of an EVH black hole (NHEVH) reads as $$\begin{gathered} \label{NH_EVH} ds^2_{NH}=\frac{\tilde{U}}{V_3}\left( -\,\rho^2d\tau^2+\,\frac{d\rho^2}{\rho^2}+\rho^2 \;d\varphi^2\right)+\sum_{\a=1}^{n-1}\left[\frac{\tilde{U}_\a}{\tilde{X}_\a}\,dx_\a^2+\frac{\tilde{X}_\a}{\tilde{U}_\a}\,\left(\sum_{i=2}^{n}\frac{a_i^2\, \tilde{\gamma}_i}{ (a_i^2-x_\a^2)\,\tilde{\varepsilon}_i}d\varphi_i\right)^2\right]\,,\end{gathered}$$ where $\tilde{\gamma}_i$ and $\tilde{\varepsilon}_i$ are \[metricH\] \_i=a\_i\^2\_[=1]{}\^[n-1]{}(a\_i\^2-x\_\^2) ,\_i=a\_i[\_[k=2]{}\^[n]{}]{}’(a\_i\^2-a\_k\^2) . We note that the tilde on each quantity means that it is computed in the EVH and near-horizon limit. 
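The first bracket of $ds^2_{NH}$ above is the $AdS_3$ factor responsible for the enhanced symmetry discussed next. This can be verified directly: the sketch below (with the overall $\tilde{U}/V_3$ prefactor stripped and unit AdS radius assumed) computes the Ricci tensor of $-\rho^2 d\tau^2+d\rho^2/\rho^2+\rho^2 d\varphi^2$ and checks the maximally symmetric relation $R_{ab}=-2\,g_{ab}$:

```python
import sympy as sp

# Check that the 3D factor -rho^2 dtau^2 + drho^2/rho^2 + rho^2 dphi^2 is
# locally AdS_3 (unit radius; conformal prefactor stripped, an assumption for
# illustration): its Ricci tensor must satisfy R_{ab} = -2 g_{ab}.
tau, rho, phi = sp.symbols('tau rho phi', positive=True)
x = [tau, rho, phi]
g = sp.diag(-rho**2, 1/rho**2, rho**2)
ginv = g.inv()

def Gamma(a, b, c):
    # Christoffel symbols Gamma^a_{bc}
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d])) for d in range(3)))

def Riemann(a, b, c, d):
    # curvature tensor R^a_{bcd}
    return (sp.diff(Gamma(a, b, d), x[c]) - sp.diff(Gamma(a, b, c), x[d])
            + sum(Gamma(a, c, e) * Gamma(e, b, d)
                  - Gamma(a, d, e) * Gamma(e, b, c) for e in range(3)))

Ricci = sp.Matrix(3, 3, lambda b, d: sp.simplify(
    sum(Riemann(a, b, a, d) for a in range(3))))

assert sp.simplify(Ricci + 2 * g) == sp.zeros(3)
```

The substitution $z=1/\rho$ maps this factor to the Poincaré patch $ds^2=(dz^2-d\tau^2+d\varphi^2)/z^2$, which makes the local $AdS_3$ identification manifest.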
Therefore, the metric functions are \[def-tilde\] &&\_n=\_[=1]{}\^[n-1]{}x\_\^2, \_=-(1-g\^2x\_\^2)\_[k=2]{}\^[n]{}(a\_k\^2-x\_\^2)+2L\_,[\ ]{}&& \_=-x\_\^2[\_[=1]{}\^[n-1]{}]{}’(x\_\^2-x\_\^2),S\_=x\_\^4\_[k=2]{}\^[n]{}(a\_k\^2-x\_\^2)\^2. One can check that this geometry is also a solution to the pure Einstein theory. Principal and Killing tensors {#KT-EVH} ----------------------------- The near-horizon geometry of Kerr-AdS-NUT black hole is given in the previous section. As is clear from Eq. , the metric includes an $AdS_3$ factor and is invariant under the $SO(2,2)$ group. It can be viewed as two copies of $SL(2,R)$. In the coordinates v=+,u=-, the generators of these two $SL(2,R)$ are as follows: &&H\^+=[\_v]{},D\^+=v[\_v]{}-\_K\^+=v\^2[\_v]{}+[\_u]{}-2 v[\_]{},[\ ]{}&&H\^-=[\_u]{},D\^-=u[\_u]{}-\_K\^-=u\^2[\_u]{}+[\_v]{}-2 u[\_]{}, and each of these sets satisfies $sl(2,R)$ algebra: =H\^a=2D\^a,=K\^a, a=+,-. The Casimir of each copy is =12(H\^K\^+K\^H\^)-(D\^)\^2. One can simply check that the Casimirs are equal. Applying the EVH limit and near-horizon limit to the second-rank Killing tensors in Eq. gives \[NHEVHKT\] &&K\_[(k)]{}=\^[(k)]{}(-([\_]{})\^2+\^2([\_]{})\^2+([\_]{})\^2),\ && +\_[=1]{}\^[n-1]{} \^[(k)]{}\_(()\^2+(\_[i=2]{}\^[n]{})\^2). One can simply check that the $k=0$ case of them is just the inverse metric of the NHEVH of a Kerr-AdS-NUT black hole in Eq. . The principal tensor $h$ can be read from its potential $b$, defined in Eq. , by applying the near-horizon and EVH limits from Eqs. and , and taking the transformation into account. This gives =\_[i=2]{}\^[n]{}\_i d \_i, in which $\tilde{b}_i$’s are the $b_i$’s in Eq. that should be computed in the near-horizon EVH limit. Integrability of geodesic equations ----------------------------------- Again, we start from the simplest constant of motion for geodesics, i.e., g\^[ab]{}p\_ap\_b=-m\_0\^2. Using the inverse metric of near-horizon geometry given by the $k=0$ of Eq. 
and the projection of the Casimir element $\mathcal{I}$ onto the momentum space, we have \[mass-simplified1\] -+\_[=1]{}\^[n-1]{}p\_\^2+\_[=1]{}\^[n-1]{} \_[i,j=2]{}\^np\_ip\_j=-m\_0\^2, where $M_\a^{ij}$ is defined by M\_\^[ij]{}. The angular Hamiltonian, $\mathcal{E}$, which is defined by \[agular-H-def\] V\_3=V\_3((p\_0\^2-p\_\^2)-\^2p\_\^2),can be simplified using Eq. : \[Casimir-simplified\] =(\_[=1]{}\^[n-1]{}x\_\^2)(\_[=1]{}\^[n-1]{}p\_\^2+\_[=1]{}\^[n-1]{}\_[i,j=2]{}\^np\_ip\_j+m\_0\^2). Following the analysis of the separability of Hamilton-Jacobi equations in Refs. [@Hakobyan:2017qee; @Demirchian:2018xsk], we find that the Hamilton-Jacobi equations are also separable on the near-horizon EVH geometry of a Kerr-AdS-NUT black hole in odd dimensions. Using the identity , the angular Hamiltonian ${\cal E}$, given in Eq. , can be conveniently represented through \[HJO\] \_[=1]{}\^[n -1]{}(R\_(p,x)- )=0, where R\_(p,x)-p\_\^2+\_[i,j=2]{}\^n M\_\^[ij]{}p\_ip\_j+m\_0\^2(-x\_\^2)\^[n-2]{}. \[24\]Recalling the identity , we can rewrite the expression in a more useful form: \[HJ1\] \_[=1]{}\^[n-1]{}(R\_(p,x) -\_[k=1]{}\^[n-1]{}\_[k]{}(-x\_\^2)\^[n-2-k]{})=0,\_[n-1]{}=-. Here, the $\nu_k$’s are some arbitrary and independent constants which can be considered as constants of motion. To find the $\nu_k$’s, we should invert the equation: \[eq-1\] R\_(p,x)=\_[k=1]{}\^[n-1]{}\_[k]{}(-x\_\^2)\^[n-2-k]{}. Multiplying it by $\frac{\tilde{A}_\a^k}{{\prod_{\beta}\,}'(x_\beta^2-x_\a^2)}$, summing over $\a$, and using the identities  and , we have \_k=-+\_[=1]{}\^[n-1]{},k=1,…, n-2. This result can be rewritten by substituting $\mathcal{E}$ from Eq. and noting that $\tilde{A}^{(n-1)}$ is just $\prod_{\a}x_\a^2$, as \_k=(\^2p\_\^2-(p\_0\^2-p\_\^2))\^[(k)]{}+\_[=1]{}\^[n-1]{}, In addition to these $(n-1)$ constants of motion, $m_0^2$ is also a constant of motion. This can be considered as $\nu_0$ by the shift $\nu_k\rightarrow \nu_k-m_0^2\,\delta_{k,0}$. 
We note that the range of $k$ was $[1,n-1]$ initially and did not include $k=0$. However, we extended it to include $k=0$. Recalling Eq. for the definition of $R_\a$, we have &&\_k=(\^2p\_\^2-(p\_0\^2-p\_\^2))\^[(k)]{}+\_[=1]{}\^[n-1]{} \^[(k)]{}\_(p\_\^2+(\_[i=2]{}\^[n]{})\^2),[\ ]{}where $k$ now runs over $[0, n-1]$. Considering these constants, $\nu_k$, as the contraction of Killing tensors, $K_{(k)}^{\mu \nu}$, with momentum $p_\mu$, $\nu_k=K_{(k)}^{\mu \nu}\,p_\mu p_\nu$, one can readily see that the resultant Killing tensors are the same as the Killing tensors in Eq. that we obtained by taking the near-horizon limit. Similar to the constraint in the extremal case, we have \_[n-1]{}=-, and not all of the $\nu_k$’s are independent of the Casimir. Therefore, we have $(2n+1)$ independent constants of motion in this case. So the geodesic equations on the near-horizon EVH Kerr-AdS-NUT geometry are also integrable and separable. Discussion and conclusion ========================= In this work, we studied the principal and Killing tensors of near-horizon extremal and EVH geometries of a Kerr-AdS-NUT black hole in odd dimensions. The even-dimensional case can be analyzed in a similar manner for the extremal case. Although the Killing tensors were given for the extremal case earlier [@Chernyavsky-Xu; @Kolar:2017vjl], we improve the discussion of hidden symmetries by introducing the principal tensor for near-horizon extremal and EVH geometries. The principal tensor is a closed form and is locally accompanied by a potential; this potential is defined up to an exact form. In the near-horizon limit, we used this freedom to make the principal potential finite. The existence of this tensor for a given metric makes the geodesic, Klein-Gordon, Dirac, and Maxwell field equations separable on that background metric. We explicitly showed the separability of timelike geodesic equations on the mentioned near-horizon geometries. 
Finding the constants of motion associated with the geodesics, one can read the Killing tensors of the background metric. We observed that the obtained Killing tensors in this way are the same as the Killing tensors given by taking the near-horizon limit. One may also study the Penrose process and superradiance in these spacetimes and see if there are some distinct features due to taking special limits like in Ref. [@Mukherjee:2018dmm]. It is well known that the isometries enhance in the near-horizon extremal limit, and Killing vectors have $sl(2,R)$ algebra. However, there is no extra structure among the given second-rank Killing tensors and Killing vectors. Particularly, hidden symmetries (associated with the given second-rank Killing tensors) *do not enhance* in the near-horizon extremal or EVH limit. This statement should be revised for the equal angular momenta or for null geodesics. In spite of the fact that the Casimir of $sl(2,R)$ gives an extra constant of motion for the geodesic equations on near-horizon extremal geometry, this problem is still integrable (not superintegrable) since there is a relation between the constants associated with the Killing tensors, Casimir and Killing vectors. One may expect that for the EVH case where we have $so(2,2)$ as a subgroup of the isometries, we have more constants of motion. But this is not the case, because two constants that the Casimirs of $so(2,2)$ provide are equal. Therefore, a geodesic problem on near-horizon EVH geometry of a Kerr-AdS-NUT black hole is also integrable. These geometries have dual CFT descriptions from the AdS/CFT point of view. It would be interesting to find the meaning of the Killing tensors, hidden symmetries, and integrability of geodesics on the CFT side. Acknowledgments {#acknowledgments .unnumbered} =============== The author is grateful to Hovhannes Demirchian, Armen Nersessian, and especially M.M. Sheikh-Jabbari for discussions during our previous collaborations. 
I also thank the conference on “Gravity - New perspectives from strings and higher dimensions,” where the project was initiated. I learned hidden symmetries from David Kubiznak there and thank him. This work is partially supported by ICTP Program Network Scheme No. NT-04. A useful coordinate transformation {#app-a} ================================== Using the transformations =,\_i=, on the Kerr-NUT-AdS metric in odd dimensions ($D=2n+1$) and the definitions a\_0 &=& 1[g]{},  \_I= \_[=1]{}\^n (a\_I\^2 - y\_\^2), 0In,\ \_0 &=& - g\^[2n]{} t,X\_= \_[I=0]{}\^[n]{} (a\_I\^2 -y\_\^2) + 2M\_. the metric can be written as \[intermediate-metric\] ds\^2 = \_[=1]{}\^[n]{} { dy\_\^2 + ( \_[I=0]{}\^[n]{} )\^2} - (\_[I=0]{}\^[n]{} \_I d\_I )\^2. By using the relations &&\_I=(-1)\^n\_[k=0]{}\^n A\^k (-a\_I\^2)\^[n-k]{},[\ ]{}&&(a\_I\^2-y\_)\^[-1]{}\_I=(-1)\^[n-1]{}\_[k=0]{}\^[n-1]{} A\^k\_(-a\_I\^2)\^[n-k-1]{}, the metric takes the simpler form if we define \[transf\] d\_k=\_[I=0]{}\^n (-a\_I\^2)\^[n-k]{}d\_I. Principal tensor of near-horizon extremal geometry {#app-b1} ================================================== Case study : 5D NHEMP {#app-b1-1} --------------------- In this part, we restrict our attention to the $g=0$, $L_\alpha=0$, and $d=5$ case i.e., to the near-horizon extremal geometry of a five dimensional Myers-Perry black hole [@mp]. This solution is described by two rotation parameters, $a_1,a_2$. After solving Eq. for the horizon location in the extremal limit \[Eq. \], we find that r\_h\^2=a\_1a\_2, 2M=(a\_1+a\_2)\^2. The near-horizon metric is ds\^2&&=-++ [[*dx*]{}]{}\^[2]{} + ( 2[ ]{}++ ) \^[2]{}[\ ]{}&&+ (++ ) \^[2]{}, and its functions are &&=(x\^2+a\_1a\_2),=1,V=4,[\ ]{}&&\_1(x)=(a\_1\^2-x\^2)(a\_1+a\_2)a\_1,\_1=a\_1(a\_1\^2-a\_2\^2),[\ ]{}&&\_2(x)=(a\_2\^2-x\^2)(a\_1+a\_2)a\_2,\_2=-a\_2(a\_1\^2-a\_2\^2),[\ ]{}&&\_1(x)=-(x\^2+a\_1a\_2),X\_1(x)=x\^[-2]{}(a\_1\^2-x\^2)(a\_2\^2-x\^2). 
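The extremal data quoted above, $r_h^2=a_1a_2$ and $2M=(a_1+a_2)^2$, can be checked directly. The sketch below assumes the standard 5D Myers-Perry horizon function written as $\Delta(r)=(r^2+a_1^2)(r^2+a_2^2)-2Mr^2$ (our rewriting of $X_n$ at $g=0$), so extremality means $\Delta(r_h)=\Delta'(r_h)=0$:

```python
import sympy as sp

# Extremality check for 5D Myers-Perry (g = 0).  Assumed horizon function:
#   Delta(r) = (r^2 + a1^2)(r^2 + a2^2) - 2 M r^2,
# a degenerate horizon requires Delta(r_h) = Delta'(r_h) = 0.
r, a1, a2 = sp.symbols('r a1 a2', positive=True)
M = sp.Rational(1, 2) * (a1 + a2)**2       # 2M = (a1 + a2)^2
Delta = (r**2 + a1**2) * (r**2 + a2**2) - 2 * M * r**2
rh = sp.sqrt(a1 * a2)                      # r_h^2 = a1 a2

assert sp.simplify(Delta.subs(r, rh)) == 0                  # horizon exists
assert sp.simplify(sp.diff(Delta, r).subs(r, rh)) == 0      # and is degenerate
```

Indeed, $\Delta'(r)=2r\,[2r^2+a_1^2+a_2^2-2M]$, which vanishes at $r^2=a_1a_2$ precisely when $2M=(a_1+a_2)^2$.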
As discussed in section \[KT-EVH\], this geometry has two second-rank Killing tensors. One of them, $K_{(0)}$, is the metric itself, and another is \[K1\] K\_[(1)]{}=&&(\_)\^2+(\_)\^2+(\_[x]{})\^2[\ ]{}&&-(\_[\_1]{})\^2[\ ]{}&&+ (\_[\_2]{})\^2[\ ]{}&&+(\_)(\_[\_1]{})+(\_)(\_[\_2]{})[\ ]{}&&+ (\_[\_1]{})(\_[\_2]{}). The angular velocities are \^1=\^2=. Then, $b_0$ and the $b_i$’s in Eq. are equal to &&b\_0=-(g\^2r\^2x\^2+x\^2-r\^2),[\ ]{}&&b\_1=,[\ ]{}&&b\_2=. By applying the near-horizon transformation, r=r\_h+r\_h ,dt=,d\_i=d\_i+\^idt,=, to the principal potential $b$ in Eqs. and , it is easy to see that (b\_0+\^1b\_1+\^2b\_2)|\_[r\_h]{}=, which is constant. Therefore, choosing $C=-\frac{a_1^2\,a_2^2}{2\,(a_1+a_2)^2}$ will remove the divergent term of $b$ in the near-horizon limit. Then, we get =\_0d+\_1d\_1+\_2d\_2, where &&\_0r\_h(b\_0+\^1b\_1+\^2b\_2 )|\_[r\_h]{}=.[\ ]{}&&\_1=- ,\_2=. The principal tensor, $h=d\, b$, reads h=- ( a\_1a\_2+[x]{}\^[2]{} ) dd-ddx+dx(a\_1\^2 d\_1-a\_2\^2 d\_2).[\ ]{} The Hodge dual of $h$ gives a Killing-Yano tensor of this form: h= &&ddd\_1 - ddd\_2[\ ]{}&&+ddx d\_1-ddx d\_2[\ ]{}&&+ dx d\_1 d\_2. The Killing tensor made of this Killing-Yano tensor, using Eq. , is proportional to $K_{(1)}$ given by Eq. . Generic odd dimensions {#app-b1-2} ---------------------- As discussed in section \[KT-extremal\], the principal potential, $b$, is divergent in the near-horizon limit. However, it is defined up to a shift of the form bb+C\_dx\^, with constant $C_\mu$’s. This shift does not affect $h=d\,b$. We use this freedom to make $b$ finite in the near-horizon limit. The divergent term arises from $dt\to d\tau/\lambda$. In the following, we show that its coefficient, which is equal to $\left({b}_0+\sum_i {b}_i\,\Omega^i\right)\big|_{r_h}$, is a *constant*. We start by substituting $b_0$ and the $b_i$’s from Eq. : B([b]{}\_0+\_[i=1]{}\^n [b]{}\_i\^i)|\_[r\_h]{}&=&-\_[J=0]{}\^[n]{}(). 
The summation over $k$ can be easily done by changing $k=u-1$: S\_1\_[u=1]{}\^[n]{}(-a\_J\^2)\^[n+1-u]{}A\^[(u)]{}&=&(-a\_J\^2)[\ ]{}&=&\_[=0]{}\^[n]{}(y\_\^2-a\_J\^2)-(-a\_J\^2)\^[n+1]{}, where in the last line we used the definition $y_0^2\equiv 0$. The contribution of the last expression of $S_1$ to $B$ is a constant, $B_0$, which is desirable. Therefore, by applying the change &&I=M-1, a\_I=d\_M,[\ ]{}&&=-1,   y\_=z\_, to the $B$, we have B-B\_0=\_[M=1]{}\^[n+1]{}. Then, using the identity , the expression in the bracket can be expanded in powers of $(-d_M^{\,2})$. Using the identity for the summation over $M$, simplifies $B$ significantly and leads to B-B\_0=, which is obviously constant and can be absorbed by $C_0$. Therefore, the near-horizon expansion of $b_\tau$ starts from $\rho$, as explained in Eq. , and gives =\_0d+\_[i=1]{}\^[n]{}\_i d \_i, where \_0r\_h\_[k=0]{}\^[n-1]{}\^[(k)]{}\_n. Useful identities {#app-b} ================= \[Ids\] &&\_[=1]{}\^[N]{}=\_[q,0]{},\[n-2\]\ [\ ]{}&&\_[=1]{}\^[N]{}=,\[28\]\ [\ ]{}&&= \_[=1]{}\^[N]{} ,\[27\]\ [\ ]{}&&\_[=1]{}\^[N]{}=\_[q,p]{},q=1,…N-1, \[29\]\ [\ ]{}&&\_[=1]{}\^N (x\_\^2+)=\_[k=0]{}\^[N]{}A\^[(k)]{}\^[N-k]{} , =\_[k=0]{}\^[N-1]{} A\^[(k)]{}\_\^[N-1-k]{}.\[expanded\] [nn]{} B. Carter, Global structure of the Kerr family of gravitational fields, [Phys. Rev.  [**174**]{}, 1559 (1968)](https://doi.org/10.1103/PhysRev.174.1559); Hamilton-Jacobi and Schrodinger separable solutions of Einstein’s equations, [Commun. Math. Phys.  [**10**]{}, 280 (1968)](https://doi.org/10.1007/BF03399503); M. Walker and R. Penrose, On quadratic first integrals of the geodesic equations for type \[22\] spacetimes, [Commun. Math. Phys.  [**18**]{}, 265 (1970)](https://doi.org/10.1007/BF01649445). D. Kubiznak, Hidden Symmetries of Higher-Dimensional Rotating Black Holes, [arXiv:0809.2452](https://arxiv.org/abs/0809.2452). V. P. Frolov and D. Kubiznak, Hidden Symmetries of Higher Dimensional Rotating Black Holes, [Phys. Rev. Lett.  
[**98**]{}, 011101 (2007)](https://doi.org/10.1103/PhysRevLett.98.011101); D. Kubiznak and V. P. Frolov, Hidden Symmetry of Higher Dimensional Kerr-NUT-AdS Spacetimes, [Class. Quant. Grav.  [**24**]{}, no. 3, F1 (2007)](https://doi.org/10.1088/0264-9381/24/3/F01). P. Krtous, D. Kubiznak, D. N. Page and V. P. Frolov, Killing-Yano Tensors, Rank-2 Killing Tensors, and Conserved Quantities in Higher Dimensions, [JHEP [**0702**]{}, 004 (2007)](https://doi.org/10.1088/1126-6708/2007/02/004). D. Kubiznak and P. Krtous, On conformal Killing-Yano tensors for Plebanski-Demianski family of solutions, [Phys. Rev. D [**76**]{}, 084036 (2007)](https://doi.org/10.1103/PhysRevD.76.084036). M. Cariglia, P. Krtous and D. Kubiznak, Dirac Equation in Kerr-NUT-(A)dS Spacetimes: Intrinsic Characterization of Separability in All Dimensions, [Phys. Rev. D [**84**]{}, 024008 (2011)](https://doi.org/10.1103/PhysRevD.84.024008). V. P. Frolov, P. Krtous and D. Kubiznak, Separability of Hamilton-Jacobi and Klein-Gordon Equations in General Kerr-NUT-AdS Spacetimes, [JHEP [**0702**]{}, 005 (2007)](https://doi.org/10.1088/1126-6708/2007/02/005); P. Krtous, D. Kubiznak, D. N. Page and M. Vasudevan, Constants of geodesic motion in higher-dimensional black-hole spacetimes, [Phys. Rev. D [**76**]{}, 084034 (2007)](https://doi.org/10.1103/PhysRevD.76.084034); Complete integrability of geodesic motion in general Kerr-NUT-AdS spacetimes, [Phys. Rev. Lett.  [**98**]{}, 061102 (2007)](https://doi.org/10.1103/PhysRevLett.98.061102). O. Lunin, Maxwell’s equations in the Myers-Perry geometry, [JHEP [**1712**]{}, 138 (2017)](https://doi.org/10.1007/JHEP12(2017)138); P. Krtouš, V. P. Frolov and D. Kubizňák, Separation of Maxwell equations in Kerr–NUT–(A)dS spacetimes, [Nucl. Phys. B [**934**]{}, 7 (2018)](https://doi.org/10.1016/j.nuclphysb.2018.06.019). V. Frolov, P. Krtous and D. Kubiznak, Black holes, hidden symmetries, and complete integrability, [Living Rev. Rel.  [**20**]{}, no. 
1, 6 (2017)](https://doi.org/10.1007/s41114-017-0009-9). D. Astefanesei, K. Goldstein, R. P. Jena, A. Sen and S. P. Trivedi, Rotating attractors, [JHEP [**0610**]{}, 058 (2006)](https://doi.org/10.1088/1126-6708/2006/10/058). D. Astefanesei and H. Yavartanoo, Stationary black holes and attractor mechanism, [Nucl. Phys. B [**794**]{}, 13 (2008)](https://doi.org/10.1016/j.nuclphysb.2007.10.015). J. M. Bardeen and G. T. Horowitz, The Extreme Kerr throat geometry: A Vacuum analog of $AdS_2 \times S^2$, [Phys. Rev. D [**60**]{} (1999) 104030](https://doi.org/10.1103/PhysRevD.60.104030); H. K. Kunduri, J. Lucietti and H. S. Reall, Near-horizon symmetries of extremal black holes, [Class. Quant. Grav.  [**24**]{}, 4169 (2007)](https://doi.org/10.1088/0264-9381/24/16/012); H. K. Kunduri and J. Lucietti, A Classification of near-horizon geometries of extremal vacuum black holes, [J. Math. Phys.  [**50**]{}, 082502 (2009)](https://doi.org/10.1063/1.3190480); H. K. Kunduri and J. Lucietti, Classification of near-horizon geometries of extremal black holes, [Living Rev. Rel.  [**16**]{}, 8 (2013)](https://doi.org/10.12942/lrr-2013-8); P. Figueras, H. K. Kunduri, J. Lucietti and M. Rangamani, Extremal vacuum black holes in higher dimensions, [Phys. Rev. D [**78**]{} (2008) 044042](https://doi.org/10.1103/PhysRevD.78.044042). P. Claus, M. Derix, R. Kallosh, J. Kumar, P. K. Townsend and A. Van Proeyen, Black holes and superconformal mechanics, [Phys. Rev. Lett.  [**81**]{} (1998) 4553](https://doi.org/10.1103/PhysRevLett.81.4553). A. Galajinsky, Particle dynamics on $AdS_2\times S^2$ background with two-form flux, [Phys. Rev. D [**78**]{} (2008) 044014](https://doi.org/10.1103/PhysRevD.78.044014); A. Galajinsky, Particle dynamics near extreme Kerr throat and supersymmetry, [JHEP [**1011**]{}, 126 (2010)](https://doi.org/10.1007/JHEP11(2010)126); A. Galajinsky and A. 
Nersessian, Conformal mechanics inspired by extremal black holes in d=4, [JHEP [**1111**]{}, 135 (2011)](https://doi.org/10.1007/JHEP11(2011)135); A. Galajinsky and K. Orekhov, N=2 superparticle near-horizon of extreme Kerr-Newman-AdS-dS black hole, [Nucl. Phys. B [**850**]{}, 339 (2011)](https://doi.org/10.1016/j.nuclphysb.2011.04.015); S. Bellucci, A. Nersessian and V. Yeghikyan, Action-Angle Variables for the Particle Near Extreme Kerr Throat, [Mod. Phys. Lett. A [**27**]{} (2012) 1250191](https://doi.org/10.1142/S021773231250191X); A. Galajinsky, A. Nersessian and A. Saghatelian, Superintegrable models related to near-horizon extremal Myers-Perry black hole in arbitrary dimension, [JHEP [**1306**]{}, 002 (2013)](https://doi.org/10.1007/JHEP06(2013)002). M. M. Sheikh-Jabbari and H. Yavartanoo, EVH Black Holes, AdS3 Throats and EVH/CFT Proposal, [JHEP [**1110**]{}, 013 (2011)](https://doi.org/10.1007/JHEP10(2011)013). J. de Boer, M. Johnstone, M. M. Sheikh-Jabbari and J. Simon, Emergent IR Dual 2d CFTs in Charged AdS5 Black Holes, [Phys. Rev. D [**85**]{} (2012) 084039](https://doi.org/10.1103/PhysRevD.85.084039); H. Golchin, M. M. Sheikh-Jabbari and A. Ghodsi, Dual 2d CFT Identification of Extremal Black Rings from Holes, [JHEP [**1310**]{}, 194 (2013)](https://doi.org/10.1007/JHEP10(2013)194). S. Sadeghian and M. H. Vahidinia, AdS$_3$ to dS$_3$ transition in the near-horizon of asymptotically de Sitter solutions, [Phys. Rev. D [**96**]{}, no. 4, 044004 (2017)](https://doi.org/10.1103/PhysRevD.96.044004). S. Sadeghian, M. M. Sheikh-Jabbari, M. H. Vahidinia and H. Yavartanoo, Three Theorems on near-horizon Extremal Vanishing Horizon Geometries, [Phys. Lett. B [**753**]{}, 488 (2016)](https://doi.org/10.1016/j.physletb.2015.12.057); near-horizon Structure of Extremal Vanishing Horizon Black Holes, [Nucl. Phys. B [**900**]{}, 222 (2015)](https://doi.org/10.1016/j.nuclphysb.2015.09.010). Y. Mitsuka and G. Moutsopoulos, No more CKY two-forms in the NHEK, [Class. 
Quant. Grav.  [**29**]{}, 045004 (2012)](https://doi.org/10.1088/0264-9381/29/4/045004). J. Rasmussen, On hidden symmetries of extremal Kerr-NUT-AdS-dS black holes, [J. Geom. Phys.  [**61**]{}, 922 (2011)](https://doi.org/10.1016/j.geomphys.2011.01.006). J. Xu and R. H. Yue, On Hidden Symmetries of d > 4 NHEK-N-AdS Geometry, [Commun. Theor. Phys.  [**63**]{}, no. 1, 31 (2015)](https://doi.org/10.1088/0253-6102/63/1/06); D. Chernyavsky, Reducibility of Killing tensors in d>4 NHEK geometry, [J. Geom. Phys.  [**83**]{}, 12 (2014)](https://doi.org/10.1016/j.geomphys.2014.03.013). I. Kolar and P. Krtous, NUT-like and near-horizon limits of Kerr-NUT-(A)dS spacetimes, [Phys. Rev. D [**95**]{}, no. 12, 124044 (2017)](https://doi.org/10.1103/PhysRevD.95.124044). T. Hakobyan, A. Nersessian and M. M. Sheikh-Jabbari, near-horizon extremal Myers–Perry black holes and integrability of associated conformal mechanics, [Phys. Lett. B [**772**]{}, 586 (2017)](https://doi.org/10.1016/j.physletb.2017.07.028 ). H. Demirchian, Note on constants of motion in conformal mechanics associated with near-horizon extremal Myers-Perry black holes, [Mod. Phys. Lett. A [**32**]{} (2017) no.27, 1750144](https://doi.org/10.1142/S0217732317501449). H. Demirchian, A. Nersessian, S. Sadeghian and M. M. Sheikh-Jabbari, Integrability of geodesics in near-horizon extremal geometries: Case of Myers-Perry black holes in arbitrary dimensions, [Phys. Rev. D [**97**]{}, no. 10, 104004 (2018)](https://doi.org/10.1103/PhysRevD.97.104004); H. Demirchian, A. Nersessian, S. Sadeghian and M. M. Sheikh-Jabbari, Proceeding of the conference “SYMPHYS-XVII”, to appear in Physics of Atomic Nuclei (2019). W. Chen, H. Lu and C. N. Pope, General Kerr-NUT-AdS metrics in all dimensions, [Class. Quant. Grav.  [**23**]{} (2006) 5323](https://doi.org/10.1088/0264-9381/23/17/013). S. Mukherjee, S. Chakraborty and N. 
Dadhich, On some novel features of the Kerr-Newman-NUT Spacetime, [arXiv:1807.02216](https://arxiv.org/abs/1807.02216). R. C. Myers and M. J. Perry, Black Holes in Higher Dimensional Space-Times, [Annals Phys. [**172**]{} (1986) 304](http://dx.doi.org/10.1016/0003-4916(86)90186-7), R. C. Myers, “Myers-Perry black holes,” [arXiv:1111.1903](https://arxiv.org/abs/1111.1903). [^1]: Such black holes can become EVH black holes only in odd dimensions[@Sadeghian:2017bpr]. [^2]: The prime on the product symbol means that the factor which makes the product vanishing is removed.
--- author: - Doris Folini - Rolf Walder bibliography: - '3898.bib' date: 'Received ... ; accepted ...' title: 'Supersonic turbulence in shock-bound interaction zones I: symmetric settings' --- Introduction {#sec:intro} ============ Supersonically turbulent, shock-bound interaction zones are important for a variety of astrophysical objects. They contribute, for example, to structure formation in molecular clouds [@hunter-et-al:86; @ballesteros-hartmann-vazquez:99; @hartmann-et-al:01; @hueckstaedt:03; @heyer-brunt:04; @vazquez-semadeni:04] and to galaxy formation [@anninos-norman:96; @kang-et-al:05]. They affect the X-ray emission of line-driven hot-star winds [@owocki-et-al:88; @feldmeier-et-al:97; @feldmeier-owocki:98; @oskinova-et-al:04] and contribute substantially to the physics and emitted spectrum of colliding wind binaries [@stevens-et-al:92; @nussbaumer-walder:93; @folini-walder:00; @marchenko-et-al:03; @corcoran-et-al:05]. The currently most promising model for the prompt emission of $\gamma$-ray bursts is based on internal shocks [@rees-meszaros:94; @panaitescu-et-al:99; @piran:04; @fan-wei:04]. A similar mechanism has been proposed for micro-quasars [@kaiser-et-al:00], BL Lacs and Blazars [@ghisellini-et-al:02; @mimica-et-al:04], and Herbig-Haro objects [@matzner-mckee:99]. So far, the shape and turbulent interior of shock-bound interaction zones have been mostly studied separately. In this paper we focus on the system as a whole, stressing that upwind flows, confining interfaces of the interaction zone, and the interior structure of this zone form a tightly coupled system. The turbulence within the interaction zone affects the shape of the confining shocks, which in turn determines how much energy is thermalized at these shocks and how much energy remains available for driving the turbulence. A variety of papers have been written on the shape and stability of 2D interaction zones, of which we mention only a few. 
@vishniac:94 shows by analytical means that geometrically thin, isothermal, 2D, planar, shock-bounded slabs are non-linearly unstable, coining the term non-linear thin shell instability, or NTSI, for this instability. @blondin-marks:96 essentially reproduce these analytical predictions numerically, also mentioning the occurrence of supersonic turbulence within the slab. Performing 2D radiative and isothermal simulations of colliding molecular clouds, @klein-woods-tod:98 observe the complex shaping and instability of the collision zone. The role of a radiative cooling layer has been addressed by several authors. @strickland-blondin:95 numerically investigated flows against a wall in 2D, finding that an unstable cooling layer introduces disturbances in the interface separating the cooling layer from the cooled matter. Looking at colliding flows instead of a flow against a wall, @walder-folini:98 show that one unstable cooling layer is sufficient to destabilize both confining interfaces of the cooled matter. In addition, the cooled matter becomes supersonically turbulent. If self-gravity is included fragmentation of the interaction zone is observed [@anninos-norman:96; @hunter-et-al:86]. An overwhelming amount of literature meanwhile exists on supersonic turbulence. At least part of this attention arises because it is thought that supersonic turbulence can explain the structuring and support of molecular clouds and thus that it plays a decisive role in star formation. A comprehensive view of this issue can be found in the recent reviews by @maclow-klessen:04, @elmegreen-scalo:04, and @scalo-elmegreen:04. Of particular interest for the work we present here is the paper by @maclow:99, where Fig. 4 shows that the wave length of the driving is apparent in the spatial scale of the turbulent structure for monochromatically driven turbulence in a 3D periodic box. The possible importance of the finite size of the slab was recently pointed out by @burkert-hartmann:04. 
We are trying to make four points with this paper. First, we argue that, within the frame of isothermal Euler equations and in infinite space, the solution may be self-similar and dependent only on the upstream Mach-number, at least to first approximation. Based on this assumption, we give expressions for average quantities of the slab. Second, we show that the numerical solution, which is defined only on a finite computational domain and includes (implicit) numerical dissipation, remains close to self-similar, as long as the width of the slab is small and the root-mean-square Mach-number is larger than one. Third, we stress the tight mutual coupling between the turbulence and its driving. Fourth, we point out that spatial scales generally grow with the extension $\ell_{\mathrm{cdl}}$ of the interaction zone, but decrease with increasing upstream Mach-number $M_{\mathrm{u}}$. Results are based on a set of simulations that differ only in their upwind Mach-numbers. In this paper we restrict the analysis of these simulations to the objectives mentioned above. We postpone a more detailed analysis of the interior structure of the interaction zone to a subsequent paper. In the following, we first give the details of our physical model and numerical method in Sect. \[sec:runs\_and\_tools\]. In Sect. \[sec:anal-scaling\] we derive the self-similar scaling relations. The numerical results are presented in Sect. \[sec:num\_results\]. Discussion follows in Sect. \[sec:discussion\], and conclusions in Sect. \[sec:conc\]. Physical model and numerical method {#sec:runs_and_tools} =================================== The numerical treatment of supersonic turbulence is an issue in its own right, so we start this section with a brief summary of some results that are relevant to the present work. We then specify the physical model we consider, explain the numerical method we use, and describe the simulations we perform. 
Simulating supersonic turbulence {#sec:simulating} -------------------------------- The shock-compressed layer studied in this paper is supersonically turbulent with root-mean-square Mach-numbers between about 1 and 10. An important fraction of the kinetic energy is dissipated in shocks. Euler equations are sufficient for describing this part of the problem. A cascade transfers the remaining energy to higher and higher wave numbers until it is finally converted to heat at the viscous dissipation scale. To also capture this part of the problem, the compressible Navier-Stokes equations should be used; however, the range of spatial scales associated with the energy cascade exceeds the capacity of any computer by far. In subsonic turbulence, one way out is to use a suitable sub-grid scale model. The model is used to compute an effective viscosity coefficient, which should mimic the cascading between the smallest scale still resolved by the numerical grid and the viscous dissipation scale as precisely as possible. This coefficient is then used in the Navier-Stokes equations instead of the physical viscosity [@lesieur:99]. For the approach to work it is essential that the effective viscosity obtained from the sub-grid scale model exceeds the (implicit) numerical viscosity of the overall numerical scheme. This can be achieved in subsonic turbulence by the use of low-dissipation schemes [@lele:92]. 
One strategy for this case, the so-called MILES approach (monotone integrated large-eddy simulation), was proposed by @boris-et-al:92 and further explored by @porter-et-al:92 [@porter-et-al:94]. The basic claim is that the numerical viscosity inherent to shock capturing schemes [@hirsch:95; @leveque:02] already acts as a physically correct sub-grid scale model. Solving the Euler equations by means of a shock capturing scheme should thus yield the correct physical answer. The validity of the claim that implicit numerical viscosity alone leads to a correct physical solution was investigated by @garnier-et-al:99 for a selection of shock capturing schemes, among them a MUSCL-scheme (monotone upwind scheme for conservation laws) similar to the one we use (see Sect. \[sec:num\_meth\]). For the cases considered (essentially decaying subsonic turbulence), they find that the scheme indeed acts as a (very dissipative) sub-grid scale model in that it prevents energy from accumulating on small spatial scales. However, they also find that structures defined on less than 5 grid points are affected by substantial numerical damping. @porter-et-al:94 find, in addition, that the dissipation properties of their scheme (MUSCL with PPM) are highly non-linear and depend not only on the grid spacing but also on the wavelength of the flow structure. Structures on less than 32 grid points are affected by numerical damping. We rely on the MILES approach in this paper for lack of a better model, although, to our knowledge, the validity and quality of the approach has never been tested for supersonic turbulence. The numerical solutions we obtain are thus, strictly speaking, solutions of the Navier-Stokes equations rather than of the Euler equations. Nevertheless, as dissipation in shocks by far dominates numerical dissipation, we expect the ’Euler character’ of the solution to prevail. 
The model problem {#sec:phys_model} ----------------- The model problem we consider consists of a 2D, plane-parallel, infinitely extended, isothermal, shock compressed slab. A sketch is given in Fig. \[fig:sketch2d\]. Two high Mach-number flows, oriented parallel (left flow, subscript $l$) and anti-parallel (right flow, subscript $r$) to the x-direction, collide head on. The resulting high-density interaction zone, the shock compressed slab, is oriented in the y-direction. We denote this interaction zone by CDL for ‘cold dense layer’ to remain consistent with notation used already in @walder-folini:96 [@walder-folini:98]. We investigated this system within the frame of Euler equations (but see also Sect. \[sec:simulating\]), together with a polytropic equation of state, $$\begin{aligned} \frac{{\partial}\rho}{{\partial}t} + \vec{\nabla} \left( \rho \vec{v} \right) & = & 0, \\ \frac{{\partial}\rho \vec{v}}{{\partial}t} + \vec{\nabla} \left( \rho \vec{v} \otimes \vec{v} + \frac{p}{\mu} I \right) & = & 0, \\ \frac{{\partial}E}{{\partial}t} + \vec{\nabla} ( \vec{v} \left(E + p \right) ) & = & 0, \\ e & = & p/(\gamma - 1) .\end{aligned}$$ Here, $\rho$ is the particle density, $\mu$ the average mass per particle, $\vec{v} = (v_\mathrm{x},v_\mathrm{y})$ is the velocity vector, $p$ thermal pressure, $I$ the identity tensor, $e$ the thermal energy density, and $E=\rho \vec{v}^{2}/2 + e$ the total energy density. For the polytropic exponent, we choose $\gamma = 1.000001$. This value guarantees that jump conditions and wave speeds of a Mach-90 shock are within 0.01 per cent of the isothermal values. Within the frame of this paper we consider only symmetric settings, where the left (subscript $l$) and right (subscript $r$) colliding flow have identical parameters (subscript $u$ for upstream): $\rho_{\mathrm{l}} = \rho_{\mathrm{r}} \equiv \rho_{\mathrm{u}}$ and $|v_{\mathrm{l}}| = |v_{\mathrm{r}}| \equiv v_{\mathrm{u}}$. 
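To make the system above concrete, here is a minimal sketch of the corresponding flux function in the x-direction. The closure $e = p/(\gamma - 1)$ and the value $\gamma = 1.000001$ are taken from the text; setting $\mu = 1$ and the code itself are illustrative assumptions, not part of the actual numerical implementation used in the paper.

```python
# Sketch of the x-direction flux of the Euler system above, with the
# closure e = p/(gamma - 1) and gamma = 1.000001 as chosen in the text.
# Units with mu = 1 are assumed; illustrative only.
GAMMA = 1.000001

def euler_flux_x(rho, vx, vy, E):
    # recover thermal pressure from the total energy E = rho*v^2/2 + e
    p = (GAMMA - 1.0) * (E - 0.5 * rho * (vx * vx + vy * vy))
    return (rho * vx,              # mass flux
            rho * vx * vx + p,     # x-momentum flux
            rho * vx * vy,         # y-momentum flux
            vx * (E + p))          # energy flux
```

For gas at rest the flux reduces to a pure pressure term in the momentum component, as it should.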
We look at the problem in a dimensionless form and express velocities in units of the isothermal sound speed $a=\sqrt{T k_{\mathrm{B}}/\mu}$, with $T$ the temperature and $k_{\mathrm{B}}$ the Boltzmann constant. Densities we express in terms of the upstream density $\rho_{\mathrm{u}}$. Finally, we express lengths in units of $\mathrm{Y}_{\mathrm{0}}$, the smallest y-extent of the computational domain we used. This artificial choice is necessary as there is no natural time-independent length scale to the problem (see Sect. \[sec:anal-scaling\]). ![Sketch of physical model problem. $\rho_{\mathrm{i}}$, $M_{\mathrm{i}}$, and $s_{\mathrm{i}}$ denote the density, Mach-number, and confining shock of the left ($i=l$) and right ($i=r$) flow. $\rho$ and $M$ denote the density and Mach-number of the CDL. $\alpha$ is the absolute value of the angle between the x-axis and the tangent to the shock. CDL is the shock-compressed interaction zone. The dashed rectangle indicates the computational domain with y-extension Y. Periodic boundary conditions in y-direction imply periodic continuation of the solution (dotted continuation of left and right shock).[]{data-label="fig:sketch2d"}](3898f1.eps){width="8.5cm"} Numerical method {#sec:num_meth} ---------------- Our results were obtained with the AMRCART code[^1]. We used the multidimensional high-resolution finite-volume-integration scheme developed by @colella:90 on the basis of a Cartesian mesh. Tests showed that this algorithm, compared to dimensional splitting schemes, is significantly more accurate in capturing flow features not aligned with the axis of the mesh. In all our simulations we used a version of the scheme that is (formally) second-order accurate in space and in time for smooth flows. We combine this integration scheme with the adaptive mesh algorithm by @berger:85. While a rather coarse mesh was sufficient for the upwind flows, the turbulent CDL was resolved on a much finer scale. 
We found it useful to have our CDL moving in the positive x-direction at a speed of about Mach 20-40. If the CDL was essentially stationary with respect to the computational grid, we observed alignment effects of strong shocks that were nearly parallel to a cell interface (in y-direction). Through the global motion of the CDL, which implied supersonic motion of the confining shocks with respect to the computational grid, we got rid of this problem. We checked that this procedure introduced no systematic effects into the solution. The problem of alignment effects when dealing with high Mach-number flows, nearly stationary shocks, and high-order upwind schemes is well known and not particular to our scheme [@colella-woodward:84; @quirk:94; @jasak-weller:95]. Other workarounds exist, such as smoothing of interfaces by additional viscosity, which is often applied in PPM implementations. ### Numerical settings and integration time {#sec:num_settings} In the x-direction, our computational domain extended over $200\, \mathrm{Y}_{\mathrm{0}}$. The y-extent $\mathrm{Y}$ of our domain varied between simulations, $\mathrm{Y}_{\mathrm{0}} \le \mathrm{Y} \le 6\, \mathrm{Y}_{\mathrm{0}}$ (see Table \[tab:list\_of\_runs\]). Boundary conditions at the left and right boundaries (x-direction) were ‘supersonic inflow’. In the y-direction we had periodic boundary conditions. The cell size at the coarsest level was $0.2 \, \mathrm{Y}_{\mathrm{0}}$. The cells at the finest level, covering the CDL, were smaller by a factor $2^{6}$ to $2^{9}$, yielding between 320 and 2560 cells over a distance $\mathrm{Y}_{\mathrm{0}}$ (depending on the simulation, see Table \[tab:list\_of\_runs\]). As will be shown, the relevant time-dependent quantity for the evolution of CDL mean quantities is the average x-extension of the CDL, $\ell_{\mathrm{cdl}}$. We defined it as $\ell_{\mathrm{cdl}} \equiv V / \mathrm{Y}$, where $V$ is the 2D volume of the CDL. 
For later use we also introduce the volume integrated density $m_{\mathrm{cdl}} \equiv \int_{\mathrm{V}} \rho$, the mean density $\rho_{\mathrm{m}} \equiv m_{\mathrm{cdl}} / V$, and the average column density $ N \equiv m_{\mathrm{cdl}}/\mathrm{Y} = \rho_{\mathrm{m}} \ell_{\mathrm{cdl}}$. The last quantity is made dimensionless by dividing it by $ N_{\mathrm{0}} \equiv \rho_{\mathrm{u}}\mathrm{Y}_{\mathrm{0}}$. We stopped most simulations at $\ell_{\mathrm{cdl}} = \mathrm{Y}/2$. ### Initial conditions {#sec:initial_conditions} We investigated three different initial conditions, I=0,1,2. [**I=0:**]{} No CDL exists at $t=0$. The left and right flows are initially separated by a single interface. The interface is wiggled with a single, sinusoidal mode of wavelength $0.1\, \mathrm{Y}$ and amplitude $0.0195\, \mathrm{Y}_{\mathrm{0}}$ (about 3 to 25 grid cells, depending on the discretization). [**I=1:**]{} A CDL is present at time $t=0$. It has a column density of $ N = 14 \, N_{\mathrm{0}}$ and a thickness of $0.03125\, \mathrm{Y}_{\mathrm{0}}$. The confining shocks are both wiggled, with the same sinusoidal mode and amplitude as the interface in the case I=0. The mass within the CDL is at rest and of constant density, $\rho = \rho_{\mathrm{u}} M_{\mathrm{u}}^{2}$, the density the CDL would have in 1D. Note that this initialization implies some violation of the Rankine-Hugoniot jump conditions at the interfaces. [**I=2:**]{} A CDL is present at time $t=0$, with column density $N = 56\, N_{\mathrm{0}}$ and a thickness of $0.125\, \mathrm{Y}_{\mathrm{0}}$. The right shock is wiggled as for I=1, the left shock is straight. The density and velocity in the CDL are set as for I=1. We stress that the initial wiggling of the shocks is not essential. Its only effect is to speed up the initial phase of the evolution. Test cases using a different wiggling, or starting from straight shocks, end up in the same state as the simulations we present in the following. 
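As an illustration, the I=0 perturbation can be written down explicitly. Only the mode itself (wavelength $0.1\, \mathrm{Y}$, amplitude $0.0195\, \mathrm{Y}_{\mathrm{0}}$) is taken from the text; the domain size and the number of sampling points below are illustrative choices.

```python
import math

# Sketch of the I=0 initial interface: x-offset of the interface as a
# function of y, for a single sinusoidal mode of wavelength 0.1*Y and
# amplitude 0.0195*Y0 (values from the text). Y, Y0, ncell are illustrative.
def wiggled_interface(Y=1.0, Y0=1.0, ncell=320):
    amp, lam = 0.0195 * Y0, 0.1 * Y
    ys = [(j + 0.5) * Y / ncell for j in range(ncell)]
    return [(y, amp * math.sin(2.0 * math.pi * y / lam)) for y in ys]
```

With 320 points over $\mathrm{Y} = \mathrm{Y}_{\mathrm{0}}$, each wavelength is sampled by 32 points, and the offset averages to zero over the periodic domain.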
We would like to add a side note on this last point: we observed that the slab is also destabilized when bound by straight shocks. This has already been reported by @blondin-marks:96, who ascribed the destabilization to ’numerical noise’. Meanwhile, @robinet-et-al:00 have investigated what is called the carbuncle phenomenon in some more detail. They showed that, contrary to what had been believed so far, a single straight shock is linearly unstable for exactly one mode, associated with the upstream Mach-number $M_{\mathrm{crit}} = [(5+\gamma) / (3-\gamma)]^{1/2}$. For isothermal conditions, this yields $M_{\mathrm{crit}}= \sqrt{3}$. They also showed that this single unstable mode is sufficient to make straight shocks aligned with the mesh numerically unstable at all Mach-numbers if the computation is done with a low-viscosity, high-order, shock-capturing scheme. To what degree this instability for a straight shock of any Mach-number is really physical seems an open question to us. The different runs ------------------ The runs we performed differ in their upwind Mach-numbers, which lie in a range $5 \lapprox M_{\mathrm{u}} \lapprox 90 $, as well as in their initialization, numerical discretization, and the y-extent of the domain. The labels of the different runs are built up as M\_I.R.Y. Here, M is the upwind Mach-number, I the initialization (0, 1, or 2), and R gives the refinement of the spatial discretization, relative to the coarsest grid simulation we performed (1, 2, 4, or 8). R=1 corresponds to a finest cell size of about $3 \cdot 10^{-3} \, \mathrm{Y}_{\mathrm{0}}$, R=2 indicates a cell size smaller by a factor of two. Y is the domain size (1, 2, 4, or 6) in units of $\mathrm{Y}_{\mathrm{0}}$. For example, R22\_0.2.4 denotes a run with $M_{\mathrm{u}}=22$, initialization I=0, finest cell size about $1.5 \cdot 10^{-3} \mathrm{Y}_{\mathrm{0}}$, and y-extent $4 \, \mathrm{Y}_{\mathrm{0}}$. The runs we performed are listed in Table \[tab:list\_of\_runs\]. 
Individual columns in Table \[tab:list\_of\_runs\] contain (column number in square brackets): label of run \[1\], following the scheme label=M$_{\mathrm{u}}$\_I.R.Y, where I is the initial condition, R the refinement factor such that cell size = $3.125 \cdot 10^{-3} \mathrm{Y}_{\mathrm{0}} / \mathrm{R}$, and Y is the y-extension of the computational domain in units of $\mathrm{Y}_{\mathrm{0}}$; Mach-number of upstream flow, $M_{\mathrm{u}}$ \[2\]; stopping time of simulation in terms of $\ell( N )$ \[3\]; y-averaged x-extension of CDL at stopping time, relative to y-extent of computational domain, $\ell_{\mathrm{cdl}}/\mathrm{Y}$ \[4\]; average quantities \[5-9\] of: rms Mach-number, $M_{\mathrm{rms}}$ \[5\]; mean density in units of upstream density, $\rho_{\mathrm{m}}/\rho_{\mathrm{u}}$ \[6\]; shock length in units of y-domain, $\ell_{\mathrm{sh}}/Y$ \[7\]; driving efficiency, $f_{\mathrm{eff}}$ \[8\]; averages taken over $10 \le \ell( N ) \le 70$ for I=0 and over $60 \le \ell( N ) \le 120$ for I=1, for I=2 we give the values at the end of the simulation in parentheses instead. Scaling properties of the model problem {#sec:anal-scaling} ======================================= ![The self-similar 1D solution of isothermal colliding supersonic flows in density (top) and velocity (bottom). The interaction zone (labeled CDL) is bounded by two shocks, $s_{\mathrm{l}}$ and $s_{\mathrm{r}}$, having speeds $v^s_l$ and $v^s_r$ in the rest frame of the CDL. The density and velocity of the 1D interaction zone, we denote by $\rho_{\mathrm{1d}}$ and $v_{\mathrm{1d}}$, respectively. []{data-label="fig:basic_structure_1d"}](3898f2a.eps){width="4.5cm"} ![The self-similar 1D solution of isothermal colliding supersonic flows in density (top) and velocity (bottom). The interaction zone (labeled CDL) is bounded by two shocks, $s_{\mathrm{l}}$ and $s_{\mathrm{r}}$, having speeds $v^s_l$ and $v^s_r$ in the rest frame of the CDL. 
The density and velocity of the 1D interaction zone, we denote by $\rho_{\mathrm{1d}}$ and $v_{\mathrm{1d}}$, respectively. []{data-label="fig:basic_structure_1d"}](3898f2b.eps){width="4.5cm"} Within the frame of Euler equations and in infinite space, the problem of isothermal supersonically colliding flows can be solved analytically in 1D. The solution, sketched in Fig. \[fig:basic\_structure\_1d\] and Sect. \[sec:anal-scaling\_1d\], is self-similar and depends only on two free parameters, the Mach-numbers of the left and right upwind flow. In 2D the situation is more complicated: the solution is unstable [@vishniac:94; @blondin-marks:96], the shocks confining the CDL are non-stationary and oblique, the interior of the CDL is supersonically turbulent. Nevertheless, in infinite space it seems reasonable to [*assume*]{} that the solution, on average, may still evolve in a self-similar manner. We base this assumption on the following two observations. First, the isothermal Euler equations are scale-free in infinite space. Second, the free parameters of the problem ($\rho_{\mathrm{u}}$, $M_{\mathrm{u}}$, and $a$) do not introduce any fixed length or time scale. Under these conditions, it is possible that the solution also does not depend on length or time separately, but only on their ratio. If so, all length scales should evolve equally with time, which implies, in particular, that the solution then should not depend on the extension of the CDL. We stress, however, that [*we have no proof of the above assumption of self-similarity.*]{} In the remainder of this section, we elaborate a bit further on the implications of the assumed self-similarity. In Sect. \[sec:num\_results\] we will see that the relations derived here give a good approximation of the numerical results, but we stress already here three important points. 
The numerical simulations are carried out in finite space (not infinite); numerical dissipation might play a role; and the simulations are stopped for the most part while the CDL is still small, about half the size of the y-extent of the computational domain. Important aspects that can only be obtained from the numerical solution include quantities related to the driving of the turbulence, the values of proportionality constants, and the interior structure of the CDL. We neglect this last aspect, however, in the current paper to focus on mean quantities instead. Self-similar 1D solution {#sec:anal-scaling_1d} ------------------------ Denoting the density and velocity of the CDL by $\rho_{\mathrm{1d}}$ and $v_{\mathrm{1d}}$, and those of the left and right upwind flows by $\rho_{\mathrm{i}}$ and $v_{\mathrm{i}}$ ($i=l,r$), the solution in the rest frame of the CDL is given by $$\begin{aligned} \label{eq:self_sim1} \rho_{\mathrm{1d}} / \rho_{\mathrm{i}} & = & M_{\mathrm{i}}^{2} + 1 \approx M_{\mathrm{i}}^2,\\ \label{eq:self_sim2} v_{\mathrm{1d}} & = & 0, \\ \label{eq:self_sim3} |v^{s}_{\mathrm{i}}| & = & aM_{\mathrm{i}} / (M_{\mathrm{i}}^{2} - 1) \approx a/ M_{\mathrm{i}} \ll a.\end{aligned}$$ Here, $v^{s}_{\mathrm{i}}$ is the velocity of the confining shocks and $a$ is again the isothermal sound speed. The approximations hold for large Mach-numbers. The self-similar character is apparent: the solution is not a function of $x$ and $t$ but only a function of $x/t$ through $v^{s}_{\mathrm{i}}$. A relation between characteristic length and time scales of the solution, the self-similarity variable $\kappa_{\mathrm{1d}}$, can be obtained as follows. As a length scale, we take the spatial extension $\ell_{\mathrm{1d}}$ of the CDL, and as a time scale the time $\tau$ needed to accumulate the corresponding column density ${ N }_{\mathrm{1d}}$. From the relations $$N_{\mathrm{1d}} = \rho_{\mathrm{1d}} \ell_{\mathrm{1d}}. 
\label{eq:mass_column}$$ and $$N_{\mathrm{1d}} = \tau \left( \rho_l v_l + \rho_r v_r\right) \label{eq:mass_cons}$$ and using $\rho_\mathrm{l}/\rho_\mathrm{r} = M^{2}_\mathrm{r}/M^{2}_\mathrm{l}$ (see Eq. \[eq:self\_sim1\]), we obtain $$\kappa_{\mathrm{1d}} \equiv \frac{\ell_{\mathrm{1d}}}{\tau} = a \frac{M_l + M_r}{M_l \cdot M_r}. \label{eq:1d_kappa}$$ Thus, for strong shocks, $\kappa_{\mathrm{1d}}$ is nothing other than $|v^{s}_{\mathrm{l}}| + |v^{s}_{\mathrm{r}}|$. Specializing to symmetric settings ($\mathrm{l} = \mathrm{r}$) yields $\rho_{\mathrm{1d}} / \rho_{\mathrm{u}} = M_{\mathrm{u}}^{2}$ and $\kappa_{\mathrm{1d}} = 2a/M_{\mathrm{u}}$. Scaling properties of the 2D symmetric solution {#sec:anal-scaling_2d} ----------------------------------------------- In the following, we derive scaling relations for the 2D solution, assuming self-similarity. We confront these relations with corresponding numerical results in Sect. \[sec:num\_results\]. ### Density, Mach-number, self-similarity variable In the following, all velocities are again given in the rest frame of the CDL and we [*assume*]{} that a self-similar solution exists. A natural choice for the (constant) self-similarity variable then is again $\kappa_{\mathrm{2d}} \equiv \ell_{\mathrm{cdl}}/\tau$. Using the definitions of Sect. \[sec:num\_settings\] we must have, as in the 1D case, $$\begin{aligned} \label{eq:col-dens-2d-1} N & = & \rho_{\mathrm{m}} \ell_{\mathrm{cdl}},\\ \label{eq:col-dens-2d-2} N & = & 2 \tau \rho_{\mathrm{u}} v_{\mathrm{u}}.\end{aligned}$$ Dividing one equation by the other yields $\kappa_{\mathrm{2d}} = 2 \rho_{\mathrm{u}} v_{\mathrm{u}} / \rho_{\mathrm{m}}$. As $\kappa_{\mathrm{2d}}$ is a constant, the CDL mean density $\rho_{\mathrm{m}}$ must be constant in time. 
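As a quick numerical sanity check of the 1D relations of Sect. \[sec:anal-scaling\_1d\], the following Python sketch evaluates $\kappa_{\mathrm{1d}}$ (Eq. \[eq:1d\_kappa\]) and confirms that, for strong shocks, it approaches $|v^{s}_{\mathrm{l}}| + |v^{s}_{\mathrm{r}}|$; the sound speed and Mach-numbers are arbitrary illustrative values, not taken from the simulations.

```python
# Check of the 1D self-similarity variable, Eq. (eq:1d_kappa).
# Illustrative values only: a = 1 (isothermal sound speed), arbitrary Mach numbers.
a = 1.0
M_l, M_r = 22.0, 43.0

kappa_1d = a * (M_l + M_r) / (M_l * M_r)        # Eq. (eq:1d_kappa)

# shock velocities, Eq. (eq:self_sim3); for strong shocks their sum ~ kappa_1d
def v_s(M):
    return a * M / (M**2 - 1.0)

print(kappa_1d, v_s(M_l) + v_s(M_r))

# symmetric case: kappa_1d reduces to 2a/M_u
M_u = 22.0
print(a * (M_u + M_u) / (M_u * M_u), 2.0 * a / M_u)
```

For the high Mach-numbers typical of the runs discussed below, the two estimates agree to better than a percent.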
The root-mean-square velocity $v_{\mathrm{rms}}$ then has to be constant in time as well, at least if the CDL density and velocity, $\rho$ and $v$, are uncorrelated (in which case we can replace the average over the product $\rho v^{2}$ by the product of the averages of $\rho$ and $v^{2}$) and if kinetic pressure dominates over thermal pressure. This can be seen from equating the total upwind pressure with the total pressure within the CDL, $$\rho_{\mathrm{u}} (a^{2} + v_{\mathrm{u}}^{2}) = \rho_{\mathrm{m}} (a^2 + v_{\mathrm{rms}}^2). \label{eq:p1_cdl}$$ The simplest ansatz for $\rho_{\mathrm{m}}$ and $v_{\mathrm{rms}}$ is that they only depend on the upstream Mach-number, $$\begin{aligned} \label{eq:ansatz_rho} \rho_{\mathrm{m}}/\rho_{\mathrm{u}} & = & \eta_{\mathrm{1}} M_{\mathrm{u}}^{\beta_{\mathrm{1}}}, \\ \label{eq:ansatz_v} v_{\mathrm{rms}}/a & = & \eta_{\mathrm{2}} M_{\mathrm{u}}^{\beta_{\mathrm{2}}}.\end{aligned}$$ Using the ansatz for $\rho_{\mathrm{m}}$ we obtain a first expression for $\kappa_{\mathrm{2d}}$ from Eqs. \[eq:col-dens-2d-1\] and \[eq:col-dens-2d-2\], $$\kappa_{\mathrm{2d}} = 2 a \eta_{\mathrm{1}}^{-1} M_{\mathrm{u}}^{1-\beta_{\mathrm{1}}} \propto a M_{\mathrm{u}}^{1-\beta_{\mathrm{1}}}. \label{eq:2d_kappa_coldens}$$ A second expression for $\kappa_{\mathrm{2d}}$ can be obtained from Eq. \[eq:p1\_cdl\], $$\rho_{\mathrm{u}}a^{2}(1+M_{\mathrm{u}}^{2}) = \rho_{\mathrm{m}} (a^2 + v_{\mathrm{rms}}^2) = \frac{a^2 N }{\ell_{\mathrm{cdl}}} ( 1 + \eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2 \beta_{\mathrm{2}}} ). \label{eq:p_cdl}$$ Again using Eq. \[eq:col-dens-2d-2\] to replace $ N $, one obtains $$\kappa_{\mathrm{2d}} = 2 a M_{\mathrm{u}} \frac{ 1 + \eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2 \beta_{\mathrm{2}}}} {1 +M_{\mathrm{u}}^{2}} \approx 2 a \eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2\beta_{\mathrm{2}} -1} \propto a M_{\mathrm{u}}^{2\beta_{\mathrm{2}} -1}. 
\label{eq:2d_kappa_pressure}$$ The approximation is good for high Mach-number flows, with $\eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2 \beta_{\mathrm{2}}} \gg 1$, and for $\beta_{\mathrm{2}} > 0$, which is to be expected for supersonic turbulence. Comparing Eqs. \[eq:2d\_kappa\_coldens\] and \[eq:2d\_kappa\_pressure\] gives $$\begin{aligned} \label{eq:beta12} \beta_{\mathrm{2}} & = & 1 - \beta_{\mathrm{1}}/2, \\ \label{eq:eta12} \eta_{\mathrm{1}}^{-1} & = & \eta_{\mathrm{2}}^{2}.\end{aligned}$$ ### Driving energy {#sec:a_drive_eff} From energy conservation, we have $ \dot{\cal E}_{\mathrm{diss}} = \dot{\cal E}_{\mathrm{drv}} - \dot{\cal E}_{\mathrm{kin}}$. Here $\dot{\cal E}_{\mathrm{drv}}$ is the energy entering the CDL per unit time and per unit length in the y-direction, and $\dot{\cal E}_{\mathrm{diss}}$ denotes the energy dissipated per unit time within an average column of length $\ell_{\mathrm{cdl}}$ of the CDL. Finally, $\dot{\cal E}_{\mathrm{kin}}$ is the change per unit time of the kinetic energy contained within such an average column. We first turn to the driving energy $\dot{\cal E}_{\mathrm{drv}}$ and come back to $\dot{\cal E}_{\mathrm{diss}}$ and $\dot{\cal E}_{\mathrm{kin}}$ in Sect. \[sec:energy\_dissipation\]. Part of the total (left plus right) upwind kinetic energy flux density, ${\cal F}_{\mathrm{e_{\mathrm{kin}},u}} = \rho_{\mathrm{u}}v_{\mathrm{u}}^{3}$, is thermalized at the shocks confining the CDL. The remaining part, $\dot{\cal E}_{\mathrm{drv}}$, drives the turbulence in the CDL. We assume that $\dot{\cal E}_{\mathrm{drv}}$ and ${\cal F}_{\mathrm{e_{\mathrm{kin}},u}}$ are related by a function of the upwind Mach-number only, $$\dot{\cal E}_{\mathrm{drv}} = f_{\mathrm{eff}}(M_{\mathrm{u}}){\cal F}_{\mathrm{e_{\mathrm{kin}},u}}. \label{eq:def_feff}$$ We call the function $f_{\mathrm{eff}}$ the driving efficiency. 
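The consistency of the two expressions for $\kappa_{\mathrm{2d}}$, Eqs. \[eq:2d\_kappa\_coldens\] and \[eq:2d\_kappa\_pressure\], can be checked numerically. The sketch below assumes the illustrative values $\beta_{\mathrm{1}} = 0$ and $\eta_{\mathrm{1}} = 28$ (anticipating the fitted numbers of Sect. \[sec:means\]); with Eqs. \[eq:beta12\] and \[eq:eta12\] the two expressions agree exactly in the high-Mach-number approximation, and the exact pressure-based form converges to them as $M_{\mathrm{u}}$ grows.

```python
# Consistency of Eqs. (eq:2d_kappa_coldens) and (eq:2d_kappa_pressure),
# assuming the illustrative values beta_1 = 0 and eta_1 = 28.
a = 1.0
beta_1 = 0.0
eta_1 = 28.0
beta_2 = 1.0 - beta_1 / 2.0          # Eq. (eq:beta12)
eta_2 = eta_1 ** -0.5                # Eq. (eq:eta12)

def kappa_coldens(M_u):
    # Eq. (eq:2d_kappa_coldens)
    return 2.0 * a / eta_1 * M_u ** (1.0 - beta_1)

def kappa_pressure(M_u, exact=True):
    # Eq. (eq:2d_kappa_pressure): exact form and high-Mach approximation
    if exact:
        return 2.0 * a * M_u * (1.0 + eta_2**2 * M_u**(2.0 * beta_2)) / (1.0 + M_u**2)
    return 2.0 * a * eta_2**2 * M_u ** (2.0 * beta_2 - 1.0)

for M_u in (11.0, 22.0, 43.0, 87.0):
    print(M_u, kappa_coldens(M_u), kappa_pressure(M_u))
```

At $M_{\mathrm{u}} = 87$ the exact and approximate forms differ by less than half a percent; at $M_{\mathrm{u}} = 11$ the difference is still about 20%, illustrating why the matching is restricted to high Mach-numbers.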
An expression for $f_{\mathrm{eff}}$ can be derived by using the jump conditions for strong, oblique shocks, $$\begin{aligned} \rho_{\mathrm{d}} & = & \rho_{\mathrm{u}} M_{\mathrm{\perp,u}}^{2} = \rho_{\mathrm{u}} M_{\mathrm{u}}^{2} \sin^{2}\alpha, \nonumber \\ v_{\mathrm{\perp,d}} & = & v_{\mathrm{\perp,u}} M_{\mathrm{\perp,u}}^{-2} = \frac{a}{M_{\mathrm{u}} \sin \alpha}, \nonumber \\ v_{\mathrm{\parallel,d}} & = & v_{\mathrm{\parallel,u}} = a M_{\mathrm{u}} \cos\alpha. \label{eq:oblique_jump}\end{aligned}$$ The subscript d denotes downstream quantities, right after shock passage; the subscripts ${\mathrm{\perp}}$ and ${\mathrm{\parallel}}$ denote flow components perpendicular and parallel to the shock, respectively; and $\alpha$ is given in Fig. \[fig:sketch2d\]. Using Eq. \[eq:oblique\_jump\] we obtain $$\begin{aligned} \dot{\cal E}_{\mathrm{drv}} & = & \frac{1}{\mathrm{Y}}\int_{s_{\mathrm{l,r}}} ds \frac{\rho_{\mathrm{d}} v_{\mathrm{d}}^{2}}{2} v_{\mathrm{\perp,d}} \nonumber \\ & = & \frac{\rho_{\mathrm{u}}v_{\mathrm{u}}^{3}}{2Y} \int_{\mathrm{Y}_{\mathrm{l,r}}} dy (1 - \sin^{2}\alpha + \frac{1}{M_{\mathrm{u}}^{4}\sin^{2}\alpha}), \label{eq:edrive_bowshock1}\end{aligned}$$ where the integral over $s_{\mathrm{l,r}}$ and $\mathrm{Y}_{\mathrm{l,r}}$ runs over both shocks and where it was used that $\sin \alpha \; ds = dy$. The last term on the right-hand side of Eq. \[eq:edrive\_bowshock1\] is omitted in the following. This is justified, as the shocks we observe in our simulations fulfill $\sin \alpha \gg M_{\mathrm{u}}^{-2}$ for the most part (see Sect. \[sec:confshocks\]). For $f_{\mathrm{eff}}(M_{\mathrm{u}})$ we thus obtain $$f_{\mathrm{eff}} = \frac{1}{2Y} \int_{\mathrm{Y_{\mathrm{l,r}}}} dy (1 - \sin^{2}\alpha) \equiv 1 - \sin^{2}\alpha_{\mathrm{eff}} \label{eq:a_feff}$$ where we used the midpoint rule. The angle $\alpha_{\mathrm{eff}}$ can be interpreted as an average bending angle. 
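To make Eq. \[eq:a\_feff\] concrete, the following sketch evaluates $f_{\mathrm{eff}}$ for a hypothetical, sinusoidally wiggled shock $s(y) = A \sin(2\pi y/Y)$; for a shock tangent with slope $s'(y)$ one has $\sin^{2}\alpha = 1/(1+s'^{2})$, so the integrand becomes $s'^{2}/(1+s'^{2})$. The shock shape and amplitudes are assumptions for illustration, not simulation data.

```python
import math

# Driving efficiency, Eq. (eq:a_feff), for a hypothetical sinusoidal shock
# s(y) = A*sin(2*pi*y/Y).  For a tangent of slope s'(y), the angle alpha
# between the x-axis and the tangent obeys sin^2(alpha) = 1/(1 + s'(y)^2).
def f_eff(A, Y=1.0, J=10_000):
    acc = 0.0
    for j in range(J):
        y = (j + 0.5) * Y / J
        sprime = A * 2.0 * math.pi / Y * math.cos(2.0 * math.pi * y / Y)
        acc += 1.0 - 1.0 / (1.0 + sprime**2)
    return acc / J

# a flat shock (A = 0) thermalizes all upwind kinetic energy (f_eff = 0);
# stronger wiggling lets more kinetic energy survive shock passage
for A in (0.0, 0.05, 0.1, 0.2):
    print(A, f_eff(A))
```

The monotonic increase of $f_{\mathrm{eff}}$ with the wiggle amplitude mirrors the qualitative statement above: the less perpendicular the shocks are to the upstream flow, the more kinetic energy survives to drive the turbulence.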
As the ansatz for the Mach-number dependence of $f_{\mathrm{eff}}$ we thus take $$f_{\mathrm{eff}} = 1 - \sin^{2}\alpha_{\mathrm{eff}} = 1 - \eta_{\mathrm{3}}M_{\mathrm{u}}^{\beta_{\mathrm{3}}}. \label{eq:b_feff}$$ ### Energy dissipation {#sec:energy_dissipation} A first expression for the column-integrated dissipated energy per time can be obtained from energy conservation, $\dot{\cal E}_{\mathrm{diss}} = \dot{\cal E}_{\mathrm{drv}} - \dot{\cal E}_{\mathrm{kin}}$. For $\dot{\cal E}_{\mathrm{drv}}$ we just derived an expression, Eqs. \[eq:def\_feff\] and  \[eq:b\_feff\]. For $\dot{\cal E}_{\mathrm{kin}}$ we obtain, within the frame of self-similarity, $$\dot{\cal E}_{\mathrm{kin}} = \frac{\rho_{\mathrm{m}} v_{\mathrm{rms}}^{2}}{2} \frac{d \ell_{\mathrm{cdl}}}{dt} = \rho_{\mathrm{u}} a^{3} \frac{\eta_{\mathrm{2}}^{2}}{2} M_{\mathrm{u}}^{3-\beta_{\mathrm{1}}}, \label{eq:ekin}$$ where we used Eqs. \[eq:ansatz\_rho\], \[eq:ansatz\_v\], and \[eq:2d\_kappa\_pressure\] to \[eq:eta12\]. Together we get $$\dot{\cal E}_{\mathrm{diss}} = \rho_{\mathrm{u}} a^{3} M_{\mathrm{u}}^{3} \; [1 - \eta_{\mathrm{3}}M_{\mathrm{u}}^{\beta_{\mathrm{3}}} - 0.5 \; \eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{-\beta_{\mathrm{1}}}]. \label{eq:ediss_cons}$$ The energy dissipated per time within an average column of length $\ell_{\mathrm{cdl}}$ is thus independent of this length. If energy dissipation occurs only (as within the frame of Euler equations) or at least dominantly in shocks, this implies that the average distance between shocks increases and/or the average strength of the shocks decreases as the CDL grows. A second expression for $\dot{\cal E}_{\mathrm{diss}}$ can be obtained from dimensional considerations. The energy dissipated per unit volume per unit time must be proportional to $\rho_{\mathrm{diss}} v_{\mathrm{diss}}^{3} \ell_{\mathrm{diss}}^{-1}$. 
Here, $\rho_{\mathrm{diss}}$, $v_{\mathrm{diss}}$, and $\ell_{\mathrm{diss}}$ are the characteristic density, velocity, and length scale of the dissipation. The energy dissipation within an average column of length $\ell_{\mathrm{cdl}}$ can thus be written as $\dot{\cal E}_{\mathrm{diss}} \propto \rho_{\mathrm{diss}} v_{\mathrm{diss}}^{3} \ell_{\mathrm{diss}}^{-1} \ell_{\mathrm{cdl}}$. As all length scales must evolve equally with time within the frame of self-similarity, $\ell_{\mathrm{cdl}}/\ell_{\mathrm{diss}}$ must be constant, thus $$\dot{\cal E}_{\mathrm{diss}} \propto \rho_{\mathrm{diss}} v_{\mathrm{diss}}^{3}. \label{eq:ediss2}$$ Comparison of Eqs. \[eq:ediss\_cons\] and \[eq:ediss2\] suggests $v_{\mathrm{diss}} \propto a M_{\mathrm{u}}$ and a more complicated Mach-number dependence for $\rho_{\mathrm{diss}}$. As $v_{\mathrm{rms}}$ is the only velocity scale we have, it seems natural to assume that $v_{\mathrm{diss}} \propto v_{\mathrm{rms}}$. It then follows that $v_{\mathrm{rms}} \propto a M_{\mathrm{u}}$ or $\beta_{\mathrm{2}}=1$ (and $\beta_{\mathrm{1}}=0$). We note that @gammie-ostriker:96 even found $v_{\mathrm{diss}} = v_{\mathrm{rms}}$ for a 1D case. 
Summary of expected scaling relations {#sec:expectedrelations} ------------------------------------- If a self-similar solution exists, we expect the following dependencies: $$\begin{aligned} \label{eq:exp_rho} \rho_{\mathrm{m}} & = & \eta_{\mathrm{1}} \rho_{\mathrm{u}} M_{\mathrm{u}}^{\beta_{\mathrm{1}}} = \eta_{\mathrm{1}} \rho_{\mathrm{u}},\\ \label{eq:exp_mach} M_{\mathrm{rms}} & = & \eta_{\mathrm{2}} M_{\mathrm{u}}^{\beta_{\mathrm{2}}} = \eta_{\mathrm{1}}^{-1/2} M_{\mathrm{u}},\\ \label{eq:exp_kappa} \kappa_{\mathrm{2d}} & = & \ell_{\mathrm{cdl}}/\tau = 2 \eta_{\mathrm{1}}^{-1} a M_{\mathrm{u}},\\ \label{eq:exp_edrv} \dot{\cal E}_{\mathrm{drv}} & = & \rho_{\mathrm{u}} a^{3} M_{\mathrm{u}}^{3} (1 - \eta_{\mathrm{3}} M_{\mathrm{u}}^{\beta_{\mathrm{3}}}),\\ \label{eq:exp_ekin} \dot{\cal E}_{\mathrm{kin}} & = & \rho_{\mathrm{u}} a^{3} M_{\mathrm{u}}^{3} \; 0.5 \; \eta_{\mathrm{2}}^{2},\\ \label{eq:exp_ediss} \dot{\cal E}_{\mathrm{diss}} & = & \rho_{\mathrm{u}} a^{3} M_{\mathrm{u}}^{3} (1 -\eta_{\mathrm{3}} M_{\mathrm{u}}^{\beta_{\mathrm{3}}} - 0.5 \; \eta_{\mathrm{2}}^{2}).\end{aligned}$$ Note the differences from the 1D solution: Eq. \[eq:exp\_rho\] predicts the CDL mean density to be independent of $M_{\mathrm{u}}$ and $\kappa_{\mathrm{2d}} \propto a M_{\mathrm{u}}$, in contrast to $\rho_{\mathrm{1d}} \propto M_{\mathrm{u}}^{2}$ and $\kappa_{\mathrm{1d}} \propto a / M_{\mathrm{u}}$. In deriving the above relations, we made four basic assumptions: a) we have simple Mach-number dependencies of $\rho_{\mathrm{m}}$, $v_{\mathrm{rms}}$, and $f_{\mathrm{eff}}$, Eqs. \[eq:ansatz\_rho\], \[eq:ansatz\_v\], and \[eq:b\_feff\]; b) the CDL density and velocity are uncorrelated; c) we have high Mach-numbers in the sense that $\eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2 \beta_{\mathrm{2}}} \gg 1$ or $M_{\mathrm{rms}}^{2} \gg 1$; d) $v_{\mathrm{diss}} \propto v_{\mathrm{rms}}$. In Sect. \[sec:num\_results\] we are going to check the validity of these assumptions and confront Eqs. 
\[eq:exp\_rho\] to \[eq:exp\_ediss\] with numerically obtained values. We expect good agreement as long as $M_{\mathrm{rms}} \gg 1$, so that dissipation in shocks likely dominates, and as long as $\ell_{\mathrm{cdl}} \ll \mathrm{Y}$. The ’Euler character’ of the solution should prevail under these conditions. We also determine those quantities that cannot be derived analytically. These are, on the one hand, the coefficients $\eta_{\mathrm{1}}$ and $\eta_{\mathrm{3}}$, as well as the exponent $\beta_{\mathrm{3}}$. On the other hand, there are quantities for which we have no analytical expression at all, like the wiggling of the confining shocks, the associated distribution of the angle $\alpha$, or the Mach-number dependence of the length of the confining shocks. Numerical results {#sec:num_results} ================= We now present our numerical results. After a brief phenomenological description of the solution in Sect. \[sec:pheno\], we give quantitative results for initial conditions I=0 in Sect. \[sec:symmetric\_nocdl\]. Results for initial conditions I=1 and I=2 are given in Sect. \[sec:symmetric\_withcdl\], and asymmetric settings are briefly addressed in Sect. \[sec:results\_asym\]. Discretization and domain studies are the topic of Sect. \[sec:griddomain\]. Brief phenomenological description {#sec:pheno} ---------------------------------- ![The interaction zone of run R22\_1.2.2, shown in density (logarithmic scale, in units of $\rho_{\mathrm{u}}$, color bar from 0 to 4), for three different times: $\ell( N ) \approx 34$ (top), $\ell( N ) \approx 54$ (middle), $\ell( N ) \approx 74$ (bottom). 
The spatial scale of patches, filaments, and wiggling of the confining shocks increases with $\ell( N )$.[]{data-label="fig:pheno_dens"}](3898f3a.ps){width="8.5cm"} ![The interaction zone of run R22\_1.2.2, shown in density (logarithmic scale, in units of $\rho_{\mathrm{u}}$, color bar from 0 to 4), for three different times: $\ell( N ) \approx 34$ (top), $\ell( N ) \approx 54$ (middle), $\ell( N ) \approx 74$ (bottom). The spatial scale of patches, filaments, and wiggling of the confining shocks increases with $\ell( N )$.[]{data-label="fig:pheno_dens"}](3898f3b.ps){width="8.5cm"} ![The interaction zone of run R22\_1.2.2, shown in density (logarithmic scale, in units of $\rho_{\mathrm{u}}$, color bar from 0 to 4), for three different times: $\ell( N ) \approx 34$ (top), $\ell( N ) \approx 54$ (middle), $\ell( N ) \approx 74$ (bottom). The spatial scale of patches, filaments, and wiggling of the confining shocks increases with $\ell( N )$.[]{data-label="fig:pheno_dens"}](3898f3c.ps){width="8.5cm"} We begin with a brief qualitative description of the CDL. As an example, the density structure of run R22\_1.2.2 is shown in Fig. \[fig:pheno\_dens\] for three different times. A first characteristic is the local bending of the confining shocks. The spatial scale of these wiggles increases linearly with time, as the CDL accumulates more and more matter and gets more and more extended. The inclination of the wiggles with respect to the direction of the upstream flows decreases with increasing upstream Mach-number (see Sect. \[sec:confshocks\]). Occasionally, we observe a superimposed ’bending mode’ (e.g. bottom panel in Fig. \[fig:pheno\_dens\]), which in appearance is somewhat similar to the bending modes of the NTSI described by @vishniac:94. A second characteristic is the patchy appearance of the CDL. The turbulent interior is organized in filaments and patches, regions within which a flow variable remains more or less constant. 
The spatial extension of these patches likewise increases as the CDL accumulates more and more matter. The flow variables clearly mirror the supersonic character of the turbulence: the contrast between high-density filaments and extended patches in Fig. \[fig:pheno\_dens\] easily reaches two orders of magnitude, the root-mean-square velocity is well above sound, and the mean density is substantially reduced compared to the 1D case. Shocks within the CDL are ubiquitous. Settings without CDL at $t=0$ {#sec:symmetric_nocdl} ----------------------------- For symmetric settings, and if there is no CDL at time $t=0$, we expect to see the self-similar relations we derived in Sect. \[sec:anal-scaling\_2d\]. We express the time evolution of the solution in terms of $$\label{eq:ellnmass} \ell( N ) \equiv N / N_{\mathrm{0}} = \frac{\rho_{\mathrm{m}} \ell_{\mathrm{cdl}}} {\rho_{\mathrm{u}} \mathrm{Y}_{\mathrm{0}}}.$$ This function monotonically increases at about the same rate as the mean extension of the CDL, since $\rho_{\mathrm{m}} \approx \eta_{\mathrm{1}} \rho_{\mathrm{u}}$ (Eq. \[eq:exp\_rho\]). In fact, $\rho_{\mathrm{m}} \approx 30 \rho_{\mathrm{u}}$ (Sect. \[sec:means\]) and thus $\ell( N ) = 60$ corresponds to $\ell_{\mathrm{cdl}} \approx 2 \mathrm{Y}_{\mathrm{0}}$. For the symmetric case we consider in this paper, $\ell ( N ) $ is proportional to the elapsed time. Using Eq. \[eq:col-dens-2d-2\] to express $N$, we can write $$\label{eq:ellntime} \ell ( N ) \equiv N / N_{\mathrm{0}} = \frac{2 \tau \rho_{\mathrm{u}} v_{\mathrm{u}}} {\rho_{\mathrm{u}} \mathrm{Y}_{\mathrm{0}}} = \tau \frac{2 v_{\mathrm{u}}}{Y_{\mathrm{0}}},$$ and $\ell ( N ) = 60$ then corresponds to a time $ \tau = 30 Y_{\mathrm{0}} / v_{\mathrm{u}}$. Or, if we use $v_{\mathrm{u}} \approx 5 v_{\mathrm{rms}}$ (Sect. \[sec:means\]) and $\mathrm{Y}_{\mathrm{0}} \approx \ell_{\mathrm{cdl}} / 2$ for $\ell ( N ) = 60$, we obtain $\tau \approx 3 \ell_{\mathrm{cdl}} / v_{\mathrm{rms}}$. 
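The time-scale conversions above amount to simple arithmetic, which the following lines reproduce; the numbers $v_{\mathrm{u}} \approx 5 v_{\mathrm{rms}}$ and $\mathrm{Y}_{\mathrm{0}} \approx \ell_{\mathrm{cdl}}/2$ are the approximate values quoted in the text, used here purely for illustration.

```python
# Arithmetic behind the estimates following Eq. (eq:ellntime):
# ell(N) = tau * 2 v_u / Y_0, so tau = ell(N)/2 in units of Y_0/v_u.
ell_N = 60.0
tau_in_Y0_over_vu = ell_N / 2.0                        # tau = 30 Y_0 / v_u

# substitute v_u ~ 5 v_rms and Y_0 ~ l_cdl/2 at ell(N) = 60:
tau_in_lcdl_over_vrms = tau_in_Y0_over_vu * 0.5 / 5.0  # tau ~ 3 l_cdl / v_rms
print(tau_in_Y0_over_vu, tau_in_lcdl_over_vrms)
```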
Unless otherwise stated, averages and best fits in this section are always taken over the interval $10 \le \ell( N ) \le 70 $ and over all runs without CDL at time $t=0$. The interval was chosen such that initialization effects have died away and that domain effects do not matter yet (Sect. \[sec:diff\_y\_ext\]). We mention here already that the two most extreme simulations in terms of $M_{\mathrm{u}}$, R5\_0.2.4 and R87\_0.2.4, often differ somewhat from the other simulations. In the case of R5\_0.2.4, we ascribe the deviation to the turbulence being only subsonic and to the correlation of density and velocity ($M_{\mathrm{rms}} \approx 0.9$ and corr$(\rho,v) \approx -0.4$, see Sect. \[sec:means\]). In the case of R87\_0.2.4, the shocks sometimes become too strongly inclined with respect to the computational grid to be properly resolved (Sect. \[sec:confshocks\]). ### CDL mean quantities and correlations {#sec:means} We first turn to the correlation of $\rho$ and $v$ and the CDL mean quantities $\rho_{\mathrm{m}}$ and $M_{\mathrm{rms}}$, Eqs. \[eq:exp\_rho\] and \[eq:exp\_mach\]. One of our basic assumptions in deriving these self-similar relations, namely point b) that the CDL density and velocity are uncorrelated, is confirmed by our simulations. For nearly all symmetric simulations without initial CDL and for $ 10 \le \ell( N ) \le 70 $, we have $0.1 \ge \mathrm{corr}(\rho,v) \ge -0.1$. The only exceptions are the three low Mach-number runs R11\_0.2.4, R11\_0.2.2, and R5\_0.2.4 with correlations of about -0.2, -0.2, and -0.4, respectively. The top panel of Fig. \[fig:mean\_tis\] shows the time evolution of corr$(\rho,v)$ for five selected runs that differ only in their upwind Mach-number, $5 \le M_{\mathrm{u}} \le 90$. In the middle and bottom panels of the same figure, $\rho_{\mathrm{m}}/\rho_{\mathrm{u}}$ and $M_{\mathrm{rms}}/M_{\mathrm{u}}$ are shown as a function of $\ell( N )$ for the same runs. Two things are apparent. 
First, the ratios take similar values for all five runs, indicating that indeed $\beta_{\mathrm{1}} \approx 0$ and $\beta_{\mathrm{2}} \approx 1$ for the exponents in Eqs. \[eq:exp\_rho\] and \[eq:exp\_mach\]. Second, the ratios are not constant with $\ell( N )$, indicating that the numerical solution is indeed only approximately self-similar. We come back to this point in Sect. \[sec:discussion\]. To determine optimum exponents $\beta_{\mathrm{i}}$, $i=1,2$, we rewrite Eqs. \[eq:exp\_rho\] and \[eq:exp\_mach\] as equations for $\eta_{\mathrm{1}}$ and $\eta_{\mathrm{2}}$ and minimize the variance $\sigma^{2}(\eta_{\mathrm{i}})$. Considering all data points within $10 \le \ell( N ) \le 70$ of all runs without a CDL at $t=0$, we find the smallest variances for $\beta_{\mathrm{1}} = 0 $ and for $\beta_{\mathrm{2}} = 1$. The corresponding means are $\mu(\eta_{\mathrm{1}}) \approx 28$ and $\mu(\eta_{\mathrm{2}}) \approx 0.21$. Although clearly identifiable, the minima of $\sigma$ are relatively shallow. Changing $\beta_{\mathrm{1}}$ or $\beta_{\mathrm{2}}$ by $\pm 0.1$, or excluding the very low Mach-number case R5\_0.2.4 (for which $M_{\mathrm{rms}} \approx 0.9$) changes $\sigma$ by only about 5%. By repeating the analysis but allowing for a linear dependence of $\eta_{\mathrm{i}}$ on $\ell( N )$, we obtain the same optimum values for $\beta_{\mathrm{1}}$ and $\beta_{\mathrm{2}}$ but with considerably smaller variance. As $\ell( N )$ increases from 10 to 70, $\eta_{\mathrm{1}}$ rises by about 25% (from 25 to 31), while $\eta_{\mathrm{2}}$ decreases by about 15% (from 0.22 to 0.19). Part of our assumption a), namely the simple Mach-number dependencies of $\rho_{\mathrm{m}}$ and $M_{\mathrm{rms}}$, thus seems justified. With $\eta_{\mathrm{2}} = 0.21$, assumption c), $\eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2} >> 1$, is also fulfilled for most of our simulations. An exception is again run R5\_0.2.4, for which $\eta_{\mathrm{2}}^{2} M_{\mathrm{u}}^{2} \approx 1$. 
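The variance-minimization fit just described can be sketched as follows. The data below are synthetic stand-ins generated near $\beta_{\mathrm{1}} = 0$, not actual simulation output; minimizing the relative scatter $\sigma/\mu$ rather than $\sigma$ alone is our assumption here, chosen to keep the comparison across exponents scale-free.

```python
import statistics

# Sketch of the fit for beta_1 (cf. Eq. eq:exp_rho): rewrite
# eta_1 = (rho_m/rho_u) * M_u**(-beta_1) and pick the beta_1 that makes
# eta_1 most nearly constant across runs.  Synthetic data, NOT simulation output.
M_u   = [11.0, 22.0, 33.0, 43.0, 87.0]
rho_m = [27.0, 29.0, 28.0, 27.5, 28.5]     # rho_m / rho_u (synthetic)

def rel_scatter(beta_1):
    eta_1 = [r * M ** (-beta_1) for r, M in zip(rho_m, M_u)]
    return statistics.pstdev(eta_1) / statistics.mean(eta_1)

betas = [b / 10.0 for b in range(-10, 11)]  # scan beta_1 in [-1, 1]
best = min(betas, key=rel_scatter)
print(best)
```

For data constructed with a Mach-number-independent mean density, the scan indeed returns $\beta_{\mathrm{1}} = 0$, with a shallow minimum much as described above.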
In summary, the simulation results, $\rho_{\mathrm{m}} \approx 28 \rho_{\mathrm{u}}$ and $M_{\mathrm{rms}} \approx 0.21 M_{\mathrm{u}}$, essentially confirm the expected relations, Eqs. \[eq:exp\_rho\] and \[eq:exp\_mach\]. The relation $\eta_{\mathrm{1}}^{1/2} \eta_{\mathrm{2}}=1$, predicted by Eq. \[eq:eta12\], is fulfilled to within 10% at any given time. The mean density is (nearly) independent of $M_{\mathrm{u}}$. As expected, the solution is only approximately self-similar: $M_{\mathrm{rms}}$ decreases by about 15% as $\ell( N)$ increases from 10 to 70. ![Time evolution of corr$(\rho,v)$ (top), $\rho_{\mathrm{m}}/\rho_{\mathrm{u}}$ (middle), and $M_{\mathrm{rms}}/M_\mathrm{u}$ (bottom) for runs R5\_0.2.4 (dotted, dark blue), R11\_0.2.4 (dashed, purple), R22\_0.2.4 (solid, red), R33\_0.2.4 (dash-dotted, orange), R43\_0.2.4 (dash-three-dots, green), and R87\_0.2.4 (long dashes, pink). For these runs, $\ell( N ) = 60$ corresponds to $\ell_{\mathrm{cdl}} \approx \mathrm{Y}/2$.[]{data-label="fig:mean_tis"}](3898f4a.ps){width="9.0cm"} ![Time evolution of corr$(\rho,v)$ (top), $\rho_{\mathrm{m}}/\rho_{\mathrm{u}}$ (middle), and $M_{\mathrm{rms}}/M_\mathrm{u}$ (bottom) for runs R5\_0.2.4 (dotted, dark blue), R11\_0.2.4 (dashed, purple), R22\_0.2.4 (solid, red), R33\_0.2.4 (dash-dotted, orange), R43\_0.2.4 (dash-three-dots, green), and R87\_0.2.4 (long dashes, pink). For these runs, $\ell( N ) = 60$ corresponds to $\ell_{\mathrm{cdl}} \approx \mathrm{Y}/2$.[]{data-label="fig:mean_tis"}](3898f4b.ps){width="9.0cm"} ![Time evolution of corr$(\rho,v)$ (top), $\rho_{\mathrm{m}}/\rho_{\mathrm{u}}$ (middle), and $M_{\mathrm{rms}}/M_\mathrm{u}$ (bottom) for runs R5\_0.2.4 (dotted, dark blue), R11\_0.2.4 (dashed, purple), R22\_0.2.4 (solid, red), R33\_0.2.4 (dash-dotted, orange), R43\_0.2.4 (dash-three-dots, green), and R87\_0.2.4 (long dashes, pink). 
For these runs, $\ell( N ) = 60$ corresponds to $\ell_{\mathrm{cdl}} \approx \mathrm{Y}/2$.[]{data-label="fig:mean_tis"}](3898f4c.ps){width="9.0cm"} ### Confining shocks {#sec:confshocks} The turbulence within the CDL is driven by the upstream flows. The confining shocks of the CDL affect this driving in two ways. The less inclined the shocks are on average with respect to the direction of the upstream flows (smaller angle $\alpha_{\mathrm{eff}}$ in Eq. \[eq:a\_feff\]), the more kinetic energy survives shock passage and is available for driving the turbulence. The smaller the spatial scale on which the angle $\alpha$ varies, the smaller the scale on which the energy input changes. In the following, we analyze how these shock properties depend on $M_{\mathrm{u}}$ and on $\ell_{\mathrm{cdl}}$. For this purpose, we specify the following basic quantities. The discrete x-positions of the left and right shocks, $s_{\mathrm{l}}$ and $s_{\mathrm{r}}$, are defined for each discrete y-position $y_{j}$ as the two cell boundaries where the Mach-number drops for the first time from its upwind value $M_{u}$ to $0.8 M_{u}$. We determine the average extension of the CDL, $\ell_{\mathrm{cdl}}$, as $$\ell_{\mathrm{cdl}} = \frac{1}{J} \sum_{j=1}^{J} [ s_{\mathrm{r}}(y_{j}) - s_{\mathrm{l}}(y_{j}) ]. \label{eq:l_cdl_num}$$ The lengths of the left and right shocks, $\ell_{\mathrm{sh,l}}$ and $\ell_{\mathrm{sh,r}}$, are computed as $$\ell_\mathrm{sh,i} = \sum_{j=1}^{J} [ (s_{\mathrm{i}}(y_{j}) - s_{\mathrm{i}}(y_{j-1}))^{2} + (y_{j} - y_{j-1})^{2}]^{1/2},$$ where $J$ is the number of cells in y-direction, and $i=l,r$. We define the angle $\alpha_{\mathrm{l,r}}(y_{j})$ as the angle between the x-axis and the tangent to the shock (see Fig. \[fig:sketch2d\]). Its numerical computation is described in Appendix \[app:alpha\]. To obtain a number distribution, we sort the values $\alpha_{\mathrm{l,r}}(y_{j}) \in [0,\pi/2]$ into 60 bins. 
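A minimal sketch of these discrete diagnostics, Eq. \[eq:l\_cdl\_num\] and the shock-length sum, is given below; the shock positions are synthetic stand-ins for simulation data, in arbitrary units.

```python
import math

# Discrete diagnostics of the confining shocks, Eq. (eq:l_cdl_num) and the
# shock-length sum, applied to synthetic (illustrative) shock positions.
J  = 256
dy = 1.0 / J
s_l = [-5.0 + 0.3 * math.sin(2.0 * math.pi * 3 * j * dy) for j in range(J)]
s_r = [ 5.0 + 0.3 * math.cos(2.0 * math.pi * 3 * j * dy) for j in range(J)]

# average CDL extension, Eq. (eq:l_cdl_num)
l_cdl = sum(s_r[j] - s_l[j] for j in range(J)) / J

def shock_length(s):
    # discrete shock length; j starts at 1 so that s(y_{j-1}) stays in range
    return sum(math.hypot(s[j] - s[j - 1], dy) for j in range(1, J))

l_sh = 0.5 * (shock_length(s_l) + shock_length(s_r))
print(l_cdl, l_sh)   # a wiggled shock is longer than the domain extent Y = 1
```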
Finally, to obtain a measure for the scale on which the shocks are wiggled, we look at the auto-correlation functions $\Gamma_{\mathrm{l,r}}$, $$\Gamma_{\mathrm{i}}(y_{\mathrm{corr}}) = \frac{\langle [s_{\mathrm{i}}(y_{j}) - \bar{s}_{\mathrm{i}}]\cdot [s_{\mathrm{i}}(y_{j}+y_{\mathrm{corr}}) - \bar{s}_{\mathrm{i}}]\rangle} {\sigma^{2}_{\mathrm{s}}}, \label{eq:def_autocorr}$$ where $\sigma_{\mathrm{s}}^{2}$ is the variance of the shock position $s_{\mathrm{i}}$, and $\langle \cdot \rangle$ denotes the mean over all discrete positions $y_{j}$. For each time, we determine $y_{\mathrm{corr_{\mathrm{0}}}}$ such that $\Gamma_{\mathrm{i}} (y_{\mathrm{corr_{\mathrm{0}}}}) = 0.5$. Averaging $y_{\mathrm{corr_{\mathrm{0}}}}$ over both shocks gives a mean auto-correlation length $\ell_{\mathrm{corr}}$, $$\ell_{\mathrm{corr}} = \frac{1}{2} \left[ y_{\mathrm{corr_{\mathrm{0}}}}(s_{\mathrm{l}}) + y_{\mathrm{corr_{\mathrm{0}}}}(s_{\mathrm{r}}) \right]. \label{eq:lcorr}$$ A larger auto-correlation length $\ell_{\mathrm{corr}}$ then indicates that the shocks are wiggled on a larger spatial scale, but it does not give the scale of the wiggles in absolute units (see below). All four quantities, CDL extension, number distribution of angle $\alpha$, shock length, and correlation length, are shown in Fig. \[fig:shell\_shock\_tis\]. ![Quantities related to the confining shocks: average extension $\ell_{\mathrm{cdl}}$ of the CDL (first panel), total normalized shock length $l_{\mathrm{sh}}/ (\mathrm{Y} M_{\mathrm{u}}^{0.8})$ (second panel), number distribution (60 bins) of obliqueness angle $\alpha$ averaged over $10 \le \ell( N ) \le 70$ (third panel), auto-correlation length $\ell_{\mathrm{corr}} / \mathrm{Y_{\mathrm{0}}}$ (fourth panel), and scaled auto-correlation length $\ell_{\mathrm{corr}} / (\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6})$, (fifth panel). Individual curves denote the same runs as in Fig. 
\[fig:mean\_tis\].[]{data-label="fig:shell_shock_tis"}](3898f5a.ps){width="9.0cm"} ![Quantities related to the confining shocks: average extension $\ell_{\mathrm{cdl}}$ of the CDL (first panel), total normalized shock length $l_{\mathrm{sh}}/ (\mathrm{Y} M_{\mathrm{u}}^{0.8})$ (second panel), number distribution (60 bins) of obliqueness angle $\alpha$ averaged over $10 \le \ell( N ) \le 70$ (third panel), auto-correlation length $\ell_{\mathrm{corr}} / \mathrm{Y_{\mathrm{0}}}$ (fourth panel), and scaled auto-correlation length $\ell_{\mathrm{corr}} / (\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6})$, (fifth panel). Individual curves denote the same runs as in Fig. \[fig:mean\_tis\].[]{data-label="fig:shell_shock_tis"}](3898f5b.ps){width="9.0cm"} ![Quantities related to the confining shocks: average extension $\ell_{\mathrm{cdl}}$ of the CDL (first panel), total normalized shock length $l_{\mathrm{sh}}/ (\mathrm{Y} M_{\mathrm{u}}^{0.8})$ (second panel), number distribution (60 bins) of obliqueness angle $\alpha$ averaged over $10 \le \ell( N ) \le 70$ (third panel), auto-correlation length $\ell_{\mathrm{corr}} / \mathrm{Y_{\mathrm{0}}}$ (fourth panel), and scaled auto-correlation length $\ell_{\mathrm{corr}} / (\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6})$, (fifth panel). Individual curves denote the same runs as in Fig. \[fig:mean\_tis\].[]{data-label="fig:shell_shock_tis"}](3898f5c.ps){width="9.0cm"} ![Quantities related to the confining shocks: average extension $\ell_{\mathrm{cdl}}$ of the CDL (first panel), total normalized shock length $l_{\mathrm{sh}}/ (\mathrm{Y} M_{\mathrm{u}}^{0.8})$ (second panel), number distribution (60 bins) of obliqueness angle $\alpha$ averaged over $10 \le \ell( N ) \le 70$ (third panel), auto-correlation length $\ell_{\mathrm{corr}} / \mathrm{Y_{\mathrm{0}}}$ (fourth panel), and scaled auto-correlation length $\ell_{\mathrm{corr}} / (\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6})$, (fifth panel). 
Individual curves denote the same runs as in Fig. \[fig:mean\_tis\].[]{data-label="fig:shell_shock_tis"}](3898f5d.ps){width="9.0cm"} ![Quantities related to the confining shocks: average extension $\ell_{\mathrm{cdl}}$ of the CDL (first panel), total normalized shock length $l_{\mathrm{sh}}/ (\mathrm{Y} M_{\mathrm{u}}^{0.8})$ (second panel), number distribution (60 bins) of obliqueness angle $\alpha$ averaged over $10 \le \ell( N ) \le 70$ (third panel), auto-correlation length $\ell_{\mathrm{corr}} / \mathrm{Y_{\mathrm{0}}}$ (fourth panel), and scaled auto-correlation length $\ell_{\mathrm{corr}} / (\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6})$, (fifth panel). Individual curves denote the same runs as in Fig. \[fig:mean\_tis\].[]{data-label="fig:shell_shock_tis"}](3898f5e.ps){width="9.0cm"} ![Variation of $\Gamma_{\mathrm{l}}$, color coded, as a function of $y_{\mathrm{corr}}$ for run R43\_0.2.4 (top panel). To allow for better display the color scale is limited to a range $-0.5 \le \Gamma_{\mathrm{l}} \le +0.5$. Lower or higher values of $\Gamma_{\mathrm{l}}$ are uniformly colored in dark blue or red, respectively. For the same run, $\Gamma_{\mathrm{l}}$ is shown as a function of $y_{\mathrm{corr}}$ for three selected times (bottom panel). $\ell ( N ) = 30$ (solid), $\ell ( N ) = 50$ (dotted), $\ell ( N ) = 70$ (dashed).[]{data-label="fig:gamma"}](3898f6a.ps){width="9.0cm"} ![Variation of $\Gamma_{\mathrm{l}}$, color coded, as a function of $y_{\mathrm{corr}}$ for run R43\_0.2.4 (top panel). To allow for better display the color scale is limited to a range $-0.5 \le \Gamma_{\mathrm{l}} \le +0.5$. Lower or higher values of $\Gamma_{\mathrm{l}}$ are uniformly colored in dark blue or red, respectively. For the same run, $\Gamma_{\mathrm{l}}$ is shown as a function of $y_{\mathrm{corr}}$ for three selected times (bottom panel). 
$\ell ( N ) = 30$ (solid), $\ell ( N ) = 50$ (dotted), $\ell ( N ) = 70$ (dashed).[]{data-label="fig:gamma"}](3898f6b.ps){width="9.0cm"} The first panel of Fig. \[fig:shell\_shock\_tis\] shows the essentially linear growth of the CDL with $\ell( N )$. The growth rate, however, slowly decreases with increasing $\ell( N )$. The slope of a linear fit in the range $40 < \ell( N ) < 70$ is roughly 10% flatter than the slope obtained in the range $10 < \ell( N ) < 40$. This fits with the slight increase in $\rho_{\mathrm{m}}$, observable in the middle panel of Fig. \[fig:mean\_tis\]. The second panel of Fig. \[fig:shell\_shock\_tis\] shows that the average shock length $\ell_\mathrm{sh} = 0.5(\ell_\mathrm{sh,l} + \ell_\mathrm{sh,r})$ is fairly constant with respect to $\ell( N )$ but increases with $M_{\mathrm{u}}$. Assuming a dependence of the form $\ell_\mathrm{sh} = \eta_{\mathrm{sh}} \mathrm{Y} M_{\mathrm{u}}^{\beta_{\mathrm{sh}}}$, the variance $\sigma^{2} (\eta_{\mathrm{sh}})$ becomes minimal for $\beta_{\mathrm{sh}}=0.8$. As can be seen, the two runs R5\_0.2.4 and R87\_0.2.4 again behave somewhat differently. If we neglect these two runs, $\beta_{\mathrm{sh}}$ remains unchanged but $\sigma$ is reduced by about 40%. The third panel of Fig. \[fig:shell\_shock\_tis\] shows that larger upwind Mach-numbers lead to less inclined shocks with respect to the direction of the upstream flows (lower values of $\alpha$). Shown is the number distribution of $\alpha$, averaged over $10 \le \ell( N ) \le 70$. Individual runs show a slight shift towards higher values of $\alpha$ as $\ell( N )$ increases. This shift is, however, small compared to the effect of $M_{\mathrm{u}}$. The fourth panel of Fig. \[fig:shell\_shock\_tis\] shows the auto-correlation length $\ell_{\mathrm{corr}}$. It not only depends on $M_{\mathrm{u}}$ but is also proportional to $\ell_{\mathrm{cdl}}$. The best fit is found to be $\ell_{\mathrm{corr}} \approx 0.7 \ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6}$. 
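The auto-correlation diagnostic of Eqs. \[eq:def\_autocorr\] and \[eq:lcorr\] can be sketched as follows, applied to a synthetic, two-mode shock position; the periodic treatment of the shift and the mode amplitudes are assumptions for illustration, not taken from the simulations.

```python
import math

# Sketch of the auto-correlation diagnostic, Eqs. (eq:def_autocorr) and
# (eq:lcorr), for a synthetic two-mode shock position s(y_j).  The shift
# s(y_j + y_corr) is taken periodically; mode amplitudes are illustrative.
J  = 512
dy = 1.0 / J
s = [0.2  * math.sin(2.0 * math.pi *  4 * j * dy) +
     0.05 * math.sin(2.0 * math.pi * 11 * j * dy) for j in range(J)]

s_mean = sum(s) / J
var = sum((x - s_mean) ** 2 for x in s) / J

def gamma(k):
    """Auto-correlation at lag y_corr = k*dy (periodic shift)."""
    c = sum((s[j] - s_mean) * (s[(j + k) % J] - s_mean) for j in range(J)) / J
    return c / var

# y_corr0 for a single shock: smallest lag at which gamma drops to 0.5
k = 0
while gamma(k) > 0.5:
    k += 1
l_corr = k * dy
print(l_corr)
```

The resulting $\ell_{\mathrm{corr}}$ is set by the dominant (longest-amplitude) mode, illustrating why it serves as a measure of the spatial scale of the wiggling rather than of its amplitude.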
The fifth panel of Fig. \[fig:shell\_shock\_tis\] shows $\ell_{\mathrm{corr}}$ scaled with this best fit. From these scaling properties of $\ell_{\mathrm{corr}}$, we take that higher values of $M_{\mathrm{u}}$ lead to smaller-scale wiggling of the shocks with respect to $\ell_{\mathrm{cdl}}$. The absolute value of $\ell_{\mathrm{corr}}$ clearly depends on the choice of the threshold value in our definition, $\Gamma ( y_{\mathrm{corr}} ) = 0.5$. Figure \[fig:gamma\] illustrates the variation of $\Gamma_{\mathrm{l}}$ as a function of $y_{\mathrm{corr}}$, using run R43\_0.2.4 as an example. The top panel of Fig. \[fig:gamma\] shows that the initially present sinusoidal wiggling of the confining shocks is not lost until about $\ell ( N ) = 15$, which is rather late compared to the other runs. Mode-like signatures again appear around $\ell ( N ) \gapprox 50$. Our data give, however, no clear answer as to how typical and persistent such signatures are. A basic problem is that their wavelength soon becomes comparable (within a factor of 2 or so) to the domain size in the y-direction, which may affect the signatures. From the bottom panel of Fig. \[fig:gamma\], on the other hand, it can be taken that $\Gamma_{\mathrm{l}}$ essentially decreases linearly from 1 to about 0.2. The other simulations show a similar behavior. Consequently, the above scaling properties of $\ell_{\mathrm{corr}}$ should also be obtained if smaller threshold values are used, down to about $\Gamma ( y_{\mathrm{corr}} )= 0.2$. Figs. \[fig:mean\_tis\] and \[fig:shell\_shock\_tis\] also allow some insight into why runs R5\_0.2.4 and R87\_0.2.4 sometimes fit less well. The third panel of Fig. \[fig:shell\_shock\_tis\] shows that our spatial resolution is barely sufficient for run R87\_0.2.4, the run with the largest upwind Mach-number we have considered. The number distribution here peaks at around $\alpha \approx 0.1$.
In terms of discrete positions this means that the shock position changes by about 15 cells in the x-direction as one moves from $y_{j}$ to $y_{j+1}$. Run R5\_0.2.4, on the other hand, may deviate just because of its low Mach-number. The turbulence within its CDL is subsonic, $M_{\mathrm{rms}} \approx 0.9$; and with $\eta_{\mathrm{2}}^{2}M_{\mathrm{u}}^{2} \approx 1.1$ and $\mathrm{corr}(\rho,v) \approx -0.4$ (Fig. \[fig:mean\_tis\], top panel), it violates two of the basic assumptions made when deriving the self-similar scaling laws in Sect. \[sec:anal-scaling\_2d\]. In summary, as $M_{\mathrm{u}}$ increases, the bounding shocks become less inclined with respect to the direction of the upstream flows (smaller $\alpha$), the fraction of upstream kinetic energy that survives the passage through the bounding shocks increases, and the bounding shocks themselves are wiggled on progressively smaller scales (smaller $\ell_{\mathrm{corr}} / \ell_{\mathrm{cdl}}$).

### Energy balance {#sec:driving_efficiency}

![Driving efficiency (top panel) and best fit $\eta_{\mathrm{3}} = (1 - f_{\mathrm{eff}}) M_{\mathrm{u}}^{0.7}$ (bottom panel). For details see text. []{data-label="fig:driving-efficiency"}](3898f7a.ps){width="9.0cm"} ![](3898f7b.ps){width="9.0cm"} Energy input into the CDL occurs only at its confining interfaces. Energy dissipation, on the other hand, occurs throughout the CDL volume. Nevertheless, according to the analysis in Sect. \[sec:anal-scaling\_2d\] both $\dot{\cal E}_{\mathrm{drv}}$ and $\dot{\cal E}_{\mathrm{diss}}$ should be independent of the CDL extension if dissipation is only due to shocks and if $\ell_{\mathrm{cdl}}$ is small compared to $\mathrm{Y}$.
The average distance between shocks must then increase and/or the average strength of the shocks must decrease as the CDL grows. ![ Numerically obtained (top panel) and theoretically expected (middle panel) energy dissipation in units of the upstream kinetic energy flux density ${\cal F}_{\mathrm{e_{\mathrm{kin}},u}} = \rho_{\mathrm{u}} v_{\mathrm{u}}^{3}$. The constants in Eq. \[eq:exp\_ediss\] were set to the best fit values, $\eta_{\mathrm{3}} = 3.3$, $\beta_{\mathrm{3}}= -0.7$, and $\eta_{\mathrm{2}} = 0.21$. We used $\eta_{\mathrm{3}} = 2.75$ for run R5\_0.2.4 (for details see text). The bottom panel shows the ratio of the two quantities. Individual curves denote the same runs as in Fig. \[fig:mean\_tis\]. For better display, $\dot{\cal E}_{\mathrm{diss}}$ was smoothed using a running mean with time window $\Delta \ell ( N ) = \pm 1$.[]{data-label="fig:energy-dissipation"}](3898f8a.ps){width="9.0cm"} ![](3898f8b.ps){width="9.0cm"} ![](3898f8c.ps){width="9.0cm"} ![Effect of smoothing $\dot{\cal E}_{\mathrm{diss}}$ with a running mean and window $\Delta \ell ( N ) = \pm 1$, illustrated by run R33\_0.2.4. Shown is $\dot{\cal E}_{\mathrm{diss}}$ before (dashed, black) and after (solid, red) smoothing, in units of erg cm$^{-3}$s$^{-1}$.[]{data-label="fig:smoothing"}](3898f9.ps){width="9.0cm"} To determine $\dot{\cal E}_{\mathrm{drv}}$ we must compute the driving efficiency $f_{\mathrm{eff}} = \dot{\cal E}_{\mathrm{drv}} / {\cal F}_{\mathrm{e_{\mathrm{kin}},u}}$. The corresponding integral in Eq. \[eq:a\_feff\] is evaluated numerically, and the resulting driving efficiency is shown in the top panel of Fig. \[fig:driving-efficiency\]. As can be seen, larger Mach-numbers lead to more efficient driving, and a smaller part of the upstream kinetic energy is thermalized already at the confining shocks. The driving efficiency $f_{\mathrm{eff}}$ increases by about a factor of four between runs R5\_0.2.4 and R87\_0.2.4. Also noteworthy is that the absolute value of the driving power $\dot{\cal E}_{\mathrm{drv}}$ differs by more than 4 orders of magnitude between runs R5\_0.2.4 and R87\_0.2.4. The best fit for the assumed Mach-number dependence (minimization of $\sigma(\eta_{\mathrm{3}})$ in Eq. \[eq:b\_feff\]) yields $\beta_{\mathrm{3}} = -0.7$.
The corresponding values of $\eta_{\mathrm{3}} = (1 - f_{\mathrm{eff}})M_{\mathrm{u}}^{0.7}$ are shown in the bottom panel of Fig. \[fig:driving-efficiency\]. From the figure we take that the second part of our assumption a), the simple Mach-number dependence of $f_{\mathrm{eff}}$, seems justified. The figure also shows that $f_{\mathrm{eff}}$, and thus the driving power $\dot{\cal E}_{\mathrm{drv}}$, is not strictly independent of $\ell_{\mathrm{cdl}}$ but decreases with increasing $\ell( N )$. Repeating the best fit analysis but allowing for a linear dependence of $\eta_{\mathrm{3}}$ on $\ell( N )$ again leads to $\beta_{\mathrm{3}} = -0.7$, while $\eta_{\mathrm{3}}$ changes from 3.1 to 3.6 as $\ell( N )$ goes from 10 to 70. The average value of $\eta_{\mathrm{3}}$ is 3.3. Omission of the extreme runs R5\_0.2.4 and R87\_0.2.4 does not change the result. We determine the dissipated energy as $\dot{\cal E}_{\mathrm{diss}}= \dot{\cal E}_{\mathrm{drv}} - \dot{\cal E}_{\mathrm{kin}} $ (Sect. \[sec:a\_drive\_eff\]), where $\dot{\cal E}_{\mathrm{kin}}$ is the change per unit time of the kinetic energy within an average column of the CDL, which we obtain directly from our simulation data. Figure \[fig:energy-dissipation\] shows the numerically obtained value $\dot{\cal E}_{\mathrm{diss}}$ (top panel) and the theoretically expected value (Eq. \[eq:exp\_ediss\]) $\dot{\cal E}_{\mathrm{diss}}^{\mathrm{th}}$ (middle panel), both in units of ${\cal F}_{\mathrm{e_{\mathrm{kin}},u}} = \rho_{\mathrm{u}} v_{\mathrm{u}}^{3}$, as well as the ratio of the two (bottom panel). For better display, the theoretical value, which does not depend on $\ell ( N )$, is shown as a (constant) function of $\ell ( N )$. For the constants in Eq. \[eq:exp\_ediss\] we used the numerically obtained average values, $\eta_{\mathrm{3}} = 3.3$, $\beta_{\mathrm{3}}= -0.7$, and $\eta_{\mathrm{2}} = 0.21$.
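With the fitted constants quoted above ($\eta_{\mathrm{3}} = 3.3$, $\beta_{\mathrm{3}} = -0.7$), the driving efficiency and the driving power can be evaluated directly. A minimal sketch (the upstream density and sound speed are placeholder code-unit values, not those of any particular run):

```python
def f_eff(M_u, eta3=3.3, beta3=-0.7):
    """Driving efficiency from the best fit, f_eff = 1 - eta3 * M_u**beta3."""
    return 1.0 - eta3 * M_u**beta3

def e_dot_drv(M_u, rho_u=1.0, c_u=1.0, eta3=3.3, beta3=-0.7):
    """Driving power, f_eff * rho_u * v_u**3, with v_u = M_u * c_u."""
    v_u = M_u * c_u
    return f_eff(M_u, eta3, beta3) * rho_u * v_u**3

# Larger upwind Mach numbers drive the turbulence more efficiently, and the
# driving power itself grows roughly as M_u**3 on top of that.
for M in (11.0, 22.0, 43.0, 87.0):
    print(f"M_u = {M:5.1f}  f_eff = {f_eff(M):.3f}  E_drv = {e_dot_drv(M):.3e}")
```

Note that with $\eta_{\mathrm{3}} = 3.3$ the fitted form gives $f_{\mathrm{eff}} \le 0$ near $M_{\mathrm{u}} \approx 5$, consistent with the low-Mach run R5\_0.2.4 requiring the smaller value $\eta_{\mathrm{3}} = 2.75$.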
We used $\eta_{\mathrm{3}} = 2.75$ only for R5\_0.2.4, in accordance with the bottom panel of Fig. \[fig:driving-efficiency\]. The numerically obtained value was smoothed for better display using a running mean with window size $\Delta \ell( N ) = \pm 1$. The effect of the smoothing is illustrated in Fig. \[fig:smoothing\], using run R11\_0.2.4 as an example. Looking at the data of $\dot{\cal E}_{\mathrm{diss}}$ and $\dot{\cal E}_{\mathrm{drv}}$, three points may be stressed. First, $\dot{\cal E}_{\mathrm{diss}}$ (Fig. \[fig:energy-dissipation\], top panel) mirrors $\dot{\cal E}_{\mathrm{drv}} = {\cal F}_{\mathrm{e_{\mathrm{kin}},u}} f_{\mathrm{eff}}$ (Fig. \[fig:driving-efficiency\], top panel), and the values usually differ by less than 10%. This is not surprising. It implies, however, that for larger upstream Mach-numbers, a larger fraction of the upstream kinetic energy is thermalized only within the volume of the CDL and not already at its confining shocks. For $M_{\mathrm{u}} \gapprox 20$, the energy dissipated within the CDL exceeds 50% of the upstream kinetic energy (Fig. \[fig:energy-dissipation\], top panel). Second, the bottom panel of Fig. \[fig:energy-dissipation\] shows that $\dot{\cal E}_{\mathrm{diss}}^{\mathrm{th}}$ and $\dot{\cal E}_{\mathrm{diss}}$ agree to within 10% most of the time. Given the wide range covered (5 orders of magnitude in $\dot{\cal E}_{\mathrm{diss}}$, a factor of 20 in $M_{\mathrm{u}}$, and an increase by a factor of 7 in $\ell ( N )$), we conclude that the self-similar solution gives a good estimate. Third, from the same figure it can be seen that $\dot{\cal E}_{\mathrm{diss}}$ generally decreases, except for run R5\_0.2.4. Excluding R5\_0.2.4, a linear fit to $\dot{\cal E}_{\mathrm{diss}} / \dot{\cal E}_{\mathrm{diss}}^{\mathrm{th}}$ yields a decrease of 10% as $\ell ( N )$ increases from 10 to 70.
A similar fit to $\dot{\cal E}_{\mathrm{drv}} / \dot{\cal E}_{\mathrm{drv}}^{\mathrm{th}}$ with $\dot{\cal E}_{\mathrm{drv}}^{\mathrm{th}} = \rho_{\mathrm{u}} v_{\mathrm{u}}^{3} ( 1 - 3.3 M_{\mathrm{u}}^{-0.7})$ yields an even slightly larger decrease of 13%. The net dissipation, $\dot{\cal E}_{\mathrm{diss}} / \dot{\cal E}_{\mathrm{drv}}$, in fact increases by 3%. Thus, as the CDL size increases, the absolute dissipation within an average column decreases while the net dissipation increases. In summary, the predicted scaling laws, Eqs. \[eq:exp\_edrv\] to \[eq:exp\_ediss\], are – within the range of applicability – essentially confirmed by the simulations. The fraction of upstream kinetic energy dissipated only within the CDL, and not at the confining shocks, thus increases with $M_{\mathrm{u}}$. Best-fit analysis for the numerical constants yields $f_{\mathrm{eff}} = 1 - 3.3 \; M_{\mathrm{u}}^{-0.7}$. Both $\dot{\cal E}_{\mathrm{drv}}$ and $\dot{\cal E}_{\mathrm{diss}}$ decrease slightly with increasing $\ell_{\mathrm{cdl}}$. The net dissipation rate $\dot{\cal E}_{\mathrm{diss}} / \dot{\cal E}_{\mathrm{drv}}$ increases, but only slightly (3% increase as $\ell ( N )$ goes from 10 to 70). ![Characteristic length $\ell_{\mathrm{e_{\mathrm{kin}}}}$ of the turbulence (top), in units of $\ell_{\mathrm{corr}}$ (middle), and scaled with best-fit $\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{0.6}$ (bottom) as functions of $\ell( N )$. Individual curves denote the same runs as in Fig. \[fig:mean\_tis\]. For better display, $\ell_{\mathrm{ekin}}$ was smoothed by a running mean with window $\Delta \ell ( N ) = \pm 1$.[]{data-label="fig:turbulence-length"}](3898f10a.ps){width="9.0cm"} ![](3898f10b.ps){width="9.0cm"} ![](3898f10c.ps){width="9.0cm"} ![Plots of $\mathrm{div}(\vec{v})$ for two runs that are identical except for their upstream Mach-number. Larger upstream Mach-numbers lead, on average, to finer structure within the CDL and smaller scale wiggling of the confining shocks. Shown are runs R33\_0.2.4 (top) and R11\_0.2.4 (bottom), both at a time when $\ell_{\mathrm{cdl}} \approx 2 \, \mathrm{Y}_{\mathrm{0}} = \mathrm{Y}/2$. Blue (dark lines) indicates convergence, red (dark patches) divergence.[]{data-label="fig:div-ti1.5-tihf"}](3898f11a.ps){width="8.0cm"} ![](3898f11b.ps){width="8.0cm"}

### Length scales of the turbulence {#sec:driving-wavelength}

In Sect.
\[sec:confshocks\] we looked at the scaling properties of the confining shocks and pointed out that shorter auto-correlation lengths $\ell_{\mathrm{corr}}$ imply smaller-scale wiggling, and thus smaller-scale changes of the kinetic energy entering the CDL. In the following, we show that the interface-based quantity $\ell_{\mathrm{corr}}$ is proportional to the length scale derived from the volume properties of the turbulence. We take this as evidence of the tight coupling between volume and interface properties, i.e. between the turbulence and its driving. On dimensional grounds, we can define two length scales based on volume properties of the turbulence, $$\begin{aligned} \label{eq:lambda_ekin} \ell_{\mathrm{e_{\mathrm{kin}}}} & \equiv & \frac{ N ^{-1/2} {\cal E}_{\mathrm{kin}}^{3/2}}{\dot{\cal E}_{\mathrm{diss}}}, \\ \ell_{\mathrm{v_{\mathrm{rms}}}} & \equiv & \frac{ N v_{\mathrm{rms}}^{3}}{\dot{\cal E}_{\mathrm{diss}}}, \label{eq:lambda_vrms}\end{aligned}$$ where ${\cal E}_{\mathrm{kin}} = \frac{\ell_{\mathrm{cdl}}}{2V}\int_{V} \rho v^{2}$ is the average column-integrated kinetic energy density. Here $V$ is again the 2D volume of the CDL, introduced in Sect. \[sec:num\_settings\]. The two scales are equal up to a numerical constant if the density and velocity are uncorrelated, in which case we can replace the average over the product $\rho v^{2}$ by the product of the averages of $\rho$ and $v^{2}$, ${\cal E}_{\mathrm{kin}} = \ell_{\mathrm{cdl}} \rho_{\mathrm{m}} v_{\mathrm{rms}}^{2} = N v_{\mathrm{rms}}^{2}$. As this is the case in most of our simulations, we look at only one of the above quantities in the following, $\ell_{\mathrm{e_{\mathrm{kin}}}}$, shown in the top panel of Fig. \[fig:turbulence-length\]. For better display, as $\ell_{\mathrm{e_{\mathrm{kin}}}}$ inherits the large time variability of $\dot{\cal E}_{\mathrm{diss}}$, it is smoothed in the same way as $\dot{\cal E}_{\mathrm{diss}}$ in the bottom panel of Fig.
\[fig:energy-dissipation\]. Assuming a relation of the form $\ell_{\mathrm{e_{\mathrm{kin}}}} = \alpha_{\mathrm{e_{\mathrm{kin}}}} \ell_{\mathrm{corr}} $, we obtain optimal fits (minimum of $\sigma^{2}(\alpha_{\mathrm{e_{\mathrm{kin}}}})$) for $\alpha_{\mathrm{e_{\mathrm{kin}}}} \approx 1.3 $. The fits become only slightly better if a weak linear dependence of $\alpha_{\mathrm{e_{\mathrm{kin}}}}$ on $\ell( N )$ is allowed (13% change as $\ell( N )$ goes from 10 to 70). $\ell_{\mathrm{e_{\mathrm{kin}}}} / \ell_{\mathrm{corr}}$ is shown in the middle panel of Fig. \[fig:turbulence-length\]. Looking directly at the dependence of $\ell_{\mathrm{e_{\mathrm{kin}}}}$ on $\ell_{\mathrm{cdl}}$ and $M_{\mathrm{u}}$, we find $\ell_{\mathrm{e_{\mathrm{kin}}}} \propto \ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6}$. This is the same dependence we found for $\ell_{\mathrm{corr}}$ in Sect. \[sec:confshocks\]. $\ell_{\mathrm{e_{\mathrm{kin}}}}$ scaled with this best fit is shown in the bottom panel of Fig. \[fig:turbulence-length\]. With increasing upstream Mach-number the characteristic length scale $\ell_{\mathrm{e_{\mathrm{kin}}}}$ thus decreases with respect to the CDL extension. This is consistent with our observation that for the same $\ell_{\mathrm{cdl}}$ the interior of the CDL shows finer structuring (patches, filaments) for higher values of $M_{\mathrm{u}}$. Figure \[fig:div-ti1.5-tihf\] illustrates this observation using runs R11\_0.2.4 and R33\_0.2.4 as examples. Shown in the figure is $\mathrm{div}(\vec{v})$, as the flow patterns, especially shocks, are more clearly visible in this quantity than in the density. In summary, our simulations show that the inherent length scale of the turbulence is proportional to the auto-correlation length of the confining shocks, independent of $M_{\mathrm{u}}$ and $\ell_{\mathrm{cdl}}$.
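The earlier claim that the two volume-based length scales of Eqs. \[eq:lambda\_ekin\] and \[eq:lambda\_vrms\] agree up to a numerical constant for uncorrelated density and velocity can be checked numerically. A minimal sketch with synthetic, mutually independent fields (the distributions and all code-unit values are assumptions for illustration; with the factor $1/2$ in ${\cal E}_{\mathrm{kin}}$ the constant works out to $2^{-3/2}$):

```python
import numpy as np

# Synthetic, mutually uncorrelated density and velocity samples
# (assumed distributions, arbitrary code units -- for illustration only).
rng = np.random.default_rng(0)
n = 1_000_000
rho = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # density samples
v = rng.normal(loc=0.0, scale=1.0, size=n)         # velocity samples

l_cdl = 1.0    # CDL extension (code units)
E_diss = 1.0   # dissipation rate (code units); drops out of the ratio

# Column-averaged quantities, following the definitions in the text.
E_kin = l_cdl * np.mean(rho * v**2) / 2.0  # average kinetic energy density
N = l_cdl * np.mean(rho)                   # column density
v_rms = np.sqrt(np.mean(v**2))

l_ekin = E_kin**1.5 / (np.sqrt(N) * E_diss)
l_vrms = N * v_rms**3 / E_diss

# For uncorrelated rho and v, <rho v^2> = <rho><v^2>, so the two length
# scales agree up to the constant 2**-1.5 coming from the factor 1/2.
print(l_ekin / l_vrms)  # close to 2**-1.5 ~ 0.354
```

For correlated fields, as in run R5\_0.2.4 with $\mathrm{corr}(\rho,v) \approx -0.4$, the ratio would drift away from this constant.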
With increasing $M_{\mathrm{u}}$, both length scales decrease relative to the CDL extension, $\ell_{\mathrm{e_{\mathrm{kin}}}} / \ell_{\mathrm{cdl}} \propto M_{\mathrm{u}}^{-0.6}$. The appearance of the CDL, the size of its patches and filaments, behaves similarly.

Settings with CDL at $t=0$ {#sec:symmetric\_withcdl}
--------------------------

![Comparing runs with and without an initial CDL. Shown are $\rho_{\mathrm{m}} / \rho_{\mathrm{u}}$ (first), $M_{\mathrm{rms}}/M_\mathrm{u}$ (second), the scaled driving efficiency $(1-f_{\mathrm{eff}}) M_{\mathrm{u}}^{0.7}$ (third), and the scaled characteristic length of the turbulence $\ell_{\mathrm{e_{\mathrm{kin}}}}/\ell_{\mathrm{cdl}} \cdot M_{\mathrm{u}}^{0.6}$ for all symmetric runs. Line styles and colors denote initial conditions, 0 (solid line, blue), 1 (dashed line, red), and 2 (dash-dotted line, orange).[]{data-label="fig:diff_ini"}](3898f12a.ps){width="9.0cm"} ![](3898f12b.ps){width="9.0cm"} ![](3898f12c.ps){width="9.0cm"} ![](3898f12d.ps){width="9.0cm"} ![Time evolution of angle distribution for run R22\_2.2.2. Shown is the average angle distribution for $ 10 < \ell( N ) < 70$ (dashed, blue), $ 70 < \ell( N ) < 130$ (dash-dotted, green), $130 < \ell( N ) < 190$ (dash-three-dots, orange), $190 < \ell( N ) < 250$ (long dashes, purple), $250 < \ell( N ) < 310$ (solid, red). Also shown are the distributions for run R5\_0.2.4 (black dots, right line) and for run R11\_0.2.4 (black dots, left line), both averaged over $10 < \ell( N ) < 70$. []{data-label="fig:ddd_time_evol_ang_dist"}](3898f13.ps){width="9.0cm"} We performed additional runs to study the influence of an initially present CDL. Figure \[fig:diff\_ini\] illustrates the results for some selected quantities. Shown are all the runs we performed with initial condition I=0 (no CDL at $t=0$), I=1 (moderate CDL at $t=0$), and I=2 (massive CDL at $t=0$). Comparison of the I=1 and I=0 curves in Fig. \[fig:diff\_ini\] shows that an initially present CDL of moderate column density ($ N =14 \, N_{\mathrm{0}}$) soon develops characteristics similar to those found in simulations without initial CDL. A quasi-steady state is reached for $\ell( N ) \gapprox 40$.
The I=1 and I=0 curves then agree to within about a factor of two for volume quantities like $\rho_{\mathrm{m}}$ and $M_{\mathrm{rms}}$ (first two panels in Fig. \[fig:diff\_ini\]). Agreement seems slightly better for interface-related quantities. For $(1-f_{\mathrm{eff}})\,M_{\mathrm{u}}^{0.7}$, shown in the third panel of Fig. \[fig:diff\_ini\], the I=1 and I=0 curves lie more or less on top of each other. The same is true for $\ell_{\mathrm{e_{\mathrm{kin}}}}/ \ell_{\mathrm{cdl}} \,M_{\mathrm{u}}^{0.6}$, shown in the bottom panel of Fig. \[fig:diff\_ini\]. The situation is slightly different for runs with an initially rather massive CDL (I=2, with initially $ N =56\, N_{\mathrm{0}}$). In these simulations, too, the CDL becomes more and more turbulent. For all the quantities shown in Fig. \[fig:diff\_ini\], the I=2 curves approach the I=1 and I=0 curves. However, it takes these runs much longer to saturate. Only for $\ell( N ) > 240$ do the curves finally seem to saturate, at values similar to those of the I=0 and I=1 curves. That saturation indeed occurs around that time is also supported by Fig. \[fig:ddd\_time\_evol\_ang\_dist\]. As can be seen, the average angle distribution of the confining shocks for run R22\_2.2.2 first shifts to higher and higher values as $\ell( N )$ increases. It then stagnates for the last two averaging periods, $ 190 < \ell( N ) < 250$ and $250 < \ell( N ) < 310$. In summary, we conclude that our symmetric simulations all end up in a similar quasi-steady final state. An initially present CDL only delays the development. The incoming flows manage to generate (and sustain) a similar level of turbulence also within an initially massive CDL. ![Average $f_{\mathrm{eff}}$ as a function of $M_{\mathrm{rms}}$ for all our symmetric simulations (triangles). In addition, we included data from our asymmetric runs (asterisks), for which $1.6 M_{\mathrm{r}} \le M_{\mathrm{l}} \le 64 M_{\mathrm{r}}$ and which initially have no CDL.
Averages were taken over $10 \le \ell( N ) \le 70$ for simulations without initial CDL (blue triangles and green asterisks), over $40 \le \ell( N ) \le 70$ for runs with a moderate initial CDL (red triangles), and over $70 \le \ell( N ) \le 140$ for runs with a massive initial CDL (orange triangles). Lines show $f_{\mathrm{eff}} = 1 - M_{\mathrm{rms}}^{\xi}$ with $\xi=-0.6$ (dashed) and $\xi=-0.6 \pm 0.1$ (dotted).[]{data-label="fig:feff_vs_mrms"}](3898f14.ps){width="9.0cm"}

Asymmetric cases {#sec:results_asym}
----------------

We also computed a few asymmetric cases, where the two upwind Mach-numbers are different, $M_{\mathrm{l}} \ne M_{\mathrm{r}}$. For the same reason as given in Sect. \[sec:anal-scaling\], we expect the solution to depend only on $M_{\mathrm{l}}$ and $M_{\mathrm{r}}$. These dependencies are more complicated than those assumed in Sect. \[sec:anal-scaling\] as we now have two different upwind Mach-numbers. The simple dependencies of Sect. \[sec:anal-scaling\] should, however, be recovered in the limit $M_{\mathrm{l}} \rightarrow M_{\mathrm{r}}$. The basic physical reason for the more complicated dependencies on the upwind Mach-numbers lies in the strong back coupling between the turbulence within the CDL and the driving of this turbulence by the upwind flows. Our asymmetric simulations demonstrate clearly (much more clearly than the symmetric simulations) that the turbulence crucially affects the driving: although $M_{\mathrm{l}}$ and $M_{\mathrm{r}}$ are strongly different, the corresponding driving efficiencies are about equal, $f_{\mathrm{eff,l}} \approx f_{\mathrm{eff,r}}$. Thus the efficiency does not depend primarily on the upwind flow. In fact, Fig.
\[fig:feff\_vs\_mrms\] shows that for both symmetric and asymmetric runs $f_{\mathrm{eff}}$ (averaged now over both shocks) can be described well by $$f_{\mathrm{eff}} = 1 - M_{\mathrm{rms}}^{-0.6}.$$ The angle distribution of the two shocks behaves accordingly in that it is similar for both shocks and determined by $M_{\mathrm{rms}}$ rather than by either $M_{\mathrm{l}}$ or $M_{\mathrm{r}}$. A more detailed analysis of the asymmetric case, including an approximate analytical solution, will be presented in a subsequent paper. ![Comparing runs that differ only in the y-extent of the domain ($\mathrm{Y} = 2 \mathrm{Y}_{\mathrm{0}}$ and $\mathrm{Y} = 4 \mathrm{Y}_{\mathrm{0}}$). Shown are $ M_{\mathrm{rms,2Y}} / M_{\mathrm{rms,Y}}$ ([**top**]{}), $ f_{\mathrm{eff,2Y}} / f_{\mathrm{eff,Y}}$ ([**middle**]{}), and $ \ell_{\mathrm{corr,2Y}} / \ell_{\mathrm{corr,Y}}$ ([**bottom**]{}). Individual curves denote runs R11\_0.2.\* (dashed, purple), R22\_0.2.\* (solid, red), R33\_0.2.\* (dash-dotted, orange), R43\_0.2.\* (dash-three-dots, green).[]{data-label="fig:m_feff_lcorr_tis_y_2y"}](3898f15a.ps){width="9.0cm"} ![](3898f15b.ps){width="9.0cm"} ![](3898f15c.ps){width="9.0cm"} ![Comparison of runs R22\_0.2.2 and R22\_0.2.6, illustrating the effect of a three-times larger y-extent of the computational domain on long time scales. Shown are $M_{\mathrm{rms}}/M_{\mathrm{u}}$ ([**top**]{}) for R22\_0.2.2 (solid, light blue) and R22\_0.2.6 (dashed, dark red) and the ratio $ M_{\mathrm{rms,3Y}} / M_{\mathrm{rms,Y}}$ ([**bottom**]{}).[]{data-label="fig:m_tis_y_3y"}](3898f16a.ps){width="9.0cm"} ![](3898f16b.ps){width="9.0cm"} ![Scaled auto-correlation lengths of all symmetric simulations on domains with a y-extent less than or equal to $2 \mathrm{Y}_{\mathrm{0}}$ ([**top**]{}) and a y-extent greater than or equal to $4 \mathrm{Y}_{\mathrm{0}}$ ([**bottom**]{}).[]{data-label="fig:lcorr_all_y_2y"}](3898f17a.ps){width="9.0cm"} ![](3898f17b.ps){width="9.0cm"}

Grid and domain studies {#sec:griddomain}
-----------------------

The numerical results presented in Sect.
\[sec:symmetric\_nocdl\] were all based on simulations with a domain $\mathrm{Y}=4 \mathrm{Y}_{\mathrm{0}}$ and a discretization of $1.5 \cdot 10^{-3} \mathrm{Y}_{\mathrm{0}} $ (R=2) or 2560 cells in the y-direction. Here we want to check whether these choices have any systematic effect on the numerical results of Sect. \[sec:symmetric\_nocdl\].

### Different y-extent {#sec:diff_y_ext}

To check whether the size of the computational domain has any systematic effect on the results of Sect. \[sec:symmetric\_nocdl\], we performed some of the simulations again, but this time on smaller domains of $\mathrm{Y}=2 \mathrm{Y}_{\mathrm{0}}$ and $\mathrm{Y}=\mathrm{Y}_{\mathrm{0}}$. We also performed one simulation on a larger domain $\mathrm{Y}=6 \mathrm{Y}_{\mathrm{0}}$. Figure \[fig:m\_feff\_lcorr\_tis\_y\_2y\] illustrates our findings for simulations on domains $\mathrm{Y}=2 \mathrm{Y}_{\mathrm{0}}$ and $\mathrm{Y}=4 \mathrm{Y}_{\mathrm{0}}$. $M_{\mathrm{rms}}$ shows no systematic effect and is, as such, representative of other volume-related quantities (Fig. \[fig:m\_feff\_lcorr\_tis\_y\_2y\], top panel). As a typical representative for interface-related quantities, $f_{\mathrm{eff}}$ also shows no clear overall effect of the domain size (Fig. \[fig:m\_feff\_lcorr\_tis\_y\_2y\], middle panel). The quantity for which we find the clearest effect is the auto-correlation length $\ell_{\mathrm{corr}}$ (Fig. \[fig:m\_feff\_lcorr\_tis\_y\_2y\], bottom panel). However, even for $\ell_{\mathrm{corr}}$ the effect sets in only for two of the four runs and only for $\ell( N ) \gapprox 30$, i.e. once the CDL extension reaches about half the size of the smaller domain. For the numerical results in Sect. \[sec:symmetric\_nocdl\], $\ell_{\mathrm{cdl}} \approx \mathrm{Y}/2$ corresponds to $\ell( N )= 60$.
We conclude that the y-extent of the computational domain has no apparent systematic effect on these results up to $\ell( N ) \lapprox 30$ and probably even up to $\ell( N ) \lapprox 60$. A systematic effect of the computational domain on the numerical solution does become apparent if the simulations are carried on much longer. One pair of runs, R22\_0.2.2 and R22\_0.2.6, was carried on much longer, until $\ell ( N ) \approx 200$. For this pair of runs, Fig. \[fig:m\_tis\_y\_3y\] shows the evolution of $M_{\mathrm{rms}}$ for each run, as well as their ratio, $M_{\mathrm{rms,3Y}}/M_{\mathrm{rms,Y}}$. The run on the smaller domain apparently shows a faster decay in $M_{\mathrm{rms}}$ after $\ell ( N ) \approx 100 $. From Fig. \[fig:lcorr\_all\_y\_2y\] we take that the behavior of this one pair of runs is most likely the rule, and not the exception. The top panel of Fig. \[fig:lcorr\_all\_y\_2y\] shows $\ell_{\mathrm{corr}}$, scaled, for all the symmetric runs we have performed and whose domain has a y-extent $\le 2 \mathrm{Y}_{\mathrm{0}}$. The bottom panel of Fig. \[fig:lcorr\_all\_y\_2y\] gives the same quantity for all the runs with a domain extent $\ge 4\mathrm{Y}_{\mathrm{0}}$. Comparison of the two figures shows that runs on a domain $\le 2 \mathrm{Y}_{\mathrm{0}}$ saturate around $\ell_{\mathrm{corr}} M_{\mathrm{u}}^{0.6} \approx 1.6 \mathrm{Y}_{\mathrm{0}}$. For runs on a domain $\ge 4\mathrm{Y}_{\mathrm{0}}$, $\ell_{\mathrm{corr}}$ reaches much higher values. ![Comparison of $M_{\mathrm{rms}}$ for runs whose spatial resolution differs by a factor of 2 (subscript c = coarse, f = fine). Shown are (giving only the name of the finer run) runs R22\_0.2.4 (solid, red), R22\_0.4.4 (dash-three-dots, blue), R43\_0.2.4 (long dashes, purple), and R11\_0.2.4 (dash-dotted, orange).[]{data-label="fig:diff_disc"}](3898f18.ps){width="9.0cm"} ![Plots of $\mathrm{div}(\vec{v})$ for two runs that are identical to run R11\_0.2.4, shown in Fig.
\[fig:div-ti1.5-tihf\], except for their discretization. The runs shown here were computed at two times lower (top) and four times lower (bottom) resolution. Blue (dark lines) indicates convergence, red (dark patches) divergence. As can be seen, the number of convergent regions within an average CDL column decreases with decreasing resolution.[]{data-label="fig:div-tihf-c-tihf-cc"}](3898f19a.ps){width="8.0cm"} ![Plots of $\mathrm{div}(\vec{v})$ for two runs that are identical to run R11\_0.2.4, shown in Fig. \[fig:div-ti1.5-tihf\], except for their discretization. The runs shown here were computed at two times lower (top) and four times lower (bottom) resolution. Blue (dark lines) indicates convergence, red (dark patches) divergence. As can be seen, the number of convergent regions within an average CDL column decreases with decreasing resolution.[]{data-label="fig:div-tihf-c-tihf-cc"}](3898f19b.ps){width="8.0cm"} ### Different discretization {#sec:diff_resol} The results presented in Sect. \[sec:symmetric\_nocdl\] were all based on simulations with a discretization of $1.5 \cdot 10^{-3} \mathrm{Y}_{\mathrm{0}} $ (R=2) or 2560 cells in the y-direction. To check the effect of the discretization on our results, we repeated several simulations with coarser and/or finer discretization. These simulations indeed reveal a systematic effect of the discretization on the values of average quantities. Nevertheless, the general properties of the solution, its approximate self-similarity and Mach-number dependences, remain unaltered. Only the numerical constants $\eta_{\mathrm{i}}$ are affected. The changes are, however, small when compared to the differences between the 1D and 2D solution (for example, $\rho_{\mathrm{m}} = \eta_{\mathrm{1}} \rho_{\mathrm{u}}$ in 2D, while $\rho_{\mathrm{m}} = M_{\mathrm{u}}^{2} \rho_{\mathrm{u}}$ in 1D). We find that finer discretization generally leads to reduced turbulence.
Using finer meshes we obtained larger mean densities and lower values of $M_{\mathrm{rms}}$, as shown in Fig. \[fig:diff\_disc\]. The driving efficiency gets lower and the shocks become more inclined with respect to the direction of the upstream flows, and the angle distribution is shifted to lower values. The characteristic length scale $\ell_{\mathrm{e_{\mathrm{kin}}}}$ remains about constant if taken in units of $\ell_{\mathrm{cdl}}$. A possible explanation for the reduction of turbulence (smaller $M_{\mathrm{rms}}$) on finer grids could be the dominance of shocks for the energy dissipation in the CDL. On a coarser grid, the network of shocks within the CDL is less dense. The divergence plots shown in Fig. \[fig:div-tihf-c-tihf-cc\] illustrate this effect. A closer analysis of this idea is, however, beyond the scope of the present paper. We stress that, so far, no convergence has been reached in our discretization studies. Comparison of the three runs R22\_0.1.4, R22\_0.2.4, and R22\_0.4.4 in Fig. \[fig:diff\_disc\] shows that each reduction of the cell size by a factor of two leads to a reduction of about 20% in $M_{\mathrm{rms}}$. This indicates that the resolution of 2560 cells in the y-direction in our standard runs (R\*\_0.2.4) and of 5120 cells in the y-direction in the refined runs is still not sufficient. This should be kept in mind when interpreting these results or any results on shock-bound turbulent structures in 2D, let alone 3D. Also, no clear picture emerges regarding the deviation of $M_{\mathrm{rms}}$ from the constant value predicted by Eq. \[eq:ansatz\_v\]. A linear fit to $M_{\mathrm{rms}}$ for $10 \le \ell ( N ) \le 70$ yields -12% for run R22\_0.2.4 and -23% for the two times coarser run R22\_0.1.4. For runs R43\_0.\*.4, the grid dependence is the other way round: R43\_0.2.4 shows a decrease of 25%, while the twice coarser run R43\_0.1.4 decreases by only 15%.
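The lack of convergence can be made concrete with a toy extrapolation. The sketch below is our own illustration, not part of the analysis pipeline; it simply assumes that the observed roughly 20% drop in $M_{\mathrm{rms}}$ per halving of the cell size persists, in which case successive refinements decay geometrically and never settle on a converged value.

```python
def m_rms_at_refinement(m0, n_halvings, drop=0.2):
    # If each halving of the cell size reduces M_rms by a fixed
    # fraction 'drop', the refined values form a geometric sequence:
    # there is no converged value to extrapolate to.
    return m0 * (1.0 - drop) ** n_halvings

m0 = 4.6  # M_rms of run R22_0.2.4 (table value)
seq = [m_rms_at_refinement(m0, k) for k in range(5)]
# successive ratios stay at 0.8 -- no sign of convergence
ratios = [b / a for a, b in zip(seq, seq[1:])]
```

Under this (pessimistic) assumption, even several further refinement levels would not bring the sequence close to a limit, consistent with the caution expressed above.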
Discussion {#sec:discussion} ========== We want to address four points in this section. First, we sketch possible reasons for the slight difference between the numerical solution and the relations we derived in Sect. \[sec:anal-scaling\]. Second, we look once more at the driving of the turbulence and, in particular, the back-coupling between interface and volume properties. Third, we briefly consider our results in an astrophysical context, in particular with regard to molecular clouds. Finally, based on preliminary numerical results, we sketch the effect of some additional physics. The numerical solution versus the analytical solution ----------------------------------------------------- In Sect. \[sec:anal-scaling\_2d\] we suggested that a self-similar solution to our 2D model problem may still exist for the limiting case where the system approaches infinity. The relations derived in that section give a reasonable estimate for the numerical results of Sect. \[sec:num\_results\]. However, while $M_{\mathrm{rms}}$ is constant in Sect. \[sec:anal-scaling\_2d\], the numerical simulations show a gradual decrease in $M_{\mathrm{rms}}$ already for small CDLs, $\ell_{\mathrm{cdl}} \lapprox \mathrm{Y}/2$ (15% decrease of $M_{\mathrm{rms}}$ as $\ell (N)$ increases from 10 to 70, Sect. \[sec:symmetric\_nocdl\]). We have no firm explanation for this difference. We sketch three possible effects in the following, but stress that the available data do not allow us to clearly distinguish between them. A first, obvious reason could be the finite y-extent of the computational domain, $\mathrm{Y}$. It sets an upper limit on the total energy input into the CDL, thus on the amount of mass within the CDL that can be driven. Once the CDL has accumulated too much mass, the driving per unit mass weakens and the turbulence starts to weaken. The spatial growth of the CDL slows down while the average density increases. 
The following considerations on time scales may illustrate this point further. An upper limit to the time at which $\mathrm{Y}$ starts to affect the solution is given by the time $t_{\mathrm{y}}$ at which $\ell_{\mathrm{cdl}} = \mathrm{Y}$. At later times structures may still grow in the x-direction (up to $\ell_{\mathrm{cdl}}$ at most) but cannot grow any more in the y-direction (where $\mathrm{Y}$ sets an upper limit). For the runs in Sect. \[sec:symmetric\_nocdl\], $\ell_{\mathrm{cdl}} = \mathrm{Y}$ corresponds to $\ell ( N ) \approx 120$ or $t_{\mathrm{y}} = 12 \mathrm{Y}_{\mathrm{0}}/v_{\mathrm{rms}}$. A lower limit for the decay time scale of the turbulence may be obtained as follows. For the case of uniformly driven isothermal hydrodynamic turbulence in a 3D periodic box, @maclow:99 has shown that the typical decay time once the driving is turned off, $t_{\mathrm{0}}$, and the initial driving wave length, $\lambda_{\mathrm{drv}}$, are related by $t_{\mathrm{0}} \approx \lambda_{\mathrm{drv}} / v_{\mathrm{rms}}$. Assuming that this result also holds for our slab, that $\lambda_{\mathrm{drv}} = \mathrm{Y}$, and that driving is turned off completely, it follows that $t_{\mathrm{0}} \approx \mathrm{Y} / v_{\mathrm{rms}}$, or $t_{\mathrm{0}} \approx 4 \mathrm{Y_{\mathrm{0}}} / v_{\mathrm{rms}}$ for the runs in Sect. \[sec:symmetric\_nocdl\]. However, driving continues in our simulations and so the effective decay time scale of the turbulence is likely to be much longer than $t_{\mathrm{0}}$. Finally, for the runs in Sect. \[sec:symmetric\_nocdl\], a typical integration time of $\ell ( N ) = 60$ corresponds to about $\tau = 6 \mathrm{Y}_{\mathrm{0}} / v_{\mathrm{rms}}$, while a typical turbulent crossing time at $\ell ( N ) = 60$ is $\tau_{\mathrm{cross}} = \ell_{\mathrm{cdl}} / v_{\mathrm{rms}} \approx 2 \mathrm{Y}_{\mathrm{0}}/ v_{\mathrm{rms}}$.
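The time scales just estimated can be collected and compared in a few lines. This is only a restatement of the numbers quoted above (all in units of $\mathrm{Y}_{\mathrm{0}}/v_{\mathrm{rms}}$; the variable names are ours):

```python
# Time scales quoted above, in units of Y_0 / v_rms, for the runs of
# Sect. [sec:symmetric_nocdl] (domain Y = 4 Y_0):
t_y = 12.0        # upper limit: l_cdl = Y, reached at l(N) ~ 120
t_0 = 4.0         # lower limit on the turbulence decay time, ~ Y / v_rms
tau = 6.0         # typical integration time, l(N) = 60
tau_cross = 2.0   # turbulent crossing time of the CDL at l(N) = 60

# The integration ends well before the domain caps the CDL growth,
# and the crossing time is short compared to the decay-time limit:
domain_margin = t_y / tau          # factor of 2 in time
driving_margin = t_0 / tau_cross   # factor of 2 in time
```

Both margins are a factor of two, which is the quantitative content of the comparison drawn in the next paragraph.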
Comparing these different time scales makes it seem likely that at $\ell ( N ) =60$, turbulence in the center of the CDL is still essentially driven, not essentially decaying. Our simulation data do not allow us to either clearly confirm or reject the hypothesis that the finite y-extent of the computational domain is responsible for the slight decrease in $M_{\mathrm{rms}}$ that we observe at early times, $\ell ( N ) \lapprox 70$. If the finite domain size were responsible, $M_{\mathrm{rms}}$ should decay differently on different domains. Comparison of simulations on different domains up to $\ell ( N ) \approx 70$ (Sect. \[sec:diff\_y\_ext\]), however, gives no clear picture. The data are rather noisy, and simulations on domains $2 \mathrm{Y}_{\mathrm{0}}$ and $4 \mathrm{Y}_{\mathrm{0}}$ show no systematic differences as long as $\ell ( N ) \lapprox 30$ ($\ell_{\mathrm{cdl}} < \mathrm{Y}/2$ on the smaller domain). Only for much later times, $\ell ( N ) \gg 70$, well beyond the range for the results in Sect. \[sec:symmetric\_nocdl\], does $\mathrm{Y}$ have a clear effect and $M_{\mathrm{rms}}$ decreases faster on smaller domains (Fig. \[fig:m\_tis\_y\_3y\]). A second, more speculative, reason might be numerical dissipation, provided that its effect were to increase with $\ell_{\mathrm{cdl}}$. While we have no evidence that the latter is really the case, it may also be hasty to discard this possibility right away. @porter-woodward:94 found, by observing how simple 2D hydrodynamical flows (shear flows and sound waves of definite wave number, their section 3.3) damp with time, that the decay rate due to numerical dissipation alone is a non-linear function of the wave number. Their results are certainly not directly applicable to the present case. But in view of these results, and given the change in structure size with $\ell_{\mathrm{cdl}}$ as suggested by Fig.
\[fig:pheno\_dens\], it might be possible that the effect of numerical dissipation indeed changes with $\ell_{\mathrm{cdl}}$. Note that this would also imply that the MILES approach, outlined in Sect. \[sec:simulating\], would not be strictly valid for the problem we consider. The currently available data do not allow us to clearly reject or confirm the effect. A third reason, or rather an amplifying mechanism, could be back-coupling between $M_{\mathrm{rms}}$ and the driving efficiency. Once the turbulence within the CDL is slightly reduced (for whatever reason), the reduction is further amplified by the back-coupling between turbulence and driving, $f_{\mathrm{eff}} = (1 - M_{\mathrm{rms}}^{-0.6})$. The decrease in $M_{\mathrm{rms}}$ results in larger inclination of the shocks with respect to the direction of the upstream flows, more energy is dissipated at the confining shocks of the CDL, and less driving energy enters the CDL. For the observed 15% reduction of $M_{\mathrm{rms}}$, the reduced driving may, in fact, play a dominant role: as $\ell ( N )$ increases from 10 to 70, $\dot{\cal E}_{\mathrm{drv}} / \dot{\cal E}_{\mathrm{drv}}^{\mathrm{th}}$ decreases by 13% (Sect. \[sec:driving\_efficiency\]). But to really estimate the relative importance of the three effects just sketched, further studies are certainly necessary. Two more points seem noteworthy to us in this section. One concerns the near-independence of $\dot{\cal E}_{\mathrm{diss}}$ from $\ell_{\mathrm{cdl}}$. From Fig. \[fig:pheno\_dens\] (increase in structure size with increasing $\ell_{\mathrm{cdl}}$), we take that it is rather the increasing average distance between shocks that allows $\dot{\cal E}_{\mathrm{diss}}$ to be essentially independent of $\ell_{\mathrm{cdl}}$ and not so much the, on average, decreasing strength of shocks (Sect. \[sec:energy\_dissipation\]).
Whether this is indeed true, only a closer analysis of the structure within the CDL along the lines of @maclow-ossenkopf:00 can tell, which is, however, beyond the scope of the present paper. Such an analysis could also shed light on whether (or in which sense) $\ell_{\mathrm{e_{\mathrm{kin}}}}$ (see Sect. \[sec:driving\_efficiency\]) is indeed a measure of the average distance between shocks. It would also allow us to quantify our impression that small scale structures are preferably located close to the confining interfaces. If true, this would fit with the result by @smith-et-al:00 that the high-frequency part of the shock spectrum is lost most efficiently. The other point concerns run R5\_0.2.4. With corr$(\rho,v) \approx -0.4$ and $M_{\mathrm{rms}} \approx 0.9$, it violates two of the basic assumptions we made in Sect. \[sec:anal-scaling\_2d\]. Its mean density is close to the isothermal value for strong shocks, $\rho_{\mathrm{m}} \approx 22 \rho_{\mathrm{u}} \approx 0.9 \rho_{\mathrm{u}} M_{\mathrm{u}}^{2}$. Both $\dot{\cal E}_{\mathrm{diss}}$ and $\dot{\cal E}_{\mathrm{drv}}$ increase with $\ell_{\mathrm{cdl}}$. With these characteristics, R5\_0.2.4 may mark the transition from compressible supersonic turbulence, the topic of this paper, to compressible subsonic turbulence. CDL and confining shocks: a coupled system ------------------------------------------ The turbulence within the CDL is ‘naturally driven’ in the sense that we control neither what fraction of the total upstream kinetic energy, $\rho_{\mathrm{u}} M_{\mathrm{u}}^{2}$, really enters the CDL nor the spatial scale on which this energy input varies. Both are directly determined by the confining shocks instead and indirectly depend on the system as a whole. The driving efficiency at each confining shock scales with $M_{\mathrm{rms}}$, even for situations where $M_{\mathrm{l}} \ne M_{\mathrm{r}}$ (see Sect. \[sec:results\_asym\]).
The auto-correlation length of the confining shocks and the characteristic length scale of the turbulence within the CDL are proportional to each other, both scaling as $\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6}$. We take these facts as evidence that the CDL as a whole, its interface and volume properties, forms a tightly coupled, quasi-stationary, and self-regulating system. Back-coupling between post-shock flow and shock is also described in other contexts, for example by @foglizzo:02 for the case of Bondi-Hoyle accretion. An aspect that remained elusive in Sect. \[sec:num\_results\] is the spatial scale on which the energy input varies, the energy injection scale. To really tackle this issue, it would be necessary to analyze the energy spectrum of the CDL. This task requires, however, some caution because of the highly irregular boundary of the CDL, and we postpone it for the moment. Nevertheless, we would like to present a few thoughts on the subject. A first question is whether it is justified to speak at all of only one injection scale, of monochromatic driving. The homogeneous upstream flow is modulated by the confining shocks. These are wiggled on a variety of spatial scales at any given moment. This strongly suggests that the kinetic energy input into the CDL is most likely not monochromatic but occurs at a whole spectral range instead. Consequences of such non-monochromatic driving have been studied, for example, by @norman-ferrara:96. It also seems worthwhile to briefly look at monochromatically-driven turbulence, in particular at the numerical simulations by @maclow:99.
For the case of artificially, monochromatically driven hydrodynamic turbulence in a 3D box with periodic boundaries, he found that the characteristic length of the turbulence is proportional to the driving wave length, independent of the Mach-number: $\lambda / \ell^{\mathrm{3d}}_{\mathrm{e_{\mathrm{kin}}}} = 1.42$, where $\lambda$ is the (known) driving wave length and $\ell^{\mathrm{3d}}_{\mathrm{e_{\mathrm{kin}}}}$ is the 3D analog of $\ell_{\mathrm{e_{\mathrm{kin}}}}$ in Eq. \[eq:lambda\_ekin\]. In addition, @maclow:99 observed that $\ell^{\mathrm{3d}}_{\mathrm{e_{\mathrm{kin}}}}$ increases with $\lambda$, which is mirrored in the apparent increase in the structure size (patches, filaments). Although our setting clearly differs from that of @maclow:99, two thoughts come to mind. The first is an actual observation, namely that we also observe an increase in structure size with $\ell_{\mathrm{e_{\mathrm{kin}}}}$. The second thought is more of a question or speculation. @maclow:99 determines the proportionality constant between the characteristic scale of the turbulence and the monochromatic driving wave length. One may wonder about the implications of this finding if not one driving wave length is present but a whole spectrum. How will the characteristic length scale of the turbulence, which can still be determined following Eq. \[eq:lambda\_ekin\], depend on this spectrum? And, given our finding that $\ell_{\mathrm{e_{\mathrm{kin}}}} \propto \ell_{\mathrm{corr}}$, what does $\ell_{\mathrm{corr}}$ tell us about this spectrum? Both questions should become tractable once the energy spectrum of the CDL is determined.
A glimpse at astrophysics ------------------------- With regard to astrophysics, the presented work basically suggests that, within the frame of isothermal hydrodynamics and a roughly plane parallel setting, larger Mach-numbers of the colliding flows result in a finer and finer network of higher and higher density contrast within the interaction zone. In different types of wind-driven structures, this connection between Mach-number and structure may be directly observable. For the clumping of line-driven hot-star winds, our results suggest that the sheets or clumps formed by the instability of the line-driving are not homogeneous but possess fine-scale substructure with high density contrast. Concerning molecular clouds, we first mention that recent arguments support the idea, originally brought forward by @hunter:79 and @larson:81, that molecular clouds result from the collision of large-scale flows in the ISM. @basu-murali:01 make the point that small-scale driving ($\approx$ 0.1 - 1 pc) of molecular clouds is incompatible with observed total luminosities, unless the energy dissipation rates derived from MHD simulations are seriously overestimated. Using a principal component analysis of $^{\mathrm{12}}$CO (J=1-0) emission, @brunt:03 identifies large-scale flows of atomic material in which the globally turbulent molecular clouds are embedded. Similar observational results were reported by @ballesteros-hartmann-vazquez:99. Driven supersonic turbulence as a structuring agent for the interior of molecular clouds was examined by many authors [@hunter-et-al:86; @elmegreen:93; @vazquez-passot-pouquet:95; @maclow-klessen-burkert:98; @ballesteros-hartmann-vazquez:99; @ballesteros-et-al:99; @maclow:99; @hartmann-et-al:01; @joung-maclow:04; @burkert-hartmann:04; @maclow-klessen:04; @audit-hennebelle:05; @heitsch-et-al:05; @kim-ryu:05; @vazquez-semadeni-et-al:06; @ballesteros-paredes-et-al:06].
The driving wave length of the turbulence, and thus the largest structure size [@maclow:99; @ballesteros-maclow:02], is usually a free parameter. Our results show instead that, at least for the case of an isothermal, shock compressed, supersonically turbulent 2D slab, the structure size rather depends on the size of the slab or cloud. Additional physics: an outlook ------------------------------ The model presented in this paper covers only some very basic physics. To obtain results with a more direct relation to reality, additional physics must be included in the future, among these the following. Inclusion of radiative cooling, instead of assuming isothermal conditions, can affect the problem in different ways. Thermal instability can lead to additional dynamical effects [@chevalier-imamura:82; @gaetz-et-al:88; @strickland-blondin:95; @walder-folini:96; @hennebelle-perault:99; @hennebelle-perault:00; @vazquez-semadeni-et-al:00; @koyama-inutsuka:02; @audit-hennebelle:05; @heitsch-et-al:05; @pittard-et-al:05; @mignone:05]. Extended cooling layers, on the other hand, tend to act as a cushion. Simulations by @hyp:98 and @walder-folini:00, which include radiative cooling but have otherwise similar parameters as some of the simulations presented here, show comparatively more small scale structure and even roll-ups at the interfaces confining the CDL. The CDL as a whole evolves less violently, and mean densities are about a factor of four to eight higher than what we found here for the isothermal case. Strongly asymmetric flows, where $M_{\mathrm{l}} \ne M_{\mathrm{r}}$, lead to a qualitatively different solution if radiative cooling is included [@walder-folini:98] and to more complicated dependences on the upwind Mach-numbers in the isothermal case, as we will demonstrate in a forthcoming paper.
The role of thermal conduction has so far been considered in only relatively few publications [@begelman-mckee:90; @myasnikov-zhekov:98; @koyama-inutsuka:04]. Global bending of the interaction zone affects the stability properties of the interaction zone as a whole and thus probably also its interior properties. In colliding wind binaries, for example, matter is transported out of the central part of the system and diluted in the outer part. Simulations of bow shocks and colliding winds in binaries show strong traveling waves, together with a systematic change of the mean properties in the flow away from the stagnation point [@stevens-et-al:92; @rolf-doris:95; @blondin-koerwer:98]. Summary and conclusions {#sec:conc} ======================= We looked at symmetric, supersonic ($5 \lapprox M_{\mathrm{u}} \lapprox 90$), isothermal, plane-parallel, colliding flows in 2D. The resulting shock-confined interaction zone (CDL) is supersonically turbulent ($1 \lapprox M_{\mathrm{rms}} \lapprox 10$). We investigated the CDL and its interplay with the upstream flows by dimensional analysis and numerical simulations. The latter we generally stopped when $\ell_{\mathrm{cdl}} \approx \mathrm{Y} / 2$. The results are interesting not only with regard to flow collisions, but also shed new light on the properties of supersonic turbulence in general. The numerical simulations show that the CDL has an irregular shape and a patchy, supersonically turbulent interior. The driving of the turbulence is natural in that it depends on the shape of the confining shocks. The dimensional analysis is based on isothermal Euler equations in infinite space. Within this frame, a self-similar solution may exist that would depend on $M_{\mathrm{u}}$ but must not depend on $\ell_{\mathrm{cdl}}$. Relations for average quantities are obtained under some further simplifying assumptions (Sect.\[sec:expectedrelations\]).
Based on both the analytical and numerical results, we arrive at the following conclusions. 1\) Comparison of the numerical and the self-similar solution shows generally good agreement if $M_{\mathrm{rms}} \gapprox 1$. The modest deviation between the numerical and the self-similar solutions increases with $\ell_{\mathrm{cdl}}$. We suggest some explanations for the deviation, but our data do not allow any clear conclusions on the issue. For $M_{\mathrm{rms}} \lapprox 1$, we have but one simulation. It shows clear differences to the other runs and may be more characteristic of compressible subsonic turbulence than of supersonic turbulence. 2\) The CDL is characterized by $M_{\mathrm{rms}} \approx \eta_{\mathrm{1}}^{-1/2} M_{\mathrm{u}}$ and $\rho_{\mathrm{m}} \approx \eta_{\mathrm{1}} \rho_{\mathrm{u}}$. The average compression ratio of the CDL is thus independent of $M_{\mathrm{u}}$. This is in sharp contrast to the 1D case, where $\rho_{\mathrm{m,1d}} = M_{\mathrm{u}}^{2}\rho_{\mathrm{u}}$. From the numerical simulations, we find $\eta_{\mathrm{1}} \approx 30$. 3\) The turbulence within the CDL and the driving efficiency are related by $f_{\mathrm{eff}} = 1 - M_{\mathrm{rms}}^{-0.6}$. The relation also holds for asymmetric settings, where $M_{\mathrm{l}} \ne M_{\mathrm{r}}$, emphasizing the mutual coupling between volume and interface properties. For larger upstream Mach-numbers, the shocks confining the interaction zone are less strongly inclined with respect to the direction of the upstream flows. The driving is more efficient: a larger fraction of the upstream kinetic energy is dissipated only within the CDL and not already at the confining shocks. 4\) The characteristic length scale of the turbulence, $\ell_{\mathrm{e_{\mathrm{kin}}}}$, and the auto-correlation length of the confining shocks, $\ell_{\mathrm{corr}}$, are proportional to each other.
Both scale as $\ell_{\mathrm{cdl}} M_{\mathrm{u}}^{-0.6}$, even though the former is based on volume quantities while the latter is derived from interface properties. 5\) The separation of filaments and the size of patches within the CDL both get larger as $\ell_{\mathrm{cdl}}$ increases and/or $M_{\mathrm{u}}$ decreases. In summary, for increasing upstream Mach-numbers we thus expect a faster expanding CDL with less strongly inclined confining interfaces with respect to the direction of the upstream flows, similar mean density, finer interior structure relative to the CDL size, and a gradual shift of the energy dissipation from the confining shocks to internal shocks within the CDL. We expect to observe these general dependencies in real objects where shock-confined slabs play a role, like molecular clouds, wind driven structures, supernova remnants, or $\gamma$-ray bursts. The authors wish to thank the crew running the Cray SV1 at ETH Zürich, where the simulations were performed, the system administrator of the institute for astronomy, ETH Zürich, P. Steiner, for steady support, and J. Favre from the Swiss Center of Scientific Computing CSCS/SCSC, Manno, for graphics support. The authors also would like to thank the referee, E. Vazquez-Semadeni, for the detailed and engaged report. Numerical computation of obliqueness angle {#app:alpha} ========================================== While shocks are smeared over approximately 3 grid cells in our simulations, the confining shocks in our analysis are specified as a series of discrete x,y-coordinate pairs only (see Sect. \[sec:confshocks\]). This information is sufficient to compute most shock-related quantities to good accuracy, for example the shock length $\ell_{\mathrm{sh}}$. The only quantity that requires a more careful treatment is the obliqueness angle $\alpha$.
If it were computed directly from the discrete shock positions, only discrete values would be obtained, for example 0$^{\circ}$, 45$^{\circ}$, 63.4$^{\circ}$ etc. for one-sided differences. To compute the obliqueness angle $\alpha_{\mathrm{i}}(y_{\mathrm{j}})$ (see Fig. \[fig:sketch2d\] and Sect. \[sec:confshocks\]) at each position $y_{\mathrm{j}}$, $1\le j \le J$, of the left and right shock ($s_{\mathrm{l}}$ and $s_{\mathrm{r}}$), we proceed as follows. In a first step, we use spline interpolation to double the number of points in the y-direction along the shock front. Next, we smooth the shock front slightly, using a running mean with an averaging window of $\pm 5$ points (this corresponds to an averaging window of $\pm 2.5$ points in the original data). Then we compute the derivative at each point of this smoothed shock front, using a 3-point Lagrangian interpolation. To avoid abrupt changes in the derivative from one point to the next, we smooth it again by a running mean with averaging window $\pm 5$ points. We finally obtain the obliqueness angle $\alpha_{\mathrm{i}}(y_{\mathrm{j}})$, $1 \le j \le 2J$, as the arctan of the derivative. We checked that the size of the averaging window ($\pm 3$ points or $\pm 7$ points) has only a marginal effect on the angle distribution and the driving efficiency. For the latter, which is an integral over both shocks, tests show that $\alpha$ can even be computed directly from the discrete positions.
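A simplified, pure-Python sketch of this procedure is given below. It is an illustration, not the actual analysis code: linear interpolation stands in for the spline step, a centered finite difference stands in for the 3-point Lagrangian interpolation, and the function names are our own.

```python
import math

def running_mean(vals, half_window):
    """Smooth a sequence with a centered running mean of +/- half_window points."""
    n = len(vals)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        out.append(sum(vals[lo:hi]) / (hi - lo))
    return out

def obliqueness_angles(x, dy, half_window=5):
    """Angles alpha(y_j) of a shock front given as x-positions on a uniform y-grid.

    Steps, following the text: (1) refine the front to roughly twice as
    many points (here by linear interpolation instead of splines),
    (2) smooth with a running mean of +/- half_window points,
    (3) take a derivative (centered finite difference in place of the
    3-point Lagrangian interpolation), (4) smooth the derivative again,
    (5) alpha = arctan(dx/dy).
    """
    # (1) refine the front; new y-spacing is dy/2
    fine = []
    for a, b in zip(x[:-1], x[1:]):
        fine.extend([a, 0.5 * (a + b)])
    fine.append(x[-1])
    # (2) smooth the front
    fine = running_mean(fine, half_window)
    # (3) derivative dx/dy on the refined grid (one-sided at the ends)
    h = dy / 2.0
    n = len(fine)
    deriv = [(fine[min(i + 1, n - 1)] - fine[max(i - 1, 0)])
             / (h * (min(i + 1, n - 1) - max(i - 1, 0)))
             for i in range(n)]
    # (4) smooth the derivative
    deriv = running_mean(deriv, half_window)
    # (5) obliqueness angle
    return [math.atan(d) for d in deriv]
```

As a quick check of the logic, a straight front $x = y$ yields an angle of 45$^{\circ}$ at all interior points, while the smoothing only affects the points near the two ends of the front.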
List of runs, their parameters, and naming schemes {#app:list_of_runs} ================================================== [lcccccccc]{} label & $M_{\mathrm{u}}$ & $\ell( N )$ & $\ell_{\mathrm{cdl}}/\mathrm{Y}$ & $M_{\mathrm{rms}}$ & $\frac{\rho_{\mathrm{m}}}{\rho_{\mathrm{u}}}$ & $\frac{\ell_{\mathrm{sh}}}{Y}$ & $f_{\mathrm{eff}}$\ \ R5\_0.2.4 & 5.42 & 91 & 1.07 & 0.90 & 24 & 1.1 & 0.16\ R11\_0.2.4 & 10.85 & 88 & 0.59 & 2.2 & 33 & 1.5 & 0.35\ R22\_0.2.4 & 21.7 & 86 & 0.30 & 4.6 & 30 & 2.6 & 0.59\ R33\_0.2.4 & 32.4 & 86 & 0.50 & 6.9 & 26 & 3.6 & 0.70\ R43\_0.2.4 & 43.4 & 88 & 0.55 & 9.1 & 29 & 4.6 & 0.76\ R87\_0.2.4 & 86.8 & 105 & 0.82 & 15. & 23 & 12.1 & 0.89\ R22\_0.4.4 & 21.7 & 41 & 0.25 & 4.3 & 35 & 2.3 & 0.55\ R22\_0.1.4 & 21.7 & 88 & 0.74 & 5.0 & 26 & 2.7 & 0.62\ R43\_0.1.4 & 43.4 & 90 & 0.59 & 8.9 & 32 & 4.1 & 0.76\ R11\_0.2.2 & 21.7 & 89 & 1.10 & 2.2 & 33 & 1.4 & 0.33\ R22\_0.2.2 & 21.7 & 307 & 0.79 & 4.7 & 28 & 2.6 & 0.59\ R33\_0.2.2 & 32.4 & 82 & 1.45 & 6.7 & 30 & 3.6 & 0.70\ R43\_0.2.2 & 43.4 & 73 & 1.09 & 9.4 & 27 & 4.7 & 0.76\ R22\_0.2.6 & 21.7 & 190 & 0.84 & 4.7 & 29 & 2.6 & 0.60\ \ R22\_1.2.2 & 21.7 & 87 & 0.83 & 3.3 & 61 & 1.9 & 0.50\ R22\_1.2.1 & 21.7 & 111 & 1.33 & 3.2 & 68 & 1.8 & 0.46\ R22\_1.4.4 & 21.7 & 199 & 0.72 & 3.4 & 59 & 1.9 & 0.49\ R22\_1.4.2 & 21.7 & 68 & 0.40 & 2.8 & 91 & 1.6 & 0.39\ R22\_1.1.2 & 21.7 & 115 & 1.21 & 3.9 & 42 & 2.4 & 0.59\ R22\_2.2.2 & 21.7 & 313 & 1.44 & (2.4) & (109) & (1.5) & (0.34)\ R22\_2.4.2 & 21.7 & 186 & 0.37 & (1.8) & (253) & (1.2) & (0.24)\ R22\_2.8.2 & 21.7 & 92 & 0.14 & (1.4) & (281) & (1.2) & (0.21)\ \[tab:list\_of\_runs\] [^1]: AMRCART is part of the A-MAZE code-package [@amaze:00], which contains 3D adaptive MHD and radiative transfer codes. The package, along with a brief description, is publicly available at\ http://www.astro.phys.ethz.ch/staff/folini/folini.html or\ http://www.astro.phys.ethz.ch/staff/walder/walder.html.
--- abstract: 'After commenting briefly on the role of the typicality assumption in science, we advocate a phenomenological approach to the cosmological measure problem. Like any other theory, a measure should be simple, general, well-defined, and consistent with observation. This allows us to proceed by elimination. As an example, we consider the proper time cutoff on a geodesic congruence. It predicts that typical observers are quantum fluctuations in the early universe, or Boltzmann babies. We sharpen this well-known youngness problem by taking into account the expansion and open spatial geometry of pocket universes. Moreover, we relate the youngness problem directly to the probability distribution for observables, such as the temperature of the cosmic background radiation. We consider a number of modifications of the proper time measure, but find none that would make it compatible with observation.' author: - 'Raphael Bousso, Ben Freivogel and I-Sheng Yang[^1]' bibliography: - 'all.bib' title: Boltzmann babies in the proper time measure --- Introduction ============ Typicality ---------- Every time we interpret an experiment, we assume that we are a typical observer. Suppose, for example, that we are trying to distinguish between two theories $T_1$ and $T_2$. Conveniently, they predict a very different value of the spin of an electron subjected to a suitable sequence of interactions: $T_1$ predicts spin up with probability $\epsilon$, and $T_2$ predicts spin down with probability $\epsilon$. If $\epsilon\ll 1$, then even a single measurement will allow us to rule out one of these theories with considerable confidence. We can improve our confidence by repeating the experiment, but for simplicity, let us suppose that $\epsilon$ is so minuscule that we are satisfied with doing a single experiment.
In drawing the above conclusions, we acted as if our laboratory either was the only laboratory in the universe, or was selected at random from among all the laboratories doing the same experiment in the universe. This is the assumption of typicality. Note that we have no direct evidence for this assumption. We do not know whether there are other laboratories performing the same experiment on some far-away planets; and if there are, then our laboratory was presumably not actually selected by anyone from among them. Nevertheless, the overall success of the scientific method so far suggests that this assumption is appropriate. To see this, consider a prescription favored by Hartle and Srednicki [@HarSre07], who decline to assume typicality. They argue that it does not matter whether a given outcome is likely to occur in a randomly chosen laboratory; what matters is whether one is likely to be able to find [*some*]{} laboratory, somewhere in all of spacetime, no matter how atypical, in which that outcome occurs. This probability is given not by $\epsilon$, but by $1-(1-\epsilon)^L$, where $L$ is the number of laboratories in the universe. The effect of using this probability-of-global-existence is most dramatic in the case where $L\gg \epsilon^{-1}\gg 1$. Then we cannot rule out either theory, no matter what we observe. We can still rule out one of the two theories by repeating the experiment sufficiently often. But to know at which point we can reject one of the theories, we would need to know how many other laboratories there are. Since we do not know $L$, the Hartle-Srednicki prescription would put an end to experimental science. It would render all experiments pointless, because we could not reject any theory until we know how many other laboratories there are. 
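The contrast between the two prescriptions can be made concrete with a short numerical sketch. The values of $\epsilon$ and $L$ below are illustrative assumptions, chosen only to sit in the regime $L\gg \epsilon^{-1}\gg 1$ discussed in the text:

```python
import math

epsilon = 1e-6   # probability a theory assigns to the disfavored outcome (illustrative)
L = 10**9        # number of laboratories in the universe (illustrative, L >> 1/epsilon)

# Under typicality: the chance that OUR lab sees the disfavored outcome.
# A single such observation rules the theory out at confidence 1 - epsilon.
p_typical = epsilon

# Hartle-Srednicki: the chance that SOME lab, somewhere, sees that outcome,
# i.e. 1 - (1 - epsilon)^L, computed stably via log1p.
p_global = 1.0 - math.exp(L * math.log1p(-epsilon))

print(p_typical)   # 1e-06: one measurement is decisive
print(p_global)    # ~1.0: both theories "predict" the outcome somewhere
```

Since $p_{\rm global}\approx 1$ for both theories, no single observation discriminates between them, which is the point made above.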
Given the success of the scientific method thus far, we may conclude the Hartle-Srednicki prescription is inappropriate.[^2] Here we have argued for the assumption of typicality on empirical grounds: it has served us well as a heuristic tool. If it was wrong, we should not have been successful in devising and rejecting scientific theories on the basis of this assumption. But [*why*]{} does it work so well? This, too, can be understood; elegant discussions have recently been given by Page [@Pag07], and by Garriga and Vilenkin [@GarVil07], who also offer a careful definition of the class of observers among which we may consider ourselves to be typical. The measure problem: a phenomenological approach ------------------------------------------------ In the multiverse, we can use typicality to make statistical predictions for the results of observations. For instance, to predict the cosmological constant, we would first determine the theoretically allowed values, and then count the number of observations of each value. The probability to observe a given value of the cosmological constant is proportional to the number of observations, in the multiverse, of that value. The problem is that under rather generic conditions, the universe will have infinite spacetime volume, even if it is spatially finite (i.e., contains a compact Cauchy surface). Then the number of observations can diverge. The landscape of string theory contains perhaps $10^{500}$ metastable vacua, allowing it to solve the cosmological constant problem [@BP]; see Refs. [@Pol06; @TASI07] for a review. However, divergences would arise even if there was only one false vacuum. For example, suppose that there was a first-order phase transition in our past, by which a long-lived metastable vacuum decayed. The symmetries of the instanton mediating this decay [@CDL] dictate that the resulting true vacuum region is an infinite open FRW universe. It will contain either no observers, or an infinite number of them. 
Moreover, the parent vacuum will keep expanding faster than it decays, so that an infinite number of true vacuum bubbles (or “pocket universes”) are created over time [@GutWei83]. The measure problem in cosmology is the question of how to regulate these infinities, in order to get a finite count of the number of observations of each type.[^3] The choice of measure is no minor technicality, but an integral part of a complete theory of cosmology. Two different measures often assign exponentially different relative probabilities to two types of observations.[^4] Ultimately, a unique measure should arise from first principles in a fundamental theory [@FreSus04; @FreSek06; @Sus07; @MalSheSus]. In the meantime, however, we may regard the measure problem as a [*phenomenological*]{} challenge. At least in the semiclassical regime, we can hope to identify the correct measure by the traditional scientific method: We try a simple, minimal theory, and work out its implications. If they conflict with observation, we either refine (i.e., complicate) the model, or we abandon it altogether for a different approach. What one may regard as a simple measure is, to some extent, in the eye of the beholder. The same can be said for simple theories; yet, for the most part, we know one when we see one. Only a handful of measures have been proposed (see, e.g., Refs. [@Vil06; @Lin06; @Van06] for overviews and further references), and many of them can be seen to conflict with observation, often violently. This is good news, because it makes it feasible to proceed by elimination. Let us investigate simple proposals, let us ask whether they are well-defined, and let us determine whether they conflict with observation. For example, consider the proposal of Ref. [@GarSch05].
In its original form, it predicted with probability 1 that we should find ourselves as isolated observers (“Boltzmann brains”) resulting from a highly suppressed thermal fluctuation in a late, empty universe [@Pag06; @BouFre06b]. This led to a refinement [@Vil06b], which complicates the measure and seems [ *ad hoc*]{} [@Pag06b]. Depending on the details of the string landscape, the proposal may render most vacua dynamically inaccessible (the “staggering problem” of Refs. [@SchVil06; @OluSch07]). This would also amount to a conflict with observation, namely the prediction that we should observe a much larger cosmological constant with probability very close to 1. Perhaps most importantly, at present the proposal is well-defined only in the thin-wall limit of bubble formation, and if bubble collisions are neglected [@GarGut06].[^5] Another recent proposal [@Bou06], the “holographic” or “causal diamond” measure, has so far fared well. It is well-defined in the semiclassical limit, and it does not have a staggering problem [@BouYan07]. Its prediction of the cosmological constant agrees significantly better with the data than that of any other proposal [@BouHar07], and it continues to agree well even as other parameters are allowed to vary [@CliFre07]. It will be important to test this proposal further, for example, by allowing even more parameters to vary. But it is encouraging that we have at least one well-defined measure that has not been ruled out. In this paper, we consider a much older proposal, the [*proper time measure*]{} [@Lin86a; @LinLin94; @GarLin94; @GarLin94a; @GarLin95]. At present, this measure is not completely well-defined, and we will comment on some issues that will have to be overcome to make it well-defined. But our main focus will be on its well-known conflict with observation, the “youngness paradox”. In particular, we will investigate whether simple modifications of the measure can resolve this problem. 
The proper time measure and the youngness paradox ------------------------------------------------- To apply the proper time measure, one begins by selecting an (almost arbitrary) finite portion of a spacelike slice in the semiclassical geometry. The congruence of geodesics orthogonal to this initial surface defines Gaussian normal coordinates, and thus a time slicing, at least until caustics are encountered. The number of observations between the initial slice and the time $t$ is finite. Globally, the multiverse reaches a self-reproducing state at late times: its volume expands exponentially, but the ratio of different types of observations remains constant and finite. Therefore, relative probabilities defined by this measure are independent of the initial conditions. Earlier work [@LinLin96; @Gut00a; @Gut00b; @Gut04; @Teg05; @Lin07; @Gut07] has already shown that the proper time measure has a youngness problem: it predicts with essentially 100% probability that we should be living at an earlier time. The reason for this problem can roughly be described as follows. The asymptotic rate of expansion of the multiverse is dominated by the vacuum with the largest Hubble constant $H_{\rm big}$, which defines a microphysical time-scale $H_{\rm big}^{-1}$. (In the string landscape, this would be of order the Planck time.) For simplicity, let us consider only regions occupied by our own vacuum. We may ask about the distribution of the age of such bubbles, i.e., how long before the cutoff $t$ they were formed. In particular, we may ask how many bubbles are at least $13.7$ Gyr old, and thus contain observations like ours; and we may compare this to the number of bubbles that are, say, $13$ Gyr old. The size of the bubble interior is not much affected by these different time choices, but the number of bubbles will be vastly different. 
For every bubble that is at least $13.7$ Gyr old at the time $t$, there will be of order $\exp \left(3\times 0.7 \,{\rm Gyr}/ H_{\rm big} ^{-1}\right)$ bubbles that are $13$ Gyr old, because of the overall exponential growth of the volume of the multiverse in the extra 700 million years before it has its last chance to nucleate the younger bubbles. Perhaps the younger bubbles contain fewer observers per bubble, but surely not so few as to compensate for a factor $\exp(10^{60})$. This mismatch persists as $t\to\infty$. Thus, typical observers are younger than we are, and the probability for an observer to live as late as we do is $\exp(-10^{60})$. This rules out the proper time measure at an extremely high level of confidence. Of course, our choice of $13$ Gyr observers as a comparison group is arbitrary. Because $H_{\rm big}^{-1}$ is a microphysical scale, even observers just one minute younger (relative to their big bang) are superexponentially more probable than we are. Ultimately, one should consider observers of any cosmological age. Because of the exponential pressure to be young, it pays to arise from a rare quantum fluctuation in the early universe. The most likely observers are such “Boltzmann babies”, and the most likely observations are the phenomena of the hot, dense, early universe they see. Summary and outline ------------------- Our goal in this paper is two-fold. First, we will make the youngness paradox more precise. Traditional treatments have neglected the expansion of new bubbles. We supply a justification for this “square bubble” approximation, by extending Gaussian normal coordinates across an expanding bubble wall, and showing that our exact treatment reproduces the usual youngness problem. 
We will distinguish carefully between probability distributions for the [*time*]{} when observers live (which is not directly observable), and probability distributions for actual observables, like the temperature of the background radiation measured by observers [@Teg05]. We find that the youngness paradox manifests itself by predicting that we should observe a higher temperature than 2.7 K, with probability exponentially close to 1. Our second goal is to consider possible modifications of the proper time measure. We will argue that it is difficult to resolve the youngness paradox, other than by abandoning the measure altogether. In particular, Linde has proposed a modification in the context of a particular toy model [@Lin07]. Since no general prescription was given, it is not clear how to extend this modification to other settings, and in particular to the probability distribution for the observed background temperature. We consider a number of possible choices, some of which reduce to the prescription of Ref. [@Lin07] for the particular probabilities computed therein. However, we are unable to find any modification that escapes all of the conflicts with observation that arise from the youngness problem. In particular, 2.7 K remains an extremely atypical value of the background temperature under all choices we consider. The structure of the paper is as follows. In Sec. \[sec-proper\], we explain the proper time measure in more detail. In Sec. \[sec-geod\], we compute the paths of geodesics entering bubbles, in order to determine the shape of the proper time cutoff within bubbles. In Sec. \[sec-prob\], we compute the probability distribution for the spacetime location of observers, finding a youngness paradox and conflict with observation. In Sec. \[sec-fixes\], we try a few modifications of the measure, but find no simple modification consistent with observation. 
The Proper Time Measure {#sec-proper} ======================= The proper time measure (sometimes referred to as the “standard volume weighted measure”) is one of the simplest and most straightforward ways of regulating the infinities of the multiverse. Choose a small three-dimensional patch of space, $\Sigma_0$, orthogonal to at least one eternally inflating geodesic. Then, construct Gaussian normal coordinates [@Wald] in its future. That is, a given event has the time coordinate $t$ if it occurs at proper time $t$ along a geodesic orthogonal to $\Sigma_0$. Such events form a three-dimensional hypersurface $\Sigma_t$. The regularization scheme is to count only observations between proper time hypersurfaces $\Sigma_0$ and $\Sigma_t$. Relative probabilities are defined by ratios, in the limit $t\rightarrow\infty$. ![The relative probability of making different observations, for example two different CMB temperatures (red disks or blue boxes), is determined by simple counting in the finite region between $\Sigma_0$ and $\Sigma_t$. The ratio tends to a finite limit as $t\rightarrow\infty$. The youngness problem is the fact that anomalous early fluctuations producing either observation (Boltzmann babies) turn out to dominate the count. To show this correctly in the figure, one would need to draw an exponentially large number of “young bubbles”, like the one on the right, in which only the Boltzmann babies contribute.[]{data-label="fig-multi"}](multi){width="8"} It is well-known that Gaussian normal coordinates are only locally defined. They break down at [*caustics*]{}, or focal points, where infinitesimally neighboring geodesics in the congruence intersect. Beyond such points, the above definition of the time coordinate $t$ is ambiguous. We sidestep the issue here by considering only expanding spacetime regions and ignoring clustering and inhomogeneities (and thus, strictly speaking, all known observers), so that focusing does not occur. 
Let $O_1$ and $O_2$ be two mutually exclusive observations. For example, $O_1$ may subsume any observation made in vacuum $A$, while $O_2$ corresponds to vacuum $B$. Or $O_1$ ($O_2$) may capture information about the observer’s spatial or temporal location within a given vacuum, for example the fact that the universe is matter (vacuum) dominated. Let $N_i(t)$ be the number of observations of type $O_i$ made in the four-volume between $\Sigma_0$ and $\Sigma_t$. Observations take a finite time, so for definiteness let us demand that an observation must be complete for it to be counted. The relative probability for the two observations is defined to be $$\frac{p(O_1)}{p(O_2)}=\lim_{t\to\infty}\frac{N_1(t)}{N_2(t)}~. \label{eq-prob}$$ Similarly, we can consider a continuous set of possible observations $O_T$, such as the observation of a CMB temperature $T$. In this case, we are interested in the probability density $dp/dT$, which is given by $$\frac{\left.\frac{dp}{dT}\right|_{T_1} }{\left.\frac{dp}{dT}\right|_{T_2}}= \lim_{t\to\infty} \frac{\left.\frac{dN}{dT}\right|_{T_1}(t) }{\left.\frac{dN}{dT}\right|_{T_2}(t)}~. \label{eq-density}$$ Here, $\left.\frac{dp}{dT}\right|_{T_1}dT$ is the probability of observing $T$ in the interval $(T_1,T_1+dT)$. $\left.\frac{dN}{dT}\right|_{T_1}dT$ is the number of instances of such observations in the four-volume between $\Sigma_0$ and $\Sigma_t$.[^6] At late times, $N_i(t)\propto \exp(3H_{\rm big}t)$. The overall scaling rate $H_{\rm big}$ is set by the most rapidly expanding vacua [@Linde]. \[In the string landscape, one expects that $H_{\rm big}\sim {O}(1)$ in Planck units.\] This exponential growth guarantees that $$\lim_{t\to\infty}\frac{N_1(t)}{N_2(t)}= \lim_{t\to\infty}\frac{N_1'(t)}{N_2'(t)}~,$$ where $N_i'=dN_i/dt$ is the rate at which observations of type $O_i$ are being made, integrated over space but not over time.
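This insensitivity to the counting rule can be illustrated with toy growth curves. The transient terms below are invented for illustration; only the common $e^{3H_{\rm big}t}$ growth matters:

```python
import math

H = 1.0            # H_big, arbitrary units
c1, c2 = 2.0, 1.0  # asymptotic prefactors; the limiting ratio should be c1/c2

def N1(t):         # toy cumulative counts with different early-time transients
    return c1 * math.exp(3 * H * t) * (1 + math.exp(-t))

def N2(t):
    return c2 * math.exp(3 * H * t) * (1 + 5 * math.exp(-t))

def rate(N, t, h=1e-5):   # numerical derivative N'(t)
    return (N(t + h) - N(t - h)) / (2 * h)

t, dt = 30.0, 1.0
ratios = [
    N1(t) / N2(t),                                # total counts up to t
    rate(N1, t) / rate(N2, t),                    # rates at t
    (N1(t) - N1(t - dt)) / (N2(t) - N2(t - dt)),  # recent fixed-width window
]
print(ratios)   # all three approach c1/c2 = 2 as t grows
```

All three counting rules converge to the same relative probability, as the exponential-growth argument above guarantees.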
Thus, it does not matter whether probabilities are computed from the total number of observations until the time $t$, or the rate of observations at the time $t$, or the number of observations made in some recent (fixed width) time interval $(t-\Delta t,t)$. For definiteness, however, we will stick to the first of these definitions. Geodesics crossing bubbles {#sec-geod} ========================== Open FRW time vs. geodesic time ------------------------------- The measure discussed above was first applied to slow-roll models of eternal inflation, without first-order phase transitions [@LinLin94; @GarLin94]. In this case, one keeps track of fluctuations of scalar fields on the Hubble scale, effectively assuming that they decohere every Hubble time (see Ref. [@BouFre06a] for a discussion of the validity of this approach). There is no obstruction to applying the same measure to models with bubble formation, but there is an annoying complication (Fig. \[fig-slic\]). ![The top figure shows slices of constant FRW time, $\tau$ (red, light) and slices of constant geodesic time $t$ (blue, dark) in the vicinity of a bubble wall (green, thick) with initial size $r_0=0.1\, H_{\rm out}^{-1}$. Note that the constant $t$ slices are not defined for geodesics passing through the nucleation region of the bubble. The lower figure shows that geodesics of the congruence (blue, dark) eventually asymptote to comoving FRW worldlines (red, light).[]{data-label="fig-slic"}](slic "fig:") ![](geod "fig:") Consider a region of the universe at late times, occupied by a “host” de Sitter vacuum with cosmological constant $3H_{\rm out}^2$. Let us suppose that $H_{\rm out}$ is very large, but far enough below the Planck scale to lend validity to our semiclassical treatment. Moreover, we suppose that a bubble of our own vacuum can form by a Coleman-DeLuccia (CDL) tunneling process inside the host vacuum. Let us suppose, moreover, that the host vacuum has existed for many Hubble times $H_{\rm out}^{-1}$. Then, on the scale about to be occupied by a newly formed bubble, the geodesics emanating from the initial surface $\Sigma_0$ can be treated as comoving in the flat de Sitter metric $$ds^2=-dt^2+ H_{\rm out}^{-2} e^{2H_{\rm out}t} d {\mathbf x} ^2 \label{eq-flat}$$ This follows, in a sense, from the de Sitter no-hair theorem; we will also find that it is consistent with our careful analysis in Sec. \[sec-exact\]. Now suppose that a bubble of our vacuum forms at the time $t_{\rm nuc}$. It will appear at rest, with a proper radius $r_0\ll H_{\rm out}^{-1}$ determined by the CDL instanton. Then it will expand at constant acceleration $r_0^{-1}$, its world-volume asymptoting to a light-cone. Some of the above geodesics will eventually run into the bubble wall and enter our universe. Their behavior will determine the weight of any observations carried out inside the bubble, in the proper time measure. We will not consider the small subset of geodesics that go through the nucleation region, $r\lesssim r_0$, where the classical geometry is not clearly defined. It is unclear how to treat these geodesics. This constitutes a challenge for the sharp formulation of congruence-based measures.
The best we can say is that our results show that values of $r\sim O(r_0)$ contribute negligibly to the measure as they are approached from above in a controlled regime. This could be viewed as evidence that the contribution of the uncontrolled regime can also be neglected. The metric inside the bubble is given by an open FRW geometry; ignoring fluctuations, the metric is $$ds^2=-d\tau^2+a(\tau)^2 (d\xi^2+\sinh^2\xi\, d\Omega_2^2)~. \label{eq-FRW}$$ where the scale factor $a(\tau)$ comprises, for anthropically relevant bubbles, a period of inflation followed by radiation, matter, and vacuum domination with very small cosmological constant. Note that $a(\tau)\approx \tau$ for sufficiently small $\tau$. The maximally symmetric and negatively curved spatial slices defined by $\tau = const$ are physically preferred inside the bubble, since they correspond to hypersurfaces of (approximately) constant density. Only at very late times, well into the vacuum-dominated era, do we lose this preferred slicing, as the universe again becomes locally empty de Sitter. The key point is that the preferred surfaces of constant FRW time $\tau$ are [*not*]{} the surfaces $\Sigma_t$ of constant geodesic time $t$. This is a complication, since $\tau$ is what we usually call the age of the universe, the time since the big bang—really, the metric distance from the bubble nucleation event. To the extent that any time variable is directly correlated with the outcome of an observation (such as CMB temperature or the amount of clustering), that variable will be the FRW time $\tau$, and not the global time $t$. Square bubble approximation {#sec-square} --------------------------- The [*square bubble approximation*]{}, which is implicit in Ref. [@Lin06], aims to circumvent this complication. It amounts to a deformation of the metric that allows us to calculate as if constant $\tau$ slices, inside the bubble, coincide with constant $t$ slices. 
For this we must arrange that the FRW time $\tau$ and the geodesic time $t$ differ only through a constant shift, $$\tau=t-t_{\rm nuc}~.$$ This is possible only if the movement of the bubble wall is neglected. Given this [*ad-hoc*]{} modification, the continuation of the geodesic congruence into the bubble cannot be directly computed. We will simply assume that the internal geometry of the new vacuum is a spatially finite piece of a [*flat*]{} FRW universe $$ds^2=-d\tau^2 + \tilde a(\tau)^2 d {\mathbf y} ^2~. \label{eq-insideflat}$$ To match at $t=t_{\rm nuc} $ ($\tau=0$), we let $ {\mathbf y}$ range over a finite physical volume $\tilde a(0)^3 V_y$. We take the comoving volume to be independent of $\tau$, as if the bubble wall remained at fixed ${\mathbf y} $. Note that both the scale factor and the initial size of the bubble initially differ significantly from their true values, and the matching to the outside fails at late times. However, in inflating vacua the exponential internal growth is more important than the expansion of the bubble forming their boundary. Moreover, inflation locally washes out the difference between a flat and an open universe. After a short time (say, a few e-foldings of inflation) we can take $\tilde a(\tau)\propto a(\tau)$. Nevertheless, the square bubble approximation blatantly contradicts important known features, such as the fact that the constant-density slices inside are actually open and infinite. Indeed it is not even consistent geometrically, making it impossible to match the inside of the bubble to the outside. But one may hope that it gives a reasonable approximation [*for the purpose of computing probabilities*]{}. This will be the case if the approximation does not change the true count of observations of various types. We will find that the square bubble approximation is a good one for many questions. 
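The size of the error this introduces for an individual geodesic can be anticipated from the exact relation derived below in Sec. \[sec-exact\], $t - t_{\rm nuc} = \tau + H_{\rm out}^{-1}\left[\xi + e^{-\xi} - 1 + \dots\right]$: the square bubble approximation drops a time shift of order $\xi\, H_{\rm out}^{-1}$, a microphysical interval. A rough sketch, assuming a Planckian $H_{\rm out}$ for illustration:

```python
import math

t_planck = 5.39e-44      # seconds; taking H_out^{-1} ~ t_Planck (illustrative)
gyr = 1e9 * 3.156e7      # one Gyr in seconds
tau = 13.7 * gyr         # FRW age of a bubble like ours

shifts = {}
for xi in (1.0, 10.0, 100.0):   # sample comoving radii (arbitrary values)
    # time shift dropped by the square bubble approximation, in seconds
    shift = (xi + math.exp(-xi) - 1.0) * t_planck
    shifts[xi] = shift / tau
    print(xi, shifts[xi])       # fractional error in the time coordinate
```

Per geodesic the shift is utterly negligible; the nontrivial question, taken up below, is whether such shifts matter for the *count* of observations, since the comoving volume at large $\xi$ grows exponentially.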
Exact relation {#sec-exact} -------------- The actual relation between the FRW coordinates $(\tau,\xi)$ and the geodesic proper time $t$ is more complicated. We set $$H_{\rm out} = 1 \ \ {\rm in\ this\ subsection,}$$ so that the equations are not quite so ugly. In the outside flat de Sitter slicing, $$ds^2 = - dt^2 + e^{2 t } \left(dr^2 + r^2 d \Omega_2^2 \right) ~,$$ the domain wall follows the trajectory $$\begin{aligned} r_w \exp (t_w - t_{\rm nuc})&=&r_0\cosh\eta~, \\ \exp (t_w - t_{\rm nuc})&=& r_0\sinh\eta+\sqrt{1-r_0^2}~. \label{eq-tw}\end{aligned}$$ Here $r_0$ is the size of the bubble at nucleation; it is also the radius of curvature and the inverse proper acceleration of the domain wall. $r_0 \eta$ is the proper time along the domain wall. We need to compute the motion of geodesics as they cross the domain wall and live happily ever after in the interior. The natural coordinates inside the bubble are the open FRW coordinates $(\tau, \xi)$ of Eq. (\[eq-FRW\]) because they respect the symmetry of the bubble nucleation. However, these coordinates do not cover the region containing the domain wall, so it is convenient to use a different coordinate system near the domain wall. Assuming that the Hubble constant in the interior of the bubble is much smaller than the Hubble constant in the exterior, we can find a scale $\tau^*$ such that $H_{\rm in}^{-1}\gg\tau^*\gg H_{\rm out}^{-1}$. The region from the domain wall to the $\tau^*$ surface is much smaller than the characteristic scales of the geometry inside the bubble. As a result, we can approximate it as a piece of Minkowski space. We will use coordinates in which the metric is $$ds^2 = - dT^2 + dR^2 + R^2 d\Omega_2^2~.$$ Because the domain wall is a constant curvature surface with curvature radius $r_0$, its trajectory in the Minkowski coordinates is $$\begin{aligned} R_w&=&r_0\cosh\eta~, \\ T_w&=&r_0\sinh\eta~,\end{aligned}$$ where again $r_0 \eta$ is the proper time along the domain wall.
Computing the 4-velocity, we find that $\eta$ is the rapidity of the domain wall. The trajectory of the geodesic after crossing the domain wall is $$\begin{aligned} T&=&r_0\sinh\eta+(t-t_w)\cosh\alpha \label{eq-T}~, \\ R&=&r_0\cosh\eta+(t-t_w)\sinh\alpha \label{eq-R}~,\end{aligned}$$ where $\alpha$ is the rapidity of the geodesic. We will determine $\alpha$ by demanding that the angle between the domain wall and the geodesic is continuous across the domain wall[^7]. If $u$ and $v$ are the 4-velocities of the geodesic and the domain wall, we demand $$u\cdot v|_{\rm out} =u\cdot v|_{\rm in} ~.$$ Since $\alpha$ is the rapidity of the geodesic and $\eta$ is the rapidity of the domain wall, $$u\cdot v|_{\rm in} =\cosh(\eta - \alpha)~.$$ Geodesics outside the domain wall have a simple 4-velocity $u_{\rm out} = (1, 0, 0, 0)$, and since we have identified $r_0 \eta$ as the proper time along the domain wall, the 4-velocity of the domain wall is $v = ({1 \over r_0} ~ d t_w /d \eta, ...)$. Using the equation (\[eq-tw\]) for the trajectory of the domain wall, $$u\cdot v|_{\rm out} ={1 \over r_0} \frac{d t_w }{d \eta} =\frac{\cosh\eta}{r_0\sinh\eta+\sqrt{1-r_0^2}}~.$$ Thus the equation determining $\alpha$ is $$\frac{\cosh\eta}{r_0\sinh\eta+\sqrt{1-r_0^2}} =\cosh(\eta-\alpha)~. \label{eq-angle}$$ It is convenient to combine Eq. (\[eq-angle\]) and (\[eq-tw\]) to get $$\cosh \eta \exp \left[-(t_w - t_{\rm nuc})\right] = \cosh(\eta - \alpha)~.$$ Simplifying we find $$(t_w - t_{\rm nuc})=\alpha-\ln(1+\varepsilon)~, \label{eq-raptw}$$ where $$\varepsilon=\frac{e^{2 \alpha} - 1}{e^{2 \eta} + 1}~. \label{eq-eps}$$ This is a convenient rewriting because one can show that $\varepsilon \ll 1$ for all geodesics as long as the critical bubble size is small in Hubble units, $r_0 \ll 1$. We want to rewrite the geodesics in terms of the open FRW coordinates which will be adapted to the cosmological evolution inside the bubble. 
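The chain of relations (\[eq-tw\]), (\[eq-angle\]), (\[eq-raptw\]), and (\[eq-eps\]) can be checked numerically: given $\eta$, solve (\[eq-angle\]) for the rapidity $\alpha$, then verify that the rewriting (\[eq-raptw\]) holds and that $\varepsilon$ is indeed small for $r_0\ll 1$. The grid of $\eta$ values below is arbitrary:

```python
import math

r0 = 0.01        # critical bubble size in units H_out = 1 (r0 << 1)
results = []
for eta in (0.5, 2.0, 5.0, 10.0):   # wall rapidities (arbitrary grid)
    # eq-tw: geodesic time since nucleation at which the wall has rapidity eta
    tw = math.log(r0 * math.sinh(eta) + math.sqrt(1.0 - r0 * r0))
    # eq-angle, rewritten: cosh(eta - alpha) = cosh(eta) * exp(-tw)
    alpha = eta - math.acosh(math.cosh(eta) * math.exp(-tw))
    # eq-eps
    eps = (math.exp(2 * alpha) - 1.0) / (math.exp(2 * eta) + 1.0)
    results.append((tw, alpha, eps))
    print(eta, alpha, eps)
```

For every $\eta$ on the grid, $t_w - t_{\rm nuc} = \alpha - \ln(1+\varepsilon)$ holds to machine precision, and $|\varepsilon|$ stays well below 1, consistent with the claim in the text.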
For $\tau\ll H_{\rm in}^{-1}$, where the geometry is approximately Minkowski space, the relationship is $$\begin{aligned} \tau &=& \sqrt{T^2-R^2} \label{eq-tau} \\ \xi &=& \tanh^{-1}\frac{R}{T} \label{eq-xi}~.\end{aligned}$$ Using the trajectory in $(R,T)$ given by Eq. (\[eq-T\]), (\[eq-R\]), we find the trajectory in FRW coordinates $$\begin{aligned} \xi&=&\alpha+{O}(\frac{1}{\tau})~, \label{eq-xia} \\ \tau &=&\sqrt{(t - t_w)^2+2r_0(t-t_w)\sinh(\eta-\xi)-r_0^2}~. \label{eq-taua}\end{aligned}$$ Our goal is to manipulate all of the above equations in order to find a single equation for the geodesic time since nucleation, $ t - t_{\rm nuc}$, as a function of the natural coordinates $\tau, \xi$ inside the bubble. We will be interested in events which occur a reasonable distance away from the domain wall, so that $\tau, t-t_{\rm nuc}, t-t_w \gg 1 $. So we can drop the subleading terms in (\[eq-xia\]) and just set the rapidity of the geodesic equal to the comoving coordinate, $\alpha = \xi$. Physically, the point is that the final comoving position of the geodesic is determined only by its velocity and not by its initial location. The nontrivial statement in (\[eq-xia\]) is that the geodesics become comoving in a time set by the Hubble scale outside the bubble, $H_{\rm out}^{-1}$. In Eq. 
(\[eq-taua\]) we can now set $\alpha = \xi$ and expand for large $t-t_w $ to get $$\tau =t-t_w + r_0 \sinh(\eta - \xi)~.$$ Going back to (\[eq-angle\]) and solving for $\sinh(\eta - \xi)$ we find $$\sinh(\eta - \xi) = \frac{\sqrt{1-r_0^2}\sinh\eta-r_0} { r_0 \sinh\eta+\sqrt{1-r_0^2}}~.$$ Using the relation (\[eq-tw\]) between $\eta$ and $t_w$ this can be rewritten as $$\sinh(\eta - \xi) = {1 \over r_0} \left[\sqrt{1 - r_0^2} - \exp \left[-(t_w-t_{\rm nuc})\right] \right]~.$$ So we have an equation relating the geodesic time since nucleation to the FRW time $\tau$ and the time $t_w$ the geodesic crosses the domain wall: $$\tau = t - t_w + \left[\sqrt{1 - r_0^2} - \exp[-(t_w-t_{\rm nuc})] \right]~.$$ Now we can use the relation (\[eq-raptw\]) between the rapidity $\alpha$ and $t_w$, together with $\alpha= \xi$, to get $$\tau = t - t_{\rm nuc}-\bigg[\xi + e^{- \xi} - \sqrt{1-r_0^2}+ \varepsilon e^{-\xi} -\ln(1+\varepsilon)\bigg]~, \label{eq-exact}$$ where $\varepsilon$ is given by (\[eq-eps\]). Expanding in $\varepsilon$ and restoring the factors of $H_{\rm out}$, we get the final formula relating the geodesic time to the natural coordinates inside the bubble: $$t - t_{\rm nuc} = \tau + H_{\rm out}^{-1} \left[ \xi + e^{- \xi} - 1 + ... \right]~. \label{eq-app}$$ As expected, the difference between the geodesic proper time and the open FRW time depends non-trivially on the radial FRW coordinate $\xi$. The spacetime location of a typical observer {#sec-prob} ============================================ The proper time measure makes nontrivial and interesting predictions for vacuum selection, which do not appear to contradict anything we know [@CliShe07]. However, as soon as we ask about the probabilities of different observations in the same vacuum, the measure wildly conflicts with observation. It has two properties that result in a squeeze. On the one hand, for an observation to be counted, it must occur before the cutoff $t$. 
On the other hand, the multiverse as a whole is expanding exponentially on a microscopic characteristic time scale. This makes it favorable to wait as long as possible until creating a low-energy, slowly expanding region like the one in which we are making our observations, and it strongly favors observations that happen soon after the fastest expanding vacuum has decayed. This is the general idea of the youngness paradox  [@LinLin96; @Gut00a; @Gut00b; @Gut04; @Teg05; @Lin07; @Gut07]. We will present one explicit calculation to show the fact that, within bubbles identical to ours, the probability to live at $13.7$ Gyr is vanishingly small compared to the probability to live at $13$ Gyr. There is nothing new in this calculation, but it will be easier to see how the exact geometry we found goes into the paradox in Sec. \[sec-bubble\], and how to analyze possible modifications in Sec. \[sec-fixes\]. Another manifestation of the youngness paradox is that if a number of tunneling events are necessary to get from the fastest inflating vacuum to our host vacuum, these successive tunneling events will tend to be separated by only the Planckian time interval $H_{\rm big}^{-1}$. Since the tunneling events are not well-separated, this renders it difficult to compute semiclassically. However, since such a quick succession of tunneling events does not obviously contradict observation, we sidestep this difficulty here by assuming that our vacuum is produced directly from the fastest inflating vacuum. Hence we set $$H_{\rm out} = H_{\rm big}~.$$ This simplification makes the problem more well-defined; however, every indication is that the characteristic time scale appearing in the youngness paradox is $H_{\rm big}^{-1}$, regardless of this simplification. The youngness paradox in the square bubble approximation -------------------------------------------------------- We begin with an analysis in the square bubble approximation defined in Sec. \[sec-square\]. 
By assumption, each bubble appears as a flat patch of the same physical size. Hence, the comoving volume $V_x$ taken up by a bubble in the outside metric goes like $\exp(-3H_{\rm big} t_{\rm nuc})$. Note, however, that we have rescaled the Euclidean spatial coordinates inside the bubble, $d{\mathbf y}=\exp( H_{\rm big} t_{\rm nuc}) d{\mathbf x}$, so as to make the metric in each bubble explicitly the same (not just equivalent by diffeomorphism). In the sequel, “comoving volume” will usually refer to the inside metric and will accordingly be denoted $V_y$. It will be the same for each bubble, and so will drop out of probabilities. Let $n(t)$ be the total number of bubbles of our type produced prior to the time $t$. This grows exponentially with time: $$n(t)=C \exp(3 H_{\rm big}t)~. \label{eq-nour}$$ Here $C$ is a fixed constant, which depends on the size of $\Sigma_0$, the initial state, and the rate at which our vacuum is produced (directly or indirectly) by the fastest inflating vacuum. This constant will drop out in all ratios. The nucleation of a bubble like ours will be followed by the formation of observers. Let $dN^{(1)}$ be the number of observations of some type, in a single bubble of our type, in a comoving volume of size $dV_y$ during the proper time interval $(\tau, \tau+d\tau)$ after the formation of the bubble. By the homogeneity of the FRW universe, $dN^{(1)}$ will depend only on the FRW time, $\tau$, so we can write $$dN^{(1)} = f(\tau)\, d\tau \, d V_y ~.$$ The function $f(\tau)$ can be thought of as an observer density. As long as these observations involve looking out into the sky, they will usually be different at different times $\tau$. For simplicity, we begin by treating $\tau$ itself as an “observable”, and computing the probability density $$\frac{dp}{d\tau}\propto \lim_{t\to\infty}\frac{dN}{d\tau}(t)~.$$ This is the probability for an observer to find themselves living a time $\tau$ after the big bang of their bubble. 
Both the observer distribution, $f(\tau)$, and the volume per bubble, $V_y$, are the same for all bubbles of our type, by the above assumptions. Therefore, the total number of $\tau$-observations made by the time $t$ depends on $t$ only through the total number of bubbles $n(t-\tau)$ produced prior to the time $t-\tau$: $$\begin{aligned} \frac{dN}{d\tau}(t) &=& n(t - \tau) \frac{dN^{(1)}}{d\tau} (\tau) = f(\tau)\, V_y\, n(t-\tau) \nonumber \\ &=& f(\tau)\, V_y\, \exp \left[3 H_{\rm big} (t-\tau) \right]~.\end{aligned}$$ Since the $t$ dependence of the answer is just an overall normalization, it drops out of the probability distribution and we get the simple answer $$\frac{dp}{d\tau}\propto f(\tau) \exp(-3H_{\rm big}\tau) ~. \label{eq-dpddt}$$ In our universe, it is reasonable to assume that $f(\tau)$ has a broad (at least Gyr-scale) peak at some $\tau_{\rm peak}\sim O(10$ Gyr$)$, since at early times, there was no structure, and at late times, there will be no free energy. In any case, there will be no features in $f(\tau)$ that can possibly compete with the exponential factor in Eq. (\[eq-dpddt\]), which suppresses the probability of late-time observations at a characteristic rate set by the microphysical scale $H_{\rm big}$. For example, with Planckian $H_{\rm big}\sim O(1)$, there are at any time $t$ $$\frac{f(13\, {\rm Gyr})}{f(13.7\, {\rm Gyr})} \exp(10^{60})\approx \exp(10^{60})$$ observers who live 13 Gyr after their local big bang, for every observer like us. Thus, the probability of seeing a 13.7 Gyr old universe with a 2.7 K background temperature is vanishingly small compared to the observation of a warmer CMB and a somewhat younger universe. This obviously contradicts experiment. Note that the probability for what we do see is so small that our observations so far are, by any standards applied in science, perfectly sufficient to rule out the theory—or in this case, the measure. 
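The size of the suppression exponent can be confirmed with a back-of-the-envelope computation. The sketch below (stdlib Python; the rounded values for the Planck time and the gigayear are our assumptions, not taken from the text) checks that $3 H_{\rm big}\,\Delta\tau$ accumulated over $\Delta\tau = 0.7$ Gyr is indeed of order $10^{60}$ when $H_{\rm big}$ is Planckian:

```python
import math

# Back-of-the-envelope check (rounded constants assumed here): with a
# Planckian rate H_big ~ 1/t_Planck, the suppression exponent
# 3*H_big*dtau over dtau = 0.7 Gyr is of order 10^60.
T_PLANCK = 5.39e-44              # Planck time in seconds
GYR = 3.156e16                   # one gigayear in seconds
dtau = 0.7 * GYR                 # 13.7 Gyr - 13 Gyr
exponent = 3 * dtau / T_PLANCK   # 3 H_big dtau with H_big = 1/t_Planck
print(f"3 H_big dtau ~ 1e{math.log10(exponent):.0f}")
```

Any sub-Planckian $H_{\rm big}$ merely lowers the exponent by a few orders of magnitude, which does not change the conclusion.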
Explicit conflict with observation {#sec-mod} ---------------------------------- A possible objection to the above analysis is the fact that $\tau$, the time since the big bang in our bubble, is not a physical observable. Therefore, following Tegmark [@Teg05], let us verify explicitly that the youngness paradox manifests itself in the probability distribution for physical observables. Some observational consequences of the youngness pressure in this measure were described more than ten years ago by Linde, Linde, and Mezhlumian [@LinLin96]. There, the authors note that the proper time measure predicts that we are living at the center of an underdense region they refer to as an “infloid”. The effect they discuss arises because regions which spend less time in slow roll inflation (hence regions which reheat sooner and therefore are underdense) are rewarded. We focus here on a different effect which is more clearly in conflict with observation: the fact that typical observers see a different temperature than we do. The probability distribution for the temperature is $${dp \over dT} = \int d\tau {dp \over d \tau} g(T |\tau)~,$$ where $g(T |\tau)$ is the probability distribution for temperatures at a fixed FRW time. For temperatures not too far from the average value, $$g(T |\tau) \propto {1 \over T_{\rm av}(\tau)} \exp \left[ - 10^{10} \left({T - T_{\rm av}(\tau) \over T_{\rm av}(\tau)} \right)^2 \right]~,$$ where $T_{\rm av}(\tau)$ is the average temperature at time $\tau$, and the factor of $10^{10}$ arises due to the magnitude of the density perturbations. The probability distribution becomes $${dp \over dT} \propto \int d\tau {f(\tau) \over T_{\rm av}(\tau)} \exp\left[-3 H_{\rm big} \tau - 10^{10} \left({T - T_{\rm av}(\tau) \over T_{\rm av}(\tau)} \right)^2 \right]~.$$ For the moment, let us ignore observations occurring before 10 Gyr, because for early enough times these formulas will break down. 
For the times under consideration, the average temperature satisfies $${T_{\rm av}(\tau_1) \over T_{\rm av}(\tau_2)} = \left({\tau_2 \over \tau_1}\right)^{2/3}~.$$ The probability distribution for temperature becomes $${dp \over dT} \propto \int_{10 ~{\rm Gyr}} d\tau\, \tau^{2/3} f(\tau) \exp \left[ - 3 H_{\rm big} \tau - 10^{10}\left( \left(T \over 3.3 K \right) \left(\tau \over 10 ~{\rm Gyr}\right)^{2/3} - 1 \right)^2 \right]~.$$ The dominant factor in the integrand is $\exp(- 3 H_{\rm big} \tau )$, since this factor varies over the microphysical time scale $H_{\rm big}$, while the other factors vary on much larger time scales. Thus the integral is dominated by the lower limit. So the probability distribution for the temperature, once fluctuations are taken into account, is just equal to the distribution at the early time cutoff. Dropping a $T$-independent normalization factor, we find $${dp \over d T} \propto g(T |\tau = 10 ~{\rm Gyr}) \propto \exp \left[ - 10^{10} \left({T - 3.3 K \over 3.3 K} \right)^2 \right]~. \label{eq-uf}$$ It is easy to see that this prediction is ruled out at great confidence by our observation that $T=2.7$ K. The conflict only becomes worse as the early time cutoff is reduced. Exact treatment of the bubble geometry {#sec-bubble} -------------------------------------- In this subsection, we will improve on the above analysis by taking into account the actual dynamics and shape of bubble walls. Our treatment will clarify the extent to which the square-bubble approximation is justified, and confirm that the youngness paradox arises in the proper-time measure. Inside a single bubble, as before we define a function $f(\tau)$ giving the number of observations per unit comoving volume per proper time, $$dN^{(1)} = f(\tau)\, d\tau\, d V_c = f(\tau)\, 4 \pi \sinh^2\xi ~d\xi\, d\tau ~.$$ To get the total number of observations, $dN$, at given $(\tau, \xi)$, we must sum over all bubbles. 
We can organize this sum in terms of the time $t_{\rm nuc}<t$ when each bubble was nucleated. Note that given the coordinates $(\tau, \xi)$ inside the bubble, there is an upper limit $ t_{\rm nuc}^{\rm max}(\tau, \xi)$ on the nucleation time so that the region of interest can be produced before the time $t$. This relationship was derived in Sec. (\[sec-exact\]). The sum becomes $${dN \over d\tau d\xi}=\int_{0}^{ t_{\rm nuc}^{\rm max}} d t_{\rm nuc} \left.\frac{dn}{dt}\right|_{t_{\rm nuc}} {dN^{(1)} \over d\tau d\xi}~, \label{eq-gg}$$ where the bubble production rate $dn/dt$ is still given by Eq. (\[eq-nour\]). Plugging in, we get $${dN \over d\tau d\xi} = \int_{0}^{ t_{\rm nuc}^{\rm max}} d t_{\rm nuc}\, C \exp \left(3 H_{\rm big} t_{\rm nuc} \right) f(\tau)\, 4 \pi \sinh^2\xi$$ where, as derived in (\[eq-app\]), $$t_{\rm nuc}^{\rm max} = t - \tau-H_{\rm big}^{-1}\left( \xi + e^{- \xi} - 1 + ... \right)~.$$ Performing the integral and dropping constant factors, we get $$\begin{aligned} & &{dN \over d\tau d\xi} = \left[ e^{3 H_{\rm big} t^{\rm max}_{\rm nuc}} -1 \right] ~ f(\tau) \sinh^2 \xi \\ \nonumber &=&\left[e^{3 H_{\rm big} \left[t - \tau -H_{\rm big}^{-1}(\xi +e^{-\xi} - 1 + ...)\right] }- 1 \right] f(\tau)\, \sinh^2\xi \end{aligned}$$ Taking the limit $t \to \infty$, we can ignore the “$-1$” coming from the lower limit of integration; in this limit the $t$ dependence is only an overall multiplicative factor which vanishes upon normalization. Thus we obtain a simple formula for the probability distribution $${d N \over d \xi d \tau} = f(\tau) e^{-3 H_{\rm big} \tau}~ \sinh^2\xi e^{-3 (\xi+e^{-\xi} - 1 + ...)}$$ The striking feature of this probability distribution is that it factorizes into a function of the spatial coordinate $\xi$ times a function of the FRW time $\tau$. This is exactly true, because the “$\ldots$” appearing in the formula is a function of $\xi$ only. 
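The claimed factorization is easy to check numerically. In the sketch below (stdlib Python; the observer density $f(\tau)$, the cutoff $t$, and $H$ are arbitrary toy choices, not values from the text), a cross-ratio built from two $\tau$-values and two $\xi$-values equals $1$ exactly when the distribution splits into a product of a $\tau$-function and a $\xi$-function; the "$-1$" from the lower limit of integration spoils this only by corrections of order $e^{-3 H t_{\rm nuc}^{\rm max}}$, invisible at double precision for large $t$:

```python
import math

# Toy check that dN/(dtau dxi) factorizes at large t into a function of
# tau times a function of xi (H, t, and f are placeholder choices).
H, t = 1.0, 50.0

def f(tau):                      # hypothetical observer density
    return tau**2 * math.exp(-tau)

def dN(tau, xi):
    t_max = t - tau - (xi + math.exp(-xi) - 1.0) / H
    return (math.exp(3 * H * t_max) - 1.0) * f(tau) * math.sinh(xi)**2

# For a product form A(tau)*B(xi) this cross-ratio is exactly 1.
r = (dN(1.0, 0.5) * dN(2.0, 1.5)) / (dN(1.0, 1.5) * dN(2.0, 0.5))
assert abs(r - 1.0) < 1e-9
```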
The distribution as a function of $\tau$ is given by $$\frac{dp}{d\tau}\propto f(\tau) e^{-3 H_{\rm big} \tau}~. \label{eq-eyp}$$ This distribution is exactly the same as Eq. (\[eq-dpddt\]), which was derived in the square bubble approximation. So the youngness paradox appears in exactly the same way in the true geometry. The spatial distribution of observers at a fixed FRW time $\tau$ is $$\frac{dp}{d\xi}\propto \sinh^2\xi \exp\left[-3 (\xi+e^{-\xi} - 1 + ...)\right]$$ This distribution peaks at $\xi$ of order one, and falls exponentially for large $\xi$. So most observers live within a few curvature radii of the “center of the universe.” The center is defined by the geodesic piercing the bubble nucleation point. Why did the square bubble approximation work? {#sec-location} --------------------------------------------- The main effect of the correctly computed probability distribution over $\xi$ is to allow only an effective comoving volume $$V_{\xi, {\rm eff}}=4\pi\int_0^\infty d\xi \sinh^2\xi \exp\left[-3(\xi-1+e^{-\xi})\right]\approx 15.75$$ to contribute for every bubble. The above result could not have been computed in the square bubble approximation, but it explains why that approximation worked for computing the [*temporal*]{} distribution of observers. The point is that in a large regime, it is possible to identify the [*finite*]{} spatial region containing typical observers with the finite flat patch of “new vacuum” inserted by hand in the square bubble approximation. This identification is not a true match, because of the different spatial curvature. But during and after inflation, there is a long period where curvature is negligible and the scale factor would be the same function of time in a flat FRW universe. If the observations contributing to the measure occur in this regime, then the use of spatially flat time-slices in the square-bubble approximation will be legitimate. The effective physical volume at large FRW time $\tau$ is $15.75\, a^3(\tau)$. 
During inflation, $a(\tau)=H_{\rm inf}^{-1}\sinh H_{\rm inf}\tau$. To match this to the square bubble physical volume, $V_y H_{\rm inf}^{-3} \exp\left(3H_{\rm inf} \tau \right)$, at $\tau \gg H_{\rm inf}^{-1}$, requires the choice $$V_y=V_{\xi, {\rm eff}}/8\approx 2~. \label{eq-vy}$$ Note that in a problem involving different types of bubbles, the physical volume of a new bubble will be of order $H_{\rm inf}^{-3}$. Generically $H_{\rm inf}$ will be smaller than the outside Hubble constant. If we took Eq. (\[eq-vy\]) literally, the square bubble approximation would involve replacing a large number of outside Hubble volumes with the new vacuum. This contradicts the geometric fact that asymptotically, the new bubble takes up the comoving volume occupied by only one outside Hubble volume at the time of nucleation. Of course, the choice of $V_y$ dropped out of ratios, so it could be reduced without affecting relative probabilities. In any case, while the square bubble approximation turned out to be a useful shortcut under the above assumptions, it is just as simple, and much more reliable, to use the exact geometry, as encoded in Eq. (\[eq-exact\]), to compute probabilities. Modifications of the proper time measure {#sec-fixes} ======================================== Obviously, the result that practically all observers live at a much earlier time, and see a very different universe, than we do, is fatal for the proper time measure. Perhaps the measure can be modified in some way, so as to avoid this problem? Don’t ask, don’t tell --------------------- Linde advocates a simple resolution to the youngness paradox in Ref. [@Lin06] (see also references therein). One should simply not ask how long after reheating the typical observers form, but merely compute the rate at which reheating hypersurfaces of different inflating vacua are produced. This restriction has a number of problems. 
If we cannot ask about the temperature measured by a typical observer, the measure is not complete. Moreover, if we cannot ask about observers, then we cannot count them, and so we cannot condition on their number. This would eliminate the anthropic solution to the cosmological constant problem. And finally, as noted in Ref. [@Lin07], this restriction does not fully solve the youngness problem in any case. It merely confines the problem to effects before reheating. In particular, it gives overwhelming weight to vacua with a shorter period of inflation, and thus predicts a wide open universe. Thus, a different modification is needed. A general idea for such a fix was outlined in Ref. [@Lin07]: “One should compare apples to apples, instead of comparing apples to the trunks of the trees.” In other words, we should assign a correction factor $e^{3 H_{\rm big} \Delta t_i}$ to the probability $p_i$ for the observation $O_i$, where $\Delta t_i$ is the amount of time it takes to produce such an observation, in some relative sense to be defined below. The corrected relative probabilities are thus: $$\begin{aligned} \frac{P(O_1)}{P(O_2)} &=& \frac{p(O_1)}{p(O_2)} \exp[3 H_{\rm big} (\Delta t_1-\Delta t_2)] \\ \nonumber &=&\lim_{t\to\infty}\frac{N_1(t)}{N_2(t)} \exp[3 H_{\rm big} (\Delta t_1-\Delta t_2)]~. \label{eq-fix}\end{aligned}$$ Compared to Eq. (\[eq-prob\]), this bolsters mature folk like ourselves, by compensating for the enormous volume growth that Boltzmann babies can take advantage of. No general, sharp definition of $\Delta t_i$ was attempted in Ref. [@Lin07], where explicit calculations were carried out only for a model containing two vacua with different lengths of inflation; $\Delta t_i$ was defined to be the duration of inflation in each vacuum. While Ref. 
[@Lin07] claimed that the procedure also resolves other aspects of the youngness paradox, such as the overwhelming probability for a hotter universe, it offered no definition of $\Delta t$ in that context, nor did it display an explicit computation of the corrected probability. In fact, we have been unable to come up with a general definition of $\Delta t$ that succeeds in fixing the youngness problem. This does not mean that it cannot be done. But perhaps it will help sharpen the challenge if we discuss a few proposals that may come to mind.[^8] Spatial averaging ----------------- The prediction that we should observe a warmer CMB temperature arose from the fact that it takes longer to produce observers who see low CMB temperatures. To get a more reasonable probability distribution for the CMB temperature, we want to eliminate the enormous cost of waiting for the universe to cool off. It seems reasonable to assign $\Delta t(T)$ as the amount of FRW time until the average background temperature is $T$. Since we will only be comparing observations within the same bubble and after reheating, an additive constant in $\Delta t$ is unimportant and so we can start our clock at any time we like. We explore this proposal mostly because it seems like the most straightforward and naive fix. In fact, it is unclear how the above definition would generalize to observables that can take on the same value at very different times. Even for the temperature, small perturbations render the relation between its average value and a particular time slice ambiguous at the $10^{-5}$ level—much too large to define $\Delta t$ with the required Planckian precision. We will disregard all these issues, since the modification fails even in the idealized special case we consider. 
For temperatures and times close to those we observe, the average temperature on $\tau=$ const slices satisfies $${T(\tau_1) \over T(\tau_2)} = \left({\tau_2 \over \tau_1}\right)^{2/3}~.$$ Thus, $\Delta t(T)$ is given by $$\Delta t(T) = (13.7~ {\rm Gyr}) \left( 2.7~{\rm K} \over T \right)^{3/2}~. \label{eq-classical}$$ The modification fails because it is comparatively easy to find deviations from the average temperature. To see this, let us begin by considering a further idealization: Let us exclude fluctuations of $T$. In other words, we will assume that the CMB temperature, at fixed $\tau$, is given everywhere precisely by the same value. With this additional idealization, the modification actually works! There is now a one-to-one correspondence between $\tau$ and $T$, so we can use Eq. (\[eq-dpddt\]) to obtain the (unmodified) probability distribution for $T$: $${dp \over dT} = {dp \over d\tau} {d\tau \over dT} \propto f(\tau(T)) {d\tau \over dT} \exp \left[-3 H_{\rm big} \tau(T) \right] \label{eq-cluf}$$ where $f(\tau)$, as before, is the rate of observations per comoving volume per unit time per bubble. The quantity $f(\tau(T)) {d\tau \over dT} $ is naturally identified as $f(T)$, the rate of observations per comoving volume per unit background temperature per bubble. Using the previous formula relating time to temperature, we find $${dp \over dT} \propto f(T) \exp\left[- 3 H_{\rm big} \cdot (13.7~{\rm Gyr}) \left(2.7~{\rm K} \over T\right)^{3/2} \right] ~.$$ Still working in the idealization of exactly homogeneous background temperature, let us now compute the modified probability distribution for temperature. It is $${dP \over dT} \propto f(T) \exp \left[-3 H_{\rm big} (\tau(T) - \Delta t(T)) \right]$$ We have defined $\Delta t$ so that the exponent is zero, so the modified probability distribution for temperature is simply proportional to the number of observations at each temperature, $${dP \over dT} \propto f(T)~. 
\label{eq-dream}$$ This answer seems intuitive and has no youngness problem. (See, however, the discussion at the end of Sec. \[sec-anticipate\].) Once we allow for fluctuations of the temperature, $\Delta t(T)$ can still be defined in terms of the average temperature. But our recipe for repairing the probabilities no longer works. Now the starting point is the (unmodified) probability distribution obtained in Sec. \[sec-mod\], Eq. (\[eq-uf\]). After applying Eq. (\[eq-fix\]), with $\Delta t(T)$ given by Eq. (\[eq-classical\]), we obtain the modified distribution $${dP \over d T} = \exp \left[3 H_{\rm big}\Delta t(T) \right] g(T |\tau = 10 ~{\rm Gyr})$$ Using $$\Delta t(T) = (10 ~{\rm Gyr}) \left( 3.3 K \over T \right)^{3/2}~,$$ and assuming Planckian $H_{\rm big}$, we get $${dP \over d T} = \exp \left[ 10^{61} \left( 3.3 K \over T \right)^{3/2} - 10^{10} \left({T - 3.3 K \over 3.3 K} \right)^2 \right]$$ The temperature is now driven to the [*lowest*]{} possible value. It is still favorable to live early, and because of the primordial density fluctuations, it is not all that hard to find an anomalously cool region even at early times. Our modification factor rewards us for this as if we had honestly waited until the average temperature becomes so low. Thus, it overcompensates. This new distribution is also ruled out, at enormous confidence level, by our observation of 2.7 K. Waiting for the first time -------------------------- Another possibility is to define $\Delta t_i$ for the observation of type $O_i$ as the time it takes the universe, starting from the beginning of time (the slice $\Sigma_0$), to produce the first such observation. Thus defined, $\Delta t_i$—and hence, the corrected probabilities—will depend on the initial conditions on $\Sigma_0$. This dependence may be mild, and in any case we can see no reason why probabilities (at least for some observables) should not depend on the initial conditions of the universe. 
However, if we define $\Delta t_i$ as the time when $N_i$ jumps from 0 to 1, then it will also depend on accidents of the semiclassical evolution at early times, such as the time when a particular tunneling event happens to take place, and we would not be able to compute it directly from the theory. This problem can be resolved by defining $\Delta t_i$ to be the time when the expectation value $\langle N_i(\Delta t)\rangle$ becomes 1.[^9] This still depends on initial conditions but can be computed from the theory in the semiclassical regime. However, this definition conflicts with an important property of probabilities. Consider the special case that $O_1$ and $O_2$ are mutually exclusive outcomes of an experiment. For example, outcome $O_1$ ($O_2$) may be up (down) when the spin of a single electron is measured by a man in a penguin suit. In general there may be additional possible outcomes $O_3,\ldots$, but in any case, it must be true that $$p_1+p_2=p_{12}~,$$ where $p_{12}$ is the probability for the outcome “1 or 2”. Indeed, this property will be satisfied by the original probabilities defined in Eq. (\[eq-prob\]). However, $\Delta t_{12}<\Delta t_i$, $i=1,2$, because the expected time when “1 or 2” is first observed is simply the time when the experiment is first likely to be performed. This is sooner than the expected time when, say, 1 is first observed, since the very first experiment can only have one outcome. Therefore, the corrected probabilities do not add up correctly: $$\begin{aligned} P_1+P_2&=& p_1 e^{3 H_{\rm big} \Delta t_1} +p_2 e^{3 H_{\rm big} \Delta t_2} \\ \nonumber &>&(p_1+p_2) e^{3 H_{\rm big} \Delta t_{12}} = P_{12}~,\end{aligned}$$ Another way of saying this is that we can change the total probability for a set of alternative outcomes by whether we view the alternatives separately and add probabilities, or group the alternatives together and directly compute the probability for this compound outcome. This is clearly absurd. 
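The inequality is easy to exhibit with toy numbers. In the sketch below (stdlib Python; the probabilities and time shifts are arbitrary illustrative values, constrained only by $\Delta t_{12}<\Delta t_1,\Delta t_2$), the corrected probabilities overshoot exactly as claimed:

```python
import math

# Toy-number illustration of the additivity failure: whenever the compound
# outcome "1 or 2" is first observed earlier than either outcome alone
# (dt12 < dt1, dt2), the corrected probabilities satisfy P1 + P2 > P12.
H3 = 3.0                          # stands in for 3*H_big (arbitrary units)
p1, p2 = 0.2, 0.3                 # uncorrected probabilities (toy values)
dt1, dt2, dt12 = 1.0, 1.2, 0.8    # expected first-observation times (assumed)

P1 = p1 * math.exp(H3 * dt1)
P2 = p2 * math.exp(H3 * dt2)
P12 = (p1 + p2) * math.exp(H3 * dt12)
assert P1 + P2 > P12              # corrected probabilities fail to add up
```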
A particularly simple and striking result obtains if we assume initial conditions that are already in the stationary regime. Then $$\langle N_i(t)\rangle = V p_i (e^{3 H_{\rm big} t}-1)~. \label{eq-nstat}$$ The uncorrected probabilities $p_i$ are dynamically determined by the attractor behavior; the overall scaling $V$ depends on the volume of $\Sigma_0$. We find for the correction factor $$e^{3 H_{\rm big} \Delta t_i}= \frac{Vp_i+1}{Vp_i}~,$$ and hence $$\frac{P_i}{P_j}=\frac{V p_i + 1}{V p_j+1}. \label{eq-vp}$$ In the large volume limit, there is no correction and the youngness paradox persists. For finite $V$, the corrected probabilities do not obey $P_1+P_2=P_{12}$. In the limit $V\to 0$, all alternatives become equally likely, $P_i/P_j=1$, no matter how they were defined! Growing together ---------------- A different definition for $\Delta t_i$ may be motivated by another quote from Ref. [@Lin07]: $\Delta t_i$ is “the time when the stationary regime becomes established” for the observation $O_i$. Mathematically, we may attempt to capture this idea as follows. At late times, we know that the number of observations of any type will grow as $e^{3 H_{\rm big} t}$, so $\dot N_i/N_i\to 3 H_{\rm big} $. At any finite time, there will be a small correction to this time dependence, so we may define $\delta t_i$ as the earliest time when $$\left|1-\frac{\dot N_i(t_i)}{3 H_{\rm big} N_i(t_i)}\right| \leq \epsilon~.$$ It would seem arbitrary to specify a particular small deviation $\epsilon$ beyond which we consider the stationary regime established. Therefore, let us take the limit $\epsilon\to 0$. In this limit each $\delta t_i$ contains the same additive divergence, $-\log \epsilon$, which we discard and define $$\Delta t_i=\lim_{\epsilon \to 0}\delta t_i + \log\epsilon~.$$ To see that this measure does not work, let us focus again on the CMB temperature in our own vacuum.
For the sake of argument, suppose that no observers exist prior to some cutoff FRW time, say, $\tau_{\rm min}=10$ Gyr. By the results of Sec. \[sec-mod\], for any finite geodesic time $t$, practically all observations of [*any*]{} value of $T$ are made within a time of order $H_{\rm big}^{-1}$ after $\tau_{\rm min}$. Therefore, to accuracy $H_{\rm big}^{-1}$, $|1-{\dot N_T}/(3 H_{\rm big} N_T)|$ will drop below $\epsilon$ at the same time $t$, for any temperature $T$. Hence, $\Delta t_T$ is independent of $T$ to this accuracy.[^10] Therefore, our “modification” does not in fact change relative probabilities at all. To complete this argument, we should now take the cutoff FRW time, $\tau_{\rm min}$, earlier and earlier, until it is removed altogether. However, this introduces only information about early universe physics into our modification of the proper time measure. It cannot possibly restore a reasonable probability distribution for the CMB temperature measured by observers in the present era. It is interesting that like “spatial averaging”, the present modification [*would*]{} have worked for (fictitious) observables that are in one-to-one correspondence with the FRW time. In this case, it would not have been true that at any time $\tau$, there is a nonzero amplitude for any temperature $T$. Instead, $$N_{T_1}(t)= \frac{f(T_1)}{f(T_2)}N_{T_2}(t+\tau(T_2)-\tau(T_1))~,$$ and hence, $$\frac{\dot N_{T_1}}{N_{T_1}} (t)= \frac{\dot N_{T_2}}{N_{T_2}}(t+\tau(T_2)-\tau(T_1))~.$$ Then we would have found $$\Delta t_1-\Delta t_2=\Delta t(T_1)-\Delta t(T_2)~,$$ and we would have recovered the intuitive result of Eq. (\[eq-dream\]). Apparently, the problem with both of these modifications is that the [*value*]{} of an observable does not give us enough information about the FRW [*time*]{} when the observation is made—but this is precisely the time we would like to use for $\Delta t$. 
This motivates our final attempt at modifying the proper time measure, in which $\Delta t$ is defined not as a function of observables, but directly as a function of the time when the observation is made, regardless of its outcome. Anticipation {#sec-anticipate} ------------ Instead of tying $\Delta t$ to a specific observable, we can go back and fix the time shift directly for the geodesics. Effectively, it is like “anticipating” all observations that will happen in a given bubble. More precisely, let us project every observation $O_i$ inside a bubble back to the most recent bubble wall along the geodesics of the congruence, and count it toward $N_i(t)$ as soon as the relevant portion of the wall lies below $\Sigma_t$. This amounts to choosing $\Delta t_i$ to be the geodesic time between the domain wall and the observation. It is not difficult to see that this choice eliminates any pressure to make observations very early, and that it reduces to the $\Delta t$’s used in the specific example of Ref. [@Lin07]. However, in general it suffers from two major problems. First, it is not sharply enough defined. The prescription involves projecting onto domain walls. These objects have an inherent thickness, which can be microscopic, but need not be Planckian. This is a problem because we need a proposal which is well-defined at the length scale $H_{\rm big}^{-1}$, which may be Planckian. Moreover, an approximately defined object like a domain wall has no place in a fundamental definition of probabilities for all observations. There is a smooth interpolation between objects that appear obviously recognizable as domain walls, and general field configurations. (This objection could be raised also against other measures that involve domain walls in their definition, such as Ref. [@GarSch05].) On the observational side, the projection method suffers from the “Boltzmann brain” problem [@Pag06; @BouFre06b]. 
The reason is that we are now completely indifferent to when observations inside the bubble are made. By the results of Sec. \[sec-location\], we may focus on a single comoving volume at the center of any metastable de Sitter bubble with sufficiently small cosmological constant (such as, presumably, our own vacuum). An infinite number of observers are formed at late times in this volume, due to rare thermal fluctuations [@Pag06]. All of these Boltzmann brains will be projected back, and so will dominate over other observers. Thus, with probability 1, we should be Boltzmann brains, which is in conflict with observation [@DysKle02]. (Alternatively, we can interpret this infinity as telling us that projection defeats the most basic purpose of the measure, which is to regulate the infinities occurring in eternal inflation.) Mathematically, the Boltzmann brain problem shows up as follows. The effect of the “anticipation” modification is to render the temperature distribution apparently well-behaved: we have finally succeeded in producing the hoped-for Eq. (\[eq-dream\]). But this equation is a poisoned chalice: it is not as harmless as it looks. Boltzmann brains arise at a fixed rate per unit time and unit physical volume, not per comoving volume. Thus, the comoving observer density $f(\tau)$ grows exponentially with the scale factor at extremely late times, so $f(T)$ diverges at the Hawking temperature of the de Sitter space. We are grateful to Andrei Linde for extensive discussions. This work was supported by the Berkeley Center for Theoretical Physics, by a CAREER grant of the National Science Foundation, and by DOE grant DE-AC03-76SF00098. [^1]: bousso@lbl.gov, freivogel@berkeley.edu, jingking@berkeley.edu [^2]: We cannot conclude that our laboratory is the only one, since we could simply build a second one. Note that in the Hartle-Srednicki prescription, this would be inadvisable, since it would render the experiments performed at either laboratory less conclusive.
[^3]: In general, the question of what constitutes one observation is a difficult problem. For instance, it is not obvious precisely how many observations of the CMB temperature should be assigned to our local efforts on Earth. (See Ref. [@Bou06] for a recent proposal using entropy production.) However, these considerations are orthogonal to the issue at hand, which is the regularization of the infinite spacetime four-volume arising in eternal inflation. Here, we will assume that the local counting of observations is unambiguous. [^4]: A simple example arises even if there is only one false vacuum. Each true vacuum bubble collides with an infinite number of other such bubbles, so one may ask whether we are likely to live in a collision region. Leaving aside fatal effects of collisions, this probability is nevertheless exponentially small in the proper time measure considered here, and also in the causal diamond measure [@Bou06]. But if one averages over worldlines emanating from the nucleation point, all but a set of measure zero of them will immediately enter a collision region [@GarGut06]. [^5]: The proposal cuts off the infinite number of observers in different vacuum bubbles by restricting to a “unit comoving volume”, defined by appealing to the universality of the open universe metric inside every bubble at early times [@GarSch05]. But universality holds only if the thickness of the wall, and its collisions with other bubbles, are both neglected. These two assumptions cannot both be satisfied to a good approximation. [^6]: In the continuous case, one must take care with the order of limits. First we pick a finite $dT$ and define ratios as usual, by taking $t\to\infty$. Then we repeat the procedure while taking $dT\to 0$. [^7]: At the domain wall, the first derivative of the metric is discontinuous, resulting in a delta function in $R_{\mu\nu}$ in the thin wall approximation. 
However, because the connection only depends on first derivatives, we can find a local coordinate system where the connection $\Gamma^\gamma_{\mu\nu}$ is finite. This proves that the angle (inner product) between the geodesic and the domain wall is continuous across the wall.

[^8]: We thank Andrei Linde for discussions that influenced some of the definitions explored below. However, we make no claim that any of them reflect his views accurately.

[^9]: More generally, one could consider defining $\Delta t$ to be the time when the expectation value reaches some fixed value $N_{\rm min}$. In Eq. (\[eq-vp\]) below, the small volume limit is equivalent to taking $N_{\rm min}\to\infty$ at fixed $V$.

[^10]: A finite width $dT$ is implicit; see Eq. (\[eq-density\]) and the footnote thereafter. We use $N_T$ as a short notation for $\frac{dN}{dT} dT$. Our conclusion becomes strictly true only in the $\epsilon \to 0$ limit, when the total volume of the FRW cutoff surfaces between $\Sigma_0$ and $\Sigma_t$ becomes large enough to contain all possible values of $T$.
--- author: - 'M. Kappes, J. Kerp' title: 'A window to the Galactic X-ray halo: The ISM towards the Lockman hole' ---

Radiative transfer model for soft X-rays
========================================

The diffuse soft X-ray emission ($E<1$ keV) of the Milky Way is a sensitive tool to study the distribution of the photoelectric absorbing ISM. Our field of interest encloses the Lockman hole. It represents the absolute minimum of $N_\ion{H}{i}$ on the whole sky; accordingly, it is considered the “window to the distant universe”. To test this hypothesis we correlate the ROSAT all-sky survey energy bands [**R1**]{}, [**R2**]{}, [**C**]{} and [**M**]{} (Snowden et al. 1994) with the Leiden/Dwingeloo 21cm–line survey (Hartmann & Burton 1997). We choose a large portion of the sky ($60^{\circ} \times 60^{\circ}$) to model the ROSAT data across a high dynamic range in X–ray intensity and $N_\ion{H}{i}$. Deviations in plasma emissivity ($\propto n_{\rm e}^2$) or absorption ($N_\ion{H}{i}$) can be identified in the difference map between the modeled and observed X–ray intensity distributions. Following Kerp et al. (1999) we solve the X-ray radiative transfer equation: $$\label{eqn_radiation} I = I_{\rm LHB} + I_{\rm HALO} \cdot {\rm e}^{- \sigma_\mathrm{h} \cdot N_{\ion{H}{i},h}} + I_{\rm EXTRA} \cdot {\rm e}^{- \sigma_\mathrm{e} \cdot N_{\ion{H}{i},e}}$$ Here, $I_{\rm LHB}$ is the X–ray intensity of the Local Hot Bubble, $I_{\rm HALO}$ the diffuse X–ray emission of the Milky Way halo, and $I_{\rm EXTRA}$ the extragalactic X–ray background. The amount of the absorbing material is traced by the column density distribution ($N_\ion{H}{i}(l,b)$), while $\sigma$ denotes the photoelectric absorption cross section. Equation \[eqn\_radiation\] has four free ($I_{\rm LHB}; I_{\rm HALO}; T_{\rm LHB}; T_{\rm HALO}$) and two fixed parameters ($I_{\rm EXTRA}$, Barber et al. 1996; $\Gamma$, Hasinger et al. 2001).
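The three-component model of Eq. \[eqn\_radiation\] is straightforward to evaluate numerically. The sketch below is illustrative only: the intensities, column densities and cross sections are placeholder values, whereas the actual analysis uses band-averaged, temperature-dependent cross sections and the observed $N_\ion{H}{i}(l,b)$ maps.

```python
import math

def xray_intensity(N_h, N_e, I_lhb, I_halo, I_extra, sigma_h, sigma_e):
    """Modeled soft X-ray intensity along one line of sight: the
    unabsorbed Local Hot Bubble term plus the halo and extragalactic
    terms, each attenuated by photoelectric absorption."""
    return (I_lhb
            + I_halo * math.exp(-sigma_h * N_h)
            + I_extra * math.exp(-sigma_e * N_e))

# Placeholder numbers (not fitted parameters): column densities in
# units of 10^20 cm^-2, cross sections in the matching inverse units.
I = xray_intensity(N_h=0.6, N_e=0.6, I_lhb=300.0, I_halo=400.0,
                   I_extra=250.0, sigma_h=0.5, sigma_e=0.7)
```

A transparent sight line ($N\to0$) returns the sum of all three terms, while a heavily absorbed one reduces to the Local Hot Bubble foreground; the difference map between such a model and the data is what flags the Lockman hole anomaly.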
To determine the four free parameters we use a fit procedure (Kappes, Pradas & Kerp 2002) which approximates the C, M, C/M, and R1/R2 vs. $N_\ion{H}{i}$ diagrams [*simultaneously*]{}. This approach allows us to determine the plasma temperatures and $n_{\rm e}^2$ as a function of $N_\ion{H}{i}$ very accurately. [![[**left:**]{} The observed ROSAT C-band intensity of the selected field. [**right:**]{} The distribution towards the field of interest. Solid contours encircle regions where the modeled X–ray intensity is too faint, dashed contours mark too bright regions ($2.5\sigma$). Only the Lockman hole region is enclosed by the dashed lines, indicating that the $N_\ion{H}{i}$ does not trace the whole ISM.[]{data-label="xrhi"}](c-band.eps "fig:")]{} [![[**left:**]{} The observed ROSAT C-band intensity of the selected field. [**right:**]{} The distribution towards the field of interest. Solid contours encircle regions where the modeled X–ray intensity is too faint, dashed contours mark too bright regions ($2.5\sigma$). Only the Lockman hole region is enclosed by the dashed lines, indicating that the $N_\ion{H}{i}$ does not trace the whole ISM.[]{data-label="xrhi"}](HI-dev.eps "fig:")]{}

Results
=======

One of the main results is that $T_{\rm HALO} = 10^{6.2}\,$K$\,> T_{\rm LHB} = 10^{6.0}\,$K. We find a quantitative agreement between the expected photoelectric absorption and the observed X–ray intensity distribution across tens of degrees (see areas encircled by the contour lines in Fig. \[xrhi\], right). However, towards the Lockman hole our model fails to reproduce the X–ray intensity distribution. It appears that the Lockman hole is not as transparent as the data suggest. We propose that about half of the X–ray absorbing ISM towards the Lockman hole is in the form of [*ionized*]{} hydrogen rather than neutral atomic hydrogen. In a forthcoming paper (Kappes et al. in prep) more details will be presented. Barber, C. R., Roberts, T. P., Warwick, R. S.
1996, MNRAS, 282, 157-166
Hartmann, D., Burton, W. B. 1997, Cambridge University Press
Hasinger, G., et al. 2001, A&A, 365, 45
Kappes, M., Pradas, J., Kerp, J. 2002, in Proc. of “New Visions of the X-ray Universe in the XMM-Newton and Chandra Era”, ESA SP-488, Eds. Jansen F. et al., in press
Kerp, J., Burton, W. B., Egger, R., Freyberg, M. J., Hartmann, D., Kalberla, P. M. W., Mebold, U., & Pietz, J. 1999, A&A, 342, 213
Snowden, S. L., McCammon, D., Burrows, D. N., & Mendenhall, J. A. 1994, ApJ, 424, 714

The authors would like to thank the Deutsches Zentrum für Luft- und Raumfahrt for financial support under grant No. 50 OR 0103.
--- abstract: 'Style transfer is the problem of rendering an image with some content in the style of another image, for example a family photo in the style of a painting by some famous artist. The drawback of the classical style transfer algorithm is that it imposes style uniformly on all parts of the content image, which perturbs central objects of the content image, such as faces or text, and makes them unrecognizable. This work proposes a novel style transfer algorithm which automatically detects central objects on the content image, generates a spatial importance mask and imposes style non-uniformly: central objects are stylized less, to preserve their recognizability, while other parts of the image are stylized as usual, to preserve the style. Three methods of automatic central object detection are proposed and evaluated qualitatively and via a user evaluation study. Both comparisons demonstrate higher quality of stylization compared to the classical style transfer method.' author: - 'Alexey A. Schekalev, Victor V. Kitov' title: Style transfer with adaptation to the central objects of the scene ---

Introduction
============

![Style transfer task[]{data-label="common_task"}](pic/slide1.jpg){width="80.00000%"}

Image stylization [@adobe] is a classical problem in computer vision of rendering a content image in the style of another style image, as shown in Fig. \[common\_task\]. Earlier approaches used hard-coded rules to impose a predefined style, but the general task is to transfer style between arbitrary images: the algorithm should work with any content and style images. In 2016 Gatys et al. proposed a stylization method [@Gatys] based on deep convolutional networks which solved this problem.
The main idea was to optimize in the space of images to find a picture semantically reflecting the content of the content image and the style of the style image. These two contradicting goals were balanced by simultaneously minimizing a content loss and a style loss: $$x = \operatorname*{arg\,min}\limits_{x}\{\alpha\mathcal{L}_{\text{content}}\left(x, x_c\right) + \mathcal{L}_{\text{style}}\left(x, x_s\right)\} \label{optim}$$ The coefficient $\alpha$ determines the strength of stylization (Fig. \[ratio\].a): lower $\alpha$ imposes more style and vice versa. The shortcoming of this approach is that style is imposed uniformly onto the whole content image, distorting important central objects of the image, which are critical for perception. For example, it is hard to tell which bird sits in the tree (Fig. \[ratio\]b), because small details of the bird's silhouette were lost during stylization. One may improve the preservation of content by increasing the $\alpha$ coefficient in (\[optim\]). However, this solution decreases stylization strength globally, thus giving a less expressive stylization. The paper proposes a new solution to this problem. First, central objects are detected and selected using an automatically generated spatial importance mask for the content image. Next, this mask is used to impose style with spatially varying strength, controlled by the importance mask. This achieves two contradicting goals: stylization is gentle on the central objects of the image that are critical for perception, such as human faces, houses or cars, while stylization is strong in the rest of the image, thus expressing a vivid style. The paper is organized as follows. Section \[method\] gives a description of the proposed method and provides qualitative comparisons with the baseline stylization method of Gatys et al. Section \[Evaluation\] provides the details of the user evaluation study and summarizes its results, highlighting the superiority of the proposed solution. Section \[conclusion\] concludes.
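The objective (\[optim\]) combines a content term with a style term; in the method of Gatys et al. the style term is built from Gram matrices of deep features. Below is a minimal numpy sketch in which a single random feature map stands in for the multi-layer VGG representations of the actual method, and the usual normalization constants are dropped:

```python
import numpy as np

def gram(F):
    """Gram matrix of a feature map F with shape (channels, locations)."""
    return F @ F.T

def total_loss(x_feat, c_feat, s_feat, alpha):
    """alpha * content loss + style loss, as in the objective (optim).
    x_feat: features of the image being optimized;
    c_feat, s_feat: features of the content and style images."""
    content = np.sum((x_feat - c_feat) ** 2)
    style = np.sum((gram(x_feat) - gram(s_feat)) ** 2)
    return alpha * content + style

rng = np.random.default_rng(0)
xf, cf, sf = (rng.standard_normal((4, 9)) for _ in range(3))
# A larger alpha penalizes content deviations more, i.e. imposes less style.
weak, strong = total_loss(xf, cf, sf, 1.0), total_loss(xf, cf, sf, 10.0)
```

Since $\alpha$ multiplies only the content term, increasing it preserves content at the expense of stylization strength, which is exactly the global trade-off discussed above.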
Method
======

Non-uniform Stylization
-----------------------

Consider the loss function in the optimization problem (\[optim\]). In the original paper [@Gatys] the content loss is formalized as follows: $$\mathcal{L}_{\text{content}}\left(x, x_c\right) = \alpha\sum\limits_{{i,j}}\left(F^{{l}}_{{i,j}} - P^{l}_{{i,j}}\right)^2$$ where $F^{l}$ and $P^{l}$ are internal representations in a pre-trained convolutional neural network [@Cnn], which is selected to be VGG [@VGG]. Instead of using a constant $\alpha$, we propose to use a matrix with different $\alpha_{i, j}$ values for each spatial location $(i,j)$: $$\mathcal{L'}_{\text{content}}\left(x, x_c\right) = \sum\limits_{{i,j}}\alpha_{i,j}\left(F^{{l}}_{{i,j}} - P^{l}_{{i,j}}\right)^2 \label{nonuniform}$$ Making $\alpha$ spatially variable allows imposing less style on the central objects of the scene, which are critical for perception, and more style in all other areas of the image.

Automatic Central Objects Detection
-----------------------------------

Consider a convolutional neural network pre-trained for image classification; we use VGG [@VGG]. Such a model takes an input image and outputs a probability distribution over the classes of the ImageNet set. We detect central objects by filling different parts of the input image with a uniform color and measuring the change in the output class probabilities. If a key object of the image is filled, one observes a drastic change in class probabilities. On the contrary, if background is changed, class probabilities change only slightly. Overall, the magnitude of the change of class probabilities determines the importance of the filled region. This approach was used to visualize convolutional neural networks in classification problems [@Visualization], but in the problem of style transfer, to our knowledge, it is used for the first time.
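The occlusion probe just described, together with the weighted content loss (\[nonuniform\]), can be sketched as follows. Here `classify` is a hypothetical stand-in for the pre-trained VGG classifier, and the patch size and fill colour are illustrative choices rather than values from the paper:

```python
import numpy as np

def occlusion_importance(image, classify, patch=8, fill=0.5):
    """Importance of each patch = L2 change of the class-probability
    vector when that patch is overwritten with a uniform colour."""
    p0 = classify(image)
    H, W = image.shape[:2]
    imp = np.zeros((H // patch, W // patch))
    for i in range(H // patch):
        for j in range(W // patch):
            occluded = image.copy()
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = fill
            imp[i, j] = np.linalg.norm(classify(occluded) - p0)
    return imp

def weighted_content_loss(F, P, alpha):
    """Eq. (nonuniform): per-location weights alpha_{i,j}."""
    return np.sum(alpha * (F - P) ** 2)

# Toy classifier that only looks at the top-left 8x8 block of the image:
toy = lambda im: np.array([im[:8, :8].mean(), 1.0 - im[:8, :8].mean()])
mask = occlusion_importance(np.zeros((16, 16)), toy)
```

With this toy classifier only the top-left patch changes the prediction, so only that entry of `mask` is nonzero; a real classifier produces the smoother importance maps discussed in the following subsections.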
After splitting the whole image into a set of regions, filling each region one by one, and evaluating its importance, we build an importance map $\alpha_{i,j}$ measuring the semantic significance of each location of the image. This importance map is passed to the spatially varying style transfer algorithm (\[optim\]) with the modified content loss function (\[nonuniform\]).

### Patch-Based Mask Generation

In this approach we propose to divide the image by a uniform patch grid (as in Fig. \[cheme\].b). Sequentially overwriting the patches and passing the image through the neural network, we rate the importance of the patches by calculating the $L_2$ norm of the difference of class distributions. Visualization of the results shows that the proposed algorithm can find the central objects of the scene and separate them from the background (Fig. \[importance\].a). After that we use the found $\alpha_{i, j}$ matrix in the stylization algorithm with the changed content loss (\[nonuniform\]). Fig. \[importance\] (b and c) shows the difference between the baseline approach and the proposed model: many small details of the dog's face that are lost in the baseline approach are preserved in the new model. In the example above (Fig. \[importance\].a) we see that the main patch covers not only the central object but also part of the background. Instead of using fixed patches, we additionally propose to run the previous algorithm for different positions of the grid mesh and combine the results (Fig. \[avg\_patch\]a) by pixelwise averaging. The difference between the two approaches is shown in Fig. \[avg\_patch\] (b and c). Averaging of different matrices yields a smoother distribution of weights, which defines the boundaries of the central objects better.

### Superpixel-Based Mask Generation

In the example above (Fig. \[avg\_patch\]) we see that averaging of different $\alpha_{i,j}$ matrices produces a boundary of elliptical form.
If central objects have more complicated boundaries, the proposed method becomes unsuitable. To improve the results, instead of using a uniform grid, we suggest splitting the image into superpixels [@superpixel]. This algorithm divides the image into small segments (superpixels), the boundaries of which are close to the boundaries of the objects in the image (Fig. \[superpixel\]a). The superpixel algorithm has two main parameters, responsible for the number of segments and the shape of boundaries. We choose a set of predefined values of these parameters, run the importance mask evaluation algorithm several times, and then average the results for better quality (Fig. \[superpixel\]b). Fig. \[comparing\] shows the qualitative difference between uniform stylization (a) and patch-based (b) and superpixel-based (c) spatially varying stylization. The boundaries of the central object – the glass – are non-convex, thus the superpixel-based method extracts the boundary of such an object better, which improves the quality of the final stylization.

### Segmentation-Based Mask Generation

Deep learning models perform well on image segmentation tasks [@Segmentation], so we can evaluate the $\alpha_{i, j}$ matrix by the previous approaches and then correct the boundaries using the results of a segmentation algorithm. This approach increases the quality of stylization when it is easy to separate the object from the background. The example in Fig. \[bently\] shows that the stylization algorithm with segmentation locates the car exactly along its border, while the superpixel algorithm affects some pixels near the car, which makes the final style transfer less sharp along the border of the central object of the image.

Results {#Evaluation}
=======

To evaluate the proposed method quantitatively we conducted a user evaluation study. For a representative set of content and style images two stylizations were obtained: one by the baseline method of Gatys et al. and one by the proposed method. Respondents were asked to select the stylization they like more.
To avoid location bias, for each comparison the baseline stylization and the stylization with the proposed method were shown in random order. Six respondents were surveyed on 29 stylization outputs. We conducted 3 surveys, comparing the baseline stylization algorithm of Gatys et al. with our method, where the importance mask was generated using patches, superpixels, and the results of image segmentation. Results are shown in Table \[tab:surveys\].

                 Percent of vote
  -------------- -----------------
  Patches        66
  Superpixel     72
  Segmentation   80

  : Comparing the baseline algorithm and the proposed models.[]{data-label="tab:surveys"}

From these results we see that our method outperforms the baseline stylization in all cases. The image segmentation modification gives the maximum benefit, which can be attributed to the fact that it extracts the boundaries of central objects more accurately.

Conclusion
==========

A new style transfer method with spatially varying strength was proposed in this work. Stylization strength was controlled for each pixel by an automatically generated importance mask. Three methods, namely patch-based, superpixel-based and segmentation-based, were proposed to generate the importance mask. Qualitative comparisons and the conducted user evaluation study demonstrated the superiority of the proposed method compared to the classical style transfer method of Gatys et al., due to expressive style transfer for the background and gentler style transfer for the central objects of the content image. Among the three proposed importance mask generation methods, the segmentation-based one showed the highest quality, which may be attributed to more accurate boundary estimation of the central objects of the image.

[6]{} https://research.adobe.com/news/image-stylization-history-and-future/ Gatys L., Ecker A., Bethge M. Image Style Transfer Using Convolutional Neural Networks // IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016, P. 2414-2423. Krizhevsky A., Sutskever I., Hinton G.
ImageNet classification with deep convolutional neural networks // Advances in Neural Information Processing Systems, 2012, P. 1097-1105. Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition // arXiv preprint arXiv:1409.1556, 2014. Zeiler M., Fergus R. Visualizing and understanding convolutional networks // European Conference on Computer Vision, 2014, P. 818-833. https://www.pyimagesearch.com/2014/07/28/a-slic-superpixel-tutorial-using-python/ Zhou B., Zhao H., Puig X., Xiao T., Fidler S., Barriuso A., Torralba A. Semantic understanding of scenes through the ADE20K dataset // International Journal of Computer Vision, 2018
--- abstract: | We show that exclusive double-diffractive Higgs production, $pp\ra p+H+p$, followed by the $H\ra\bb$ decay, could play an important role in identifying a ‘light’ Higgs boson at the LHC, provided that the forward outgoing protons are tagged. We predict the cross sections for the signal and for all possible $\bb$ backgrounds. --- IPPP/02/41\ DCPT/02/82\ 3 July 2002 [**Forward proton tagging as a way to identify a light Higgs boson at the LHC[^1]**]{} <span style="font-variant:small-caps;">A.D. Martin, V.A. Khoze and M.G. Ryskin</span> [Institute for Particle Physics Phenomenology,\ University of Durham, DH1 3LE, UK]{} Introduction ============ The identification of the Higgs boson(s) is one of the main goals of the Large Hadron Collider (LHC) being built at CERN. There are expectations that there exists a ‘light’ Higgs boson with mass $M_H\lapproxeq130$ GeV. In this mass range, its detection at the LHC will be challenging. There is no obvious perfect detection process, but rather a range of possibilities, none of which is compelling on its own. Some of the processes are listed in Table 1, together with the predicted event rates for the integrated luminosity of 30 fb$^{-1}$ expected over the first two or three year period of LHC running. We see that, [*either*]{} large signals are accompanied by a huge background, [*or*]{} the processes have comparable signal and background rates for which the number of Higgs events is rather small. Here we wish to draw particular attention to process (c), which is often disregarded; that is the exclusive signal $pp\ra p + H + p$, where the + sign indicates the presence of a rapidity gap. It is possible to install proton taggers so that the ‘missing mass’ can be measured to an accuracy $\Delta M_{\rm missing}\simeq 1$ GeV [@DKMOR]. Then the exclusive process will allow the mass of the Higgs to be measured in two independent ways. 
First the tagged protons give $M_H = M_{\rm missing}$ and second, via the $H\ra\bb$ decay, we have $M_H = M_{\bb}$, although now the resolution is much poorer with $\Delta M_{\bb}\simeq10$ GeV. The existence of matching peaks, centered about $M_{\rm missing}=M_{\bb}$, is a unique feature of the exclusive diffractive Higgs signal. Besides its obvious value in identifying the Higgs, the mass equality also plays a key role in reducing background contributions. Another advantage of the exclusive process $pp\ra p+H+p$, with $H\ra\bb$, is that the leading order $gg\ra\bb$ background subprocess is suppressed by a $J_z=0$ selection rule [@KMRmm; @DKMOR].

  Higgs signal                                           signal   backgd.   $S/B$                                                                           signif.
  ----------------------------------------------------- -------- --------- ------------------------------------------------------------------------------- -------------
  a) $H\ra\gamma\gamma$                                                     0.06 ${\displaystyle\left(\frac{1\:{\rm GeV}}{\Delta M_{\gamma\gamma}}\right)}$
  b) $t\bar{t}H$, $H\ra\bb$                                                 0.8 ${\displaystyle\left(\frac{10\:{\rm GeV}}{\Delta M_{\bb}}\right)}$
  c) $gg^{PP}\ra p+H+p$, $H\ra\bb$                       11                 3 ${\displaystyle\left(\frac{1\:{\rm GeV}}{\Delta M_{\rm missing}}\right)}$
  d) WBF, $qWWq\ra jHj\ra j\tau\tau j$                   25       8         3                                                                               $4.4\sigma$
  e) WBF with rap. gaps, $qWWq\ra j+H+j$, $H\ra\bb$      250      1800      0.14 ${\displaystyle\left(\frac{10\:{\rm GeV}}{\Delta M_{\bb}}\right)}$         $5.5\sigma$

  : The number of signal and background events for various methods of Higgs detection at the LHC. The significance of the signal, $S/\sqrt{S+B}$, is also given. The mass of the Higgs boson is taken to be 120 GeV and the integrated luminosity is taken to be 30 fb$^{-1}$. The notation $gg^{PP}$ is to indicate that the gluons originate within overall colour-singlet (hard Pomeron) $t$-channel exchanges. The entries for the various processes are computed from references (a) [@Z], (b) [@TT], (c) [@DKMOR; @INC], (d) [@Z; @WBF] and (e) [@DKMOR; @KMRhiggs]. For (a) we show the CMS value without the NLO $K$ factor. Including the $K$ factors for both the signal and the background increases the $H\ra\gamma\gamma$ significance to about $7\sigma$ [@BDS].

Calculation of the exclusive Higgs signal
=========================================

The basic mechanism for the exclusive process, $pp\ra p+H+p$, is shown in Fig. 1. Since the dominant contribution comes from the region $\Lambda_{\rm QCD}^2\ll Q_t^2\ll M_H^2$ the amplitude may be calculated using perturbative QCD techniques [@KMR; @KMRmm] $${\cal M} = \frac{(\sqrt{2}G_F)^{\frac{1}{2}}\pi^2\alpha_S}{3} \int \frac{d^2Q_t}{Q_t^4} f_g\left(x_1,x_1^\prime, Q_t^2, \frac{M_H^2}{4}\right) f_g\left(x_2,x_2^\prime, Q_t^2, \frac{M_H^2}{4}\right), \label{eq:M}$$ where the skewed unintegrated gluon densities, $f_g$, are given in terms of the conventional integrated density $g(x)$.
The $f_g$’s embody a Sudakov suppression factor $T$, which is effectively the survival probability that the gluon remains untouched in the evolution from $Q_t$ up to the hard scale $M_H/2$. \[Fig.1\] The radiation associated with the $gg\ra H$ hard subprocess is not the only way to populate and to destroy the rapidity gaps. There is also the possibility of soft rescattering in which particles from the underlying event populate the gaps. The probability, $S^2=0.02$, that the gaps survive the soft rescattering was calculated using a two-channel eikonal model, which incorporates high mass diffraction [@KMRsoft]. Including this factor, and the NLO $K$ factor, the cross section is predicted to be [@INC] $$\sigma(pp\ra p+H+p)\simeq 3\:{\rm fb} \label{eq:sigma}$$ for the production of a Standard Model Higgs boson of mass 120 GeV at the LHC[^2]. It is estimated that there may be a factor of two uncertainty in this prediction [@DKMOR]. The event rate in entry (c) of Table 1 includes a factor 0.6 for the efficiency associated with proton tagging, 0.6 for $b$ and $\bar{b}$ tagging, 0.5 for the $b,\bar{b}$ jet polar angle cut, $60^\circ<\theta<120^\circ$ (necessary to reduce the $\bb$ QCD background), and 0.67 for the $H\ra\bb$ branching fraction [@DKMOR]. Hence the original $(\sigma=3\:{\rm fb})\times({\cal L}=30\:{\rm fb}^{-1}) = 90$ events are reduced to an observable signal of 11 events, as shown in Table 1.

Background to the exclusive Higgs signal
========================================

The advantage of the $p+(H\ra\bb)+p\,$ signal is that there exists a $J_z=0$ selection rule, which requires the leading order $gg^{PP}\ra\bb$ background subprocess to vanish in the limit of massless quarks and forward outgoing protons[^3]. However, in practice, LO background contributions remain. The prolific $gg^{PP}\ra gg$ subprocess may mimic $\bb$ production since we may misidentify the outgoing gluons as $b$ and $\bar{b}$ jets.
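Before quantifying these backgrounds, the event-rate and significance bookkeeping of Table 1 can be checked in a few lines, using the efficiency factors quoted above and the significance definition $S/\sqrt{S+B}$ from the table caption:

```python
import math

# Entry (c): sigma = 3 fb, L = 30 fb^-1, times the quoted efficiencies for
# proton tagging, b/bbar tagging, the polar-angle cut and BR(H -> bb).
events_c = 3.0 * 30.0 * 0.6 * 0.6 * 0.5 * 0.67   # about 11 observable events

def significance(S, B):
    """S / sqrt(S + B), as used in Table 1."""
    return S / math.sqrt(S + B)

sig_d = significance(25, 8)       # WBF tau-tau entry (d)
sig_e = significance(250, 1800)   # WBF with rapidity gaps, entry (e)
```

The product of efficiencies reduces the 90 produced events to about 11 observable ones, and the quoted WBF significances of 4.4$\sigma$ and 5.5$\sigma$ follow directly from the listed signal and background counts.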
Assuming the expected 1% probability of misidentification, and applying the $60^\circ<\theta<120^\circ$ jet cut, gives a background-to-signal ratio $B/S \sim 0.06$. Secondly, there is an admixture of $|J_z|=2$ production, arising from non-forward going protons, which gives $B/S \sim 0.08$. Thirdly, for a massive quark there is a contribution to the $J_z=0$ cross section of order $m_b^2/E_T^2$, leading to $B/S \sim 0.06$, where $E_T$ is the transverse energy of the $b$ and $\bar{b}$ jets. Next, we have the possibility of NLO $gg^{PP}\ra\bb g$ background contributions. Of course, the extra gluon may be observed experimentally and these background events eliminated. However, there are exceptions. The extra gluon may go unobserved in the direction of a forward proton. This background may be effectively eliminated by requiring the equality $M_{\rm missing} = M_{\bb}$. Then we may have soft gluon emission. First, we note that emission from an outgoing $b$ or $\bar{b}$ is not a problem, since we retain the cancellation between the crossed and uncrossed graphs. Emission from the virtual $b$ line is suppressed by at least a factor of $\omega/E$ (in the amplitude), where $\omega$ and $E$ are the energies of the outgoing soft gluon and an outgoing $b$ quark in the $gg^{PP}\ra\bb$ centre-of-mass frame. The potential danger is gluon emission from an incoming gluon, see Fig. 2. The first two diagrams no longer cancel, as the $\bb$ system is in a colour-octet state. However, the third diagram has precisely the colour and spin structure to restore the cancellation. Thus soft gluon emissions from the initial colour-singlet $gg^{PP}$ state factorize and, due to the overriding $J_z=0$ selection rule, QCD $\bb$ production is still suppressed. The remaining danger is large angle hard gluon emission which is collinear with either the $b$ or $\bar{b}$ jet, and therefore unobservable.
If the cone angle needed to separate the $g$ jet from the $b$ (or $\bar{b}$) jet is $\Delta R \sim 0.5$ then the expected background from unresolved three jet events leads to $B/S \simeq 0.06$. The NNLO $\bb gg$ background contributions are found to be negligible (after requiring $M_{\rm missing}\simeq M_{\bb}$), as are soft Pomeron-Pomeron fusion contributions to the background (and to the signal) [@DKMOR]. So, in total, double-diffractive Higgs production has a signal-to-background ratio of about three, after including the $K$ factors.

Discussion
==========

Identifying a ‘light’ Higgs will be a considerable experimental challenge. All detection processes should be considered. From Table 1 we see that valuable information can be obtained from weak boson fusion, where the Higgs and the accompanying jets are produced at high $p_t$. For example, process (d) is based on the $H\ra\tau\tau$ decay for which the background is small [@Z; @WBF], whereas process (e) exploits rapidity gaps so that the larger $H\ra\bb$ signal may be isolated [@KMRhiggs], provided the pile-up problems can be overcome [@DKMOR]. Here we have drawn attention to the exclusive $pp\ra p+H+p$ signal, process (c). The process has the advantage that the signal exceeds the background. The favourable signal-to-background ratio is offset by a low event rate, caused by the necessity to preserve the rapidity gaps so as to ensure an exclusive signal. Nevertheless, entry (c) of Table 1 shows that the signal has reasonable significance in comparison to the standard $H\ra\gamma\gamma$ and $t\bar{t}H$ search modes. Moreover, the advantage of the matching Higgs peaks, $M_{\rm missing} = M_{\bb}$, cannot be overemphasized [^4].

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Albert De Roeck, Risto Orava and Andrei Shuvaev for valuable discussions, and the EU, PPARC and the Leverhulme Trust for support.

[xx]{} D. Zeppenfeld et al., Phys. Rev. [**D62**]{} (2000) 013009. V. Drollinger, T.
Müller and D. Denegri, CMS note, [hep-ph/0111312]{};\ J. Goldstein et al., Phys. Rev. Lett. [**86**]{} (2001) 1694. A. De Roeck, V.A. Khoze, A.D. Martin, R. Orava and M.G. Ryskin, Durham report, [hep-ph/0207042]{}. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C23**]{} (2002) 311. D. Zeppenfeld, [hep-ph/0203123]{}; N. Kauer, T. Plehn, D. Rainwater and D. Zeppenfeld, Phys. Lett. [**B503**]{} (2001) 113. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C21**]{} (2001) 99. Z. Bern, L. Dixon and C. Schmidt, [hep-ph/0206194]{}. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C14**]{} (2000) 525. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C19**]{} (2001) 477. V.A. Khoze, A.D. Martin and M.G. Ryskin, Eur. Phys. J. [**C18**]{} (2000) 167. [^1]: Presented at the 10th International Workshop on Deep Inelastic Scattering, DIS(2002), Krakow, Poland, 30 April–4 May 2002 [^2]: Cross section (\[eq:sigma\]) at the Tevatron, 0.2 fb, is too low to provide a viable signal. [^3]: In the $m_b\ra0$ limit, the two Born-level diagrams (Figs. 2(a,b) [*without*]{} the emission of the gluon) cancel each other. [^4]: This may be contrasted with the search for a Higgs peak sitting on a huge background in the $M_{\gamma\gamma}$ spectrum, see process (a) of Table 1.
--- abstract: | The paper attempts to integrate the available data for contact binaries of the disk population in a deep galactic field and in old open clusters. The two basic data sets consist of 98 systems in the volume-limited 3 kpc sub-sample of contact binaries detected by the OGLE microlensing project toward Baade’s Window (BW$_3$) and of 63 members of 11 old open clusters (CL). Supplementary data on the intrinsically bright, but spatially rare, long-period binaries are provided by 238 systems in the BW sample to the distance of 5 kpc (BW$_5$). The basic BW$_3$ sample and the CL sample are remarkably similar in the period, color, luminosity and variability-amplitude distributions, in spite of very different selections, for BW$_3$ – as a volume-limited sub-sample of all contact systems discovered by the OGLE project, and for CL – as a collection of contact systems discovered in open clusters which had been subject to searches differing in limiting magnitudes, cluster area coverage and photometric errors. The contact systems are found in the color interval $0.3 < (B-V)_0 < 1.2$ where the turn-off points (TOP) of the considered clusters are located; however, they are not concentrated at the respective TOP locations but, once the TOP happens to fall in the above color interval, they can appear anywhere within it. The luminosity function for the BW sample appears to be very similar in shape to that for the solar neighborhood main-sequence (MS) stars when corrections for the galactic disk structure are applied, implying a flat apparent frequency-of-occurrence distribution. In the accessible interval $2.5 < M_V < 7.5$, the frequency of contact binaries relative to MS stars equals about 1/130 for the exponential disk length scale $h_R = 2.5$ kpc and about 1/100 for $h_R = 3.5$ kpc. 
The high frequency cannot continue for $M_V < 2.5$ as the predicted numbers of bright systems would then become inconsistent with the numbers of known systems to $V_{lim} = 7.5$ in the sky sample. The previous estimate of the frequency from the BW sample of $1/250 - 1/300$ did not correctly relate the numbers of the contact binaries to the numbers of MS stars. The magnitude limit of the OGLE survey limits the accuracy of the current luminosity function determination for $M_V > 5.5$, but the available data are consistent with a continuation of the high apparent frequency beyond $M_V = 7.5$, i.e. past the current short-period, low-luminosity end, delineated by the shortest-period field system CC Com at $M_V = 6.7$. The current data indicate that the sky-field sample starts showing discovery-selection effects at a level as high as $V \simeq 10 - 11$. author: - | Slavek M. Rucinski\ Electronic-mail: [*rucinski@cfht.hawaii.edu*]{} title: | Contact Binaries of the Galactic Disk:\ Comparison of the Baade’s Window and Open Cluster Samples --- INTRODUCTION {#intro} ============ Contact stars are close binary systems in which the components form single entities described by equipotentials of the Roche geometry. The most common among them consist of solar-type stars and are called W UMa-type binaries; their orbital periods are in the range between about one quarter and three quarters of a day. Several reviews have discussed properties of contact binaries; the recent ones, concentrating respectively on theoretical and observational issues, have been those by Eggleton (1996) and by Rucinski (1993). How these binaries form and evolve is still poorly understood, but it is generally assumed that they are in the penultimate – but possibly long-lasting – stage of angular-momentum-loss driven evolution, just before forming single stars. 
The angular-momentum loss results from the torque exerted by the magnetized stellar wind on the components, which extracts the angular momentum from the orbit via the tidal synchronism of rotation. Since this process takes a relatively long time, of the order of a few Gyrs for solar-type stars, the W UMa-type systems are expected to consist of relatively old stars. Their advanced age cannot be inferred from spectral signatures of low metal abundance because of the extremely strong broadening of spectral lines; the evidence comes instead from (1) their relatively large spatial velocities (Guinan & Bradstreet 1988), characteristic of old disk stars, and their presence in (2) old open clusters (Kałużny & Rucinski 1993 = KR93, Rucinski & Kałużny 1994 = RK94) and in (3) the disk component toward Baade’s Window (Rucinski 1997a, see below). For a long time, the statistics of contact binaries was particularly uncertain because of the accidental nature of the sky-field discoveries. One of the indications of incompleteness in the cataloged sky-field sample was the tendency to show only relatively large light-curve amplitudes, whereas simple considerations of randomly distributed orbital inclinations suggest that low-amplitude systems should be most common. Indeed, systematic searches in open clusters (KR93, RK94), later supplemented by the OGLE microlensing by-product data (Rucinski 1997a = R97a, Rucinski 1997b = R97b; see below), led to the discovery of many low-amplitude systems. The cluster searches also made it possible to address the question of ages. A progression in numbers of such systems with the cluster age, in the sense of more systems in older clusters, supported the view that the contact systems form over time from close, detached binary systems. 
The initial interpretations of the old open cluster data (KR93, RK94) indicated an apparent[^1] frequency in clusters an order of magnitude higher than had been estimated for the sky field (Duerbeck 1984), reaching perhaps one such system per hundred main sequence stars. When projected non-members were removed from clusters at low galactic latitudes and data for several clusters were averaged to improve the number statistics, the apparent frequency relative to the number of monitored MS stars of spectral types F to K was estimated to be about one such system per 250 – 300 ordinary stars (Rucinski 1994 = CAL1). Although these numbers were approximate, they indicated that in a typical old open cluster, where some thousand or so stars could be monitored for variability, only a few contact systems would normally be found, a circumstance making any meaningful statistical inferences difficult. Thus, the cluster data could not offer sound statistics and larger, more uniform data sets were needed. Very recently, massive discoveries of contact systems in microlensing surveys offered an abundant source of statistical data. Two-thirds of the 933 eclipsing binaries discovered in the OGLE-1 survey (R97a, R97b) have been classified as contact binaries. While the discoveries yielded unprecedented material for studies of contact binaries in various ways (R97a, R97b, Rucinski 1998 = R98), one aspect has been of particular importance in the present context: The OGLE data made it possible to define a volume-limited sample of contact binaries, leading to the first unbiased estimates of frequency distributions for such parameters as orbital periods, colors or luminosities. The contact systems were found to appear in unexpectedly narrow ranges of periods, mainly within $0.25 < P < 0.7$ day, and colors, $0.2 < (V-I)_0 < 1.5$ (several observational effects could make the observed color range slightly broader than in reality, see Section \[col\]). 
The range in colors – coinciding with location on the color–magnitude diagrams where solar-type, old-disk stars start showing evolutionary effects – led to a suggestion (R97a) that the properties of these systems have an evolutionary relation to the Turn-Off Point (TOP) stage of evolution, when the components expand and enter into physical contact. This suggestion is testable since the cluster data of the CL sample make it possible to relate positions of contact systems and TOP’s in color–magnitude diagrams. This paper discusses properties of contact binaries toward Baade’s Window (BW) and in old open clusters (CL), to uncover similarities and differences in the two sets of data. The two samples are complementary, as the BW sample contains objects observed in a uniform fashion, but with absolute magnitudes and implied distances determined through one particular calibration, while the CL sample consists of objects with independent information on age, metallicity and distance, but from a group of clusters which were selected in a somewhat non-rigorous fashion. This paper also utilizes a new, very useful tool which had not been available before, the new $M_V = M_V(\log P, B-V)$ calibration which is based on the Hipparcos parallaxes (Rucinski & Duerbeck 1997 = CAL5). The calibration obviates any need to resort to absolute magnitude calibrations based on open clusters, whose use could imply an obvious risk of circular reasoning. We stress that this paper utilizes the Hipparcos-based [*calibration*]{}, but does not utilize the Hipparcos [*sample*]{} of the contact binaries in the solar vicinity. The reason is the biased character of the Hipparcos sample which, being magnitude-limited, consists primarily of intrinsically luminous systems. We note that Hipparcos observed all stars in the sky only to slightly beyond $V \simeq 7$ and only six contact binaries actually exist to this magnitude limit. 
The paper is organized as follows: Section \[samples\] describes the BW and CL samples. The next sections compare various properties and statistics for the two samples: the period (Section \[per\]) and color (Section \[col\]) distributions, the color – magnitude diagrams (Section \[cmd\]), the period – color relations (Section \[PC\]) and the variability amplitudes (Section \[ampl\]). The important matter of the frequency of occurrence of the contact systems is addressed in Section \[freq\], after a discussion of the related issue of the luminosity functions in Section \[LF\]. The last Section \[concl\] contains conclusions of the paper. THE TWO SAMPLES {#samples} =============== The Baade’s Window sample (BW) ------------------------------ The volume-limited BW sample has been defined in R97a. It consists of 98 systems with orbital periods shorter than one day and distances smaller than 3 kpc, which passed the Fourier light-curve shape filter. Comparison of the number densities for the volumes defined by the distances of 2 kpc and 3 kpc indicated that the 3 kpc sample is complete to its limiting absolute magnitude of $M_I = 4.5$, which – for typical values of reddening and intrinsic colors – translates into an approximate limit of $M_V \simeq 5.5$. Among the BW-sample systems, most are genuine W UMa-type systems with approximately equally-deep eclipses, indicating perfect thermal contact between components. However, two systems did pass the light-curve shape filter used to select contact systems, yet had eclipse depth differences large enough to suspect poor thermal contact, or more likely, very close semi-detached configurations; the accompanying asymmetries of the maxima suggest in such cases an on-going mass transfer (R97b). Thus two, among 98 systems, i.e.  
only about 2 percent of all contact systems appear to be of this type, although – by being intrinsically more luminous than typical W UMa-type systems due to longer periods – they are much more common in the sky-field (or any other magnitude-limited) sample. For simplicity, we will call them Poor Thermal Contact (PTC) systems, remembering that these could be either contact systems with inhibited energy transfer or very close semi-detached binaries. The intrinsically bright, long-period systems can be observed deeper in space. By selecting a deeper sample, we can analyze the long-period and high-luminosity ends of the respective distributions, sacrificing the statistics at the faint, short-period end. For that purpose, a second BW sample to 5 kpc has been considered in this paper. It is based on a 4.6 times larger volume than the 3 kpc sample and contains 238 systems, 8 of them showing the PTC light curves. Its statistical properties are sound only for systems with periods longer than about 0.55 day as the short-period ones are eliminated by their low absolute magnitudes (see Figure 13 in R97a). The expected completeness limit of the 5 kpc BW sample is $M_V \simeq 4.2$ (thus the $3.5 < M_V < 4.5$ bin is partly affected in various statistics presented later). Whenever relevant, we will distinguish the two Baade’s Window sub-samples by subscripts, BW$_3$ and BW$_5$, but normally, for most considerations, the basic sample BW$_3$ will be used. Only one system with a period longer than one day, BW5.009 with $P=1.59$ day, from the sample discussed in R98 could be formally included in the BW$_5$ sample (its distance is about 4.3 kpc). However, its luminosity and distance are poorly known due to the lack of a luminosity calibration for very blue contact systems (the observed $V-I=1.01$ and the estimated $E_{V-I}=0.84$). This system is disregarded in the current paper. 
The original data in the OGLE catalog consist of the maximum brightness $I$ and $V-I$ magnitudes and colors, orbital periods $P$, amplitudes in $I$ and coordinate positions. The analysis presented in R97a and R97b added to these data the light-curve-decomposition Fourier coefficients as well as the distances, absolute magnitudes $M_I$ and reddening corrections $E_{V-I}$. These data for all contact binaries in the OGLE survey are available in the form of extensive tables via Internet at: http://www.cfht.hawaii.edu/$\sim$rucinski/rucinski.html (the major tables of this paper are also in this location). Since our basic 3 kpc volume-limited sample has not yet been published, but may have a more general use, we present it in Table \[tab1\]. The table contains the original data from OGLE as well as the derived quantities in the $V$, $B-V$ photometric system. The transformation of the photometric data, rather than the use of the original $I$, $V-I$ data (which would have many advantages because of usually higher accuracy and weaker sensitivity of that color to entirely unknown metallicities), has been mandated by the fact that most open clusters were observed in the $V$, $B-V$ system, and that the Hipparcos calibration is available only in this system. The transformations were: $M_V = M_I + (V-I) - E_{V-I}$ for the absolute magnitudes; the reddening-corrected colors $(V-I)_0$ have been transformed into $(B-V)_0$ with the main-sequence relations of Bessell (1979, 1990). The cluster sample (CL) ----------------------- The open cluster sample (CL) has been obtained by combining the data published in several papers, most of them by Kałużny and collaborators. The assumed cluster properties are listed in Table \[tab2\] which is arranged in the age progression, from the oldest to the youngest clusters. 
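The absolute-magnitude transformation quoted above is a one-line arithmetic step; a minimal sketch, with illustrative input values rather than an actual BW$_3$ table entry (the Bessell color–color relation is omitted, since its tabulated form is not reproduced here):

```python
def mv_from_ogle(m_i, v_minus_i, e_v_minus_i):
    """M_V = M_I + (V-I) - E_{V-I}: converts the OGLE I-band absolute
    magnitude to V using the observed color and the reddening correction."""
    return m_i + v_minus_i - e_v_minus_i

# Illustrative values only, not a real catalog entry:
print(round(mv_from_ogle(3.8, 1.30, 0.45), 2))  # 4.65
```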
To obtain the best uniformity of the material, the values of reddening corrections $E_{B-V}$ and apparent distance moduli, $(m-M)_V$, were taken from the recent tabulation of Twarog et al. (1997). The ages have been taken mostly from the original publications, and then adjusted slightly for consistency of the color-magnitude diagrams (Section \[cmd\]). The ages are only approximate and used here mainly to arrange the clusters into an age progression. References to the sources of the photometric data are given in the last column of Table \[tab2\]. Table \[tab3\] lists the data for the contact systems detected in the cluster fields. The systems are ordered by the new variable-star designations, following the 71st, 72nd and 73rd Variable-Star Name lists (Kazarovets et al.  1993, Kazarovets & Samus 1995, 1997). Since these designations are used for the first time for many of the listed systems, the original names in the discovery papers are also given. Most of the photometric data are available in the $V$, $B-V$ system. For those clusters which were observed in the $V-I$ system and for those with both colors available, the main-sequence color–color transformations to $B-V$ were used, as the $V-I$ data were usually of better quality than those in $B-V$. The Fourier light-curve shape filter was not applied to the CL systems to verify their W UMa-type characteristics. The main reason was the partial phase coverage for many systems which frequently resulted in erroneous values of the Fourier coefficients and rejection of otherwise apparently genuine cluster members. For this reason, the CL sample may contain a small admixture of other short-period variable stars; in this sense, the internal consistency of the CL sample is poorer than that of the BW sample. The cluster membership was verified using the absolute–magnitude Hipparcos calibration CAL5: $M_V^{cal} = -4.44\, \log P + 3.02\, (B-V)_0 + 0.12$, where $(B-V)_0=(B-V)-E_{B-V}$. 
The absolute magnitudes listed in Table \[tab3\] have been obtained using the observed maximum-light magnitudes, $V_{max}$, and the cluster distance moduli, $(m-M)_V$: $M_V = V_{max} - (m-M)_V$. Large deviations $\Delta M_V = M_V - M_V^{cal}$ made it possible to identify systems located in front of or behind the respective clusters. The deviations form a distribution which shows long tails of non-members on both sides of the maximum, but which within $-2 < \Delta M_V < +2$ can be rather well described by a Gaussian with the dispersion $\sigma = 0.47$ and with the mean at $-0.04$. The small shift in the mean value is gratifying as it shows that the Hipparcos and cluster samples are mutually consistent. However, the dispersion of $\Delta M_V$ is large when compared with that for the Hipparcos sample which showed an intrinsic scatter of $\sigma_{HIP} = 0.22$. We suspect that a large fraction of the scatter comes from the uncertainties in the cluster data. This is indicated by systematically smaller deviations for some better-observed clusters such as M 67. But notice also that some apparently genuine cluster members (such as ER Cep in NGC 188) in well-observed clusters do show large deviations. For some clusters, the deviations may have been increased because no account was made for differing, but usually poorly known, metallicities (as in the case of Tom 2). Instead of inventing a system of weights for individual clusters, it was decided simply to widen the range for the membership acceptance in $\Delta M_V$ to $\pm 1.0$, that is to $\pm 2.1 \sigma$ of the $\Delta M_V$ distribution. While some non-members may have entered this way to spoil our statistics, we note that the application of the Hipparcos calibration resulted – in most cases – in smaller deviations in $M_V$ than in the discovery papers where individual membership criteria were first discussed. 
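The membership test described above combines the CAL5 calibration with the $\pm 1.0$ mag acceptance window; a minimal sketch (the numerical inputs in the example are illustrative, not a real cluster system):

```python
import math

def mv_cal5(period_days, bv0):
    """CAL5 Hipparcos calibration:
    M_V^cal = -4.44 log P + 3.02 (B-V)_0 + 0.12."""
    return -4.44 * math.log10(period_days) + 3.02 * bv0 + 0.12

def is_member(v_max, dist_modulus, period_days, b_minus_v, e_bv, tol=1.0):
    """Accept a system as a cluster member when |Delta M_V| <= tol, where
    Delta M_V = (V_max - (m-M)_V) - M_V^cal and (B-V)_0 = (B-V) - E_{B-V}."""
    mv_obs = v_max - dist_modulus
    delta = mv_obs - mv_cal5(period_days, b_minus_v - e_bv)
    return abs(delta) <= tol, delta

# Illustrative system: V_max = 14.0, (m-M)_V = 9.7, P = 0.35 d,
# B-V = 0.95, E_{B-V} = 0.05:
member, delta = is_member(14.0, 9.7, 0.35, 0.95, 0.05)
print(member, round(delta, 2))  # True -0.56
```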
Thus, many systems which would not pass the $\pm 1$ magnitude deviation filter on the basis of the older calibrations CAL1 and CAL2 (Rucinski 1994, 1995) can now be considered as cluster members. In two cases, V514 Lyr (NGC 6791–V8) and IK CMa (Be 33–V2), the deviations are slightly larger than the adopted threshold; in both cases $\Delta M_V = -1.05$. An inconsistency has been committed here by removing V514 Lyr from the CL sample, but retaining IK CMa. The basic photometric data for NGC 6791 are sufficiently well known to verify that the membership of V514 Lyr in the cluster is quite unlikely. In contrast, the large and poorly known reddening ($E_{B-V} \simeq 0.7$) and, possibly, the low metallicity of Be 33 leave a large margin of uncertainty in the cluster properties to retain IK CMa as a probable member. This is the only system in Be 33 which can be considered as a member (the other one, II CMa = Be 33–V1, is definitely not). Be 33 is the youngest of the clusters in the sample, so that it is easy to keep IK CMa apart and see if it deviates in any other sense. As far as we can see, the system belongs to this cluster. The cluster surveys for variability have different depths and reach different levels of absolute magnitudes. An attempt has been made to estimate roughly the levels at which detection of variability would become impossible because of the rapid increase of errors for fainter stars. These limiting levels, $M_V^{lim}$, were estimated on the basis of the cluster distance moduli and the photometric-error data given in most of the papers by Kałużny and collaborators as the points where the errors reached about 0.05 mag; for such errors, variables with amplitudes of about 0.1 mag should still be detectable. The limiting $M_V^{lim}$ are given in Table \[tab2\] and shown in the color – magnitude diagrams in Section \[cmd\]. 
Typically, the nominal depths are $M_V^{lim} \simeq 6 - 7$, with the exception of the distant and photometrically difficult clusters Tom 2 and Be 33, where the limits are at the level of about $M_V \simeq 5$. It should be stressed that these limits are approximate and somewhat subjective, so that the CL sample is much less rigorously defined, especially at its faint end, than the BW sample. In forming the CL sample, no account has been taken of the fact that some searches of open clusters gave no discoveries of contact systems. The number of failed searches is not known. Only one case has been published of such a failed search in 6 clusters (NGC 2360, 2420, 2506, 6802, 6819 and Mel 66) by Kałużny & Shara (1988); in one of these clusters, NGC 6802, a very low-amplitude W UMa system was subsequently found (Vidal & Belmonte 1993). We have no explanation for the lack of contact systems in some clusters and we do not know if this is a real phenomenon or some statistical or observational effect. Whatever the cause, it clearly shows the limitations of the CL sample which is less rigorously defined than the BW$_3$ sample. PERIOD DISTRIBUTION {#per} =================== A comparison of the BW and CL samples is shown in Figure \[fig1\]. Since we have 98 objects of the BW$_3$ sample and 63 objects of the CL sample, this figure and the following similar ones have their left and right vertical scales in the proportion 3:2 to account approximately for the difference in sizes of the two samples. The histograms in Figure \[fig1\] and in the following similar figures are not normalized in order to show the numbers of the W UMa-type systems in each bin and thus permit a direct judgment on the Poisson uncertainties involved. 
We can see that whatever statistical properties we would like to analyze in this paper, the results will be relevant to the most common contact binaries only; objects appearing at frequencies lower than a few percent of the totality are expected to be missed. The BW$_5$ sample, consisting of 238 objects, has been added to improve the statistics for the intrinsically rare, but bright, long-period systems which can be seen to large distances. Taking into account the large per-bin uncertainties in the histograms, the period distributions for the BW$_3$ and CL samples (Figure \[fig1\]) are surprisingly similar, especially when plotted in linear units of the orbital period (the left panel in the figure). Both samples show sharp cutoffs at the orbital periods of about 0.22 – 0.25 day, and maxima at about 0.35 – 0.4 day. Systems with periods longer than about 0.7 day are absent in BW$_3$ but do appear in BW$_5$ at the level of about 0.5 percent of all systems. The short-period cutoffs are located at 0.228 day, defined by BW4.040[^2], and 0.225 day, defined by V702 Mon in Be 39; both are very close to the current record of 0.221 day for the field system CC Com (Rucinski 1977). Application of the two-distribution Kolmogorov – Smirnov test gives only 0.6 percent significance to the hypothesis that the BW and CL distributions are different, when binned in linear units of the orbital period. The same significance with the logarithmic binning is 3.5 percent. The period distribution for the BW$_5$ sample indicates that contact systems with periods longer than 0.7 day are very rare, but not absent, as could perhaps be erroneously inferred from the BW$_3$ sample. They become detectable when a sufficiently large volume is searched. 
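The K–S comparisons quoted above use the standard two-sample statistic; a minimal pure-Python sketch of the statistic itself (in practice one would use a library routine such as `scipy.stats.ks_2samp`, which also returns the significance level):

```python
def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical cumulative distribution
    functions, evaluated at every observed value."""
    xs = sorted(set(sample1) | set(sample2))
    n1, n2 = len(sample1), len(sample2)
    d = 0.0
    for x in xs:
        f1 = sum(1 for v in sample1 if v <= x) / n1
        f2 = sum(1 for v in sample2 if v <= x) / n2
        d = max(d, abs(f1 - f2))
    return d

# Identical samples give D = 0; fully separated samples give D = 1.
print(ks_statistic([0.25, 0.35, 0.45], [0.25, 0.35, 0.45]))  # 0.0
```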
While Figure \[fig1\] shows the numbers for BW$_5$ simply scaled by 5 relative to BW$_3$, to allow for the difference in the search volumes in an approximate way, Figure \[fig2\] and Table \[tab4\] give the exact relation in the form of what we call the [*period function*]{}, PF. It is an analogue of the luminosity function and gives the number of contact binaries in constant intervals of $\log P$ per unit of volume. The respective volumes of the samples to the distances of 3 kpc and 5 kpc used to derive these functions were $1.22 \times 10^6$ pc$^3$ and $5.64 \times 10^6$ pc$^3$. Errors of the PF’s can be obtained by scaling by $1/\sqrt{N}$, where $N$ are the numbers of systems in the respective period bins. The period function derived from BW$_5$ can be used only above $P \simeq 0.55$ day (or $\log P \simeq -0.25$) because distant, short-period, low-luminosity systems are eliminated from it by the magnitude limit of the OGLE survey at $I = 17.9$. The entries of PF$_5$ which are affected by this selection effect are enclosed in square brackets in Table \[tab4\]. The period functions are based on the apparent numbers of systems and are not corrected for the systems missed because of the low orbital inclination angles. COLOR DISTRIBUTION {#col} ================== The color distribution for the BW$_3$ sample in the $I$, $V-I$ system was discussed in R97a. For comparison of the BW$_3$ and CL samples, the BW$_3$ data have been transformed here to the $B-V$ color index. The BW$_5$ sample is not used in the comparison because its color distribution is affected by elimination of faint, red systems. Figure \[fig3\] shows a comparison of histograms representing the distributions for BW$_3$ and CL. The agreement is not as close as in the case of the period distributions. The two-distribution K–S test gives a 30 percent significance for rejection of the null hypothesis of identical distributions. 
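The period function introduced above (counts in constant $\log P$ bins per unit volume, with $1/\sqrt{N}$ fractional Poisson errors) is straightforward to construct; a minimal sketch, where the bin edges are illustrative and not those of Table \[tab4\]:

```python
import math

def period_function(periods, volume_pc3,
                    logp_min=-0.7, logp_max=0.1, bin_width=0.1):
    """Number of contact binaries per unit volume in constant log P bins.
    Returns the PF values and the 1/sqrt(N) fractional errors per bin
    (None for empty bins). Bin edges here are illustrative defaults."""
    nbins = round((logp_max - logp_min) / bin_width)
    counts = [0] * nbins
    for p in periods:
        i = math.floor((math.log10(p) - logp_min) / bin_width)
        if 0 <= i < nbins:
            counts[i] += 1
    pf = [n / volume_pc3 for n in counts]
    frac_err = [1 / math.sqrt(n) if n > 0 else None for n in counts]
    return pf, frac_err

# Six made-up periods against the BW_3 volume of 1.22e6 pc^3:
pf, err = period_function([0.25, 0.3, 0.35, 0.4, 0.45, 0.5], 1.22e6)
```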
The difference in the distributions is caused mostly by the spike in the CL distribution at $(B-V)_0 = 0.75$. However, the end points of the color distributions coincide well, with the distribution for BW$_3$ being a bit wider – as expected – because of the reddening correction uncertainties. In fact, four observational effects are expected to broaden the BW$_3$ color distribution; these are: (1) uncertainties in the reddening in Baade’s Window, mostly from the spotty character of the reddening, (2) the crude model of the reddening adopted in R97a, (3) photometric blending of stars in the extremely dense BW field and (4) the $(V-I)_0$ to $(B-V)_0$ color transformations. Because the BW$_3$ sample is expected to be complete and statistically better defined than the CL sample, we will tolerate the possibly larger color uncertainties for the BW$_3$ systems and use this sample in the next Section \[cmd\] as an external reference for comparison with individual clusters. The total range of colors observed for the BW$_3$ sample is $0.19 < (B-V)_0 < 1.54$. Because of the possibility of errors for individual systems, we will also consider the 90 percent range, here defined as the interval where 90 from among 98 systems of the BW$_3$ sample are located. This range extends over $0.3 < (B-V)_0 < 1.2$ and almost perfectly coincides with the full range observed for all binaries of the CL sample, which extends over $0.31 < (B-V)_0 < 1.21$. We note that while relatively red systems are seen in the BW$_3$ sample, none of the stars in the CL sample is as red as CC Com with $(B-V)_0 = 1.24$ (Rucinski 1977); this is partly expected as $M_V = 6.7$ of CC Com is close to, or perhaps even beyond the limiting levels of the cluster searches. COLOR – MAGNITUDE DIAGRAMS {#cmd} ========================== The color – magnitude diagram for the BW$_3$ sample is shown in Figure \[fig4\]. 
The thin lines in the figure give the observed isochrones for Praesepe and NGC 6791, which are used for reference. The former is a moderately old cluster with an age of about 0.9 Gyr, while the latter is one of the oldest clusters known, with an age of about 6 – 8 Gyr. Only one cluster in the CL sample is younger than Praesepe. It is Be 33, at 0.7 Gyr. However, as was commented in Section \[samples\], we are not sure if its only member, IK CMa, really belongs to it, so it has been decided to use Praesepe as a case of a “young” old open cluster with a contact system. The band of the contact systems in Baade’s Window in Figure \[fig4\] extends along the main sequence with a width of about 1 magnitude, and shows a concentration of systems in the region of the TOP of the oldest galactic disk population. The width of the sequence may be due to observational errors and spots on the stars, but also to a spread in the mass-ratios. The latter is entirely unaccounted for in the absolute magnitude calibration, but its influence can be predicted by considering how total luminosity and total radiating area change with variation in the mass ratio. A small insert in the lower left corner illustrates how changes in the mass-ratio can modify the position of a contact system in the color–magnitude diagram (for details, see CAL5). For identical stars ($q=1$), the shift is upward by $-0.75$ mag, but for less massive secondary components ($q \rightarrow 0$), the secondaries always provide relatively more radiating area than luminosity, so that the color becomes redder. The color shift is the largest for moderate mass-ratios around $q \simeq 0.5 - 0.6$. Figures \[fig5\] – \[fig7\] show the positions of contact systems in the individual clusters. For each cluster, the approximate run of the respective observed isochrone is shown, together with positions of the isochrones for Praesepe and NGC 6791. 
Close examination of the figures shows that the contact systems are [*not*]{} concentrated in the immediate vicinity of the respective TOP’s. In fact, in those clusters where many systems were detected, such as Cr 261 or Be 39, the systems appear on both sides of each TOP, among the MS systems as well as among the Blue Stragglers. In all cases, except (marginally) for Praesepe and for Be 33 (where the association of the only system with the cluster may be questioned), the locations of the TOP’s themselves fall within the 90 percent range of the BW$_3$ sample, that is $0.3 < (B-V)_0 < 1.2$. What we see is not that the positions of the systems are related to the location of the TOP, but rather, that once the cluster TOP falls into the above range, the systems can then appear anywhere within it. Compare, for example, the diagrams for populous clusters such as Cr 261, Be 39 or M 67. In the last case, only 3 systems are known, but they span the whole width of the color range. The last panel of Figure \[fig7\] contains the main sequence of Praesepe with superimposed marks giving masses according to an $M_V$–mass calibration for disk stars by Kroupa et al.  (1993). As was discussed above, and as the inserts in the figures illustrate, the unknown mass-ratios can modify the luminosities and colors to some extent, but we can expect that it is the primary components that define the positions of the systems in the color–magnitude diagrams. Thus, the last panel of Figure \[fig7\] gives a rough idea about the primary-component masses involved. They are apparently concentrated in the range of about 0.65 – 1.6 M$_\odot$, with the maximum close to 1 M$_\odot$. Thus, as has been known for some time, the contact binaries of the W UMa-type are typically composed of solar-type stars. PERIOD – COLOR RELATION {#PC} ======================= The period–color (PC) relation is a useful tool for studies of contact binaries. 
Effectively, it is a relation similar to the color–magnitude diagram, but with one of the photometric parameters replaced by the orbital period, which is known with an accuracy several orders of magnitude higher than either brightness or color. The PC relation for the BW$_3$ sample (with $V-I$ as the base color) was presented in R97a, where the special significance of the short-period blue-envelope (SPBE) was also stressed. The concept of the SPBE is similar to that of the Zero-Age main sequence, in the sense that a system can move only in certain directions away from the SPBE. Here, in the period – color plane, the directions are down and right (see Figure \[fig8\]). A system can be redder and larger (i.e. can have a longer orbital period) because of the evolutionary effects, while its color can be also redder because of the interstellar reddening. Location of the SPBE does depend on metallicity, and for low \[Fe/H\] it is shifted to bluer colors and shorter periods (CAL2). The PC relation for the BW$_3$ sample using the $(B-V)_0$ color is shown in Figure \[fig8\]. A new fitting formula for the SPBE which is over-plotted in the figure was found by matching the previously used expression, $(V-I)_{SPBE} = 0.053 \times P^{-2.1}$, after its transformation into $B-V$ using the MS transformations of Bessell (1979, 1990). It is: $(B-V)_{SPBE} = 0.04 \times P^{-2.25}$. It must be stressed that the numerical values in both formulae have no physical significance. A few systems located slightly above the SPBE may be low-metallicity objects or cases of poor/blended photometry. What is unusual in Figure \[fig8\] is that we do not see a well-defined [*period-color relation*]{}, because the scatter is large, primarily due to the presence of some red, long-period systems filling the lower right of the figure. While their locations are not a priori impossible, we do not see such systems in the sky-field sample (eg. 
the Hipparcos sample, CAL5), or in the CL sample (see Figure \[fig9\]). We suspect that photometric blending of images leading to wrong colors or wrong reddening corrections and/or period aliases may have resulted in populating this part of the PC diagram. Inspection of the OGLE light-curve data indicates that only two systems in this region have well defined light curves, whereas most of the curves show small amplitudes and a large photometric scatter, which is possibly due to the use of period aliases[^3]. Because the SPBE is not expected to differ much within the range of metallicities observed for the clusters of the CL sample, all cluster systems are included in Figure \[fig9\]. They have been divided into two groups to avoid congestion of the symbols. The four oldest, most populous clusters are shown in the upper panel of the figure, while the younger ones are collected in the lower panel. We see signatures of low metallicity for the systems in Tom 2 and Be 33, but the rest conform to the expected tendency of confinement below the SPBE for normal-metallicity contact systems. Open symbols signify Poor Thermal Contact systems. Two among them, one in Cr 261 and one in NGC 188, are clearly more evolved, showing longer orbital periods than systems of similar colors. AMPLITUDES OF LIGHT VARIATIONS {#ampl} ============================== As was discussed in Section 5 of R97b, distributions of the light-curve amplitudes contain information about the mass-ratio distribution. Because the light curves are dominated by geometrical effects of the strong distortion of the components, rather than by properties of stellar atmospheres (such as limb and gravity darkening laws), it is relatively easy to predict distributions of the amplitudes of light variations assuming random orbital inclinations and some plausible mass-ratio distributions. 
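The forward prediction just described (random inclinations plus an assumed mass-ratio distribution mapped onto amplitudes) can be illustrated with a toy Monte Carlo. The amplitude law below is a deliberately crude assumption for illustration only, not the geometric Roche-model calculation of R97b; it merely encodes the two qualitative facts stated in the text, that amplitudes grow with $q$ and shrink at low inclination:

```python
import math
import random

def simulate_amplitudes(q_draw, n=10000, seed=42):
    """Toy Monte Carlo: draw mass ratios q from q_draw and random
    orbital orientations (cos i uniform on [0, 1]), then map them to
    light-curve amplitudes with an assumed, illustrative law
    a = a_max(q) * sin(i)^3, where a_max(q) = 0.75 q / (1 + q)."""
    rng = random.Random(seed)
    amps = []
    for _ in range(n):
        q = q_draw(rng)
        cos_i = rng.random()            # isotropic orbit orientations
        sin_i = math.sqrt(1.0 - cos_i * cos_i)
        a_max = 0.75 * q / (1.0 + q)    # assumption: deeper eclipses as q -> 1
        amps.append(a_max * sin_i ** 3)
    return amps

# With a flat toy mass-ratio distribution on [0.1, 1.0], small
# amplitudes dominate, as the inclination argument in the text suggests:
amps = simulate_amplitudes(lambda rng: rng.uniform(0.1, 1.0))
```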
When mass-ratios are large ($q \rightarrow 1$), large and small amplitudes can be observed, depending on the orbital inclination, but when the mass-ratios are small, only small amplitudes are possible, irrespective of the inclination. However, the inverse problem of [*determination*]{} of the mass-ratio distribution, $Q(q)$, from the amplitude distribution, $A(a)$, is not an easy one, as it would involve the solution of an integral equation representing a convolution of distributions (R97b). Such a determination could be contemplated for a sample of the order of one thousand objects or more. Here, we limit ourselves to a comparison of the amplitude distributions for the BW$_3$ and CL samples which is shown in Figure \[fig10\]. The distributions are apparently slightly different in that the CL sample appears to contain more large-amplitude systems than the BW$_3$ sample. The numbers per bin are small, so the differences are not really significant. Besides, the difference can be explained by the use of the $I$-band amplitudes for the BW$_3$ sample, which are expected to be systematically slightly smaller than the amplitudes in the $V$ band. A simple scaling would not be prudent as conversions depend on combinations of geometrical parameters, but the effect is not expected to be larger than 3 to 8 percent. Thus, taking into account the per-bin uncertainties, we conclude that the amplitude distributions for the BW$_3$ and CL samples have basically identical shapes. However, as can be seen in Figure \[fig10\], both distributions are very different from that for the bright systems of the sky field (R97b), the latter being heavily biased by the large-amplitude systems which tended to be preferentially detected in non-systematic searches of the sky. In the paper on Cr 261, Mazur et al.
(1995) pointed out an interesting property of those contact systems which occur among the blue stragglers of the cluster: All of them were found to have small amplitudes, which raises a possibility that all of these systems have small mass-ratios. A meaningful analysis of the above effect could be done only for those clusters which have contact systems on both sides of the TOP. The amplitudes have been shown schematically for the clusters of the CL sample in Figures \[fig5\] – \[fig7\]. There exist some differences between individual clusters, eg. in NGC 6791, all systems have small amplitudes, while all in NGC 188 have large amplitudes, but these may be due to small number statistics. Generally, we do not see any clear tendency for small amplitudes above the respective TOP’s and Cr 261 remains the only cluster where the effect is rather clearly visible (the two PTC systems excepted). The tendency is not so obvious in Be 39 which is the next cluster in terms of the number of contact systems. As expected, the BW$_3$ sample does not show any segregation in the amplitudes along the main sequence, but this can be explained by a mixture of ages in the BW$_3$ sample. The amplitudes for that sample have been shown symbolically in Figure \[fig4\]. LUMINOSITY FUNCTION {#LF} =================== The absolute magnitudes $M_V$ for the BW and CL samples come from two entirely different determinations. The ones for the BW sample have been estimated via the $M_I = M_I (\log P, (V-I)_0)$ calibration in R97a (this involved an iteration in $E_{V-I}$), and then adjusted via $M_V = M_I + (V-I)_0$; the ones for the CL sample result from the observed magnitudes $V$ and assumed distance moduli of the clusters. In spite of coming from very different sources, the $M_V$ absolute magnitude distributions turn out to be again similar, as can be seen in Figure \[fig11\], in that both show maxima in the interval $3 < M_V < 6$. 
The CL sample shows some deficiency at the faint end relative to BW$_3$, but is remarkably similar to BW$_5$ which we know to be more affected by the magnitude limit of the OGLE survey than BW$_3$. For the limiting magnitude of the OGLE search of $I_{lim} = 17.9$, taking into account the interstellar extinction and the typical colors, as they vary along the absolute-magnitude sequence, the expected limits for BW$_3$ and BW$_5$ are $M_V \simeq 5.5$ and $M_V \simeq 4.2$, respectively. Nominally, the CL sample should, for some of the clusters, reach depths of $M_V \simeq 6-7$, that is even deeper than the BW$_3$ sample; however, its low luminosity limit is, by necessity, a rather fuzzy one, being defined by the increases in the photometric errors for the contributing clusters rather than by a fixed distance, as in the case of the BW samples. Because of this deficiency, the CL sample is not considered in the discussion of the luminosity function. We have good reasons to think that the BW$_3$ sample is fully complete to $M_V \simeq 5.5$, as the star number densities estimated to 3 kpc were found in R97a to be identical to those for 2 kpc. A few faint systems that populate the very tail of the BW distribution in Figure \[fig11\] to $M_V \simeq 9$ must be nearby objects. They have been checked in the OGLE data for anomalies, but appear to have well defined light curves (but errors in colors are obviously possible); these are the variables BW3.053, 4.040, 5.114, 7.112, 8.072. All of them, except BW7.112, have well defined light curves with large amplitudes. The absolute magnitude distributions shown in Figure \[fig11\] have been converted into the luminosity functions (LF’s) by simply dividing the system numbers by the total volumes of the BW$_3$ and BW$_5$ samples, $1.22 \times 10^6$ pc$^3$ and $5.64 \times 10^6$ pc$^3$, for the depths of 3 kpc and 5 kpc, and for the $40' \times 40'$ field of view.
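As a consistency check, the quoted sample volumes follow from the pencil-beam geometry alone; a short sketch (function names are ours):

```python
import math

# Pencil-beam volume of the 40' x 40' OGLE field to a depth d (pc):
# V = (Omega / 3) * d^3, with the small solid angle Omega in steradians.
side = (40.0 / 60.0) * math.pi / 180.0   # 40 arcmin in radians
omega = side ** 2                        # ~1.35e-4 sr

def volume(d_pc):
    """Search volume in pc^3 to depth d_pc."""
    return omega / 3.0 * d_pc ** 3

print(volume(3000.0))  # ~1.22e6 pc^3 (BW_3)
print(volume(5000.0))  # ~5.64e6 pc^3 (BW_5)
```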
The limiting absolute magnitudes for the BW$_3$ and BW$_5$ samples (assuming constant interstellar absorption beyond 2 kpc, see R97a) are $M_V = 5.5$ and 4.2, respectively. Beyond these completeness limits, the numbers of stars are expected to decrease because of the shrinking search volumes. These decreases should follow the standard 4-times per magnitude volume-size relation and thus can be accounted for by the volume corrections. Obviously, an application of such corrections magnifies the increasing Poisson errors so that an extension to fainter magnitudes can be done only slightly beyond the completeness limits of the survey. In our case, this extension was made into only two or three bins beyond the respective completeness limits of the BW samples. The luminosity functions derived for the total volumes of the BW$_3$ and BW$_5$ samples are listed in Table \[tab5\]. In this table, $N_3$ and $N_5$ are the numbers of contact systems in the BW$_3$ and BW$_5$ samples in one magnitude wide bins, centered on $M_V$. LF$_3^{obs}$ and LF$_5^{obs}$ are the corresponding observed luminosity functions, in units of $10^{-5}$ pc$^{-3}$, with entries which have been corrected by the volume correction of 3.981 times per one magnitude increment taken in square brackets. In addition to the observed luminosity functions for the BW$_3$ and BW$_5$ samples, Figure \[fig12\] shows the luminosity function for the solar neighborhood MS stars LF$_{MS}$ (Wielen et al.  1983), which has been arbitrarily scaled down by a factor of 130. This factor was selected to approximately match both LF$_{BW}^{obs}$ in the interval $3 < M_V < 5$ where the statistics should be the most reliable, i.e. it should not be affected by small-number fluctuations at the bright side and discovery-selection effects at the faint side. Obviously, the same factor of 130 gives the inverse apparent frequency for the W UMa-type systems and directly shows that these binaries are indeed very common. 
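The 3.981-per-magnitude correction applied to the bracketed entries of Table \[tab5\] is just the inverse of the volume shrinkage; a one-line sketch (the function name is ours):

```python
# Beyond the completeness limit the search volume shrinks by 10**0.6
# (~3.981) per magnitude of depth, so observed bin values are scaled back
# up by the same factor.
def volume_corrected(lf_obs, mags_beyond_limit):
    """Volume-correct a luminosity-function entry lying mags_beyond_limit
    magnitudes past the completeness limit."""
    return lf_obs * 10.0 ** (0.6 * mags_beyond_limit)
```

This is why the correction can be trusted only a few bins past the limit: the factor, and with it the Poisson error, grows geometrically.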
We discuss the frequency of occurrence more fully in the next section. A comparison of the luminosity functions in Figure \[fig12\] reveals surprisingly close similarities between the contact binary and MS functions (note in particular the dip at $M_V = 7$ for BW$_3$). However, there exist also some obvious differences between the shapes of LF$_{BW}$ and LF$_{MS}$; in particular, we see relatively fewer high-luminosity systems than low-luminosity ones, an effect which is stronger for the BW$_5$ sample. These differences can be ascribed to the fact that LF$_{MS}$ is based on the local volume defined by the distance of less than 20 pc from the Sun, while the BW functions were obtained from a pencil-beam search reaching deep into the galactic space. If the contact binaries follow the distribution of disk stars, then we can expect changes in their numbers due to the structure of the galactic disk. This links the luminosity function and frequency of occurrence of contact binaries with the description of the galactic disk structure. FREQUENCY OF OCCURRENCE OF CONTACT SYSTEMS {#freq} ========================================== Influence of the galactic disk structure ---------------------------------------- The models of Bahcall & Soneira (1980) were used by Paczynski et al.  (1994) to find out the numbers of disk stars along the OGLE line of sight. The same approach has been followed here with some minor modifications. The ratio of the star number density $n(d)$ at a distance $d$ to the local density $n_0$ can be expressed as a product of two exponentials: $c(d) = n(d)/n_0 = \exp(d/h_R)\, \exp(-|z|/h_z)$, where $h_R$ and $h_z$ are the galactic disk length and height scales, respectively. $h_z$ depends on the absolute magnitude of the MS stars as the galactic-plane concentration is spectral-type dependent.
Bahcall (1986) suggested the following relations: $h_z = 90$ pc for $M_V < 2.3$ and $h_z = 325$ pc for $M_V > 5.1$, with a linear interpolation between these values: $h_z = 90 + 83.9 \, (M_V - 2.3)$ pc. Paczynski et al.  (1994) noted that for the galactic coordinates of the OGLE search ($b \simeq -4^\circ$, implying $z \simeq 0.068\,d$) and for $h_R = 3.5$ kpc, the two exponential terms practically cancel out and the density stays approximately constant. However, the newest discussion of the galactic disk by Sackett (1997) suggests a shorter disk length scale, $2.5 < h_R < 3.0$ kpc, so that the planar term may win leading to an increase in the numbers of stars at large distances. Since, as we argued in R97a, the contact binaries are apparently genuine members of the old disk population, the increase in star numbers along the OGLE line of sight for the shorter $h_R$ could possibly explain the high numbers of contact binaries in the BW sample. The density change factors $c(d) = n(d)/n_0$ are shown in Figure \[fig13\] for two values of $h_R$, 2.5 and 3.5 kpc. The shorter value of $h_R$ results in a larger differentiation between the star number densities for various values of $M_V$. The decrease in numbers of early-type systems with distance is clearly visible for both values of $h_R$. Comparison of the luminosity function for contact binaries with the local MS function requires knowledge of the mean weighted values of $c(d)$ to the limits of 3 and 5 kpc, obtained by taking into account the increasing volume with the distance. Because of this weighting, the systems at large distances contribute more to the mean densities obtained from the BW samples than the local systems. The weighted values of the factors $c(d)$ for each bin of $M_V$ can be calculated from: $\bar{c}(d_l, h_R, h_z(M_V)) = \int_0^{d_l} c(\rho)\,\rho^2\,d\rho / \int_0^{d_l} \rho^2\,d\rho $. 
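The density factors and their volume-weighted means can be evaluated by a direct numerical transcription of the formulae above (a midpoint-rule integration; function names are ours, distances in pc):

```python
import math

def h_z(M_V):
    """Bahcall (1986) height scale in pc, linear between M_V = 2.3 and 5.1."""
    if M_V < 2.3:
        return 90.0
    if M_V > 5.1:
        return 325.0
    return 90.0 + 83.9 * (M_V - 2.3)

def c(d, h_R, M_V):
    """Density ratio n(d)/n_0 along the BW line of sight (|z| ~ 0.068 d)."""
    return math.exp(d / h_R) * math.exp(-0.068 * d / h_z(M_V))

def c_bar(d_l, h_R, M_V, n=2000):
    """Volume-weighted mean of c(d) to depth d_l, by the midpoint rule."""
    num = den = 0.0
    for i in range(n):
        rho = (i + 0.5) * d_l / n
        num += c(rho, h_R, M_V) * rho ** 2
        den += rho ** 2
    return num / den
```

For mid-range $M_V$ and $h_R = 3500$ pc the two exponents nearly cancel, as noted by Paczynski et al. (1994), while the shorter $h_R = 2500$ pc makes the planar term win for the fainter, plane-concentration-weak types.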
The luminosity functions can now be related through LF$_{BW}^{corr}$ = LF$_{BW}^{obs} / \bar{c}(d_l, h_R, h_z)$, and LF$_{BW}^{corr}$ = LF$_{MS} \times f$, where $f$ is the frequency of occurrence of contact binaries. $d_l$ is the depth of the sample, equal to 3 or 5 kpc, respectively, while $h_R$ has been assumed equal to 2.5 and 3.5 kpc; $h_z(M_V)$ is given by the interpolation formula of Bahcall (1986) cited above. The corrected luminosity functions are given in Table \[tab6\]. The above formulation permits a comparison of the LF’s for the contact binaries and the MS stars on a per-$M_V$-bin basis. The frequencies $f$ derived in such a way are shown in Figure \[fig14\] and are also listed in Table \[tab7\]. The table gives the inverse apparent frequency of occurrence of contact binaries, $1/f$, expressed as the number of MS stars per one contact binary. The line WM gives the weighted mean values of the inverse frequencies over the available $M_V$. Because of the large errors in the first bin at $M_V = 1$, there are no changes in these frequencies if this bin is excluded from averaging. This bin is in fact affected by the bright limit of the OGLE survey. The bright limit of the sample at $I \simeq 14.1$ (R97a) translates, for the average interstellar absorption in this direction, into the absolute magnitude limits for BW$_3$ and BW$_5$ of $M_I \simeq 0.8$ and $-0.3$, respectively. Taking into account typical colors along the contact binary sequence, these upper limits correspond to $M_V \simeq +1.0$ and $-0.1$. Thus, almost nothing can be said about the frequency of occurrence for $M_V < 1.5$. As we will see below in Section \[bright\], the sky sample of bright systems gives us information that the frequency of contact binaries must fall for such systems. Except for the uncertainty at the bright end, Figure \[fig14\] shows that the corrections for the galactic structure produce the frequency distributions which are remarkably flat.
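The two relations above combine into a one-line estimate of the inverse apparent frequency per $M_V$ bin; the numbers in the example are purely illustrative, not entries of Tables \[tab6\] or \[tab7\]:

```python
def inverse_frequency(lf_obs, cbar, lf_ms):
    """Number of MS stars per one contact binary in a given M_V bin:
    LF_corr = LF_obs / cbar  and  1/f = LF_MS / LF_corr."""
    lf_corr = lf_obs / cbar   # undo the galactic-structure enhancement
    return lf_ms / lf_corr

# Hypothetical bin (illustrative numbers only): an observed LF of
# 6e-5 pc^-3, a density enhancement cbar = 1.5 and a local MS LF of
# 5.2e-3 pc^-3 give about 130 MS stars per contact binary.
```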
The systematic differences in the luminosity function in Figure \[fig12\], which are best visible for intrinsically bright systems, are taken into account by the $M_V$-dependence of the disk height scale. It is possible that the corrections are too large for the bin at $M_V =2$, but the data are consistent with the flat frequency distribution even for this bin. The resulting apparent frequency of the contact binaries in the BW direction is one system per about 130 MS stars for $h_R = 2.5$ kpc and one system per about 100 MS stars for $h_R = 3.5$ kpc. Unfortunately, at this stage, we cannot decide which number is the correct one. On one hand, uncertainties in the galactic disk structure are too large to make a preference with respect to $h_R$. On the other hand, we cannot use an argument that the contact binary frequency should have a flat distribution in $M_V$ to determine $h_R$. Although our BW samples are perhaps among the first volume-limited ones in this particular galactic direction, we would not like to over-interpret the results as we feel that our crude interstellar-absorption model (R97a) may couple with the inferred spatial distribution of contact systems along the OGLE line of sight. Frequency 1/130 or 1/100; why so high? -------------------------------------- The frequency of occurrence of contact binaries in the BW samples of 1/130 or 1/100 that we have estimated above is some two times higher than the previous estimate of 1/250 – 1/300 that we obtained from the same BW material in R97a and for the old open clusters in CAL1. We will try to find explanations for these discrepancies in turn. First of all, we should stress that the space density of contact binaries derived in R97a for BW$_3$, to $M_I = 4.5$ (or equivalently to $M_V \simeq 5.5$) of $7.6 \times 10^{-5}$ systems per pc$^3$ is a correct one. Paradoxically, the problem is with relating this number to the number of MS stars in the same volume. 
Since the numbers of stars that had been analyzed for variability by the OGLE project in successive apparent magnitude bins were not available, the numbers of stars with good photometry were used in R97a instead. For fainter magnitudes, the quality of photometry drops and the blending becomes more severe. We made therefore an assumption that the OGLE sample of stars with good photometry and the sample analyzed for variability had similar biases. The counting corrections for the good-photometry sample were quite large for fainter magnitudes, of the order of a factor of 2 or more, and this could be a source of a potentially large error. It is quite possible that the assumption of the identical character of biases in both samples was incorrect. The approach presented here is simpler: We find the BW luminosity function by simply counting the numbers of the contact systems, then correct it for the galactic disk structure and compare it with that for the MS stars from Wielen et al.  (1983) by taking the ratio of the functions. If, for some reason, some contact systems are missed, we can only under-estimate their frequency of occurrence. As we discussed above, the main difficulty here is our insufficient knowledge of the disk length scale $h_R$, but the frequency comes out large for any of the two possible choices. Thus, we feel that the frequency for the BW sample is indeed high, about two times higher than estimated in R97a. Concerning the frequencies observed in old open clusters: An earlier preliminary estimate of the apparent frequency in the clusters (CAL1) gave one contact system per $275 \pm 75$ MS stars. This estimate was based on seven clusters of considerable spread in age from among the eleven that contribute to the present CL sample. As we know, we have good reasons to suspect that numbers of contact binaries increase with time. 
Therefore, the difference between the above estimate and the new determination for the BW sample may indicate an older – on the average – age of the latter sample. Estimates of the apparent frequency were published for two of the four clusters studied subsequent to CAL1, Cr 261 (Mazur et al.  1995) and NGC 7789 (Jahn et al.  1995). Depending on how the cluster membership of the systems is established, the frequencies were found to be 1/140 – 1/88 for Cr 261 and 1/178 – 1/150 for NGC 7789. These determinations are in full agreement with our new value for the BW sample and with the frequency showing an increase with age because Cr 261 is an older cluster than NGC 7789. Is there a low-luminosity end of the contact binary sequence? ------------------------------------------------------------- The result on the high apparent frequency of incidence of contact systems in the BW sample is based primarily on the moderately bright among them, mostly within $2.5 < M_V < 5.5$. The volume corrections are large for systems fainter than $M_V = 5.5$, and there are no data for $M_V > 7.5$. However, we have no basis to assume that the contact binary sequence stops at $M_V \simeq 7.5$ because we appear to see a few even fainter systems in the BW$_3$ sample (provided these are not artifacts of too red colors). Probably the only good argument for the existence of the sudden drop in the sequence is the constancy of the star number density when the limiting depth of the BW sample is evaluated for the depths of 3 and 2 kpc (R97a); apparently, no intrinsically faint systems (whose existence would produce a density increase for the smaller volume) have been detected in the solar neighborhood. However, the 2 kpc sample consists of only 27 objects, so that this argument is not very strong. Otherwise, the fact that we do not see faint contact systems may fall into the class of the [*absence of evidence*]{} versus [*evidence of absence*]{} reasoning.
We do see a sharp end of the period distribution for both the BW$_3$ and CL samples at about 0.225 day, but this is not an argument for a sharp cutoff in the absolute magnitude sequence as $M_V$ changes very rapidly for short periods and red colors, where the period-color relation becomes almost vertical (see Figures \[fig8\] and \[fig9\]). The period distribution is stretched for short periods when logarithmic units are used (see Figure \[fig1\]) and there is more room for short-period systems. Arguably, the logarithmic units are the proper ones in view of the power-law dependencies governing the angular momentum loss. The predictions based on the full convection limit (Rucinski 1992) place the expected low luminosity limit of the contact-binary sequence at $B-V \simeq 1.5 - 1.6$, that is at spectral types M2 – M4, leaving a large gap in the parameter space between the location of the current “record holder”, CC Com[^4] at $(B-V)_0 = 1.24$ and $P = 0.221$ day, and the expected full-convection limit. We note that the close pair of M-type dwarfs, BW3.038 (Maceroni & Rucinski 1997) with the period of 0.1984 day, is on its way to becoming a contact system. Perhaps corresponding contact systems already exist and we simply have not found them? The fact that such faint systems have not been detected in the sky field is not an argument as the sky has been searched very poorly and unsystematically. Comparison with the sky-field and cluster data {#bright} ---------------------------------------------- The variability amplitude distribution (Section \[ampl\]), which is apparently biased to large values of the amplitudes, gives us a strong indication that many low inclination systems remain to be discovered in the solar neighborhood.
The pioneering study of Duerbeck (1984) attempted to correct for the orbital inclination discovery biases in the sky sample, leading to an estimate of the apparent frequency of occurrence of contact binaries of one system per about one thousand MS stars. In view of the subsequent work (CAL1, R97a), this estimate seemed too low by a factor of 3 – 4. Now, a new increase in the apparent frequency is postulated to the level of one contact system per about one hundred MS stars, a change which may be considered as quite drastic. Therefore, we must inquire whether the current results based on the BW sample are [*consistent*]{} with the sky field statistics. With the luminosity functions in Table \[tab6\] or in Figure \[fig12\], one can easily calculate the number of stars in the “contact binary sky” by considering the space volume accessible for discoveries for a given limiting magnitude $V_{lim}$. We can use either the luminosity functions LF$_3^{corr}$ or LF$_5^{corr}$ or the scaled main-sequence function, recognizing that the latter has smaller statistical uncertainties. Given a LF ($M_V$), as in Table \[tab5\], one can calculate for each bin of $M_V$ the total number of stars observable to a given apparent limiting magnitude $V_{lim}$ from: $n (M_V, V_{lim}) = {\rm LF}(M_V) \times 4/3 \pi d^3(M_V, V_{lim})$, where $d(M_V, V_{lim}) = 10^{1+0.2(V_{lim}-M_V)}$. The examples for $V_{lim} = 7.5$ (left vertical axis) and $V_{lim} = 12.5$ (right vertical axis) are shown in Figure \[fig15\] for the MS luminosity function scaled by the factor of 130; for other $V_{lim}$ the numbers can be obtained by the usual uniform spatial density scaling ($10^3$ times per five magnitudes or 3.981 times per one magnitude). The last column of Table \[tab5\] gives $n (M_V)$ for $V_{lim} = 7.5$. How do the results compare with the data for the sky field?
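The per-bin calculation of $n(M_V, V_{lim})$ is a direct transcription of the two formulae above (interstellar absorption neglected in this sketch; function names are ours):

```python
import math

def distance_pc(M_V, V_lim):
    """Distance (pc) at which a star of absolute magnitude M_V appears
    at apparent magnitude V_lim: d = 10**(1 + 0.2 (V_lim - M_V))."""
    return 10.0 ** (1.0 + 0.2 * (V_lim - M_V))

def n_observable(lf_bin, M_V, V_lim):
    """All-sky number of stars from one M_V bin, for a luminosity-function
    value lf_bin (stars pc^-3 per bin) and uniform spatial density."""
    return lf_bin * 4.0 / 3.0 * math.pi * distance_pc(M_V, V_lim) ** 3
```

At $V_{lim} - M_V = 0$ the distance is 10 pc (zero distance modulus), and deepening $V_{lim}$ by five magnitudes multiplies the counts by $10^3$, reproducing the scaling quoted above.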
The predicted numbers at the faint end of the contact sequence are low, but – still – of the order of one hundred faint systems similar to CC Com are expected over the whole sky to $V_{lim} = 12.5$, in contrast to a dozen or so currently known. While there is no question that substantial contributions for a resolution of this discrepancy should come from large scale, yet simple surveys of the sky, similar to that currently conducted by Pojmanski (1997, 1998), a survey similar to OGLE, but deeper, would probably offer a more efficient approach to learn about the faint end of the sequence. If the limit of OGLE were not $I_{lim} = 17.9$, but 19.9, we would already know the luminosity function beyond the position of CC Com. The situation is very different at the bright end. The BW samples give us practically no information as only one, the same, contact system appears in both BW samples in the $0.5 < M_V < 1.5$ bin, so that the sequence really starts with the bin $1.5 < M_V < 2.5$. This dearth of the systems is due to the bright limit of the OGLE survey at $M_I \simeq 0.8$ and $-0.3$, for the BW$_3$ and BW$_5$ samples, respectively. Thus, we are forced to use the scaled MS data at the bright end and then check if the frequency scaling does apply here. We should note at this point that the CL sample has a bright end which is defined by the two brightest systems, HQ Mus and V732 Cas, at $M_V = 2.0$. Figure \[fig15\] shows the well known fact that the visibility of stars in the sky is heavily biased toward intrinsically bright objects. The predictions based on the scaled MS data give $40 \pm 7$ contact systems in the whole sky to $V_{lim} = 7.5$, but in that number as many as $23 \pm 5$ would be contributed by the bin $1.5 < M_V < 2.5$. The additional $10 \pm 2$ systems would come from the next bin $2.5 < M_V < 3.5$ (see the last column of Table \[tab5\]). If we eliminate the first bin at $M_V = 2$, the total number of systems with $M_V > 2.5$ should be $17 \pm 2$.
These predictions, when confronted with the observed numbers of contact binaries in the sky, directly tell us that their frequency must decrease at high luminosities as we simply do not see that many bright contact binaries. At present, we know of one system at $V = 4.7$ ($\epsilon$ CrA), one system at $V = 5.9$ ($44i$ Boo B) and six further systems (S Ant, V535 Ara, RR Cen, VW Cep, AW UMa and HT Vir) are brighter than $V_{lim} = 7.5$. We know from the Hipparcos survey (CAL5, Duerbeck 1997) that most of them are indeed intrinsically luminous: Among those 8 systems, 6 fall in the interval $1.5 < M_V < 3$, while two ($44i$ Boo B and VW Cep) have $M_V > 5$. Thus, we see about one half of the number of systems predicted by the high frequency of occurrence of 1/130, but most of this discrepancy comes from the high luminosity end where the frequency must be definitely lower. Let us assume that the number of contact binaries to $V_{lim} = 7.5$ is indeed 8. We can learn about the discovery selection effects at fainter magnitudes by considering the numbers of systems predicted to various $V_{lim}$. For each magnitude increase, we expect an increase in the numbers of contact binaries by a factor of 3.981. Then, the progression in the magnitude limits, $V_{lim} = 7.5, 8.5, 9.5, 10.5, 11.5, 12.5$, should lead to predicted numbers of 8, 32, 127, 505, 2009 and 8000 systems. Since we know some 600 contact binaries in the sky (some fraction of that in localized deep-search areas), we have a direct indication of discovery selection effects appearing at the level of about $V_{lim} \simeq 10 - 11$. Only wide field surveys can confirm or disprove this conjecture.
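The quoted progression follows from the uniform-density volume scaling; a minimal sketch:

```python
# Uniform spatial density: each magnitude of extra depth multiplies the
# accessible volume, hence the expected counts, by 10**0.6 ~ 3.981.
def predicted_counts(n_bright, n_mags):
    """Expected counts at successive one-magnitude-deeper limits,
    starting from n_bright systems at the brightest limit."""
    return [n_bright * 10.0 ** (0.6 * k) for k in range(n_mags)]

counts = predicted_counts(8.0, 6)   # V_lim = 7.5, 8.5, ..., 12.5
# rounds to roughly 8, 32, 127, 505, 2009, 8000
```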
Comparison of the frequency of contact binaries to that of other MS binaries ---------------------------------------------------------------------------- When comparing the contact systems with other binaries we must remember that the former are located at the very end of the angular momentum and period sequences and we do not necessarily expect a perfect continuity over the whole range of orbital periods spanning several orders of magnitude. The currently best data on the period distribution for MS binaries are those by Duquennoy & Mayor (1991) who found that the distribution can be approximated, in the logarithm of the period, by a wide Gaussian with a maximum at $\log P = 4.8$ and $\sigma \log P = 2.3$, with the period $P$ expressed in days. Various techniques contributed to this result and the normalization of the distribution is somewhat uncertain. The recent results on the binarity of solar-type stars in the range where this distribution has a maximum (5 – 50 AU) offer a way of relatively reliable normalization of the distribution in the sense of spatial frequency (i.e. the frequency free of geometrical effects of unknown inclination). Patience et al.  (1998) found that in the range of orbital periods, $3.7 < \log P < 5.2$, the frequency of incidence is $0.14 \pm 0.03$ (this means that one among about 7 solar-type stars is a binary with a period in the range 14 – 430 years). This normalization has been used in Figure \[fig16\]. In plotting the contact systems, it has been assumed that their total spatial frequency of occurrence is 1/80 which was (somewhat conservatively) estimated to correspond to the apparent frequency of 1/130. We clearly see in Figure \[fig16\] that the contact binaries with periods [*shorter*]{} than 0.6 – 0.7 day are very common forming a sharp peak extending well above the main-sequence relation. However, we should note the significant under-representation of contact systems with periods [*longer*]{} than 0.6 – 0.7 day. 
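The normalization described above can be checked with the standard normal CDF; this back-of-the-envelope estimate is ours, not a result of the paper, and simply scales the measured window frequency to a rough total binary fraction:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Fraction of the Duquennoy & Mayor (1991) Gaussian (mean log P = 4.8,
# sigma = 2.3, P in days) falling in the Patience et al. (1998) window
# 3.7 < log P < 5.2:
frac = norm_cdf((5.2 - 4.8) / 2.3) - norm_cdf((3.7 - 4.8) / 2.3)

# If that window holds a binary frequency of 0.14 +/- 0.03, the implied
# total fraction of solar-type stars with companions is roughly:
total_binary_fraction = 0.14 / frac
```

The window contains about a quarter of the Gaussian, so the implied total binarity is of order one half, broadly consistent with the Duquennoy & Mayor result for solar-type stars.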
The latter can be detected to very large distances and are present in the OGLE sample, but they are very rare in terms of the spatial density. One would expect that, in a diagram like Figure \[fig16\], their place is occupied by close, short-period, detached binaries which have not yet lost enough angular momentum to enter into direct contact; however, no statistics similar to that available for contact binaries in Baade’s Window exists for such systems. Any attempts to find a trough in the period distribution on the long-period side of the contact-binary peak will obviously confront very different discovery selection effects for detached and contact eclipsing systems. CONCLUSIONS {#concl} =========== The main conclusion of this paper is that the two samples of disk population contact binaries give very similar results in almost every respect, in spite of very different origins of the samples. The similarities are observed in practically all distributions: those of the orbital periods and colors, of the luminosity functions and of the variability amplitudes. This is surprising and unexpected, as many observational effects would tend to make the distributions different. The CL sample would be expected to be particularly inhomogeneous as it was obtained by combining data for 11 different clusters ranging in age roughly by an order of magnitude, within 0.7 to 7 Gyr. Selection of these clusters was not systematic. In addition, they were observed to different limiting magnitudes, with various equipment and differing search areas. If any mass segregation were to take place in a cluster, the CL sample should contain preferentially more massive systems, as typically only the central parts of the clusters are observed. Thus, we conclude that in spite of the small statistics – and possibly partly by coincidence – the available mixture of 11 clusters in the CL sample has been representative in the sense that it has not introduced its own observational biases.
Thus, it would be hard to avoid the conclusion that it is the formation process of the contact systems which creates those same distributions irrespective of the age of the population. It has been found that contact systems typically appear in the period interval $0.23 < P < 0.7$ day and the color interval $0.3 < (B-V)_0 < 1.2$. The turn-off points (TOP) of the clusters forming the CL sample all fall into the same color interval. However, the systems do not appear close to the respective TOP’s, but can appear anywhere in the above color range. By comparing the galactic-disk corrected luminosity function derived from the BW sample with that for the MS stars in the solar neighborhood by Wielen et al. (1983), the apparent frequency of occurrence of contact systems in the interval $2.5 < M_V < 7.5$ was found to be surprisingly high, at one contact system per about 130 main sequence stars of a given absolute magnitude for the galactic disk exponential length-scale $h_R = 2.5$ kpc; the contact binaries would be even more common, with the apparent frequency of one per about 100 MS stars for a longer scale of $h_R = 3.5$ kpc. This high frequency is observed in the oldest among the open clusters such as Cr 261 or NGC 188, which suggests an advanced age of the BW sample systems. Total absence of contact systems in clusters younger than 0.7 Gyr and their low numbers in clusters younger than about 2 Gyr suggest a frequency of occurrence strongly dependent on the age of the stellar system. The reduction in the frequency for younger ages is difficult to quantify due to the low numbers of systems involved. The frequency of occurrence is at present the only property which appears to be different for the contact systems in old open clusters and in Baade’s Window.
The BW luminosity function determinations suffer from large search-volume corrections for $M_V > 5.5$, but they do cast doubt on the location of the faint end of the contact-binary sequence, currently defined by the K5-type system CC Com at $M_V = 6.7$. The BW data do not extend above $M_V \simeq +1.5$, so that the luminosity functions and frequencies of occurrence cannot be determined for intrinsically bright systems. However, the sky-field sample of bright stars to $V_{lim} = 7.5$, which presumably has been fully screened for the presence of contact systems, indicates a clear decrease in the apparent frequency for the high-luminosity systems. This decrease cannot be quantified due to the small-number statistics for the bright systems in the sky-field sample; a factor of two or three drop at $M_V = 2$ is quite likely, with much larger reductions for still brighter systems. The limitations of this work are twofold: because we are uncertain about the completeness for faint systems, we have not directly addressed the matter of the mass distribution of the contact systems, but this can be roughly estimated from the available $M_V$ distributions and the diagram in the last panel in Figure \[fig7\]. Also, since we have no idea about the mass ratios of individual systems, we cannot say anything about the orbital angular momenta, which are dominated by the mass-ratio dependent term in $H \propto M^{5/3} P^{1/3} \frac{q}{(1+q)^2}$. This work does not include Population II contact systems of the type recently found in large numbers among blue stragglers of globular clusters (for the most recent references, see Mateo (1996) and the new discoveries in $\omega$ Cen and M4 by Kałużny et al. (1997a, 1997b, 1997c)).
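As a side note, the strength of the mass-ratio dependence in the angular-momentum expression above is easy to quantify. The short sketch below (Python; the mass ratios compared are purely illustrative choices, not values from this paper) evaluates the factor $q/(1+q)^2$:

```python
# The mass-ratio factor q/(1+q)^2 in H ~ M^(5/3) P^(1/3) q/(1+q)^2 peaks
# for equal masses (q = 1) and falls steeply toward small q.  The mass
# ratios used here are illustrative only.
def q_factor(q):
    return q / (1.0 + q) ** 2

ratio = q_factor(1.0) / q_factor(0.1)   # equal-mass vs. q = 0.1 system
print(round(q_factor(1.0), 3), round(ratio, 1))
```

Since individual mass ratios are unknown, even a modest spread in $q$ translates into a factor-of-a-few spread in $H$ at fixed $M$ and $P$, which is why the angular momenta cannot be constrained here.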
However, halo-population stars are exceedingly rare in the solar vicinity, at the level of 0.125 to 0.15 percent of all stars (Bahcall 1986, Reid & Majewski 1993), so that no contact systems, even at a high frequency of occurrence, would be expected among the 98 members of our basic sample BW$_3$. Thus, the results presented here are relevant solely to the most common contact binaries of the galactic disk field and of the old open clusters.

This work is dedicated to Janusz Kałużny, my friend and colleague for over 20 years. Without his hard work, this study would have been entirely impossible. Special thanks are due to Carla Maceroni and Hilmar Duerbeck for extensive and useful suggestions and comments on the first version of the paper.

Bahcall, J.N. 1986, , 24, 577
Bahcall, J.N. & Soneira, R.M. 1980, , 44, 73
Bessell, M.S. 1979, , 91, 589
Bessell, M.S. 1990, , 83, 357
Duerbeck, H.W. 1984, , 99, 363
Duerbeck, H.W. 1997, Inf. Bull. Var. Stars, 4513
Duquennoy, A. & Mayor, M. 1991, , 248, 485
Eggleton, P.P. 1996, in [*The Origins, Evolution, & Destinies of Binary Stars in Clusters*]{}, eds. E.F. Milone & J.-C. Mermilliod, ASP Conf., 90, 257
Gilliland, R.L., Brown, T.M., Duncan, D.K., Suntzeff, N.B., Lockwood, G.W., Thompson, D.T., Schild, R.E., Jeffrey, W.A. & Penprase, B.E. 1991, AJ, 101, 541
Guinan, E.F. & Bradstreet, D.H. 1988, in [*Formation and Evolution of Low Mass Stars*]{}, eds. A.K. Dupree & M.T. Lago (Kluwer, Dordrecht), p. 345
Jahn, K., Kałużny, J. & Rucinski, S.M. 1995, , 295, 101
Kałużny, J. 1990, AcA, 40, 61
Kałużny, J. & Shara, M.M. 1987, , 314, 585
Kałużny, J. & Shara, M.M. 1988, , 95, 785
Kałużny, J. & Rucinski, S.M. 1993a, in [*Blue Stragglers*]{}, ed. R.A. Saffer (San Francisco, ASP), ASP Conf. Ser., 53, 164 (KR93)
Kałużny, J. & Rucinski, S.M. 1993b, , 265, 34
Kałużny, J., Mazur, B. & Krzemiński, W. 1993, , 262, 49
Kałużny, J., Krzemiński, W. & Mazur, B. 1996, , 118, 303
Kałużny, J., Thompson, I. & Krzemiński, W. 1997a, , 113, 2219
Kałużny, J., Kubiak, M., Szymański, M., Udalski, A. & Krzemiński, W. 1997b, , 120, 139
Kałużny, J., Kubiak, M., Szymański, M., Udalski, A., Krzemiński, W. & Stanek, K. 1997c, , 122, 471
Kazarovets, E.V., Samus, N.N. & Goranskij, V.P. 1993, Inf. Bull. Var. Stars, 3840
Kazarovets, E.V. & Samus, N.N. 1995, Inf. Bull. Var. Stars, 4140
Kazarovets, E.V. & Samus, N.N. 1997, Inf. Bull. Var. Stars, 4471
Kroupa, P., Tout, C.A. & Gilmore, G. 1993, , 262, 545
Kubiak, M., Kałużny, J., Krzemiński, W. & Mateo, M. 1992, AcA, 42, 155
Maceroni, C. & Rucinski, S.M. 1997, , 109, 782
Mateo, M. 1996, in [*The Origins, Evolution, & Destinies of Binary Stars in Clusters*]{}, eds. E.F. Milone & J.-C. Mermilliod, ASP Conf., 90, 21
Mazur, B., Kałużny, J. & Krzemiński, W. 1993, , 265, 405
Mazur, B., Krzemiński, W. & Kałużny, J. 1995, , 273, 59
Milone, E.F., Stagg, C.R., Sugars, B.A., McVean, J.R., Schiller, S.J., Kallrath, J. & Bradstreet, D.H. 1995, AJ, 109, 359
Paczynski, B., Stanek, K.Z., Udalski, A., Szymański, M., Kałużny, J., Kubiak, M., Mateo, M. & Krzemiński, W. 1994, , 107, 2060
Patience, J., Ghez, A.M., Reid, I.N., Weinberger, A.J. & Matthews, K. 1998, astro-ph/9801216 (Jan.98)
Pojmański, G. 1997, AcA, 47, 467
Pojmański, G. 1997, astro-ph/9802330 (Feb.98)
Reid, N. & Majewski, S.R. 1993, , 409, 635
Rucinski, S.M. 1977, , 77, 888
Rucinski, S.M. 1992, , 103, 960
Rucinski, S.M. 1993, in [*The Realm of Interacting Binary Stars*]{}, eds. J. Sahade, Y. Kondo & G. McClusky (Netherlands: Kluwer Academic Publ.), p. 111
Rucinski, S.M. 1994, , 106, 462 (CAL1)
Rucinski, S.M. 1995, , 107, 648 (CAL2)
Rucinski, S.M. 1997a, AJ, 113, 407 (R97a)
Rucinski, S.M. 1997b, AJ, 113, 1112 (R97b)
Rucinski, S.M. 1998, AJ, 115, 1135 (R98)
Rucinski, S.M. & Duerbeck, H.W. 1997, PASP, 109, 1340 (CAL5)
Rucinski, S.M. & Kałużny, J. 1994, Mem. Soc. Astr. Ital., 65, 113 (RK94)
Rucinski, S.M., Kałużny, J. & Hilditch, R.W. 1996, , 282, 705
Rucinski, S.M., Whelan, J.A.J. & Worden, S.P. 1977, , 89, 684
Sackett, P.D. 1997, , 483, 103
Twarog, B.A., Ashman, K.M. & Anthony-Twarog, B.J. 1997, AJ, 114, 2556
Whelan, J.A.J., Worden, S.P. & Mochnacki, S.W. 1973, ApJ, 183, 133
Wielen, R., Jahreiss, H. & Krüger, R. 1983, in The Nearby Stars and the Stellar Luminosity Function, IAU Coll. 76, eds. A.G. Davis Philip & A.R. Upgren (L. Davis Press, Schenectady, New York), p. 163

[^1]: Unless specifically noted, the frequency of occurrence of contact systems discussed in this paper is [*apparent*]{}, i.e. it is not corrected for missed systems with low orbital inclination angles. It is expected that the correction factor is of the order of 1.5 to 2.0, but it cannot be predicted a priori, as its value crucially depends on the mass-ratio distribution, which is unknown; see R97b. In the same sense, when we discuss “complete” samples later on, we mean samples of those systems which are discoverable for a given minimum amplitude threshold.

[^2]: We use the same convention as in R97a in that the first digit gives the OGLE field, and the number of the variable in the field is given after the period.

[^3]: The systems with good light curves, but then possibly wrong colors, are BW3.053 and 7.147, while the low-amplitude systems showing large light-curve scatter are: BW3.022, 3.053, 5.075, 5.143, 5.157, 6.123, 7.112. Note that 3.053 and 7.112 are also among the systems appearing in the faint tail of the $M_V$ distribution in Figure \[fig11\].

[^4]: CC Com, with its $M_V = 6.7$ determined from the combined photometric and spectroscopic study of Rucinski et al. (1977), follows perfectly, to within 0.1 mag., the Hipparcos calibration CAL5. This is an argument that the calibration can be used for intrinsically faint systems.
---
abstract: 'We present an ion kinetic model describing the ignition and burn of the deuterium-tritium fuel of inertial fusion targets. The analysis of the underlying physical model enables us to develop efficient numerical methods to simulate the creation, transport and collisional relaxation of fusion reaction products ($\alpha$-particles) at a kinetic level. A two-energy-scale approach leads to a self-consistent modeling of the coupling between suprathermal $\alpha$-particles and the thermal bulk of the imploding plasma. This method provides an accurate numerical treatment of energy deposition and transport processes involving suprathermal particles. The numerical tools presented here are validated against known analytical results. This enables us to investigate the potential role of ion kinetic effects on the physics of ignition and thermonuclear burn in inertial confinement fusion schemes.'
address:
- 'CEA/DIF, BP 12, 91680 Bruyères le Châtel, France'
- 'University Bordeaux – CNRS – CEA, CELIA 33405 Talence Cedex, France'
author:
- 'B. E. Peigney'
- 'O. Larroche'
- 'V. Tikhonchuk'
title: 'Fokker-Planck kinetic modeling of suprathermal $\alpha$ particles in a fusion plasma'
---

Fokker-Planck equation, fusion reactions, kinetic effects, inertial confinement fusion plasma, suprathermal particles, multi-scale coupling, explicit schemes

Purpose of the study {#sec1}
====================

Inertial confinement fusion (ICF) is a process of energy production obtained from the nuclear fusion reaction between deuterium (D) and tritium (T) ions. It is a promising and abundant energy source for future power plants. The fusion reactions $D+T \rightarrow \alpha + n + 17.56\ \mathrm{MeV}$ take place in a hot and dense plasma compressed and heated by intense laser radiation. The thermonuclear burn of the deuterium-tritium (DT) fuel is supported by energetic $\alpha$-particles, which are created by fusion reactions at the energy of 3.52 MeV.
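As a quick consistency check (not part of the original analysis), the 3.52 MeV figure follows from non-relativistic two-body kinematics: the $\alpha$-particle and the neutron leave with equal and opposite momenta, so the release is shared in inverse proportion to their masses.

```python
# Two-body kinematics check (non-relativistic, approximate mass numbers):
# in D + T -> alpha + n, momentum balance gives the alpha a fraction
# m_n / (m_alpha + m_n) of the total energy release.
Q = 17.56                   # MeV, total energy release
m_alpha, m_n = 4.0, 1.0     # approximate mass numbers
E_alpha = Q * m_n / (m_alpha + m_n)
print(round(E_alpha, 2))    # ~3.5 MeV, consistent with the value quoted above
```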
Those suprathermal particles subsequently transfer their energy to the fresh fuel through Coulomb collisions. In the case of inertial confinement fusion [@LIN981; @Atzeni], a spherical DT shell is compressed to densities of the order of a few hundred g/cc by the ablation pressure. Fusion reactions start in a central zone characterized by a density $\rho \sim 50$ g cm$^{-3}$ and a high “ignition” temperature $T\approx 7-10$ keV. The surrounding shell is 10 times colder than the hot spot ($T\approx 0.7$ keV). The density of the central “hot spot” is such that the mean free path $\lambda_\alpha$ of fast $\alpha$-particles is roughly equal to the hot spot radius $R$ [@FRA744]. This allows the self-heating of the hot-spot fuel, which serves as a spark that subsequently burns the surrounding colder and denser shell. The design of ICF targets and the interpretation of ICF experiments rely on numerical simulations based on hydrodynamic Lagrangian codes, where kinetic effects are only considered as corrections included in the transport coefficients [@LIN981; @Atzeni]. The fluid description is relevant if the mean free path of plasma particles, namely electrons and ions, is smaller than the characteristic length scale. Although this condition is reasonably fulfilled during the implosion stage, it does not apply to fast particles, in particular to fusion products near the ignition threshold. Thus, an accurate kinetic modeling is required. The purpose of the present work is to propose an ion-kinetic description of suprathermal fusion products, treated self-consistently with the ion-kinetic modeling of the thermal imploding plasma.
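The condition $\lambda_\alpha \approx R$ quoted above can be checked with an order-of-magnitude estimate. The sketch below (Python, CGS units) takes the $\alpha$ range against electron drag as the birth velocity times the standard $\alpha$-electron collision time (the same expression used later in the modeling section); the plasma parameters ($\rho = 50$ g/cc, $T_e = 10$ keV, $\ln\Lambda = 5$) are illustrative assumptions, not results from the paper.

```python
import math

# Order-of-magnitude estimate of the fast-alpha range in a DT hot spot,
# lambda ~ v_h * tau_ealpha.  All values CGS; plasma parameters are
# illustrative assumptions only.
e, m_e, m_alpha = 4.80e-10, 9.11e-28, 6.64e-24   # esu, g, g
rho, A_DT = 50.0, 2.5                             # g/cc, mean mass number of DT
n_e = rho / (A_DT * 1.66e-24)                     # fully ionized DT, Z = 1
T_e = 10e3 * 1.60e-12                             # 10 keV in erg
lnL, Z_alpha = 5.0, 2.0
tau = 3.0 / (4.0 * math.sqrt(2.0 * math.pi)) * m_alpha * T_e**1.5 \
    / (n_e * Z_alpha**2 * e**4 * math.sqrt(m_e) * lnL)
lam = 1.3e9 * tau                                 # birth velocity times drag time, cm
print(f"{lam * 1e4:.0f} microns")                 # tens to hundreds of microns
```

The result is in the hundred-micron range, i.e. comparable, to within factors of a few, to a hot-spot radius, consistent with the self-heating argument above.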
The difficulty lies in the coupling of ion populations characterized by two different energy scales:

- [Thermal particles $D$, $T$, which form the bulk of the imploding plasma and whose kinetic energy is in the keV range.]{}

- [Suprathermal $\alpha$-particles, created at 3.52 MeV by fusion reactions.]{}

Such a strong disparity in energy scales makes it difficult to build viable kinetic models of fusion reactions. Existing ion kinetic codes can describe the implosion of DT targets in sub-ignition conditions [@CAS91A; @VID95A; @LAR03A], but the energy release from the fusion reactions is not accounted for in a self-consistent manner. Several simplified methods compatible with hydrodynamic codes have been developed. Haldy and Ligou [@LIGOU1] apply the moment method to model ion energy deposition in a hot and dense homogeneous plasma, but only a stationary case has been considered. A variety of methods based on diffusion models applied to charged-particle transport problems have also been developed. Those methods are of considerable interest, since results on energy deposition profiles can be obtained with a low computational effort. Nevertheless, diffusion methods rely on the assumption that the fast-particle mean free path is smaller than the characteristic scale length of the energy deposition zone. This hypothesis does not hold for a typical ICF target near ignition. Corman et al. [@COR754] derive a multi-group diffusion model from the Fokker-Planck equation to describe fast ion transport in a fusion plasma. However, they introduce a flux limiter heuristically in order to prevent unphysical behavior when the particle flux approaches the free-streaming limit. Pomraning [@POM33] develops a more sophisticated flux-limiter scheme based on the Chapman-Enskog expansion. However, flux-limited diffusion artificially smooths the energy deposition profiles, especially in situations where the ion sources are localized [@HON931].
This may lead to significant errors in the calculation of ignition thresholds and energy gains. Such diffusion models are employed in all major present-day fluid codes because of their compatibility with the underlying hydrodynamic module. Several exact methods can be employed to solve the Fokker-Planck equation in a general way, but they are too time-consuming. Monte Carlo algorithms are applied to model charged-particle transport in Refs. [@LAP1; @SENT1]. In such an approach, distribution functions are represented by a sum of Dirac measures. Monte Carlo particles are characterized by their numerical weight, their position and their velocity. Those quantities evolve in time according to the Vlasov-Fokker-Planck equation, while the tracking of Monte Carlo particles is performed through the spatial mesh. The statistical error of Monte Carlo methods is proportional to $N^{-1/2}$, $N$ being the number of Monte Carlo particles, so that $N\gg 1$ and variance-reduction techniques are usually employed to reduce numerical noise. A significant deficiency of Monte Carlo methods for the investigation of kinetic effects is that the tails of the distribution functions are not described accurately. Moreover, the coupling between suprathermal particles and the thermal bulk is usually treated in a rough manner, by removing the suprathermal particles that are slowed down below a given energy threshold and injecting the removed particles into the thermal bulk. Therefore, the thermalization process is not described with sufficient precision. $S_n$ methods are also used to solve the Fokker-Planck equation deterministically. They are based on the determination of the angular flux of suprathermal particles at a set of discrete directions, each one associated with a quadrature weight [@MEHL1; @KIL1; @Duclous].
Although they are more accurate than diffusion methods and can be extended to highly anisotropic particle distribution functions, the weakly collisional limit is not described accurately and the thermalization process is treated approximately, with the same strategy as in Monte Carlo methods. $S_n$ methods are usually used to simulate neutron transport and require a high computational effort. For the application of $S_n$ methods to suprathermal $\alpha$-particle transport, we refer to Ref. [@HON931]. In the present paper we develop a kinetic modeling of suprathermal fusion products in the thermal imploding plasma. We extend the existing code <span style="font-variant:small-caps;">FPion</span> [@CAS91A; @VID95A; @LAR03A] so as to treat $\alpha$-particles, for which *two scales of energy* are considered, namely a suprathermal and a thermal one. Since the developments made to reach this goal have been substantial, they have actually led to the creation of an entirely new kinetic code called <span style="font-variant:small-caps;">Fuse</span> for *<span style="font-variant:small-caps;">FPion</span> Upgrade with two Scales of Energy*. This code is able to investigate kinetic effects related to fusion reaction products on the ignition of the hot spot and on the subsequent propagation of the thermonuclear burn wave through the dense fuel. We present here the numerical methods specially designed for the kinetic modeling of $\alpha$-particles and their validation in several representative tests. Simulations are performed for a typical ICF DT target, assuming a spherical symmetry in configuration space and axial symmetry in velocity space around the mean velocity. Distribution functions thus depend on one space variable (radius) and two velocity components (radial and azimuthal or perpendicular), depending on the chosen parametrization.
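As an aside, the $N^{-1/2}$ statistical convergence of Monte Carlo estimators mentioned above is easy to demonstrate. The toy estimator below (Python; the Gaussian integrand and the sample sizes are arbitrary illustrative choices) shows the mean absolute error shrinking roughly tenfold when the sample size grows by a factor of 100:

```python
import numpy as np

# Toy demonstration of N^{-1/2} Monte Carlo convergence: a sample-mean
# estimate of <x^2> over a unit Gaussian is repeated many times, and the
# mean absolute error is compared for two sample sizes.
rng = np.random.default_rng(0)

def mc_error(n_samples, trials=200):
    x = rng.normal(size=(trials, n_samples))
    return np.mean(np.abs(np.mean(x**2, axis=1) - 1.0))

ratio = mc_error(10**2) / mc_error(10**4)
print(round(ratio))          # close to sqrt(10^4 / 10^2) = 10
```

This slow convergence is precisely why the sparsely populated tails of the distribution function, which carry few samples, are poorly resolved by Monte Carlo approaches.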
The paper is organized as follows: firstly, we present in Sec. \[sec2\] the Vlasov-Fokker-Planck modeling of the fast $\alpha$-particle transport and collisional relaxation. A specific formalism, based on a two-scale approach with respect to energy, is then introduced in Sec. \[sec3\]. It provides a self-consistent modeling of the coupling between suprathermal and thermal plasma species. Section \[sec4\] presents the algorithms devised to solve the two-scale coupling. A finite volume method is applied to the Fokker-Planck equation governing the suprathermal $\alpha$-particle distribution function. Fast algorithms are then specially designed to solve the discretized model efficiently. Section \[sec5\] presents some numerical results regarding the $\alpha$-particle distribution function evolution and its coupling with the thermal bulk. We show how the methods developed here provide a refined description of the thermalization process. Simulations are carried out in conditions relevant for typical ICF targets. Conclusions are finally presented in Sec. \[sec6\].

Physical model for the transport and collisional relaxation of $\alpha$-particles {#sec2}
=================================================================================

Once created by fusion reactions, suprathermal $\alpha$-particles are transported through an inhomogeneous plasma and slowed down through Coulomb collisions with electrons and thermal ions D and T. Besides, pressure gradients give rise to an electrostatic field $\vec{\mathcal{E}}(\vec r,t)$ that may accelerate or decelerate $\alpha$-particles. To give an accurate description of the particle transport, as well as the non-local energy and momentum exchange that occur between $\alpha$-particles and the thermal bulk, a kinetic modeling based on the Vlasov-Fokker-Planck equation is required.
Vlasov-Fokker-Planck equation for the $\alpha$-particles {#sec21}
--------------------------------------------------------

The distribution function $f_\alpha(\vec r,\vec v,t)$ of $\alpha$-particles characterized by a charge $Z_\alpha e$ and a mass $m_\alpha$ is governed by the Vlasov-Fokker-Planck equation: $$\label{eq:vfp_alpha} \displaystyle\frac{\partial f_\alpha}{\partial t}+\vec v \cdot \frac{\partial f_\alpha}{\partial \vec r} + \frac{Z_\alpha e \vec{\mathcal{E}}}{m_\alpha}\cdot\frac{\partial f_\alpha}{\partial \vec v} =\sum_{i}\left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha i} + \left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha e} + \left.\frac{\partial f_\alpha}{\partial t}\right|_{\rm fuse}.$$ The first two terms on the right-hand side of this equation describe the collisional relaxation of $\alpha$-particles:

- [$\partial f_\alpha/\partial t|_{\alpha e}$ stands for the collisions of $\alpha$-particles with electrons,]{}

- [$\sum_{i}\partial f_\alpha/\partial t|_{\alpha i}$ describes the collisions of $\alpha$-particles with thermal ion species. Since the thermal species densities are significantly higher than the fast $\alpha$-particle density (at least at the beginning of the ignition and burn processes), the non-linear term corresponding to fast-$\alpha$/fast-$\alpha$ scattering is neglected. The coupling between the thermalized $\alpha$-particles and the suprathermal ones is naturally included.]{}

We focus now on the collisional part of Eq. (\[eq:vfp_alpha\]). The Vlasov part of the equation, modeling the transport in space and the acceleration due to the electrostatic field, is considered separately in Sec. \[sec4\]. In a fully ionized plasma such as the one considered here, large-angle scattering events are much less likely than the net large-angle deflection due to the cumulative effect of the many small-angle collisions that the projectile experiences along its path [@ROS573]. Each of the collision terms on the right-hand side of Eq. (\[eq:vfp_alpha\])
can then be expressed as a Fokker-Planck operator in velocity space, which amounts essentially to an advection-diffusion form. More precisely, the slowing down of $\alpha$-particles on a thermal ion species $i$ can be written as: $$\label{eq:fp_alpha_i} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha i} = 4\pi\Gamma_{\alpha i} \frac{\partial}{\partial \vec v}\cdot\left(\frac{m_\alpha}{m_i}f_\alpha \frac{\partial \mathcal{S}_i}{\partial \vec v}-\frac{\partial^2 \mathcal{T}_i}{\partial \vec v\,\partial \vec v}\cdot\frac{\partial f_\alpha}{\partial \vec v}\right),$$ where $\mathcal{S}_i$ and $\mathcal{T}_i$ are the so-called Rosenbluth potentials [@ROS573] associated with the target ions $i$. They are defined by a set of Poisson equations in velocity space: $$\label{eq:poisson_rosenbluth} \Delta_v \mathcal{S}_i = f_i, \qquad \Delta_v \mathcal{T}_i = \mathcal{S}_i.$$ The coefficient $\Gamma_{\alpha i} = (4\pi Z_\alpha^2 Z_i^2 e^4/m_\alpha^2)\ln\Lambda_{\alpha i}$ is proportional to the Coulomb logarithm $\ln\Lambda_{ij}$ (defined for any pair of species $i,j$, including electrons), which accounts for the screening of the Coulomb potential and takes quantum effects into account: $\Lambda_{ij}=\lambda_D/\max\{\lambda_{\rm bar},\rho_\bot\}$. The Debye length $$\lambda_D =\left(4\pi n_e e^2/T_e+\sum_{j=1}^n 4\pi n_j Z_j^2 e^2/T_j\right)^{-1/2}$$ depends on the temperature $T_j$, which is expressed in energy units. $T_j$ is related to the thermal ion distribution function $f_j$ by the relation: $$T_j = \frac{m_j}{3n_j} \int (\vec v-\vec V_j)^2 f_j(\vec v)\, d^3v,$$ where $n_j = \int f_j(\vec v)\, d^3 v$ is the density of ion species $j$ and $\vec V_j = n_j^{-1} \int \vec v f_j(\vec v)\, d^3v$ is their mean velocity.
The characteristic lengths $\rho_\bot$ and $\lambda_{\rm bar}$ are the classical and quantum impact parameters: $$\rho_\bot = Z_i Z_j e^2/m_{ij}u_{ij}^2, \qquad \lambda_{\rm bar} = \hbar/m_{ij}u_{ij},$$ where $m_{ij}=m_i m_j/(m_i+m_j)$ is the reduced mass and $u_{ij}= \sqrt{3}(T_i/m_i + T_j/m_j)^{1/2}$ is an average relative velocity between the particle species $i$ and $j$. The Coulomb logarithm is thus a particular function of hydrodynamic quantities. It is symmetric with respect to the particle species, $\Lambda_{ij}=\Lambda_{ji}$, which reflects the energy and momentum conservation during the collision. The effect of electrons on the slowing down of $\alpha$-particles is modeled by another Fokker-Planck term, in which the electron distribution function is approximated by a Maxwellian characterized by a density $n_e$, a mean velocity $\vec u_e$ and a temperature $T_e$: $$\label{eq:fp_alpha_e} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha e} = \displaystyle\frac{1}{\tau_{e\alpha}} \frac{\partial}{\partial \vec v}\cdot \left[(\vec v - \vec u_e) f_\alpha(\vec v) + \frac{T_e}{m_\alpha} \frac{\partial f_\alpha}{\partial \vec v}(\vec v) \right],$$ where $\tau_{e\alpha}$ is a characteristic $e-\alpha$ collision time defined by: $$\label{eq:tauei} \tau_{e\alpha} = \displaystyle\frac{3}{4\sqrt{2\pi}}\displaystyle\frac{m_\alpha T_e^{3/2}}{n_e Z_\alpha^2 e^4 m_e^{1/2} \ln\Lambda_{\alpha e} }.$$ Equation (\[eq:fp_alpha_e\]) is obtained by a truncated expansion of the full ion-electron Fokker-Planck operator with respect to the small parameter $\epsilon = (m_e/m_i)^{1/2} \sim 0.022$ [@CAS91A; @LAR03A]. The last term in Eq. (\[eq:vfp_alpha\]) stands for the creation of $\alpha$-particles by fusion reactions.
The source term is supposed to be isotropic and is given by: $$\label{eq:evol_fdh_source} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\rm fuse}= \mathcal{R}_{DT}(\vec r,t)\frac{\delta(v-v_h)}{4\pi v^2},$$ where $v_h= 1.3\times 10^9$ cm s$^{-1}$ is the initial velocity of the suprathermal $\alpha$-particles, whose initial energy is 3.52 MeV. $\mathcal{R}_{DT}$ is the fusion reaction rate expressed as a function of the distribution functions of D and T, respectively: $$\label{eq:tau_reac} \mathcal{R}_{DT}(\vec r,t) = n_D n_T \langle\sigma v\rangle_{DT} = \int\int f_D(\vec r,\vec v_D,t)\, f_T(\vec r,\vec v_T,t)\,|\vec v_D - \vec v_T|\,\sigma_{DT}(|\vec v_D - \vec v_T|)\,d^3v_D d^3v_T.$$ The distribution functions $f_D$ and $f_T$ are solutions of the Vlasov-Fokker-Planck equation written for the deuterium and tritium species, respectively, and they are not necessarily Maxwellian functions. Integrals in Eq. (\[eq:tau_reac\]) are taken over the three-dimensional velocity space.

Dealing with electrons {#sec22}
----------------------

Since the characteristic time of the considered problem is close to the ion-ion collision time $\tau_{ii}\gg 1/\omega_{pe}$, $\omega_{pe}$ being the electron plasma frequency, and the characteristic length is of the order of the ion collisional mean free path $\lambda_i\gg\lambda_{De}$, $\lambda_{De}$ being the electron Debye length, the quasi-neutrality assumption is relevant. We then have: $$\label{eq:quasineut} n_e =\sum_i Z_i n_i+Z_\alpha n_\alpha^{ST}, \qquad n_e\vec V_e = \sum_i Z_i n_i \vec V_i+Z_\alpha n_\alpha^{ST} \vec V_\alpha^{ST},$$ where the contribution of the suprathermal $\alpha$-particles is naturally included, $n_\alpha^{ST}$ and $\vec V_\alpha^{ST}$ being the density and mean velocity of the fast $\alpha$-particles, respectively. Besides, due to the very small electron-to-ion mass ratio, the electron equilibration time $\tau_{ee}$ is significantly smaller than the mean ion-ion collision time $\tau_{ii}$.
According, for example, to Ref. [@BRA65A], we have the following ordering of characteristic times: $\tau_{ee} \sim \epsilon \tau_{ii}$. As a consequence, the electron kinetic equation reduces to a fluid equation. Only an equation for the temperature (or, equivalently, the energy density) is actually needed, since the electron density and velocity are known from the quasi-neutrality conditions (\[eq:quasineut\]). In the one-dimensional spherical problem considered here, the electron energy density $W_e$ is governed by the following conservation equation: $$\frac{\partial W_e}{\partial t} +\frac{1}{r^2}\frac{\partial}{\partial r} \left(r^2u_eW_e\right) + \frac{1}{r^2}\frac{\partial}{\partial r}(r^2u_e)P_e - \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\kappa_e \frac{\partial T_e}{\partial r}\right)= \sum_{j=1}^n \frac{3n_j}{2\tau_{ej}}(T_j-T_e) + \left.\frac{\partial W_e}{\partial t}\right|_{\rm rad} \label{eqTe}$$ where $\kappa_e$ is Spitzer’s thermal conductivity [@SPI532] in the presence of several ion species (see also [@LAR93] and the Appendix in [@CHE979]), and the collision time $\tau_{ej}$ has been defined in Eq. (\[eq:tauei\]), where $\alpha$ is replaced by the considered ion species $j$. The electron energy density $W_e$ and pressure $P_e$ are given by an equation of state taking into account Fermi degeneracy [@LAR03A]. The last term on the right-hand side of Eq. (\[eqTe\]) accounts for the radiation losses of electrons.

Relative importance of electrons and ions on the slowing down of $\alpha$-particles {#sec23}
-----------------------------------------------------------------------------------

3.52 MeV $\alpha$-particles are created in fusion reactions isotropically, in the system of reference associated with the thermal bulk. Then, they are slowed down through Coulomb collisions with electrons, according to Eq. (\[eq:fp_alpha_e\]), and with thermal ions, according to Eq. (\[eq:fp_alpha_i\]).
The relative importance of electrons and ions in the slowing down of $\alpha$-particles can be estimated by retaining only the dynamical friction terms from the Fokker-Planck equations (\[eq:fp_alpha_i\]) and (\[eq:fp_alpha_e\]). The ratio $R_{i/e}$ between the ion slowing down and the electron one can thus be approximated by: $$R_{i/e} = \left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha i}\left/\right.\left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha e} \sim \frac{T_e^{3/2}}{v^3 m_e^{1/2}m_i} \sim \frac{T_e^{3/2}}{v^3 m_i^{3/2} \epsilon}.$$ The ratio $R_{i/e}$ is thus defined by a characteristic threshold velocity: $$\label{eq:vcoup} v_{c}= \epsilon^{-1/3}(T_e/m_i)^{1/2},$$ so that $R_{i/e} \sim (v_{c}/v)^3$. The beginning of the slowing down of $\alpha$-particles is thus governed nearly exclusively by electrons. Then, as $v \sim v_{c}$, the effects of ions and electrons on the $\alpha$ relaxation become comparable. Eventually, the final stage of the $\alpha$-particle thermalization is essentially influenced by collisions with thermal ions. Supposing $T_i \sim T_e$, we have the following estimate: $v_{c} \sim \epsilon^{-1/3} v_i^{th} \sim 3.6\, v_i^{th}$, $v_i^{th}$ being the typical thermal velocity of the D and T ions. The effect of thermal ions on the $\alpha$ relaxation dominates when the $\alpha$ velocity is below $v_{c}\sim 3.6 \,v_i^{th}$. We shall refer to such $\alpha$-particles as “moderately suprathermal”.

Two-component description of the $\alpha$ distribution function {#sec3}
===============================================================

Physical discussion {#sec31}
-------------------

From the previous discussion, we know that 3.52 MeV $\alpha$-particles are firstly slowed down essentially by electrons.
The first stage of the $\alpha$ slowing down is thus described by: $$\label{eq:fsure} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\rm coll} = \frac{1}{\tau_{e\alpha}} \frac{\partial}{\partial \vec v}\cdot \left[(\vec v - \vec u_e) f_\alpha(\vec v) + \frac{T_e}{m_\alpha} \frac{\partial f_\alpha}{\partial \vec v}(\vec v) \right].$$ When $v \gg u_e$, the dynamical friction term (the first term on the right-hand side of (\[eq:fsure\])) dominates, so that the $\alpha$ distribution evolves according to: $$\label{eq:fsurebis} \left(\frac{\partial f_\alpha}{\partial t}\right)_{coll} \approx \frac{1}{\tau_{e\alpha}} \frac{1}{v^2}\frac{\partial}{\partial v} \left[v^3 f_\alpha(v) \right].$$ The stationary solution of (\[eq:fsurebis\]) behaves as $f_\alpha \sim 1/v^3$, where $v$ is the suprathermal $\alpha$-particle velocity. Consequently, as long as fast $\alpha$-particles remain far from the thermal velocity region, their distribution function varies smoothly over the whole suprathermal velocity region. The associated velocity scale $v_\alpha^{ST}$, defined by: $$\label{eq:v_ast} v_\alpha^{ST} \sim f_\alpha^{ST}/{\displaystyle\frac{\partial f_\alpha^{ST}}{\partial v}},$$ is in particular greater than the target thermal velocity $v_i^{th}$. Then, when the slowed-down $\alpha$-particles get closer to the thermal region but still remain suprathermal, thermal ions tend to dominate the end of the relaxation process, which is then governed by the equation: $$\label{eq:modele_1d} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\rm coll} =\sum_{i} 4\pi\Gamma_{\alpha i}\frac{\partial}{\partial\vec v}\cdot\left(\frac{m_\alpha}{m_i}f_\alpha \frac{\partial\mathcal{S}_i}{\partial \vec v}\right),$$ where only the dynamical friction term is retained for the present discussion. We shall deal with the diffusion part separately.
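The $1/v^3$ stationary behaviour of (\[eq:fsurebis\]) can be reproduced with a few lines of numerics. The sketch below (Python; the grid, drag time and source strength are arbitrary illustrative values) advects a monoenergetic source downward in velocity with a first-order upwind scheme and fits the resulting slope:

```python
import numpy as np

# Steady state of the electron-drag equation (eq:fsurebis),
#   df/dt = (1 / tau v^2) d(v^3 f)/dv + source,
# with a monoenergetic source at v_h.  Units and grid are arbitrary
# illustrative choices; the point is the v^-3 slope below the source.
tau, v_h = 1.0, 1.0
v = np.linspace(0.05, 1.2, 400)
dv = v[1] - v[0]
f = np.zeros_like(v)
src = np.zeros_like(v)
src[np.argmin(np.abs(v - v_h))] = 1.0 / dv     # discrete delta at v = v_h

dt = 0.2 * dv / v[-1] * tau                    # explicit stability limit
for _ in range(20000):
    F = v**3 * f                               # drag flux, directed toward v = 0
    div = np.zeros_like(v)
    div[:-1] = (F[1:] - F[:-1]) / dv           # upwind difference (inflow from above)
    f += dt * (div / (tau * v**2) + src)

mask = (v > 0.2) & (v < 0.8)                   # slowing-down region below the source
slope = np.polyfit(np.log(v[mask]), np.log(f[mask]), 1)[0]
print(round(slope, 2))                         # close to -3
```

In the discrete steady state the flux $v^3 f$ is constant between the source and the lower boundary, so the fitted log-log slope recovers the $-3$ exponent.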
Qualitatively, one can consider that the distribution function of the thermal target species $i$ appears highly localized in velocity space, from the suprathermal $\alpha$-particle point of view. One can thus write $f_i(\vec v) = n_i \delta^3(\vec v)$ (assuming that the mean velocity is zero). Besides, the divergence with respect to velocity that appears on the right-hand side of Eq. (\[eq:modele_1d\]) can be expanded as follows: $$\frac{\partial}{\partial \vec v}\cdot \left(\frac{\partial\mathcal{S}_i}{\partial \vec v}f_\alpha\right)= \frac{\partial\mathcal{S}_i}{\partial \vec v}\cdot \frac{\partial f_\alpha}{\partial \vec v} + f_\alpha \Delta_v \mathcal{S}_i.$$ Using the approximation $f_i(\vec v) = n_i \delta^3(\vec v)$, which is valid from the suprathermal $\alpha$-particle point of view, the first Rosenbluth potential associated with the target ions $i$ can be calculated explicitly: $\mathcal{S}_i(v) \sim -n_i/(4\pi v)$. Then, by calculating its derivative, the slowing down of $\alpha$-particles can be modeled by: $$\label{eq:modele_1dbis} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\rm coll} =\sum_{i} 4\pi\Gamma_{\alpha i} \frac{m_\alpha}{m_i}\left( \frac{\partial f_\alpha}{\partial \vec v} \cdot \frac{n_i}{4\pi v^2}\vec{e}_v + f_\alpha f_i \right).$$ The two terms on the right-hand side of Eq. (\[eq:modele_1dbis\]) have a clear physical meaning. The first term $\sim \partial f_\alpha/\partial \vec v$ varies slowly and smoothly far from the thermal velocity region. It can be characterized by a suprathermal velocity scale $v_{\alpha}^{ST}$, which is greater than the typical thermal ion velocity $v_i^{th}=(T_i/m_i)^{1/2}$. Actually, the term $\sim \displaystyle\frac{n_i}{4\pi v^2} \displaystyle\frac{\partial f_\alpha}{\partial \vec v}$ corresponds to a conservative convection towards $v=0$.
The associated convective rate $\displaystyle\frac{n_i}{4\pi v^2}$ increases as $v$ tends to $0$, so that the solution of: $$\label{eq:convec_mod} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\rm coll} =\sum_{i} 4\pi\Gamma_{\alpha i} \frac{m_\alpha}{m_i}\left[ \frac{\partial f_\alpha}{\partial \vec v} \cdot \frac{n_i}{4\pi v^2}\vec{e}_v \right]$$ tends to a constant $f_0$ corresponding to the stationary state of (\[eq:convec\_mod\]). The part of the $\alpha$ distribution driven by (\[eq:convec\_mod\]) is then stretched and smoothed out as it approaches the thermal velocity region. The second term $\sim f_\alpha f_i$ appears highly localized in the thermal region of velocity space and behaves qualitatively as a $\delta$-function from the suprathermal $\alpha$-particle point of view. This term actually leads to the formation of a condensate of width $v_i^{th} \ll v_{\alpha}^{ST}$. This qualitative analysis shows intuitively how the *two-component feature* of the $\alpha$ distribution function builds up. It is made of a superposition of two components evolving on two different velocity scales, namely: - [A suprathermal component, fed by fusion reactions and evolving on a large velocity scale, greater than the target thermal velocity.]{} - [A thermal component, corresponding to the thermalized part of the $\alpha$ distribution function, evolving on the same velocity scale as the thermal bulk of the plasma. Note that this component is not fully thermalized, since the source term is proportional to $\sum_i 4\pi \Gamma_{\alpha i} f_i$. There remains a final stage of collisional relaxation between the thermal components of D, T and $\alpha$ ions, respectively.]{} Figure \[fig:model1d\] illustrates schematically these processes. From this phenomenological discussion, we can draw a more formal and rigorous description of the slowing down, which naturally leads to a new multi-scale algorithm solving the initial problem given by Eq..
Splitting of the Fokker-Planck operator {#sec32} --------------------------------------- From the previous analysis, it seems natural to write the $\alpha$ distribution function as follows: $$\label{eq:split_fd} f_\alpha(\vec v,t) = f_\alpha^{ST}(\vec v,t)+f_\alpha^{T}(\vec v,t),$$ where: $f_\alpha^{ST}$ designates the suprathermal component. It is defined on a large velocity domain, spreading to the MeV range. Its typical velocity variation scale $v_{\alpha}^{ST}$ is greater than the thermal ion velocity $v_i^{th}$; $f_\alpha^{T}$ is the thermal component. It is localized in the region of velocity space corresponding to target thermal ion distribution functions and vanishes in the suprathermal velocity domain. The component $f_\alpha^{T}$ is designed to describe accurately the final stage of thermalization of the slowed-down $\alpha$-particles. This final relaxation occurs on a velocity scale $\sim v_i^{th}$. Let us emphasize that the two components defined in Eq. both exist in the whole velocity space, the relevant physical quantity being the full $\alpha$ distribution function $f_\alpha(\vec v,t)$. The idea is then to deal with each component separately. The original Fokker-Planck operator given in Eq. is then transformed into a *system of two coupled equations* governing the two components $f_\alpha^{ST}$ and $f_\alpha^{T}$, respectively: $$\begin{aligned} \label{eq:system_ST_T} && \left.\partial_t f_\alpha^{ST}\right|_{\alpha i} = \Gamma_{\alpha i}\frac{m_\alpha}{m_i} \frac{n_i}{v^2} \partial_{v} f_\alpha^{ST} - n_i\Gamma_{\alpha i}\frac{m_\alpha}{m_i} f_\alpha^{ST} \frac{\delta(v)}{v^2}, \nonumber\\ && \left.\partial_t f_\alpha^T\right|_{\alpha i} = 4\pi\Gamma_{\alpha i} \partial_{\vec v}\cdot\left(f_\alpha^{T} \partial_{\vec v}\mathcal{S}_i\right) + 4\pi\Gamma_{\alpha i}\frac{m_\alpha}{m_i} f_i f_\alpha^{ST}(v=0).\end{aligned}$$ The above equations are written in the system of reference associated with the thermal ions.
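The key property of this coupling, that whatever leaves $f_\alpha^{ST}$ through the sink reappears in $f_\alpha^{T}$ through the source, can be illustrated with a zero-dimensional toy model (the transfer rate and fusion source below are assumed values chosen for illustration, not the paper's):

```python
# Zero-dimensional toy model (assumed rates, not the paper's values) of the
# coupling in the split system: the delta-like sink on f_ST reappears as a
# source on f_T, so the coupling conserves the total number of alpha particles.
nu = 0.8                 # effective ST -> T transfer rate (assumed)
S = 1.0                  # fusion source feeding the suprathermal component (assumed)
dt, nsteps = 1.0e-3, 5000

N_ST, N_T = 0.0, 0.0
for _ in range(nsteps):
    transfer = nu * N_ST * dt     # content reaching the thermal region
    N_ST += S * dt - transfer     # suprathermal: fed by fusion, drained by coupling
    N_T += transfer               # thermal: fed by the coupling source term

print(N_ST + N_T, S * dt * nsteps)    # both ~ 5.0: the coupling is conservative
```

The total content equals the integrated source: the splitting redistributes particles between the two components without creating or destroying any.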
The system describes the coupling between the suprathermal component and the thermal one, the coupling function being $\sim f_\alpha^{ST} f_i$, which is subtracted from the equation on the suprathermal component $f_\alpha^{ST}$ and appears as a source term in the equation governing the thermal component $f_\alpha^T$. The coupling function can actually be approximated for each of the components of the $\alpha$ distribution function in two different ways, depending on the considered velocity scale: - [From the suprathermal component point of view, we have $f_\alpha^{ST} f_i \sim n_i f_\alpha^{ST} \delta^3(\vec v)$ since thermal target ions appear highly localized. The first Rosenbluth potential $\mathcal{S}_i$ associated with thermal ions can then be approximated by its temperature-vanishing form.]{} - [From the point of view of the thermal component, we can consider $f_\alpha^{ST} f_i \sim f_\alpha^{ST}(0) f_i$ since the suprathermal component is almost constant on the thermal velocity scale $v_i^{th}$. The term $\sim f_\alpha^{ST}(0) f_i$ appears as a source term for the thermal component. It corresponds to a feeding by the suprathermal component.]{} In Eq., we have disregarded the process corresponding to a feeding of the suprathermal component by the thermal one, which would have to be considered if we modeled large-angle collisions, such as $\alpha^{ST} + D \to \alpha + D^{ST}$. Such collisions would build up a suprathermal component for species $D$ and $T$. This could be naturally included in the formalism that we describe here, but it is a second-order process, since the probability of large-angle scattering is a factor $\sim 1/\ln\Lambda$ smaller than that of the pitch-angle collisions modeled by the Fokker-Planck operator. Diffusion part of the Fokker-Planck operator {#sec33} -------------------------------------------- We now study the effect of the second term on the right hand side of Eq.
corresponding to a diffusion in velocity: $$\label{eq:fp_alpha_i2} \left.\frac{\partial f_\alpha}{\partial t}\right|_{\alpha i} = -\sum_{i} 4\pi\Gamma_{\alpha i}\frac{\partial}{\partial \vec v}\cdot\left( \nabla^2_v \mathcal{T}_i\cdot\frac{\partial f_\alpha}{\partial \vec v}\right).$$ $\mathcal{T}_i$ is the second Rosenbluth potential associated with the thermal target ions. The notation $\nabla^2_v (\,.\,)$ stands for the Hessian $\partial^2_{\alpha\beta}(\,.\,)$. Let us define the field $\vec J_{\alpha i}$, representing the slowing-down current of suprathermal $\alpha$-particles: $$\label{eq:jperp_def} \vec J_{\alpha i}= - 4\pi\Gamma_{\alpha i} \nabla^2_v \mathcal{T}_i \cdot \partial f_\alpha/\partial \vec v.$$ Using the Dirac-function approximation for the thermal target distribution functions, we can approximate $\mathcal{T}_i$ by its temperature-vanishing form, $\mathcal{T}_i(v) \sim -n_i v/(8\pi)$. The approximation is relevant from the suprathermal component point of view. The Hessian $\nabla^2_v \mathcal{T}_i$ can then be calculated explicitly: $$\label{eq:hess_1} \nabla^2_v \mathcal{T}_i \sim -\frac{n_i}{8\pi v} \left(\mbox{Id}-\frac{\vec v \otimes \vec v}{v^2}\right).$$ By taking advantage of a polar representation of the velocity $\vec v=v\vec{e}_v$, where $(\vec e_v, \vec e_\theta)$ is the local polar basis of velocity space, the Hessian simplifies to: $$\label{eq:hess_2} \nabla^2_v\mathcal{T}_i \sim -\frac{n_i}{8\pi v}\,\vec{e}_\theta\otimes \vec{e}_\theta\,.$$ The slowing-down current defined in Eq. expresses the diffusion in velocity associated with the slowing-down process. It is essentially transverse, that is, perpendicular to the local velocity $\vec v$.
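The transverse structure of Eq. (\[eq:hess\_1\]) can be verified directly by finite differences (an illustrative sketch; $n_i$ and the test velocity are arbitrary):

```python
import numpy as np

# Sketch: finite-difference check that the Hessian of the cold-target second
# Rosenbluth potential T_i(v) = -n_i v / (8 pi) is the purely transverse
# tensor -(n_i / 8 pi v) (Id - v (x) v / v^2). n_i and v are arbitrary here.
n_i = 1.0
T = lambda vel: -n_i * np.linalg.norm(vel) / (8.0 * np.pi)

def hessian_fd(func, v, h=1e-5):
    """Second-order central finite-difference Hessian."""
    H = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            ea, eb = np.eye(3)[a], np.eye(3)[b]
            H[a, b] = (func(v + h*ea + h*eb) - func(v + h*ea - h*eb)
                       - func(v - h*ea + h*eb) + func(v - h*ea - h*eb)) / (4*h*h)
    return H

v = np.array([1.0, 2.0, 2.0])
vn = np.linalg.norm(v)
H_exact = -n_i/(8*np.pi*vn) * (np.eye(3) - np.outer(v, v)/vn**2)
print(np.max(np.abs(hessian_fd(T, v) - H_exact)))   # small (FD accuracy)
```

The numerical Hessian is indeed orthogonal to $\vec v$, confirming the transverse character of the current.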
Therefore, one can write: $$\label{eq:jperp_approx} \vec J_{\alpha i} \sim \displaystyle\frac{\Gamma_{\alpha i}}{2} \frac{n_i}{v^2} \frac{\partial f_\alpha}{\partial\theta} \vec{e}_\theta.$$ The diffusive slowing-down current is thus highly anisotropic in velocity space, and it intensifies as $\alpha$-particles approach the thermal bulk region of velocity space. Qualitatively, the collisional relaxation of $\alpha$-particles on thermal target ions is thus characterized by: - [A pure advection in velocity space at a constant rate, modeled by Eq., which tends to accumulate $\alpha$-particles in the thermal ion velocity region.]{} - [An anisotropic diffusion in velocity space, expressed by Eq., which tends to make the distribution isotropic when slowed-down $\alpha$-particles get closer to the final stage of thermalization.]{} Algorithms for the transport and collisional relaxation of fast fusion products {#sec4} =============================================================================== In this section, we present the numerical methods developed to solve Eq. and Eq.. Those equations govern the time evolution of suprathermal $\alpha$-particles. First, we show how to deal with the two-component nature of the $\alpha$ distribution function. We then develop a finite volume approach to discretize the equation on the $\alpha$ suprathermal component. An efficient explicit algorithm is then applied to model the time evolution of the suprathermal component with relatively low computational time. We finally present how to accurately simulate the complete thermalization process of $\alpha$-particles. Co-existence of two velocity grids {#sec41} ---------------------------------- The two-component nature of the $\alpha$ distribution function naturally leads to the co-existence of two velocity grids, namely: - [A suprathermal grid, designed to represent the evolution of the suprathermal component of the $\alpha$ distribution function $f_\alpha^{ST}$.
It covers a large domain in velocity, extending to the range $v\simeq v_h\simeq 1.3\times10^{9}$ cm/s, which is the velocity of the $\alpha$-particles created by fusion reactions. Moreover, since the suprathermal component varies smoothly, we can use a relatively coarse grid to discretize it. $f_\alpha^{ST}$ varies significantly on a velocity scale $v_\alpha^{ST} \gg v_i^{th}$, so that the suprathermal grid resolution is typically of the order of one thermal velocity $v_i^{th}$.]{} - [A thermal grid, on which the thermal component of the $\alpha$ distribution $f_\alpha^{T}$ is discretized. This grid is designed to capture the final stage of collisional relaxation of the almost-thermalized component of the $\alpha$ distribution on the other thermal ion species D and T. This process entails a velocity resolution much smaller than the local thermal velocity scale $v_i^{th}$. The thermal grid makes use of a cylindrical parametrization $(v_r,v_\bot)$ inherited from the code <span style="font-variant:small-caps;">Fpion</span>[@LAR93].]{} ![\[fig:2mesh\] Schematic representation of the two velocity grids used to model the $\alpha$ suprathermal and thermal components, respectively. The suprathermal component evolves on the coarse polar grid, covering a wide domain extending to the MeV region. The thick shell of width $\sim T_i$ corresponds to the source term due to fusion reactions. The thermal component evolves on the small and refined cylindrical grid. Both meshes are centered on the mean local bulk velocity $V_0\sim V_e \sim V_i$. Velocity space is characterized by an axial symmetry around the axis $\vec v_r$. ](twoscale5.eps "fig:"){width="50.00000%"} The two grids that are shown in figure \[fig:2mesh\] are centered on the local mean bulk velocity $V_0(r)$, which is close to the mean electron velocity $V_e(r)$.
By using two grids specially tailored to capture the variations of each component, it is possible to build an efficient algorithm modeling the two components of the $\alpha$ distribution. Dimensionless form of the Vlasov-Fokker-Planck equation {#sec42} ------------------------------------------------------- For numerical purposes, we write the Vlasov-Fokker-Planck equation governing the evolution of the suprathermal component of the $\alpha$ distribution function $f_\alpha^{ST}$ in a dimensionless form, based on the unit system given in Table \[tabunit\]. The units are chosen so that the numbers manipulated are close to unity, which prevents computational errors caused by floating-point underflow or overflow. As shown in Eq., the collision term between suprathermal $\alpha$-particles and ions takes a simple form when expressed in polar coordinates. The slowing-down currents are collinear with the local polar basis vectors $\vec{e}_v,\vec{e}_\theta$ of velocity space. In the spherical one-dimensional geometry considered here, it thus seems natural to parametrize the suprathermal distribution function as $f^{ST}_\alpha(r,v,\theta,t)$, with two velocity components $\vec v = v \cos\theta \,\vec{e}_r + v\sin\theta \, \vec{e}_\bot$.
Then, the dimensionless equation governing $f^{ST}_\alpha$ reads: $$\begin{aligned} && \frac{\partial f^{ST}_\alpha}{\partial t} + v\,\cos\theta\,\frac{\partial f^{ST}_\alpha}{\partial r} + \frac{ \mathcal{E}_\alpha}{A_\alpha} \cos\theta \frac{\partial f^{ST}_\alpha}{\partial v} = \sum_{i} \widetilde{\Gamma}_{\alpha i} \displaystyle\frac{\partial}{\partial \vec v} \cdot \, \left[\frac{n_i}{ v^2} \left(\frac{A_\alpha}{A_i} f_\alpha^{ST} \vec{e}_v + \frac{1}{2}\frac{\partial f_\alpha^{ST}}{\partial\theta}\vec{e}_\theta \right)\right] \nonumber \\ && + \frac{1}{\widetilde{\tau}_{e\alpha}}\displaystyle\frac{\partial}{\partial \vec v} \cdot \,\left[(\vec v - \vec {u_e})f_\alpha^{ST} + \frac{T_e}{A_\alpha}\frac{\partial}{\partial \vec v}f_\alpha^{ST}\right] - \sum_{i=D,T,\alpha} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha f_i^T + \mathcal{R}_{DT}(\vec r,t)\frac{\delta(v-v_h)}{4\pi v^2}, \label{eq:FP_supra_discr}\end{aligned}$$ where the normalized constant is $\widetilde{\Gamma}_{\alpha i}= (4\pi Z_\alpha^2 Z_i^2/A_i^2)\ln\Lambda_{\alpha i}$, and the effective electrostatic field $\mathcal{E}_i $ applied to ions of species $i$ is defined by the following expression: $$\label{Ei} \mathcal{E}_i = - (Z_i/\widetilde{n}_e)\,\partial \widetilde{P}_e/\partial r.$$ Here, $\widetilde{n}_e$ and $\widetilde{P}_e$ are the dimensionless electron density and pressure, respectively, and $$\widetilde{\tau}_{e\alpha} = \frac{3\sqrt{\pi}A_\alpha T_e^{3/2}}{2\epsilon \sqrt{2}Z_\alpha^2n_e\ln\Lambda_{\alpha e}}$$ is the dimensionless $\alpha$-electron collision time.

| Quantity | Unit |
|----------|------|
| density | $n_0$ (arbitrary reference value) |
| thermal energy | $T_0$ (arbitrary reference value) |
| time | $\tau_0 = T_0^{3/2}m_p^{1/2}/4\pi e^4n_0$ |
| length | $\lambda_0 = (T_0/m_p)^{1/2}\tau_0 = T_0^2/4\pi e^4n_0$ |
| velocity | $v_0 = (T_0/m_p)^{1/2} = \lambda_0/\tau_0$ |
| distribution function | $f_0 = n_0/v_0^3$ |
| first Rosenbluth pot. | $\mathcal{S}_0 = n_0/v_0$ |
| second Rosenbluth pot. | $\mathcal{T}_0 = n_0v_0$ |
| electric field ($\mathcal{E}_i$) | $\mathcal{E}_0=m_pv_0^2/\lambda_0=m_p\lambda_0/\tau_0^2$ |
| heat flux | $Q_0 = n_0T_0^{3/2}/m_p^{1/2}$ |

Let us consider the third term on the right hand side of . From the point of view of suprathermal $\alpha$-particles, it can be approximated by: $$\label{eq:third_term} \sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha f_i \simeq 4\pi\sum_{i} \widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha n_i \delta^3(\vec v),$$ supposing that $v \gg v_i^{th},V_0$. The term is thus highly peaked in the thermal region of velocity space and leads to the formation of a thermalized condensate that cannot be described on the coarse suprathermal grid. This justifies our approach of subtracting this singular term from , so that the variations of $f_\alpha^{ST}$ remain everywhere smooth and may be described on the suprathermal grid. The term is then re-introduced as a *feeding term* in the equation governing the thermal component, so that the original Fokker-Planck equation governing the complete $\alpha$ distribution function $f_\alpha=f_\alpha^{ST}+f_\alpha^{T}$ is recovered. To solve the full Vlasov-Fokker-Planck equation , we use the same general splitting scheme as in the code <span style="font-variant:small-caps;">Fpion</span>, namely we treat the advection, the acceleration and the collisional stages separately. We now describe the method developed to solve the collisional part of .
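For concreteness, the unit system of Table \[tabunit\] can be instantiated numerically in CGS units (a sketch; the reference values $n_0$ and $T_0$ are chosen arbitrarily for illustration):

```python
import math

# Sketch of the unit system of Table [tabunit], instantiated in CGS units for
# assumed reference values n0 = 1e23 cm^-3 and T0 = 1 keV (illustrative only).
e = 4.8032e-10        # elementary charge [statC]
m_p = 1.6726e-24      # proton mass [g]
n0 = 1.0e23           # reference density [cm^-3]
T0 = 1.602e-9         # reference thermal energy, 1 keV [erg]

tau0 = T0**1.5 * math.sqrt(m_p) / (4*math.pi * e**4 * n0)   # time unit
v0 = math.sqrt(T0 / m_p)                                    # velocity unit
lam0 = v0 * tau0                                            # length unit
f0 = n0 / v0**3                                             # distribution unit

# the two expressions for lambda_0 in the table agree to round-off
print(abs(lam0 - T0**2 / (4*math.pi * e**4 * n0)) / lam0)
```

For these values $\tau_0$ is of the order of a picosecond, which is why dimensionless times of order unity are convenient in ICF conditions.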
Discretization of the collisional term {#sec43} -------------------------------------- The collisional part of (\[eq:FP\_supra\_discr\]) can be written as: $$\label{eq:collis_st_discr1} \left.\frac{\partial f_{\alpha}^{ST}}{\partial t}\right|_{\mbox{coll}} = \frac{1}{v^2}\frac{\partial}{\partial v}\left(v^2 J^v\right) + \frac{1}{v\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta \,J^\theta\right),$$ where the polar components of the slowing-down current $\vec J$ are given by: $$\label{eq:Jv_st_discr} J^v = f^{ST}_\alpha \left(\frac{v}{\widetilde{\tau}_{e\alpha}}+\sum_{i}\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} \displaystyle\frac{n_i}{v^2}\right) +\frac{1}{\widetilde{\tau}_{e\alpha}} \frac{T_e}{A_\alpha}\frac{\partial f_\alpha^{ST}}{\partial v},$$ and $$\label{eq:Jth_st_discr} J^\theta = \displaystyle\frac{1}{v}\displaystyle\frac{\partial f_\alpha^{ST}}{\partial\theta}\left(\sum_{i}\widetilde{\Gamma}_{\alpha i}\displaystyle\frac{n_i}{2v}+\displaystyle\frac{1}{\widetilde{\tau}_{e\alpha}} \displaystyle\frac{T_e}{A_\alpha} \right).$$ The slowing-down current $\vec J$ takes the general advection-diffusion form in velocity space: $$\begin{pmatrix}J^v \\ J^\theta\end{pmatrix} = f\begin{pmatrix}u^v \\ u^\theta\end{pmatrix} + \begin{pmatrix}K^{vv} & K^{v\theta} \\ K^{\theta v} & K^{\theta\theta}\end{pmatrix}\cdot \begin{pmatrix}\displaystyle\frac{\partial f}{\partial v} \\ \\ \displaystyle\frac{1}{v}\displaystyle\frac{\partial f}{\partial\theta}\end{pmatrix},$$ where the components of the vector $u$ and of the tensor $K$ are related to the Rosenbluth potentials $\mathcal{S}$ and $\mathcal{T}$ (associated with the target ion species) as follows: $$\begin{pmatrix}u^v \\ \\ u^\theta\end{pmatrix} = \begin{pmatrix}\displaystyle\frac{\partial \mathcal{S}}{\partial v} \\ \\ \displaystyle\frac{1}{v}\displaystyle\frac{\partial \mathcal{S}}{\partial\theta}\end{pmatrix}\quad\mbox{and}\quad \begin{pmatrix}K^{vv} & K^{v\theta} \\ \\ K^{\theta v} & K^{\theta\theta}\end{pmatrix} = \begin{pmatrix} \displaystyle\frac{\partial^2\mathcal{T}}{\partial v^2} & \displaystyle\frac{\partial}{\partial v}\left(\displaystyle\frac{1}{v}\displaystyle\frac{\partial \mathcal{T}}{\partial\theta}\right) \\ \\ \displaystyle\frac{\partial}{\partial v}\left(\displaystyle\frac{1}{v}\frac{\partial \mathcal{T}}{\partial\theta}\right) & \displaystyle\frac{1}{v^2}\displaystyle\frac{\partial^2\mathcal{T}}{\partial\theta^2} + \displaystyle\frac{1}{v}\displaystyle\frac{\partial \mathcal{T}}{\partial v} \end{pmatrix},$$ which reduces to: $$\label{eq:uK_def} \begin{pmatrix}u^v \\ u^\theta\end{pmatrix} = \begin{pmatrix} v/\widetilde{\tau}_{e\alpha} + \sum_{i=D,T} \widetilde{\Gamma}_{\alpha i} \frac{A_\alpha}{A_i} n_i/ v^2\\ 0 \end{pmatrix}\quad\mbox{and}\quad \begin{pmatrix}K^{vv} & K^{v\theta} \\ K^{\theta v} & K^{\theta\theta}\end{pmatrix} = \begin{pmatrix} T_e/\widetilde{\tau}_{e\alpha} A_\alpha & 0\\ 0 & T_e/\widetilde{\tau}_{e\alpha} A_\alpha + \sum_{i=D,T} \widetilde{\Gamma}_{\alpha i} n_i/(2v) \end{pmatrix}\,.$$ Note the simplifications implied by using a polar parametrization of velocity space: the dynamical friction coefficient $\vec u$ is indeed collinear with the radial velocity basis vector $\vec e_v$, and the diffusion tensor is diagonal in the basis $(\vec e_v,\vec e_\theta)$. We then integrate with respect to velocity on a given cell $\delta V_{kj}$ of the polar suprathermal velocity grid, subscripts $k$ and $j$ referring to the $\theta$ and $v$ directions, respectively (see figure \[polarmesh\]). ![\[polarmesh\] The suprathermal velocity grid. ](polar_area5.eps "fig:") The cell $\delta V_{kj}$ is defined by its boundaries $\theta_{k-\frac{1}{2}}$, $\theta_{k+\frac{1}{2}}$ and $v_{j-\frac{1}{2}}$, $v_{j+\frac{1}{2}}$, for $1\le k\le k_{max}$ and $1\le j\le j_{max}$. We call $f_{kj}^n = f_\alpha^{ST}(v=v_j,\theta=\theta_k,t=t_n)$ the value of the suprathermal distribution function in the cell $\delta V_{kj}$ at time $t_n$. Integrating Eq.
over the cell area $\delta V_{kj}$, we obtain the following conservative discretized form: $$\label{eq:scheme_st_1} \frac{f_{kj}^{n+1}-f_{kj}^n}{\Delta t} = \frac{3\left(v_{j+1/2}^2J^v_{kj+1/2}-v_{j-1/2}^2J^v_{kj-1/2}\right)}{2\delta v^3_j} + \frac{3v_j\delta v_j}{2\delta v^3_j}\frac{\sin\theta_{k+1/2}J^\theta_{k+1/2j}-\sin\theta_{k-1/2}J^\theta_{k-1/2j}}{\delta\mu_k},$$ where discrete elementary volumes are defined by: $$\delta v^3_j = v_{j+\frac12}^3-v_{j-\frac12}^3, \quad \delta v_j = v_{j+\frac12}-v_{j-\frac12},\quad \delta \mu_k = \cos\theta_{k+\frac12}-\cos\theta_{k-\frac12}.$$ The centered radial velocity $v_j$ that appears in Eq. is defined as $ v_j = (v_{j+\frac12}+v_{j-\frac12})/2$. With these notations, the discrete volume of the cell $\delta V_{kj}$ is given by: $$\delta V_{kj} = \int_{\delta V_{kj}}2\pi v^2\sin\theta \,dv\,d\theta = \frac{4\pi}{3}\delta v^3_j \delta \mu_k.$$ Besides, a straightforward centered-difference and explicit discretization of the slowing-down current leads to: $$\begin{aligned} && J^v_{kj+1/2} = \frac{u^v_{kj+1/2}}{2}(f^n_{kj+1}+f^n_{kj}) -\frac{K^{vv}_{kj+1/2}}{\delta v_{j+1/2}}(f^n_{kj+1}-f^n_{kj}) \label{eq:Jv_kj_discr1} \\ && J^\theta_{k+1/2j} = \frac{K^{\theta\theta}_{k+1/2j}}{v_j\delta\theta_{k+1/2}}(f^n_{k+1j}-f^n_{kj}), \label{eq:Jth_kj_discr1}\end{aligned}$$ where the slowing-down coefficient $u$ and the diffusion coefficients $K$ are explicitly given by as functions of velocity. The time-varying coefficients in , involving thermal ions and electrons, are evaluated at the previous time step $t=t_n$. A Locally Split Explicit scheme {#sec44} ------------------------------- ### Need for an explicit approach {#sec441} The slowing-down and diffusion coefficients given in Eq. are very inhomogeneous in velocity space, being highly peaked in magnitude near the thermal component region. Besides, the diffusion term is strongly anisotropic (essentially transverse) outside of the thermal component region.
In such a situation, the usual implicit schemes may involve the solution of a very large and ill-conditioned linear system that will only give an approximate solution of the non-stationary problem. In this section, we demonstrate how it is possible to take advantage of the strong inhomogeneity of the slowing-down current to build an efficient and simple explicit scheme that naturally describes the non-stationary time evolution of the $\alpha$ distribution function. This approach stems from ideas that were introduced in [@LAR07]. The Von Neumann stability condition for the scheme in the case of constant homogeneous slowing-down coefficient $u$ and diffusion tensor $K$ reads: $$\label{eq:stabcond1} (u \,\delta t)^2 \le 2\mbox{Tr}(K)\,\delta t \le \delta v^2,$$ where $\delta v$ is the velocity mesh size. When the slowing-down coefficient $u$ and the diffusion tensor $K$ are inhomogeneous (which is the case for our problem), we can apply *locally* in each cell $\delta V_{jk}$ of the suprathermal polar velocity grid. Besides, since the scheme is two-dimensional and parametrized in polar coordinates, actually leads to two stability conditions, corresponding to the radial direction $v$ and the angular direction $\theta$, respectively. Treating these directions separately, the stability condition for can be written for a given cell $\delta V_{jk}$ as: - [in the radial $v$ direction:]{} $$\label{eq:stabcond_v} \left(\displaystyle\frac{u^v_j\delta t}{\delta v_j}\right)^2 \le \frac{2K^{vv}_{j}\,\delta t}{\delta v_j^2} \le 1;$$ - [in the angular $\theta$ direction:]{} $$\label{eq:stabcond_th} \displaystyle\frac{2K^{\theta\theta}_{j}\,\delta t}{v_j^2 \delta\theta_k^2} \le 1.$$ Note that the slowing-down coefficient $u$ as well as the diffusion tensor $K$ given in depend only on $v$. The idea is then to use the explicit scheme (\[eq:scheme\_st\_1\]) with the stability conditions (\[eq:stabcond\_v\]) and (\[eq:stabcond\_th\]) applied **locally** in each cell of the suprathermal grid.
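The conservative, flux-difference structure of such an explicit update can be illustrated on a reduced one-dimensional radial analogue (grid, friction coefficient and time step below are illustrative, not the code's 2-D scheme; the geometric factor used here is the one that makes this 1-D analogue exactly conservative):

```python
import numpy as np

# Reduced 1-D radial analogue (illustrative grid, friction and time step; not
# the code's 2-D scheme): with zero flux at both boundaries, the flux-difference
# update conserves the discrete content sum_j f_j * dv3_j by construction.
jmax = 50
v_half = np.linspace(0.5, 10.0, jmax + 1)       # cell boundaries
v_c = 0.5 * (v_half[1:] + v_half[:-1])          # cell centers
dv3 = v_half[1:]**3 - v_half[:-1]**3            # discrete volumes

f = np.exp(-(v_c - 8.0)**2)                     # initial suprathermal bump
u = 1.0 / v_c**2                                # ion friction ~ n_i / v^2 (assumed)

total0 = np.sum(f * dv3)
for _ in range(200):
    J = np.zeros(jmax + 1)                      # zero flux at the two boundaries
    J[1:-1] = 0.5*(u[1:] + u[:-1]) * 0.5*(f[1:] + f[:-1])   # centered face flux
    f = f + 1e-3 * 3.0/(2.0*dv3) * (v_half[1:]**2*J[1:] - v_half[:-1]**2*J[:-1])

print(np.sum(f * dv3) - total0)                 # ~ 0: content is conserved
```

Whatever flux leaves one cell enters its neighbor, so the discrete particle content is conserved to round-off, independently of the time step used.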
Indeed, the discrete scheme (\[eq:scheme\_st\_1\]) corresponds to the finite-volume formulation of a conservation equation, where the time evolution of the $\alpha$ distribution function defined at the mesh centers is driven by the difference between the numerical fluxes calculated at the boundaries. The fluxes depend on the value of the distribution function in the neighboring cells. If the fluxes are applied during a time step $\Delta t$ that is too large, the induced variations can exceed the absolute values of the fields in the neighboring cells and numerical instabilities occur. The idea is then to apply fluxes during a **limited** time step $\Delta t'$, possibly smaller than the imposed time step $\Delta t$. The time interval $\Delta t'$ is chosen such that the variation of the fields in the neighboring cells remains below their initial absolute values. Fluxes and fields are updated consistently at the frequency $\frac{1}{\Delta t'}$, until the imposed time step $\Delta t$ is reached. ### Stability and positivity {#sec442} The conditions above ensure the stability of the explicit scheme , but not necessarily its positivity. Indeed, we have noticed that applying the explicit scheme with the stability conditions and may lead to negative values of $f_\alpha^{ST}$ and thus to the development of numerical instabilities. This is especially true in the velocity region where the slowing-down coefficient $u$ is large, which may occur for example in the suprathermal region where $\alpha$-particles are created. A possible remedy is to introduce an “adaptive de-centering” in the discretization of the radial slowing-down current. We then go back to Eq. and introduce the parameters $\eta_j$ such that: $$\label{eq:Jv_kj_discr_eta} J^v_{kj+1/2} = \frac12 u^v_{kj+1/2} \left[(1-\eta_j)f^n_{kj+1}+(1+\eta_j)f^n_{kj}\right] -\frac{K^{vv}_{kj+1/2}}{\delta v_{j+1/2}}(f^n_{kj+1}-f^n_{kj}).$$ The choice $\eta_j=0$ leads to the centered scheme , while $\eta_j = 1$ leads to a pure upwind scheme.
The decentering defined in may also be seen as a perturbation of the discretized diffusion term. Indeed, Eq. (\[eq:Jv\_kj\_discr\_eta\]) can be written in the following form: $$\label{eq:Jv_kj_discr_eta2} J^v_{kj+1/2} = \frac12 u^v_{kj+1/2} (f^n_{kj+1}+f^n_{kj}) -\widetilde{K}^{vv} \frac{f^n_{kj+1}-f^n_{kj}}{\delta v_{j+1/2}}.$$ The stability condition applied with the modified diffusion coefficient $\widetilde{K}^{vv} = K^{vv}_{kj+1/2}+\frac12 u^v_{kj+1/2}\eta_j\delta v_{j+1/2}$ instead of the original $K^{vv}$ defined in leads to: $$\frac12 |u^v_{kj+1/2}|^2\delta t \le K^{vv}_{kj+1/2}+\frac12 u^v_{kj+1/2}\eta_j\delta v_{j+1/2} \quad\mbox{and}\quad \frac{\delta t}{\delta v_{j+1/2}^2}\left(2 K^{vv}_{kj+1/2}+u^v_{kj+1/2}\eta_j\delta v_{j+1/2}\right)\le 1.$$ Besides, the positivity condition, written in the case of an initial field $f_\alpha^{ST}$ localized in one velocity cell, leads to: $$K^{vv}_{kj+1/2}+\frac12 u^v_{kj+1/2}\eta_j\delta v_{j+1/2}\ge 0 \quad\mbox{and}\quad \frac{1}{\delta v_{j+1/2}}\left(2 K^{vv}_{kj+1/2}+ u^v_{kj+1/2}\eta_j\delta v_{j+1/2}\right)\ge |u^v_{kj+1/2}|.$$ The minimal value of $u^v\eta$ ensuring positivity is thus: $$\label{eq:mineta} u^v_{kj+1/2}\eta_j = \max\left\{0, |u^v_{kj+1/2}|-2K^{vv}_{kj+1/2}/\delta v_{j+1/2}\right\}.$$ To ensure stability as well as positivity, we calculate the radial flux with respect to , with $\eta_j$ given by , in each velocity cell. Actually, this amounts to using the scheme with the radial diffusion coefficient $K^{vv}$ replaced by: $$\label{eq:Kvvmod} \widetilde{K}^{vv}=\max\{K^{vv},|u^v|\delta v/2\}$$ and applying the conditions . Note that in , the condition imposed on the slowing-down coefficient, $|u^{v}|\delta t\le\delta v$, is automatically fulfilled as soon as the one imposed on the (modified) diffusion coefficient $\widetilde{K}^{vv}$ is satisfied.
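The equivalence between the minimal decentering of Eq. (\[eq:mineta\]) and the modified diffusion coefficient of Eq. (\[eq:Kvvmod\]) can be checked directly (the coefficient values below are assumed):

```python
# Check (assumed coefficient values) that the minimal decentering of
# Eq. (mineta) is equivalent to replacing K^vv by max(K^vv, |u| dv / 2).
def eta_min(u, K, dv):
    """Smallest eta ensuring positivity of the decentered radial flux."""
    return max(0.0, abs(u) - 2.0*K/dv) / abs(u) if u != 0.0 else 0.0

for u, K, dv in [(4.0, 0.1, 0.2), (0.5, 1.0, 0.2)]:
    eta = eta_min(u, K, dv)
    K_tilde = K + 0.5*abs(u)*eta*dv        # modified diffusion coefficient
    assert abs(K_tilde - max(K, abs(u)*dv/2.0)) < 1e-12
    print(u, K, eta, K_tilde)
```

In the friction-dominated case the decentering activates ($\eta>0$) and the effective diffusion is raised to $|u|\delta v/2$; in the diffusion-dominated case $\eta=0$ and the centered scheme is recovered.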
Applying the stability condition locally {#sec45} ---------------------------------------- We now describe the implementation of the algorithm, named the *Locally Sub-cycled Explicit* (<span style="font-variant:small-caps;">LSE</span>) algorithm, that solves the problem of collisional relaxation of suprathermal $\alpha$-particles. The idea is to apply the explicit scheme with the stability conditions and applied *locally* in each cell of the suprathermal grid. Knowing the values of the distribution function $f_{jk}^n$ in any cell of the suprathermal velocity grid at time $t=t_n$, we apply the following strategy: [**First step**]{} – *Local time steps calculation*\ For each cell $\delta V_{jk}$ of the suprathermal velocity grid, we calculate a *local* time step $\Delta t_{jk}$ such that the stability conditions in the $\theta$ and $v$ directions are fulfilled. To find $\Delta t_{jk}$, the global time step, namely $\Delta t$, is halved until and are satisfied. The local time step $\Delta t_{jk}$ is then: $$\Delta t_{jk}=\min(\Delta t_{jk}^\theta,\Delta t_{jk}^v),$$ where: $$\label{eq:local_dt_th} \Delta t_{jk}^\theta = 2^{-{\rm nsplit}^{\theta}_{jk}} \Delta t$$ and $$\label{eq:local_dt_v} \Delta t_{jk}^v = 2^{-{\rm nsplit}^{v}_{jk}} \Delta t;$$ ${\rm nsplit}^{\theta}_{jk}$ (resp. ${\rm nsplit}^{v}_{jk}$) is the number of times the global time step has to be halved to fulfill the stability condition in the $\theta$ (resp. $v$) direction. [**Second step**]{} – *Sorting the cells*\ Then, the cells of the suprathermal velocity grid are sorted with respect to their local time step $\Delta t_{jk}$ calculated above. This can for instance be done with an efficient algorithm (e.g., ’Heapsort’ [@PRE924]), which takes on the order of $N\ln N$ operations per time step, where $N$ is the number of cells of the suprathermal velocity grid.
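The first two stages can be sketched as follows (cell coefficients are assumed; Python's built-in binary heap plays the role of Heapsort):

```python
import heapq, math

# Sketch of the first two LSE stages (cell coefficients are assumed): compute
# each cell's stability bound, halve the global step nsplit times until the
# local step fits, then heap-sort the cells by local time step.
dt_global = 1.0e-2
cells = {                    # cell -> (u_v, K_vv, K_tt, dv, v*dtheta), assumed
    (1, 1): (2.0, 0.5, 0.8, 0.1, 0.075),
    (1, 2): (8.0, 0.2, 2.0, 0.1, 0.030),
    (2, 1): (0.5, 0.3, 0.4, 0.2, 0.150),
}

heap = []
for cell, (u, Kv, Kt, dv, vdth) in cells.items():
    dt_stab = min(2.0*Kv/u**2,          # (u dt)^2 <= 2 K^vv dt
                  dv**2/(2.0*Kv),       # 2 K^vv dt <= dv^2
                  vdth**2/(2.0*Kt))     # angular condition
    nsplit = max(0, math.ceil(math.log2(dt_global / dt_stab)))
    heapq.heappush(heap, (dt_global / 2**nsplit, cell))

while heap:
    dt_local, cell = heapq.heappop(heap)    # smallest local time step first
    print(cell, dt_local)
```

Cells whose stability bound already exceeds the global step keep ${\rm nsplit}=0$ and are advanced in a single step.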
This sorting stage then allows cells to be visited by the algorithm only when they actually need to be updated, and is thus an essential step for a computationally efficient algorithm, as shown in Ref. [@LAR07]. [**Third step**]{} – *Sub-cycling*\ Each cell has to be advanced in both directions $v$ and $\theta$ over a time $\Delta t$ using its *local* time step $\Delta t_{jk}$, this procedure ensuring stability. We thus have to perform a *sub-cycling* for each cell. The effective computation proceeds through a loop over the smallest local time step. Inside the loop, the fields (evaluated at the center of the cell) and the fluxes (evaluated at the borders) are updated consistently with the local time step of the considered cell. More precisely, we perform the following iterations: $$\label{eq:scheme_st_1_sub_th} \frac{f_{kj}^{p+1}-f_{kj}^p}{\Delta t_{jk}} = \frac{3v_j\delta v_j}{2\delta v^3_j}\frac{\sin\theta_{k+1/2}J^{\theta p}_{k+1/2j}-\sin\theta_{k-1/2}J^{\theta p}_{k-1/2j}}{\delta\mu_k}+\frac{3\left(v_{j+\frac{1}{2}}^2J^{vp}_{kj+\frac{1}{2}}-v_{j-1/2}^2J^{vp}_{kj-1/2}\right)}{2\delta v^3_j},$$ where the superscript $p$ refers to the sub-cycled iterations. The sub-cycling starts with $f_{kj}^{p=0} = f_{kj}^n$ and ends after $p^{\max}_{jk}$ iterations, where $\Delta t = p^{\max}_{jk} \Delta t_{jk}$. During the process, the flux $J^\theta_{k+1/2j}$ (resp. $J^v_{kj+1/2}$) defined in (resp. and ) is updated with a frequency corresponding to $1/\Delta t_{jk}^{\theta}$ (resp. $1/\Delta t_{jk}^{v}$). For more details on the sub-cycling method, we refer to [@LAR07]. This strategy guarantees stability and positivity everywhere on the suprathermal velocity grid. By applying the local sub-cycling described above, we are able to treat the collisional part of the Vlasov-Fokker-Planck equation governing the suprathermal component of the $\alpha$ distribution function using a tractable explicit approach that does not lead to prohibitive computational time.
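The sub-cycling stage itself can be illustrated on a toy relaxation problem, where each "cell" is advanced explicitly with its own locally stable step until the common global step is reached (rates and split factors below are assumed):

```python
# Toy sketch of the sub-cycling stage (assumed decay rates and split factors):
# each "cell" relaxes explicitly with its own locally stable step, and all
# cells meet again at the common global time step.
dt_global = 1.0e-2
cells = {"fast": (400.0, 8), "slow": (10.0, 1)}    # name -> (rate, 2**nsplit)

f = {name: 1.0 for name in cells}
for name, (rate, nsub) in cells.items():
    dt_local = dt_global / nsub                # locally stable sub-step
    for _ in range(nsub):                      # p_max iterations to reach dt
        f[name] -= rate * f[name] * dt_local   # explicit relaxation update

print(f)    # "fast" was sub-cycled 8 times, "slow" advanced in one step
```

The stiff "fast" cell would be unstable if advanced with the full global step ($400 \times 0.01 > 1$), but remains stable and positive under its eight sub-steps.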
To illustrate the efficiency of the <span style="font-variant:small-caps;">LSE</span> algorithm, we present in figure \[fig:mapnsplit\] the map of ${\rm nsplit}^{\theta}_{jk}$ and ${\rm nsplit}^{v}_{jk}$ defined in and on the suprathermal velocity grid. We consider two locations corresponding to the hot spot and the dense shell of a typical imploding capsule, taken 1 ns before stagnation. We note that the sub-cycling is more expensive in the dense shell region than in the hot spot. Indeed, the high density and low temperature of the shell imply a smaller time step. Furthermore, considering the maps of ${\rm nsplit}^{\theta}_{jk}$ represented at the bottom of figure \[fig:mapnsplit\], we note that to advance the fields in $\theta$, we mainly have to sub-cycle the most central cells, where the local time step imposed by the stability condition is the smallest, since the local cell size $v_j \delta \theta$ is small close to the center. For the outermost velocity cells, no sub-cycling is actually needed. Coupling with the thermal component {#sec46} ----------------------------------- We now discuss the implementation of the coupling strategy between the suprathermal and the thermal components, as described by system in Sec. \[sec32\]. ### From the suprathermal point of view {#sec461} From the point of view of suprathermal $\alpha$-particles, the coupling with the thermal component is made by the third term on the right hand side of . It induces a time variation of the suprathermal distribution given by the following equation: $$\label{eq:coupl_st} \left.
\frac{\partial f^{ST}_\alpha}{\partial t}\right|_{ST \to T} = -\sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha f_i \simeq -\sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha n_i \delta^3(\vec v).$$ The time evolution of the suprathermal distribution function in the central velocity meshes is then governed by: $$\label{eq:coupl_st_full} \left.\frac{\partial f_{\alpha}^{ST}}{\partial t}\right|_{\mbox{coll}} = \frac{1}{v^2}\frac{\partial}{\partial v}\left(v^2 J^v\right) + \frac{1}{v\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta \,J^\theta\right)-\sum_{i} \widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha n_i \displaystyle\frac{\delta(v)}{v^2},$$ where the slowing-down currents $J^v$ and $J^{\theta}$ are given by Eq. (\[eq:Jv\_st\_discr\]) and Eq. (\[eq:Jth\_st\_discr\]), respectively. As slowed-down $\alpha$-particles approach the thermal velocity region, the transverse diffusion current $J^\theta$ intensifies, so that the distribution function is almost isotropic in the central velocity meshes. Eq. (\[eq:coupl\_st\_full\]) then simplifies to: $$\label{eq:coupl_st_full2} \left.\frac{\partial f_{\alpha}^{ST}}{\partial t}\right|_{\mbox{coll}} = \frac{1}{v^2}\frac{\partial}{\partial v}\left(v^2 J^v\right) -\sum_{i} \widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha n_i \displaystyle\frac{\delta(v)}{v^2},$$ where the slowing-down current $J^v$ can be approximated by: $$J^v \simeq \widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} \displaystyle\frac{n_i}{v^2} f^{ST}_\alpha.$$ We then integrate Eq. (\[eq:coupl\_st\_full2\]) over a central mesh ($j=1$, $1\leq k \leq k_{\max}$) of the suprathermal velocity grid.
The suprathermal component in the central meshes corresponding to $j=1$ is then calculated as follows (see Fig. \[fig:polarcentre\]): $$\label{eq:coupl_st_discr} \frac{f_{k1}^{n+1}-f_{k1}^n}{\Delta t} \frac{v_{3/2}^3}{3} = \sum_{i} n_i \widetilde{\Gamma}_{\alpha i} (f_{k3/2}^n-f_{k1}^n).$$ In this way, the distribution function remains stable in the most central part of the suprathermal velocity grid. ### From the thermal point of view {#sec462} To recover the full Fokker-Planck equation on the physical $\alpha$ distribution function $f_\alpha = f_\alpha^T + f_\alpha^{ST}$, we define an $\alpha$ thermal component $f_\alpha^T$, which evolves on the thermal velocity grid defined above. This is also the grid on which the thermal ion $D,T$ distribution functions evolve. This grid is actually inherited from the code <span style="font-variant:small-caps;">FPion</span>, so that we use the same cylindrical parametrization as explained in [@LAR93] for the $\alpha$ thermal component: $f_\alpha^T(r,v_r,v_\bot)$, $v_r$ and $v_\bot$ being the radial and tangential components of the velocity, respectively.
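A hedged sketch of this explicit central-cell update (the function name and the packaging of the rates are ours, not Fuse's):

```python
def central_mesh_update(f_k1, f_k32, dt, v32, rates):
    """Explicit update of the innermost suprathermal cell (j = 1), following
    the discrete coupling equation above: f_{k1} relaxes toward the
    neighbouring value f_{k3/2} at the combined rate sum_i n_i*Gamma_alpha_i,
    weighted by the central-cell volume factor v_{3/2}^3 / 3.
    `rates` collects the n_i * Gamma_alpha_i of each thermal ion species."""
    coeff = 3.0 * sum(rates) / v32**3
    return f_k1 + dt * coeff * (f_k32 - f_k1)

# stability requires dt * coeff <= 1; then f_k1 stays between f_k1 and f_k32
f_new = central_mesh_update(f_k1=0.0, f_k32=2.0, dt=0.1, v32=1.0, rates=[1.0, 2.0])
```

Under the stated time-step bound the update is a convex combination of the two cell values, which is why the distribution remains stable and positive in the central cells.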
The term subtracted from the suprathermal component equation reappears as a *source term* in the Vlasov-Fokker-Planck equation governing the thermal component of the $\alpha$ distribution function $f_\alpha^T$, so that the relaxed suprathermal component feeds the thermal one and no $\alpha$ particle is lost in the process: $$\begin{aligned} && \frac{\partial f_\alpha^T}{\partial t} + v_r\frac{\partial f_\alpha^T}{\partial r} + \frac{v_\bot}{r}\left(v_\bot\frac{\partial f_\alpha^T}{\partial v_r} - v_r\frac{\partial f_\alpha^T}{\partial v_\bot}\right) + \frac{\mathcal{E}_\alpha}{A_\alpha}\frac{\partial f_\alpha^T}{\partial v_r} = \sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i} \displaystyle\frac{\partial}{\partial \vec v} \cdot \,\left(\frac{A_\alpha}{A_i} f_\alpha^T \frac{\partial\mathcal{S}_i}{\partial \vec v} - \nabla^2\mathcal{T}_i\frac{\partial f_\alpha^{T}}{\partial\vec v} \right) \nonumber \\ && \qquad\qquad + \frac{1}{\widetilde{\tau}_{e\alpha}}\displaystyle\frac{\partial}{\partial \vec v} \cdot \,\left((\vec v - \vec {u_e})f_\alpha^T + \frac{T_e}{A_\alpha}\frac{\partial}{\partial \vec v}f_\alpha^T\right) +\sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha f_i . \label{eq:eqFP_t}\end{aligned}$$ The source term coming from the slowing down of the suprathermal component appears in the last term on the right-hand side of Eq. (\[eq:eqFP\_t\]). From the point of view of the thermal component, the suprathermal component $f^{ST}_\alpha$ appears essentially constant over the whole thermal velocity grid, since it varies on the scale of the coarse suprathermal velocity grid, whose mesh size is of the order of the thermal velocity. That is why we use the following estimate: $$\label{eq:source_term_T} \sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f^{ST}_\alpha f_i \sim f^{ST}_\alpha (V_0)\sum_{i} 4\pi\widetilde{\Gamma}_{\alpha i}\frac{A_\alpha}{A_i} f_i,$$ $V_0$ being the mean ion velocity.
This procedure guarantees exact mass conservation: the particles removed from the suprathermal component are injected into the thermal component. Note that the source term feeding the $\alpha$ thermal component depends on the thermal distribution functions of **all** thermal ion species. To solve Eq. (\[eq:eqFP\_t\]), we use algorithms inherited from the code <span style="font-variant:small-caps;">FPion</span>. Their numerical implementation is discussed, for example, in [@LAR93]. Transport and acceleration of the suprathermal component {#sec47} -------------------------------------------------------- We discuss in this section the algorithm developed to solve the Vlasov part of the suprathermal kinetic equation, namely: $$\label{eq:vlasov_st} \frac{\partial f^{ST}_\alpha}{\partial t} + \vec v\cdot\vec\nabla_r f^{ST}_\alpha + \frac{ \vec {\mathcal E}_{\alpha}}{A_\alpha}\cdot\frac{\partial}{\partial \vec v} f^{ST}_\alpha = 0.$$ We deal with the advection and acceleration separately. ### Advection {#sec471} In this stage, we solve the pure advection equation on the suprathermal component $f^{ST}_\alpha$ for a given velocity $\vec v$: $$\label{eq:advec_st} \frac{\partial f^{ST}_\alpha}{\partial t} + \vec v\cdot\vec\nabla_r f^{ST}_\alpha = 0,$$ whose exact solution is given by: $$\label{eq:soluce_advec} f_\alpha^{ST}(\vec r,\vec v, t+\Delta t) = f_\alpha^{ST}(\vec r-\vec v \Delta t,\vec v, t).$$ Thus, solving Eq. (\[eq:advec\_st\]) amounts to interpolating on the whole phase space. We thus start from a given point $(r,v,\theta)$ of the phase space, $v,\theta$ being chosen on the polar suprathermal velocity grid. We have to compute the transformation of the suprathermal phase-space coordinates $r,v,\theta$ during one time step $\Delta t$.
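Before detailing the grid-specific formulas, here is a minimal sketch of these backward characteristics (a single, spatially uniform bulk velocity $V_0$ is assumed for simplicity; the actual scheme recenters the polar grid on the local $V_0(r_i)$):

```python
import numpy as np

def advect_point(r, v, theta, V0, dt):
    """Backward characteristics for free streaming in spherical symmetry:
    project the polar velocity (centered on the bulk velocity V0) onto the
    cylindrical basis, trace (r, v_r, v_perp) back over dt along a straight
    ray, and return the departure point in (r, v, theta) coordinates."""
    vr = V0 + v * np.cos(theta)
    vperp = v * np.sin(theta)
    v_lab = np.hypot(vr, vperp)                  # lab-frame speed, conserved
    r_old = np.sqrt(r**2 - 2.0 * r * vr * dt + (v_lab * dt) ** 2)
    vr_old = (r * vr - v_lab**2 * dt) / r_old
    vperp_old = r * vperp / r_old                # angular momentum r*v_perp conserved
    v_old = np.hypot(vr_old - V0, vperp_old)
    th_old = np.arccos(np.clip((vr_old - V0) / v_old, -1.0, 1.0))
    return r_old, v_old, th_old
```

For $V_0 = 0$ the speed $v$ is exactly conserved along the characteristic, which provides a quick sanity check of the transformation.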
Since the suprathermal velocity grid is centered on the mean bulk velocity $V_0$, we first project the polar velocity coordinates on the cylindrical basis: $$\label{eq:proj1} v_r=V_0+v\cos\theta, \qquad v_\bot = v\sin\theta.$$ Then, we apply the following transformations on $r,v_r,v_\bot$ over one time step $\Delta t$: $$\label{advvr} r(t-\Delta t) = \left[r(t)^2-2r(t)v_r(t)\Delta t+ v^2\Delta t^2 \right]^{1/2},\quad v_r(t-\Delta t) = \frac{r(t)v_r(t)- v^2\Delta t}{r(t-\Delta t)},\quad v_\bot(t-\Delta t) = \frac{r(t)v_\bot(t)}{r(t-\Delta t)},$$ which gives us the advected point in phase space. For the interpolation in space, we have to find the two consecutive nodes $r_{i_0}$ and $r_{i_0+1}$ of the spatial mesh such that $r_{i_0} \leq r(t-\Delta t) \leq r_{i_0+1}$. Then, for each spatial node $r_{i_0}$ (respectively $r_{i_0+1}$), we have to carry out an interpolation of $f_\alpha^{ST}$ on the polar suprathermal velocity grid centered on the local mean bulk velocity $V_0(r_{i_0})$ (respectively $V_0(r_{i_0+1})$). We thus calculate: $$\label{advv} v(t-\Delta t) = \left[\left(v_r(t-\Delta t)-V_0(r_i) \right)^2+v_\bot^2(t-\Delta t)\right]^{1/2} , \quad \theta(t-\Delta t) = \cos^{-1}{\frac{v_r(t-\Delta t)-V_0(r_i)}{v(t-\Delta t)}},$$ for $i=i_0$ and $i=i_0+1$. We then interpolate on the nodes of the suprathermal velocity grid centered on $V_0(r_i)$, using a simple linear interpolation method. This gives us the advected points: $$f_{i_0}=f_\alpha^{ST}(r_{i_0},v(t-\Delta t),\theta(t-\Delta t),t-\Delta t),\quad f_{i_0+1}=f_\alpha^{ST}(r_{i_0+1},v(t-\Delta t),\theta(t-\Delta t),t-\Delta t).$$ The final stage is a cubic interpolation with respect to space: $$f_\alpha^{ST}(r(t-\Delta t),v(t-\Delta t),\theta(t-\Delta t),t-\Delta t) = f_{i_0}+p\delta rf_{i_0}^\prime + p^2[3\delta f-\delta r(2f_{i_0}^\prime+f_{i_0+1}^\prime)] + p^3[\delta r(f_{i_0}^\prime+f_{i_0+1}^\prime)-2\delta f]$$ with $ \delta r = r_{i_0+1} - r_{i_0}, \quad p=(r(t-\Delta t)-r_{i_0})/\delta r, \quad \delta f = f_{i_0+1}-f_{i_0}$.
In this equation, the spatial gradients $f_{i_0}^\prime$ and $f_{i_0+1}^\prime$ are evaluated by finite differences. The slopes are limited to prevent unphysical over/undershoots in the interpolation process. ### Acceleration {#sec472} The electric field effect on the $\alpha$ suprathermal component is modeled by: $$\label{accf} \frac{\partial f_\alpha^{ST}}{\partial t} + \frac{\vec{\mathcal E}_\alpha}{A_\alpha} \frac{\partial f_\alpha^{ST}}{\partial \vec v} = 0,$$ where the effective electrostatic field $\vec {\mathcal E}_\alpha$ is defined above. Here again, we use a method of characteristics to solve Eq. (\[accf\]), since an acceleration can be seen as an advection in velocity. The situation is simpler here, since we only have to carry out an interpolation in velocity on the suprathermal velocity grid. The process is repeated independently in each spatial cell. Chain of algorithms to solve the suprathermal Vlasov-Fokker-Planck problem {#sec48} -------------------------------------------------------------------------- We conclude this section by summarizing the sequence of algorithms that have been developed to solve the whole problem of creation, transport and collisional relaxation of $\alpha$ suprathermal particles, consistently with an ion-kinetic treatment of the plasma thermal bulk. In particular, we show how the algorithms related to the suprathermal component are linked with those dealing with the electron and thermal ion distribution functions. This constitutes the main loop of our kinetic code <span style="font-variant:small-caps;">Fuse</span>. For a global time step $\Delta t$, we apply the following splitting sequence: [**Step 1**]{} – *Electron conduction*\ We solve the conduction part of the electron energy equation, which takes the form of a pure diffusion (heat) equation, over the time $\Delta t/2$.
[**Step 2**]{} – *Acceleration*\ We accelerate the ion thermal distribution functions for the species D, T, $\alpha$ over the time $\Delta t/2$, and at the same time we solve the associated convective part, which enables us to improve the energy conservation between ions and electrons (see [@LAR03A]). Then, we accelerate the suprathermal $\alpha$ component. [**Step 3**]{} – *Advection*\ We carry out the advection of the thermal components for every ion species D, T, $\alpha$ as well as the suprathermal $\alpha$ component over the time $\Delta t/2$. [**Step 4**]{} – *Feeding the suprathermal component*\ The suprathermal $\alpha$ component is fed by the fusion reaction source term applied over the whole time step $\Delta t$. [**Step 5**]{} – *Suprathermal collisional relaxation*\ We next solve the suprathermal collisional part by applying the Locally Split Explicit (LSE) algorithm over the time step $\Delta t$. [**Step 6**]{} – *Feeding the thermal component*\ We apply the feeding term of the $\alpha$ thermal component by the suprathermal one over the time step $\Delta t$. [**Step 7**]{} – *Thermal collisional relaxation*\ We perform the collisional relaxation of every ion thermal distribution function (for the ion species D, T, $\alpha$) on thermal ions and on electrons, applying the same algorithms as in <span style="font-variant:small-caps;">Fpion</span>. Note that the collisional relaxation of the ion distribution functions on themselves is non-linear and is solved using Crank-Nicolson iterations with an ADI scheme (see the Appendix of [@CHE979]). [**Step 8**]{} – *Advection*\ Step 3 is repeated for another $\Delta t/2$. [**Step 9**]{} – *Acceleration*\ Step 2 is repeated for another $\Delta t/2$. [**Step 10**]{} – *Electron conduction*\ Step 1 is repeated for another $\Delta t/2$. After each modification of the ion distribution functions (thermal or suprathermal), the ion moments as well as the slowing-down and diffusion coefficients are updated consistently.
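The symmetric splitting sequence above can be sketched as the following driver loop; the stage names are placeholders standing in for the actual Fuse solvers, recorded here only to make the ordering explicit:

```python
# Placeholder stages that just record the splitting sequence; in Fuse each
# of these is a full solver acting on the distribution functions.
log = []
def stage(name):
    return lambda state, dt: log.append((name, dt))

electron_conduction = stage("conduction")
accelerate = stage("acceleration")
advect = stage("advection")
feed_suprathermal = stage("feed_ST")
st_collisions_lse = stage("LSE_collisions")
feed_thermal = stage("feed_T")
thermal_collisions = stage("thermal_collisions")

def advance_one_step(state, dt):
    """One global time step of the symmetric splitting sequence (steps 1-10):
    half-steps of conduction/acceleration/advection wrap the full-step
    source and collision stages and are mirrored on the way out."""
    electron_conduction(state, dt / 2)   # step 1
    accelerate(state, dt / 2)            # step 2
    advect(state, dt / 2)                # step 3
    feed_suprathermal(state, dt)         # step 4
    st_collisions_lse(state, dt)         # step 5
    feed_thermal(state, dt)              # step 6
    thermal_collisions(state, dt)        # step 7
    advect(state, dt / 2)                # step 8
    accelerate(state, dt / 2)            # step 9
    electron_conduction(state, dt / 2)   # step 10

advance_one_step(None, 0.05)
```

The mirrored ordering is the standard way to make a split scheme second-order accurate in the global time step.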
Validation of the code by test problems {#sec49} --------------------------------------- In this section, we apply the algorithms developed above to model the collisional relaxation and thermalization of $\alpha$-particles in simplified configurations where analytical results are known. ### Isotropic time-dependent test problem {#sec491} In this first test problem, we consider the collisional relaxation of fast $\alpha$-particles in a homogeneous and steady plasma made of one mean ion species $Z_i=1,A_i=2.5$ and electrons. The reference density is $n_i=n_e=10^{22}$ particles/cm$^3$, and the temperature is 1 keV. We keep these conditions constant during the test problem calculation. Suprathermal $\alpha$ particles are then injected isotropically at the energy 3.52 MeV at a steady rate $S_0$ (particles cm$^{-3}$ s$^{-1}$), so that the suprathermal component remains isotropic during the slowing-down process. Following our two-scale approach, the $\alpha$ distribution function $f_\alpha(v,t)=f_\alpha^{ST}(v,t)+f_\alpha^{T}(v_r,v_\bot,t)$ is the solution of: $$\begin{aligned} \label{eq:system_ST_T_tp1} && \left.\partial_t f_\alpha^{ST}\right.= \Gamma_{\alpha i} \frac{n_i}{v^2} \partial_{v} f_\alpha^{ST} + \frac{1}{\tau_{\alpha e} v^2}\partial_v\left(v^3 f_\alpha^{ST}\right)- 4\pi n_i\Gamma_{\alpha i} f_\alpha^{ST} \frac{\delta(v)}{4\pi v^2}+\frac{{S_0}\delta(v-v_h)}{4\pi v^2}, \nonumber\\ && \left.\partial_t f_\alpha^T\right. = \left.\partial_t f_\alpha^T\right|_{\alpha i} + \left.\partial_t f_\alpha^T\right|_{\alpha e}+ 4\pi\Gamma_{\alpha i} f_i f_\alpha^{ST}(0).\end{aligned}$$ $\left.\partial_t f_\alpha^T\right|_{\alpha i}$ (resp. $\left.\partial_t f_\alpha^T\right|_{\alpha e}$) corresponds to the collisional term of the thermal ions (resp. electrons) acting on the $\alpha$ thermal particles.
In these conditions, the characteristic velocity scales, expressed in cm/s, are: $$v_i^{th} \sim 3.0\times 10^7 \ll v_c \sim 1.1\times 10^8 \ll v_h\sim 1.3\times 10^9 < v_e^{th} \sim 4.2\times 10^9.$$ For $v>v_{c}$, the slowing down of $\alpha$-particles is mainly due to the Coulomb collisions with electrons. The suprathermal component $f_\alpha^{ST}(v,t)$ then tends to the stationary solution of: $$\partial_t f_\alpha^{ST} = \frac{1}{\tau_{\alpha e} v^2}\partial_v\left(v^3 f_\alpha^{ST}\right)+\frac{{S_0}\delta(v-v_h)}{4\pi v^2}.$$ The stationary solution is given by: $$\label{eq:statio_tp1} f_1(v) = \frac{S_0\tau_{\alpha e}}{4\pi v^3}\mathcal{H}(v_h-v), \quad v>v_c,$$ where $v_h \sim 1.3\times 10^9$ cm/s is the velocity of the $\alpha$-particles injected at 3.52 MeV, and $\mathcal{H}$ is the Heaviside distribution. We plot $f_\alpha^{ST}(v,t)$ calculated by <span style="font-variant:small-caps;">Fuse</span> at different times as well as the stationary analytical solution given by (\[eq:statio\_tp1\]) (see Fig. \[ftp1\]). The numerical solution agrees with (\[eq:statio\_tp1\]) as long as $v>v_c$. When $v<v_c$, ions tend to dominate the slowing down of the $\alpha$-particles and the suprathermal component solution of (\[eq:system\_ST\_T\_tp1\]) tends to a stationary state that is almost constant close to the thermal velocity region. This is due to the removal of the term $\propto f_\alpha^{ST} n_i\delta^3(\vec v)$ in the collision term governing the slowing down of $f_\alpha^{ST}$. The suprathermal component actually feeds the thermal one, the feeding process being driven by the source term $\propto f_\alpha^{ST}(v=0)f_i$.
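The electron-drag stationary profile $f \propto S_0\tau_{\alpha e}/v^3$ below the injection velocity can be checked with a small explicit upwind integration; this is a sketch in arbitrary units, not the Fuse discretization:

```python
import numpy as np

# Explicit upwind relaxation toward the electron-drag stationary spectrum
# f(v) = S0 * tau / (4*pi*v^3) for v < v_h. Arbitrary units; the grid and
# time step are chosen for stability, not accuracy.
tau, S0, v_h = 1.0, 1.0, 1.0
v = np.linspace(0.05, 1.2, 400)
dv = v[1] - v[0]
f = np.zeros_like(v)
j_h = np.argmin(np.abs(v - v_h))        # cell receiving the monoenergetic source

dt = 0.4 * tau * dv / v[-1]             # explicit stability bound for the drag term
for _ in range(20000):
    F = v**3 * f / tau                  # slowing-down flux v^3 f / tau
    df = np.zeros_like(v)
    df[:-1] = (F[1:] - F[:-1]) / (v[:-1]**2 * dv)   # upwind: drag carries f toward low v
    df[j_h] += S0 / (4.0 * np.pi * v[j_h]**2 * dv)  # source at v = v_h
    f += dt * df

mask = (v > 0.3) & (v < 0.9 * v_h)
analytic = S0 * tau / (4.0 * np.pi * v[mask]**3)
err = np.max(np.abs(f[mask] - analytic) / analytic)
```

At steady state the discrete flux telescopes to a constant below the source cell, so the numerical spectrum matches the $1/v^3$ law essentially exactly away from the injection cell.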
The thermal component subsequently evolves towards a Maxwellian characterized by the total density $n_\alpha$ of $\alpha$ particles injected in the system, and the reference temperature $T_0$ (which is kept constant during the test problem calculation): $$\label{eq:gauss_tp1} \mathcal{M}_\alpha(v)=n_\alpha \left(\frac{m_\alpha}{2\pi T_0}\right)^{3/2}\exp\left(-\frac{m_\alpha v^2}{2T_0}\right).$$ The total density is given by: $$n_\alpha = \int_0^{\tau_s} S_0 \,dt,$$ $\tau_s$ being the time when the source is shut down. The convergence to the Gaussian (\[eq:gauss\_tp1\]) is represented in Fig. \[ftp1\]. Note that this convergence is calculated on the refined thermal grid. The $\alpha$ thermal component is fed by a source term $\propto f_i$, of width $\sim\sqrt{T_0/m_i}$, and relaxes on the thermal grid towards the Gaussian of width $\sim\sqrt{T_0/m_\alpha}$. ### Anisotropic time-dependent test problem {#sec492} We next consider an anisotropic test problem, with an initial $\alpha$ suprathermal component highly localized in velocity space. Namely, we take: $$f_\alpha^{ST}(v,\theta,t=0) = n_\alpha \frac{\delta(v-v_0)}{4\pi v^2}\delta(\cos\theta-\cos\theta_0),$$ with $v_0=1.3\times10^9$ cm/s and $\theta_0=\pi/4$. We then let the suprathermal $\alpha$ distribution slow down on the electrons and on the thermal ions. As previously, the thermal plasma is homogeneous and made of one ion species $Z_i=1,A_i=2.5$ and electrons. The temperature of the thermal plasma is kept constant during the calculation: we take $T_0=5$ keV. In these conditions, the characteristic velocity scales are (in cm/s): $$v_{th,i} \sim 6.9\times 10^7 \ll v_c \sim 2.4\times 10^8 \ll v_0\sim 1.3\times 10^9 < v_{th,e} \sim 9.4\times 10^9.$$ The evolution of the $\alpha$ distribution function is represented in Fig. \[tp\_fdhot\_2\]. As long as $v>v_c$, the momentum and energy losses by the fast ions to the background plasma electrons are the dominant process.
The distribution function remains highly localized in velocity space around a velocity $v_{\mbox{b}}(t)$ that declines due to the slowing down on electrons. The velocity of the bulk $v_{\mbox{b}}(t)$ can be calculated analytically [@RAX]: $$\label{eq:bulk} v_{\mbox{b}}(t)=\left[(v_0^3+v_c^3)\exp\left(-\frac{3t}{\tau_{\alpha e}}\right)-v_c^3\right]^{1/3}.$$ The comparison between the code and the exact solution is represented in Fig. \[vmax\_2\] and reveals very good agreement as long as $v>v_c$. Then, for $v\leq v_c$, the energy diffusion process as well as the perpendicular diffusion due to the thermal ions become significant. The $\alpha$ distribution function is scattered in the $\theta$ direction, due to the diffusion on the thermal ions, which intensifies as $v\to 0$. Consequently, as $v\to 0$, the $\alpha$ suprathermal distribution tends to become isotropic while feeding the thermal component. Finally, the thermal component converges towards the Gaussian, as in the first test problem. To model properly what happens in the vicinity of thermalization, for $v\sim v_i^{th}$, we solve the full Coulomb operator applied to the $\alpha$ thermal component $f_\alpha^T$, which evolves on the refined thermal grid. This guarantees a proper modeling of the thermalization of the $\alpha$ distribution function, as it slows down, scatters and diffuses in energy in joining up with the background thermal ions. [![\[tp\_fdhot\_2\] $\alpha$ suprathermal distribution solution of the anisotropic test problem at different times. Final stages of collisional relaxation. The values of the distribution function are expressed in cgs units. ](seq_fdhot_tpaniso_testo1.ps "fig:"){width="\textwidth"}]{} [![\[vmax\_2\] Time evolution of the velocity corresponding to the maximum of the $\alpha$ suprathermal distribution function in the anisotropic test problem.
](comp_vmax_time.eps "fig:")]{} ### Energy conservation {#sec493} We finally consider a full collisional relaxation process, starting from an isotropic $\alpha$ suprathermal component that slows down through collisions on the electrons and the thermal ions. In this test problem, the electron (resp. ion) temperatures evolve consistently with the slowing down of the suprathermal particles. More precisely, for $v>v_c$, suprathermal particles slow down essentially on electrons. The electron temperature thus increases. Then, due to the collisional relaxation of thermal ions with electrons, the thermal ion temperature increases. When the suprathermal particles reach the thermal velocity region, the $\alpha$ thermal component builds up and the collisional relaxation between electrons and thermal ions (including the $\alpha$ thermal component) brings the system to a stationary state. The aim of this test problem is to illustrate that the way we solve the coupling between the suprathermal component and the thermal background ensures the conservation of mass and energy. We check that the total mass remains constant (with a numerical error of less than 1% due to the finite size of the velocity mesh). We plot the time evolution of the temperatures (electrons, thermal background ions and $\alpha$ thermal component) in Fig. \[massener\]. We show how the system evolves naturally to the stationary state calculated by the algorithm described above. The total energy variation of the system between the initial state and the final stationary state is less than 1%. Our original algorithm, based on a two-scale approach to model the collisional relaxation between suprathermal particles and the thermal background, is thus validated on simplified test problems where exact results are known. Besides, the mass and energy conservation principles are fulfilled at the discrete level. We can consider that our code <span style="font-variant:small-caps;">Fuse</span> is reliable.
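The bulk-velocity law (\[eq:bulk\]) used in the anisotropic test problem is simple enough to evaluate directly; a sketch in cgs units, with $\tau_{\alpha e}$ left as a free parameter:

```python
import numpy as np

def v_bulk(t, v0, vc, tau_ae):
    """Bulk velocity of a monoenergetic fast-ion population slowing down on
    electrons and ions, Eq. (bulk): v_b^3 + v_c^3 decays as exp(-3t/tau_ae)."""
    return ((v0**3 + vc**3) * np.exp(-3.0 * t / tau_ae) - vc**3) ** (1.0 / 3.0)

v0, vc, tau = 1.3e9, 2.4e8, 1.0        # cm/s, cm/s, arbitrary time unit
t_stop = (tau / 3.0) * np.log(1.0 + (v0 / vc) ** 3)   # v_b reaches zero here
```

The thermalization time $t_{\rm stop}$ follows by setting $v_{\rm b}=0$ in (\[eq:bulk\]), which is a convenient cross-check on the slowing-down time scale.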
We then apply it to realistic target configurations. Application on the ignition and thermonuclear burn of typical ICF capsules {#sec5} ========================================================================== We apply the numerical scheme presented in Sec. \[sec4\] to model a typical spherical implosion of a cryogenic DT capsule. Our code allows us to study ion-kinetic effects during the ignition stage and the beginning of the thermonuclear burn stage. Initial conditions {#sec51} ------------------ We consider the same fluid reference simulation as in [@LAR03A], corresponding to an ICF target with parameters typical of the ignition capsules designed for the LMJ and NIF lasers [@SAI001; @BRA012]. Namely, we consider a 0.3 mg cryogenic DT layer deposited on the inner surface of a CH shell of a 1 mm (inner) radius. The kinetic calculation is started at $t=17$ ns after the beginning of the implosion, when the main converging shock reaches the center of the target. The boundary condition is taken from the hydrodynamic simulation: the densities, temperatures and velocities are recorded on the fuel/pusher interface in the fluid simulation. The kinetic simulation considers three ion species, namely D, T and $\alpha$. Initially, only the thermal species D and T are present. They produce suprathermal $\alpha$ particles through fusion reactions. The relaxation of the suprathermal $\alpha$ component then leads to the creation of an $\alpha$ thermal component interacting with the other thermal ion distribution functions (D and T, respectively). Note that the thermal bulk is described in more detail than in [@LAR03A], where a single mean ion species with a mass number of 2.5 was considered. In our kinetic simulation, the position of each spatial mesh is updated after each time step with respect to the imposed boundary condition and to the fixed number of spatial meshes $i_{\max}$. This updating is performed before each advection phase.
This means that the position of a given spatial cell $r_{i_0}$, with $1\leq i_0 \leq i_{\max}$, is time dependent, decreasing with the size of the imploding system. To represent in a satisfactory manner both the dense region, where the fluid simulation grid is the finest, and the central zone, where it is rather coarse, we employ 78 cells with a geometrically varying mesh size (with the ratio 0.97), so that the mesh size $\delta r$ decreases from 20 $\mu$m near the center to less than one micron near the outer boundary. The thermal velocity space $(v_r,v_\bot)$ is discretized into $129\times 64$ cells, whereas the suprathermal velocity grid $(v,\theta)$ makes use of $100\times60$ cells. The reference time-step value is 0.05 ps. Comparison with <span style="font-variant:small-caps;">Fpion</span> and <span style="font-variant:small-caps;">FCI1</span> -------------------------------------------------------------------------------------------------------------------------- To validate the thermal part of our code <span style="font-variant:small-caps;">Fuse</span>, we compare the density, velocity and temperature profiles with the hydrodynamic code <span style="font-variant:small-caps;">FCI1</span> as well as with the kinetic code <span style="font-variant:small-caps;">FPion</span> at two different times of the implosion: - [at $t=17.1$ ns, that is to say 100 ps after the beginning of the kinetic calculation. We find good agreement between the <span style="font-variant:small-caps;">Fuse</span> kinetic calculation and the <span style="font-variant:small-caps;">FCI1</span> fluid simulation (Fig. \[hydr100\]). The kinetic modeling reveals a significant anisotropy of the ion temperatures (and pressures), similar to the one observed with <span style="font-variant:small-caps;">FPion</span> [@LAR03A].
The anisotropy then tends to disappear during the implosion.]{} - [At $t=17.65$ ns, in the vicinity of the target stagnation, <span style="font-variant:small-caps;">Fuse</span> and <span style="font-variant:small-caps;">FCI1</span> are still in good agreement. However, we note that the compression zone near the inner interface of the dense fuel lies closer to the target center in the kinetic calculation (see the negative velocity gradient region about $r = 70\ \mu$m on the right part of Fig. \[hydr650\]). This result has already been obtained with <span style="font-variant:small-caps;">FPion</span> and discussed in [@LAR03A]. It is related to a higher ion heat flux, which tends to increase the rate of ablation of the cold fuel by the hot spot. ]{} As long as $t\leq 17.65$ ns, the number of $\alpha$-particles is small, so that the above comparisons between the codes <span style="font-variant:small-caps;">Fuse</span> and <span style="font-variant:small-caps;">Fpion</span> (which does not take $\alpha$-particles into account) are relevant and tend to validate the methods programmed in <span style="font-variant:small-caps;">Fuse</span> regarding the thermal background (thermal ions and electrons). ![\[hydr100\]Profiles of the density, velocity and of the electron and total ion temperatures in a DT ignition target at the time $t=17.1$ ns, which corresponds to 100 ps after the beginning of the kinetic calculation and roughly 1 ns before the target stagnation.](comp_synth1_t100.eps){width="65.00000%"} ![\[hydr650\]Profiles of the density, velocity and of the electron and total ion temperatures in a DT ignition target at the time $t=17.65$ ns, which corresponds to 650 ps after the beginning of the kinetic calculation. This time is also just before the target stagnation.](comp_synth1_t650.eps){width="65.00000%"} Transport of $\alpha$ particles {#sec52} ------------------------------- We analyze the transport of suprathermal $\alpha$ particles throughout the capsule.
Figure \[fig:nacnah\] shows the spatial density profiles during the implosion for the suprathermal and thermal components of the $\alpha$-particles. At early times, suprathermal $\alpha$-particles are produced in the hot central region of the capsule and deposit their energy in the surrounding cold shell. The region corresponding to the suprathermal $\alpha$ energy deposition is indicated by a sharp decrease of the suprathermal density profile. This occurs at a distance which corresponds to the collisional mean free path of the suprathermal $\alpha$ particles. Meanwhile, the slowing down of the suprathermal $\alpha$ particles feeds the thermal component, a process that corresponds to the bump observed in the thermal $\alpha$ density profiles (Figure \[fig:nacnah\], right). During the implosion, the $\alpha$ collisional mean free path decreases, so that the $\alpha$ suprathermal particles are trapped within a smaller radius. In the meantime, the production of suprathermal $\alpha$-particles intensifies due to the increasing ion temperature. As a result, the suprathermal $\alpha$ density increases. ![Density profiles of suprathermal (left) and thermal (right) $\alpha$ particles. The initial time (i) corresponds to $t=17.1$ ns and the final time (f) to $17.87$ ns. The time interval between two consecutive profiles is 50 ps.[]{data-label="fig:nacnah"}](nah_b1.eps "fig:"){width="45.00000%"} ![Density profiles of suprathermal (left) and thermal (right) $\alpha$ particles. The initial time (i) corresponds to $t=17.1$ ns and the final time (f) to $17.87$ ns. The time interval between two consecutive profiles is 50 ps.[]{data-label="fig:nacnah"}](nac_b1.eps "fig:"){width="45.00000%"} Collisional relaxation of suprathermal $\alpha$ particles {#sec53} --------------------------------------------------------- ### Anisotropy in the suprathermal region {#sec531} In this section, we focus on the collisional relaxation of the suprathermal $\alpha$ component.
We consider a given spatial cell, numbered $i_0$, whose position evolves during the implosion. The distribution function of the $\alpha$-particles $f_\alpha^{ST}(r_{i_0}(t),v,\theta,t)$ is presented in figure \[fdhot\_1\]. [![\[fdhot\_1\] $\alpha$ suprathermal distribution observed in a given mesh of the imploding hot spot at different times. The simulation takes into account the creation, the transport and the collisional relaxation of $\alpha$ particles. The values of the distribution function are expressed in cgs units. Times refer to the beginning of the kinetic calculation. ](seq_fdhot_ix17_testo6im.ps "fig:"){width="\textwidth"}]{} The suprathermal distribution function is rather anisotropic. It is highly peaked toward positive velocities $v_r > 0$. This can be explained by the inhomogeneous fusion reaction source term, which strongly depends on the local ion temperature. Since $T_i$ is more peaked towards the center of the capsule, as can be seen in the temperature profiles of figure \[hydr100\], an observer located outside of the highly emissive central region sees the suprathermal $\alpha$-particles passing from the center to the outside. That leads to the local distribution shape shown in the top panel of figure \[fdhot\_1\]. The spatial gradient of the fusion reaction source term thus accounts for the anisotropy of the suprathermal $\alpha$ distribution function. Let us consider the cell $i_0$ with radius such that $r_{i_0}(t) = \lambda_\alpha(\rho(t))$, where $\lambda_\alpha$ is the collisional mean free path of a suprathermal $\alpha$ particle and $\rho$ the mean density of the capsule. As $\alpha$-particles deposit their energy in the considered spatial cell $i_0$, which corresponds to the sequence shown in figure \[fdhot\_1\], the suprathermal $\alpha$ distribution function slows down significantly towards the thermal velocity region.
During this slowing-down process, the distribution function tends to spread over a wider domain in the polar angle $\theta$. This is a consequence of the diffusion part of the Fokker-Planck equation, which leads to a mainly transverse slowing-down current that intensifies close to the thermal velocity region. To check that the collisional module of the code behaves correctly in a real target configuration, we artificially switch off the effect of the advection and acceleration on the $\alpha$ suprathermal component, so that the time evolution is driven by the collisions on electrons and thermal ions only. The corresponding time evolution is represented in Fig. \[fdhot\_2\]. This numerical test is close to the third test problem presented in Sec. 4.9.3, but is carried out in thermodynamic conditions corresponding to a realistic ICF target configuration. The suprathermal particles are initially distributed anisotropically in velocity space, as shown in Fig. \[fdhot\_2\] (top left). For $v\geq v_c\sim 3-4\, v_i^{th}$, fast ions mostly slow down by collisional drag on the background electrons with very little pitch-angle scattering. The fast ions stay mostly in their original pitch-angle direction. For $v\leq v_c$, the suprathermal particles slow down predominantly on the thermal background ions and scatter in pitch angle. The suprathermal distribution function tends to be isotropic as it approaches the thermal velocity region. The suprathermal grid resolution is fine enough to represent the variations of the suprathermal component, which tends to become constant as it gets closer to the thermal velocity region. [![\[fdhot\_2\] $\alpha$ suprathermal distribution observed in a given mesh of the imploding hot spot at different times, when only the collisional relaxation is considered, starting from a given anisotropic initial state. Times refer to the beginning of the kinetic calculation.
](seq_fdhot_sansadvec_testo6im.ps "fig:"){width="\textwidth"}]{} ### Feeding the thermal component {#sec532} When the slowed-down suprathermal $\alpha$-particles reach the thermal velocity region, a fraction of $\alpha$-particles is removed from the suprathermal component, to feed the thermal component according to Eq.. The sequences represented in figures \[fdhot\_1\]-\[fdhot\_2\] illustrate this coupling from the suprathermal component point of view. The distribution function remains stable, while the particles are accumulating in the vicinity of the thermal region. Without the removal term on the right-hand side of Eq., the suprathermal distribution function would have become unstable as $v\to V_0$. The evolution of the thermal component of the $\alpha$-particle distribution function is represented in figure \[fdcold\]. It shows how the thermal component builds up. Ignition and burning wave propagation ------------------------------------- We finally give the density, velocity and temperature profiles calculated by <span style="font-variant:small-caps;">Fuse</span> and compare the results with the fluid code at the time $t=17.85$ ns (Fig. \[kinhydr850\]). After that time, corresponding to the arrival of the flame near the outermost cells, the kinetic simulation may not be relevant since the boundary condition (which comes from the hydrodynamic calculation) may not be consistent with the pressure calculated by the kinetic code. In the kinetic calculation, the heating of the hot spot appears to be faster than in the fluid code. This is consistent with the differences observed during the implosion phase, where the dense zone corresponding to the ablated cold fuel was imploding faster in the kinetic calculation. Besides, the kinetic ion temperature profile displays a preheating wave ahead of the main temperature front. This is especially visible in the ion temperature profiles of Fig. \[kinhydr850\]. 
This structure is related to the Bragg peak of the D,T ions located in the dense cold fuel. Suprathermal $\alpha$-particles are created mainly in the central hot spot and deposit their energy and momentum near the inner interface of the cold fuel, where the thermal ion heating occurs. This interpretation will be examined more closely with future kinetic calculations of different target designs (that may be less efficient than the one considered here). By applying the efficient algorithm (based on a 2-scale approach) presented and validated in Sec. 4 on real target configurations (that could not be solved analytically), the code <span style="font-variant:small-caps;">Fuse</span> is able to simulate the fuel of real ICF targets at a kinetic level over a time corresponding to 1 ns after the start of the implosion. One thus models the ignition and the beginning of the burning wave propagation. Besides, by making use of a parallelization method for the collisional part of the code (which is possible since we can calculate the effect of collisions in each spatial cell independently of the others), it takes less than 1 day of computation time, which is roughly twice as long as the usual simulations performed by <span style="font-variant:small-caps;">Fpion</span> (corresponding to the implosion phase without $\alpha$-particles). Summary and perspectives {#sec6} ======================== We have developed a numerical strategy to model fast $\alpha$-particles produced by fusion reactions at an ion-kinetic level. A two-scale approach has been specially tailored to represent the two-component nature of the $\alpha$ distribution function and simulate the thermalization process accurately. 
Efficient algorithms have been designed to simulate the time evolution of the fast $\alpha$ component, driven by the transport in the inhomogeneous thermal plasma as well as the Coulomb collisional relaxation on electrons and ions. The energy and momentum exchange between fast fusion products and the thermal plasma is thus calculated at the kinetic level. The methods have been tested in thermodynamic conditions corresponding to typical DT targets close to ignition. It has been shown that a locally split explicit scheme can be used to describe the fast $\alpha$ population evolution in non-prohibitive computational time. Besides, the algorithms presented here are easily parallelizable to take advantage of present-day multi-core architectures. The ion-kinetic code <span style="font-variant:small-caps;">Fuse</span>, built as an extension of the former code <span style="font-variant:small-caps;">FPion</span>, is thus able to model a full DT target implosion, including the ignition and burn processes, at an ion-kinetic level. Investigating in more detail the role of kinetic effects of fusion products in the ignition and burn of DT targets is the purpose of ongoing work and will be published elsewhere [@future]. In particular, we plan to study implosions in the vicinity of the ignition threshold, where kinetic effects should be enhanced and may modify the energy gain. Finally, the algorithms developed here may be naturally extended to add the effect of Boltzmann-type large-angle scattering, which would feed a suprathermal component for the D,T ions. Neutron momentum and energy deposition may be modeled in a similar way. **Acknowledgments.** The authors are grateful to Professors Xavier Blanc, Josselin Garnier, Rémi Sentis and Gerald Samba for fruitful discussions on the subject. [2]{} J. D. Lindl, Inertial Confinement Fusion – The quest for ignition and energy gain using indirect drive, Springer Verlag, New York, 1998. S. Atzeni and J. 
Meyer-ter-Vehn, The physics of inertial fusion, Oxford, Oxford University Press, 2004. G. S. Fraley, E. J. Linnebur, R. J. Mason, R. L. Morse, Phys. Fluids 17 (1974) 474. M. Casanova, O. Larroche, J.-P. Matte, Phys. Rev. Lett. 67 (1991) 2143. F. Vidal, J.-P. Matte, M. Casanova, O. Larroche, Phys. Rev. E 52 (1995) 4568. O. Larroche, Eur. Phys. J. D 27 (2003) 131. O. Larroche, Phys. Fluids B 5 (1993) 2816. P. A. Haldy, J. Ligou, Nucl. Fusion 17 (1977) 6. E. G. Corman, W. E. Loewe, G. E. Cooper, A. M. Winslow, Nucl. Fusion 15 (1975) 377. G. C. Pomraning, Nucl. Sci. Eng. 85 (1983) 116. J. J. Honrubia, Nuclear fusion by inertial confinement - A comprehensive treatise, G. Velarde, Y. Ronen, J. M. Martinez-Val (CRC Press, Boca Raton, Florida, 1993), chap. 9, p. 211. B. Lapeyre, E. Pardoux, R. Sentis, Monte Carlo methods for transport and diffusion equations, Oxford University Press, 2003. F. Chaland, R. Sentis, Int. J. Numer. Meth. Fluids 56 (2008) 1489. T. A. Mehlhorn, J. J. Duderstadt, J. Comput. Phys. 38 (1980) 1. J. Killeen, K. D. Marx, Meth. Comput. Phys. 9 (1970) 422. R. Duclous, J.-P. Morreeuw, V. T. Tikhonchuk, B. Dubroca, Laser and Particle Beams 28 (2010) 165. M. N. Rosenbluth, W. M. MacDonald, D. L. Judd, Phys. Rev. 107 (1957) 1. S. I. Braginskii, Transport Processes in a Plasma, Reviews of Plasma Physics, V. 1, M. A. Leontovich ed. Consultants Bureau, New York, 1965, p. 205. L. Spitzer, R. Härm, Phys. Rev. 89 (1953) 977. C. Chenais-Popovics [*et al.*]{}, Phys. Plasmas 4 (1997) 190. O. Larroche, J. Comput. Phys. 223 (2007) 436. W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical recipes in C, Cambridge University Press, Cambridge, 1992. Y. Saillard, C. R. Acad. Sci. Paris t. 1 sér. IV (2000) 705. P. A. Bradley, D. C. Wilson, Phys. Plasmas 8 (2001) 3724 and references therein. J. M. Rax, Physique des Plasmas, Chapter 12, Dunod, 2008. B. E. Peigney, O. Larroche, V. Tikhonchuk, in preparation.
--- abstract: 'We determine analytically the distribution of conductances of quasi one-dimensional disordered electron systems, neglecting electron-electron interaction, for all strengths of disorder. We find that in the crossover region between the metallic and insulating regimes, $P(g)$ is highly asymmetric. The average and the variance of $P(g)$ are shown to agree with exact results.' author: - 'P. Wölfle$^{1,3}$, and K. A. Muttalib$^{2,3}$' title: | Conductance distribution of disordered quasi\ one-dimensional wires --- Introduction {#intro} ============ The conductance $g$ (in units of $e^2/h$) of a mesoscopic disordered electron system is known to fluctuate strongly from sample to sample, or as a function of an external parameter such as a magnetic field or a gate voltage controlling the electron density \[1\]. In the metallic regime these fluctuations are universal and of Gaussian nature, i.e. the variance of $g$ is given by a pure number independent of the specifics of the system, depending only on the presence (or absence) of time reversal symmetry with respect to orbital or spin motion (orthogonal, unitary and symplectic cases) \[2\]. For increasing disorder the fluctuations grow and are no longer universal, Gaussian and symmetric about the average value. When the variance becomes as large as the average conductance it is necessary to consider the full distribution of conductances, $P(g)$. The situation is simple again in the localized regime, where $P(g)$ is known to be a log-normal distribution, with variance $\sim < \ell n (1/g)>$ \[3\]. Except for numerical studies of finite size systems \[4,5,6\] little is known about the conductance distribution in the crossover regime. These studies suggest that the distribution is highly asymmetric \[5\], with $-\ell nP(g)$ increasing like a power of $g$ for $g\rightarrow \infty$ and like $(\ell ng)^2$ for $g\rightarrow 0$ \[6\]. 
The shape of $P(g)$ in the crossover regime depends on the spatial dimension, but appears to be compatible with one-parameter scaling, and hence universality at a true metal-insulator transition in $d\ge 3$. On the other hand, analytical results for finite systems (length $L$) in $d = 2+\epsilon$ dimensions ($\epsilon \ll 1$), where a weak disorder approximation can be applied, showed that the higher moments of $P(g)$ are non-universal and diverge in the limit $L\rightarrow \infty$ \[7\]. It has been proposed, however, that these results are not incompatible with a universal distribution at the critical point, which was determined to be a Gaussian with power law tails \[8\]. This may seem surprising in view of the numerical results \[4,6\]. One should keep in mind, however, that in $d = 2+\epsilon$ dimensions the critical conductance at the transition is large, $<g>_c = 1/\epsilon \gg 1$, which is deep in the metallic regime and hence is quite different from the critical value $<g>_c \sim 1$ expected, e.g. in $d=3$ dimensions. Here we consider the conductance distribution for the simpler case of a quasi one-dimensional wire of width $W \ll \ell$, where $\ell$ is the mean free path due to elastic scattering, and length $L\gg \ell$. Although in this case (for orthogonal and unitary symmetry) all states are localized in the thermodynamic limit $L \rightarrow \infty$, for finite length $L$ the system exhibits well defined metallic and insulating regimes, and a smooth crossover between them. To be more precise, this is the case for a quantum wire with ideal leads of the same cross section, for which the perpendicular momenta at given energy $E_F$ are quantized into $N$ discrete levels, providing $N$ channels of transport. The localization length is $\xi = N\ell$ in this case, which for $N \gg 1$ allows a metallic regime to be realized in short wires $(L \ll \xi)$, whereas for long wires $(L \gg \xi)$ the system is of insulating character. 
For strictly one-dimensional weakly disordered systems the conductance distribution may be obtained analytically \[9\], but in this case a metallic regime is absent. The dimensionless conductance $g$ of a quantum wire can be expressed in terms of the N transmission eigenvalues $T_i$ of the corresponding scattering problem as $g = \Sigma_{i=1}^N T_i$ \[10\]. The joint probability distribution $P_T(\{T_i\})$ of the $T_i$ may be obtained \[11\] from a Fokker-Planck equation known as the Dorokhov-Mello-Pereyra-Kumar (DMPK) equation \[12\], in the limit of large $N$. The distribution $P_T(\{T_i\})$ depends only on the parameter $L/\xi$. The DMPK approach has been shown to be in agreement with the exact formulation of the problem in terms of a supersymmetric nonlinear sigma model \[13\]. Within the latter formulation the average and the variance of the conductance have been calculated for all values of $L/\xi$ \[14\]. To calculate the conductance distribution $P(g)$ from the joint distribution $P_T(\{T_i\})$ an N-fold integration is required, subject to the constraints $0 \le T_i \le 1$ and $\Sigma_i T_i = g$, which has only been done in the limiting cases of $L/\xi \ll 1$ (metal) and $L/\xi \gg 1$ (insulator). Here we describe a systematic and simple method, valid for all values of $L/\xi$, to obtain $P(g)$ from $P_T(\{T_i\})$ essentially analytically. We employ a generalized saddlepoint approximation, which recovers all the known results in the limiting cases, and provides results in the crossover regime in semiquantitative agreement with numerical data and with analytical results for the average and the variance of $g$. In particular, we find that $P(g)$ for $L/\xi \sim 1$ is given by a “one-sided” log-normal distribution for $g <1$, with a Gaussian tail at $g > 1$ \[15\]. 
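As a concrete illustration of the Landauer relation $g = \Sigma_{i=1}^N T_i$ used above, the following sketch draws a random unitary scattering matrix (a chaotic-cavity model from the circular unitary ensemble, chosen here for simplicity rather than a DMPK wire), extracts the transmission eigenvalues from the transmission block, and checks that $0 \le T_i \le 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # channels per lead

# Random 2N x 2N unitary scattering matrix (circular unitary ensemble)
z = (rng.normal(size=(2*N, 2*N)) + 1j*rng.normal(size=(2*N, 2*N))) / np.sqrt(2)
q, r = np.linalg.qr(z)
S = q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix -> Haar distributed

t = S[N:, :N]                            # transmission block
T = np.linalg.eigvalsh(t.conj().T @ t)   # transmission eigenvalues T_i
g = T.sum()                              # Landauer conductance [e^2/h]

assert np.all(T > -1e-9) and np.all(T < 1 + 1e-9)  # 0 <= T_i <= 1
print(g)                                 # of order N/2 for this ensemble
```

The joint statistics of the $T_i$ for a disordered wire would instead follow from the DMPK equation discussed in the text.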
Generalized saddlepoint approximation {#saddle} ===================================== It is useful to introduce variables $\lambda_i$ and $x_i$ defined by $T_i = (1 + \lambda_i)^{-1}$, $\lambda_i = \sinh^2 x_i$, in terms of which the conductance distribution may be represented as $$P(g) = \frac{1}{Z}\int_{-\infty}^\infty \frac{d\tau}{2\pi} e^{i\tau g} \int_{0}^\infty (\Pi_{i=1}^N d\lambda_i)\exp \Big[-F(\{\lambda_i\};\tau)\Big] \label{1}$$ The “free energy” $F$ for unitary symmetry (for orthogonal and symplectic symmetry the calculation is analogous) is obtained from the DMPK equation \[11\] $$F = 2\sum_i V(\lambda_i) + \sum_{i,j}u(\lambda_i,\lambda_j) + \sum_i \frac{i\tau}{1 + \lambda_i} \label{2}$$ where $u(\lambda_i,\lambda_j)$ is generated by the Jacobian of the integration over the transfer matrix elements and leads to “level repulsion”. Here one may interpret $V(\lambda_i) = (\xi/ 2L) x_i^2$ as a “one-body potential” and $u(\lambda_i,\lambda_j) = - \frac{1}{2} (u_1 +u_2)$, with $u_1(\lambda_i, \lambda_j)= \ell n\mid \lambda_i - \lambda_j\mid$ and $u_2(\lambda_i, \lambda_j) = \ell n \mid x_i^2 - x_j^2\mid$ as an “interaction potential” of charges at positions $\lambda_i$. In the metallic regime $V(\lambda)$ gives rise to a confinement of the charges in the regime $\lambda_i < 1$ (note $V(\lambda) \propto \lambda^2$, $\lambda < 1$), such that a description in terms of a charge density $\rho(\lambda)$ is appropriate. In the insulating regime $(V (\lambda) \sim \ell n^2\lambda, \lambda \gg 1)$ the logarithmic repulsion between the charges dominates the potential $V(\lambda)$, leading to an exponentially large separation between the charges, of which only the one closest to the origin is of importance.\ To capture both aspects we keep the first eigenvalue $\lambda_1$ separate and represent all the other eigenvalues by a continuum density $\rho(\lambda)$, beginning at a lower limit $\lambda_2 > \lambda_1$. 
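The change of variables introduced above can be verified symbolically: $T = (1+\sinh^2 x)^{-1} = \cosh^{-2}x$, and $x = \mathrm{arcsinh}\sqrt{\lambda} \simeq \frac{1}{2}\ell n\,\lambda$ for $\lambda \gg 1$, which is why the potential $V = (\xi/2L)x^2$ grows like $\ell n^2\lambda$ in the insulating regime. A small SymPy check (illustrative only):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
lam = sp.symbols('lam', positive=True)

# T = (1 + lambda)^(-1) with lambda = sinh^2 x reduces to T = 1/cosh^2 x
T_of_x = 1 / (1 + sp.sinh(x)**2)
assert sp.simplify(T_of_x - 1 / sp.cosh(x)**2) == 0

# For lambda >> 1, x = arcsinh(sqrt(lambda)) ~ (1/2) ln(lambda), so the
# confining potential V = (xi/2L) x^2 grows like ln^2(lambda)
x_of_lam = sp.asinh(sp.sqrt(lam))
ratio = (x_of_lam / sp.log(lam)).subs(lam, sp.Integer(10)**12)
print(sp.N(ratio))   # close to 1/2
```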
The free energy then takes the form $$\begin{aligned} F(\rho (\lambda); \lambda_1, \lambda_2;\tau) &=2\int_{\lambda_2}^\infty d\lambda \rho (\lambda)V_{tot}(\lambda) + 2V(\lambda_1) +\nonumber \\ & + \int_{\lambda_2}^\infty d\lambda d\lambda{'} \rho(\lambda)u(\lambda, \lambda{'})\rho(\lambda{'}) + \frac{i\tau}{1 + \lambda_1} \label{3}\end{aligned}$$ where $V_{tot}(\lambda) = V(\lambda) + u(\lambda,\lambda_1) + \frac{i\tau}{1 + \lambda}$. The integration over variables $\lambda_3, \ldots, \lambda_N$ in (1) is replaced by a functional integration $D[\rho(\lambda)]$. The latter is done in saddlepoint approximation, leading to the integral equation for $\rho(\lambda)$ $$\int_0^\infty d\zeta{'} [u_1(\zeta - \zeta{'}) + u_2(\zeta + \lambda_2, \zeta{'}+\lambda_2)]\rho (\zeta{'} + \lambda_2) = 2V_{tot}(\zeta + \lambda_2) \label{4}$$ where $\zeta =\lambda - \lambda_2$ has been introduced. This integral equation can be solved approximately by putting $u_2(\zeta + \lambda_2, \zeta{'} + \lambda_2) = u_2(\zeta,\zeta{'}) + \Delta u$ and neglecting $\Delta u$ in lowest order, which is exact in the limits $\zeta, \zeta{'} \gg \lambda_2$ and $\zeta, \zeta{'} \ll \lambda_2$. The leading correction term $\Delta u \propto \lambda_2$ in the metallic regime can be treated perturbatively by replacing $\rho$ in the integral involving $\Delta u$ by the saddlepoint solution for $\Delta u = 0$ (the results of this approximation will be presented below).\ The saddlepoint density $\rho_{sp}(\lambda)$ is found to develop negative parts for small $\lambda_2$, although $\rho(\lambda)$ is positive by definition. We take this as a signal that configurations of charges with $\lambda_2 < \lambda_c$ (for which $\rho_{sp} (\lambda)$ starts to turn negative at small $\lambda$) are unphysical and should be deleted. This is done by limiting the integration on $\lambda_2$ to $\lambda_2 > \lambda_c + \lambda_1$. 
The free energy after the saddlepoint integration on $\rho(\lambda)$ is found as $$F(\lambda_1,\lambda_2;\tau) = \int_{\lambda_2}^\infty d\lambda V_{eff} (\lambda)\rho_{sp}(\lambda) + 2V(\lambda_1) + \frac{i\tau}{1 + \lambda_1} + F_{f\ell} \label{5}$$ where $F_{f\ell}$ is the fluctuation part of the functional integral on $\rho(\lambda)$ and may be shown to depend on $\lambda_1, \lambda_2$ as $F_{f\ell} = \frac{1}{2} \ell n (\lambda_2 - \lambda_1) + \rm{const}$. Since $V_{eff}$ and $\rho_{sp}$ are linear functions in $\tau$, $F(\lambda_1,\lambda_2;\tau) = F^0 + i\tau F{'} + \frac{1}{2}(i\tau)^2F{''}$ is a quadratic form in $\tau$ leading to a Gaussian integral over $\tau$ in (1), with the result $$P(g) = \frac{1}{Z} \int_0^\infty d\lambda_1 \int_{\lambda_1 + \lambda_c}^\infty d\lambda_2 e^{-S} \label{6}$$ where $S = - (g-F{'})^2/2F{''} + F^0$. The remaining integrals on $\lambda_1, \lambda_2$ can be done numerically, or again in saddlepoint approximation. Results ======= In the metallic regime $(L/\xi \ll 1)$, the relevant values of $\lambda_1$ and $\lambda_2$ are small of order $L/\xi$. In the limit $\lambda_1, \lambda_2 \rightarrow 0$ we find $\mid F{''}\mid = \frac{1}{15}$ and $F{'} = \xi/L$, whereas $F^0$ tends to a constant at the saddlepoint $\lambda_2 = \lambda_c + \lambda_1$. Thus $P(g)$ is given by a Gaussian centered at $g = \xi/L$, of variance $1/15$, in agreement with known results \[1\].\ We have calculated the expressions for $F^0$ and $F{'}$ analytically up to and including all terms of order $L/\xi$ (the correction to $F{''}$ is $O[(L/\xi)^2]$). The correction to the average conductance in the metallic regime to this order is found as $< g > = \xi /L - \eta L/\xi$, with $\eta = 0.027$, which compares well with the exact result \[15\] $\eta = 1/45 \simeq 0.022$. There is no correction to the variance in order $L/\xi$, in agreement with \[15\].\ In the insulating regime, $L/\xi \gg 1$, the typical values of $x_1$ and $x_2$ are both $\gg 1$. 
In fact, the requirement of positivity of the density for $L/\xi > \pi^2/2$ can only be satisfied if $x_2 \rightarrow \infty$, independent of $x_1$. Using the saddlepoint values of $x_1 = - \frac{1}{2}\ell n(g/4)$ and $F^0 = (\xi/L) x_1^2 - x_1$, $F{'} = 4e^{-2x_1}$ and $F{''} = - \frac{4}{3}e^{-4x_2}\rightarrow 0$ one finds $$P(g) = \frac{1}{Z}\frac{1}{g}\exp\Big[- \frac{\xi}{4L} \big(\ell n(g/4) + L/\xi\big)^2\Big] \label{}$$ a log-normal distribution in agreement with \[3\].\ In the crossover regime on the insulating side, where $\xi/L < 1$, we make use of the fact that the typical values of $x_1, x_2$ are $x_2 \gg 1$, but $x_1 < x_2$, otherwise arbitrary. We then find $F^0 \simeq (1/3)(\xi/L)^2x_2^3 - (\xi/L)(x_2^2 - x_1^2) + x_2 - (1/2) \ell n (x_1 \sinh (2x_1))$; $F{'}\simeq \cosh^{-2}x_1$ and $F{''} = - \sinh^{-2}(2x_2)[\frac{1}{3} - \frac{1}{4x_2^2} + \sinh^{-2}(2x_2)]$. The saddlepoint equation for $x_1$ is given by $\cosh x_1 = g^{-1/2}$, which has a solution only for $g \leq 1$. For $g > 1$, instead, the boundary values $x_1 = 0$, $x_2 = (2L/\pi\xi)$ give the minimum of $F$. The corresponding results for $P(g)$ are $$P(g) = \frac{1}{Z} \exp\Big[- a (g-1)^2\Big]\;\;\;,\;\;\;g>1 \label{9}$$ ![Conductance distribution $P(g)$ versus $g$ for $\xi/L = 0.4, 0.7, 1.6, 2.0$ (dotted, solid, long-dashed, short-dashed lines)[]{data-label="fig1"}](loc99-fig1.eps){width="7cm"} $$P(g) = \frac{1}{Z}\frac{1}{g}\Big[\frac{\rm{arsech}\sqrt{g}}{g\sqrt{1-g}} \Big]^{1/2}\exp\Big[- \frac{\xi}{L}\Big(\rm{arsech}\sqrt{g}\Big)^2\Big],\;\;\; g<1 \label{10}$$ Here $a=F{''}(x_2 = 2L/\pi \xi)$ controls the Gaussian cut-off of $P(g)$ for $g>1$. For $L/\xi \gg 1$ and $g^{-1} \gg 1$ Eq. (9) reduces to the log-normal distribution (7). ![Conductance distribution versus $\ell n(1/g)$ for $\xi/L = 0.7, 0.4, 0.25, 0.1$ (solid, dotted, long-dashed, short-dashed lines)[]{data-label="fig2"}](loc99-fig2.eps){width="7cm"} Fig. 1 shows $P(g)$ versus $g$ for several values of $\xi/L = 0.4, 0.7, 1.6, 2.0$. In Fig. 
2 the results for $\xi/L = 0.4$ and $0.7$ are again shown plotted versus $\ell n(1/g)$ together with results for $\xi/L = 0.25, 0.1$. In the logarithmic plot (Fig. 2) one clearly recognizes a log-normal distribution centered at $\ell n(1/g) \equiv L/\xi - \ell n 4$ and of variance $var[\ell n(1/g)]\cong 2L/\xi$, cut-off at $\ell n g = 0 (g=1)$. The abrupt qualitative change of the shape of the distribution $P(g)$ in the crossover regime $(L/\xi \sim 1)$ as one goes from values $g < 1$ to $g > 1$ is a consequence of the small value of $F{''}$. Note that $\mid F{''}\mid \ll 1$ even in the metallic regime, decreasing exponentially in the insulating regime. The term $\propto (g-F{'})^2$ is thus multiplied by the large number $1/\mid F{''}\mid$, which forces the saddlepoint equation $F{'} = g$, which, however, has a solution only for $g < 1$. This leads to a power law dependence of $S$ on $\ell n g$. At $g > 1$ the minimum of $S$ is attained at the boundary of the integration regime $x_1 = 0$, where the term $\propto (g-F{'})^2$ dominates, resulting in a Gaussian cut-off of $P(g)$.\ This result might suggest that quite generally the statistics of the conductance in the crossover regime is Gaussian centered at $g = 1$ for $g>1$ and log-normal centered at $<\ell n g> = L/\xi$ for $g<1$. This may be made plausible in the following way. If the center of the distribution is located at $g\sim 1$, the lowest eigenvalue $\lambda_1$ still must be dominant, meaning that $\lambda_2 \gg 1$. The statistics of $\lambda_1$ is then essentially determined by the single particle potential $V(\lambda_1)$, giving rise to Gaussian statistics for $\lambda_1 < 1$, where $V(\lambda_1) \propto \lambda_1^2$ and to log-normal statistics for $\lambda_1 > 1$, where $V(\lambda_1) \propto \ell n^2\lambda_1$. 
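The two branches of $P(g)$ derived above (log-normal-like for $g<1$, Gaussian cut-off for $g>1$) are easy to evaluate numerically; in the sketch below the cut-off strength $a$ and the common normalization $Z$ are assumptions made for illustration, since the text fixes only the functional forms:

```python
import numpy as np
from scipy.integrate import quad

def P_unnorm(g, xi_over_L, a):
    """Unnormalized crossover distribution: log-normal-like branch for
    g < 1, Gaussian cut-off for g > 1; arsech(x) = arccosh(1/x)."""
    if g < 1.0:
        w = np.arccosh(1.0 / np.sqrt(g))          # arsech(sqrt(g))
        return (1.0 / g) * np.sqrt(w / (g * np.sqrt(1.0 - g))) \
               * np.exp(-xi_over_L * w**2)
    return np.exp(-a * (g - 1.0)**2)

xi_over_L, a = 0.7, 10.0      # crossover regime; the value of a is assumed
Z = (quad(P_unnorm, 0.0, 1.0, args=(xi_over_L, a))[0]
     + quad(P_unnorm, 1.0, np.inf, args=(xi_over_L, a))[0])

for g in (0.1, 0.5, 0.9, 1.2):
    print(g, P_unnorm(g, xi_over_L, a) / Z)
# Broad log-normal-like body below g = 1, sharp Gaussian cut-off above it.
```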
These dependences carry over to the statistics of $g \simeq 1/(1+\lambda_1)$.\ The shape of $P(g)$ in higher dimensions, as determined numerically \[5,6\], shares the feature of an abrupt change at $g = 1$ from approximately log-normal to exponential behavior with our results. Thus, although the DMPK approach followed here is applicable only for quasi one-dimensional systems, the qualitative behavior found here may be more generally valid. [9]{} For reviews, see: [*Mesoscopic Phenomena in Solids*]{}, eds. B. L. Altshuler, P.A. Lee, and R.A. Webb (Elsevier, Amsterdam, 1991); C.W.J. Beenakker, Rev. Mod. Phys. [**69**]{} (1997) 731 P.A. Lee and A.D. Stone, Phys. Rev. Lett. [**55**]{} (1985) 1622; B.L. Altshuler, JETP Lett. [**41**]{} (1985) 648 J.L. Pichard et al., J. Phys. France [**51**]{} (1990) 587 V. Plerou and Z. Wang, Phys. Rev. B [**58**]{} (1998) 1967 B. Jovanovic and Z. Wang, Phys. Rev. Lett. [**81**]{} (1998) 2771; K. Slevin and T. Ohtsuki, Phys. Rev. Lett. [**78**]{} (1997) 4083; C. Soukoulis et al., Phys. Rev. Lett. [**82**]{} (1999) 668; T. Ohtsuki, K. Slevin, and T. Kawarabayashi, cond-mat/9809221. P. Markos, Phys. Rev. Lett. [**83**]{} (1999) 588 B.L. Altshuler, V.E. Kravtsov, I. Lerner, Sov. Phys. JETP [**64**]{} (1986) 1352, and Phys. Lett. A [**134**]{} (1989) 488 B. Shapiro, Phys. Rev. Lett. [**65**]{} (1990) 1510 B.L. Altshuler and V.N. Prigodin, JETP Lett. [**45**]{} (1987) 687 R. Landauer, IBM J. Res. Dev. [**1**]{} (1957) 223; D.S. Fisher and P.A. Lee, Phys. Rev. B [**23**]{} (1981) 6851; E.N. Economou and C.M. Soukoulis, Phys. Rev. Lett. [**46**]{} (1981) 618 C.W.J. Beenakker and B. Rejaei, Phys. Rev. Lett. [**71**]{} (1993) 3689 O.N. Dorokhov, JETP Lett. [**36**]{} (1982) 318; P.A. Mello, P. Pereyra, and N. Kumar, Ann. Phys. (N.Y.) [**181**]{} (1988) 290 K. Frahm, Phys. Rev. Lett. [**74**]{} (1995) 4706; P. Brouwer and K. Frahm, Phys. Rev. B [**53**]{} (1996) 1490; B. Rejaei, Phys. Rev. B [**53**]{} (1996) 13235. M.R. Zirnbauer, Phys. Rev. 
Lett. [**69**]{} (1992) 1584; A.D. Mirlin, A. Müller-Groeling, and M.R. Zirnbauer, Ann. Phys. (N.Y.) [**236**]{} (1994) 325. K.A. Muttalib and P. Wölfle, cond-mat/9907235, submitted to Phys. Rev. Lett.
--- abstract: 'In medical imaging, there is a growing interest in providing real-time images with good quality for large anatomical structures. To cope with this issue, we developed a library that makes it possible, for some specific clinical applications, to replace more complex systems such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Our Python library *Py3DFreeHandUS* is a package for processing data acquired simultaneously by ultrasonographic systems (US) and marker-based optoelectronic systems. In particular, US data make it possible to visualize subcutaneous body structures, whereas the optoelectronic system is able to collect the 3D position in space of reflective objects, called markers. By combining these two measurement devices, it is possible to reconstruct the real 3D morphology of body structures such as muscles, with relevant clinical implications. In the present research work, we describe the different steps that allow one to obtain a relevant 3D data set, as well as the procedures for calibrating the systems and for determining the quality of the reconstruction.' author: - 'Davide Monari$^{\setcounter{footnotecounter}{1}\fnsymbol{footnotecounter}\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$ [^1][^2], Francesco Cenni$^{\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$, Erwin Aertbeliën$^{\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$, Kaat Desloovere$^{\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$[^3]' title: 'Py3DFreeHandUS: a library for voxel-array reconstruction using Ultrasonography and attitude sensors' --- medical imaging, free-hand ultrasonography, optoelectronic systems, compounding Introduction \[introduction\] ============================= In medical imaging, 3D data sets are an essential tool to explore anatomical volumes and to extract clinical features, which can describe a particular condition of the patient. 
These data are usually recorded by CT or MRI for identifying hard or soft tissue, respectively, and provide a high image quality together with a large field of view. On the other hand, these systems are *very* expensive (especially MRI ones) and time-consuming both for operators and patients. Moreover, the ionizing radiation from CT is an issue. Therefore, for some clinical applications, it could be interesting to replace these systems with others that can provide 3D data sets quickly, although without the same high image quality. Ultrasonography (US) devices are systems largely used to collect medical images. For example, they are very commonly used to examine pregnant women. This system, compared to other medical imaging systems, has several advantages: real-time images, portability, no ionizing radiation. However, one of the major drawbacks in US is the limited field of view and the lack of spatial information among different images acquired. Therefore a technique called 3D Freehand Ultrasound (3DUS) was originally proposed in the 90s [@Rankin93], [@Prager99] with the aim of reconstructing large 3D anatomical parts. The idea is to combine US images and the corresponding position and orientation (POS) of the US transducer; by simultaneously scanning a series of 2D images and recording spatial information it is possible to perform the relevant reconstruction and then the visualization of the entire volume acquired. The aim of the present work is to customize the 3DUS implementation by relying on vectorization in NumPy / SciPy and on avoiding memory waste, in order to speed up the processing phase as much as possible. These aspects are essential in this context, since for commodity hardware: i) memory resources are relatively limited and the 3D volumes involved here can quickly reach large dimensions, ii) computation time can become unrealistic if very large for- or while-loops are used in Python. 
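As an illustration of the kind of vectorized NumPy operation this strategy relies on, the following sketch bins a cloud of already-georeferenced US pixels into a voxel array without any Python-level loop (the function names and the simple averaging scheme are our own assumptions for illustration, not the Py3DFreeHandUS API):

```python
import numpy as np

def compound(voxels, counts, points_mm, intensities, res_mm):
    """Bin georeferenced pixel intensities into a voxel array with no
    Python-level loop (illustrative sketch; names and the averaging
    scheme are assumptions, not the Py3DFreeHandUS API)."""
    idx = np.round(points_mm / res_mm).astype(int)        # (N, 3) indices
    keep = np.all((idx >= 0) & (idx < voxels.shape), axis=1)
    idx, vals = idx[keep], intensities[keep]
    np.add.at(voxels, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    return np.divide(voxels, counts, out=np.zeros_like(voxels),
                     where=counts > 0)                    # mean intensity

# Toy usage: 1000 scattered 3D points into a 10 mm cube at 1 mm resolution
rng = np.random.default_rng(1)
vox, cnt = np.zeros((10, 10, 10)), np.zeros((10, 10, 10))
pts = rng.uniform(0.0, 9.0, size=(1000, 3))
vol = compound(vox, cnt, pts, rng.uniform(0.0, 255.0, 1000), 1.0)
```

`np.add.at` handles repeated voxel indices correctly, which a plain fancy-indexed assignment would not; this is the kind of detail that makes loop-free compounding practical.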
In addition, the few existing applications for applying this technique have at least one of the following disadvantages: i) they are not open-source; ii) they only support data streams from a limited number of US/POS sensors; iii) they are written in low-level languages such as C++, making rapid development and prototyping more difficult. We developed a pure Python library called **Py3DFreeHandUS** that solves all the above issues. Requirements \[requirements\] ============================= Py3DFreeHandUS was developed in Python 2.7 (Python 3 not yet supported), and uses the following libraries:

- NumPy
- SciPy (0.11.0+)
- matplotlib
- SymPy
- pydicom
- b-tk (Biomechanical ToolKit) [@Barre14]
- VTK
- OpenCV (2.4.9+)
- Cython + gcc (optional, we are cythonizing bottlenecks but leaving the pure Python implementation available)

We used the Python distribution *Python(x,y)* for development and testing, since it already includes all libraries but b-tk. Description of the package \[description-of-the-package\] ========================================================= The present package is able to process synchronized data acquired by US and POS devices, taking as input DICOM and C3D files, respectively. The operations flowchart is composed of: US probe *temporal* and *spatial calibration*, and *3D voxel array reconstruction*. US probe temporal calibration \[us-probe-temporal-calibration\] --------------------------------------------------------------- The aim of the temporal calibration is to estimate the time delay between the US and POS devices. This procedure is fundamental whenever it is not possible to hardware-trigger the US and POS devices, so that data needs to be time-shifted later. The time delay resolution cannot be finer than the inverse of the lower sampling frequency (normally that of US). 
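Such a cross-correlation delay estimate can be sketched in a few lines of NumPy (the synthetic sine-like signals and the use of `numpy.correlate` are assumptions made here for illustration):

```python
import numpy as np

def time_delay(sig_pos, sig_us, dt):
    """Delay of sig_us relative to sig_pos via cross-correlation, after
    demeaning and scaling each signal to [-1, +1]."""
    a = sig_pos - sig_pos.mean()
    b = sig_us - sig_us.mean()
    a, b = a / np.abs(a).max(), b / np.abs(b).max()
    corr = np.correlate(b, a, mode='full')
    lag = np.argmax(corr) - (len(a) - 1)   # sample lag of correlation peak
    return lag * dt

# Synthetic check: two sine-like curves, the second delayed by 0.1 s
t = np.arange(0.0, 10.0, 0.01)                 # sampled at 100 Hz
pos_z = np.sin(2 * np.pi * 0.5 * t)            # vertical POS coordinate
us_edge = np.sin(2 * np.pi * 0.5 * (t - 0.1))  # water/tank edge position
delay = time_delay(pos_z, us_edge, 0.01)
print(delay)   # ~0.1 s
```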
Briefly, we moved the US probe (rigidly connected to the POS sensor) vertically up and down in a water-filled tank and generated two curves: the first one being the vertical coordinate of the POS sensor, the second one being the vertical coordinate (in the US image) of the center of the line representing the edge between water and tank. These two sine-like signals were demeaned, normalized to be inside the range \[-1,+1\] and cross-correlated with the function `matplotlib.pyplot.xcorr`. The lag of the first peak of the cross-correlation estimates the time delay. US probe spatial calibration \[us-probe-spatial-calibration\] ------------------------------------------------------------- The probe spatial calibration is an essential procedure for image reconstruction which allows one to determine the *pose* (position and orientation) of the US images with respect to the POS device. The corresponding results take the form of six parameters, three for position and three for orientation. The quality of this step mainly influences the reconstruction quality of the anatomical shape. To perform the probe calibration we used two steps. First we applied an established procedure already published in the literature [@Prager98] and later we tuned the results by using an image compounding algorithm [@Wein08]. The established procedure was proposed by Prager et al. [@Prager98] and improved by Hsu [@Hsu06], with the idea of scanning the floor of a water tank by covering all the degrees of freedom (see Figure \[calib\]); this scanning modality produces clear and consistent edge lines (between water and tank bottom) in the US images (B-scans). All the pixels lying on the visible line in the B-scan must satisfy equations that come from the different spatial transformations, which leaves 11 identifiable parameters to be solved for. Each B-scan can be used to write 2 equations. 
The overdetermined system of equations is solved using the Levenberg-Marquardt algorithm. We found that it is essential to move the US transducer following the sequence of movements suggested in [@Prager98] in order to obtain reasonable results. The equation that a pixel with image coordinates $(u,v)$ must satisfy (see [@Prager98] for details) is as follows:

$$\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} =\ ^{C}T_{T}\ ^{T}T_{R}\ ^{R}T_{P} \begin{pmatrix} s_{x}u \\ s_{y}v \\ 0 \\ 1 \end{pmatrix},$$

where $s_{x}$ and $s_{y}$ are conversion factors from *pixel* to *mm*. This is the code snippet for the equation creation:

```python
import numpy as np
from sympy import Matrix, Symbol
from sympy import cos as c, sin as s

# Pp
sx = Symbol('sx')
sy = Symbol('sy')
u = Symbol('u')
v = Symbol('v')
Pp = Matrix(([sx * u],
             [sy * v],
             [0],
             [1]))

# rTp
rTp, syms = creatCalibMatrix()  # package helper
[x1, y1, z1, alpha1, beta1, gamma1] = syms

# tTr
tTr = MatrixOfMatrixSymbol('tTr', 4, 4)  # package helper
tTr[3, 0:4] = np.array([0, 0, 0, 1])

# cTt
x2 = Symbol('x2')
y2 = Symbol('y2')
z2 = Symbol('z2')
alpha2 = Symbol('alpha2')
beta2 = Symbol('beta2')
gamma2 = Symbol('gamma2')
cTt = Matrix(([c(alpha2)*c(beta2), ...
              [s(alpha2)*c(beta2), ...
              [-s(beta2), c(beta2)*s(gamma2), ...
              [0, 0, 0, 1]))  # see [Prager98] for the full expressions

# Calculate full equations
Pc = cTt * tTr * rTp * Pp
Pc = Pc[0:3, :]

# Calculate full Jacobians
x = Matrix([sx, sy, x1, y1, z1, alpha1, beta1, gamma1,
            x2, y2, z2, alpha2, beta2, gamma2])
J = Pc.jacobian(x)
```

The system of equations was solved using the function `scipy.optimize.root` with `method='lm'`. To validate the solution, the calibration part of this package can visualize the corresponding covariance matrix; this can be exploited to understand whether some variable is not well constrained. In addition, since each B-scan requires the positions of at least two pixels belonging to the edge line, we developed an automatic tool, based on the Hough transform, for extracting the corresponding line in each image:

```python
import cv2

# Threshold image
maxVal = np.iinfo(I.dtype).max
th, bw = cv2.threshold(I, np.round(thI * maxVal), maxVal,
                       cv2.THRESH_BINARY)
# Detect edges
edges = cv2.Canny(bw, thCan1, thCan2, apertureSize=kerSizeCan)
# Dilate edges
kernel = np.ones(kerSizeDil, I.dtype)
dilate = cv2.dilate(edges, kernel, iterations=1)
# Find longest line
lines = cv2.HoughLinesP(dilate, 1, np.pi / 180, thHou,
                        minLineLength, maxLineGap)
maxL = 0
if lines is None:
    a, b = np.nan, np.nan
else:
    for x1, y1, x2, y2 in lines[0]:
        L = np.linalg.norm((x1 - x2, y1 - y2))
        if L > maxL:
            maxL = L
            a = float(y1 - y2) / (x1 - x2)
            b = y1 - a * x1
# a, b being the line parameters: y = a * x + b
```

Since we experienced unsatisfactory calibration results (in terms of the later reconstruction compounding) at this stage, we passed them through an image compounding algorithm, which achieves a good tuning. This is an image-based method which uses as input 2 perpendicular sweeps, at approximately 90 degrees, of the same 3D volume [@Wein08]. Briefly, a similarity measure (Normalized Cross-Correlation, NCC) between the two sweeps is maximized, with the final aim of finding the calibration parameters yielding the best overlap between the images.
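The NCC criterion at the heart of this refinement is easy to illustrate. The sketch below (our own toy example, not the package API) shows the similarity measure peaking at the correct alignment of two overlapping profiles; in the actual compounding step, the same kind of score is maximized over the six calibration parameters rather than a single offset:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-shape arrays
    (a real implementation would mask out empty voxels)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy 1D stand-in for two sweeps of the same volume: sliding one
# profile against the other, NCC is maximal at the true alignment.
rng = np.random.RandomState(0)
profile = rng.rand(200)

def score(offset):  # hypothetical search variable (in samples)
    return ncc(profile[10:150], profile[10 + offset:150 + offset])

best = max(range(-5, 6), key=score)  # best == 0, i.e. perfect overlap
```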
The initial values of this iterative compounding method are the results of the equations-based approach. A calibration quality assessment was also implemented, in terms of precision and accuracy of the calibration parameters obtained. Precision indicates the dispersion of the measures around their mean, whereas accuracy indicates the difference between the mean of the measures and the true value [@Hsu06]. For example, this measure can be the known position of a point in space (*Point accuracy*) or the known dimension of an object (*Distance accuracy*).

3D voxel array reconstruction \[d-voxel-array-reconstruction\]
--------------------------------------------------------------

The 3D reconstruction is performed by positioning the 2D US scans in 3D space using the corresponding poses. The first step is to import the images (DICOM files, the standard format for medical imaging) and the synchronized kinematics files (C3D format) containing the pose data. A 3D voxel array is then initialized. The 3D voxel array (a parallelepipedon) should be the smallest one containing the sequence of all the repositioned scans, as seen in Figure \[voxarrsmall\], in order to avoid wasting RAM. To address this issue, the present package offers two options: manually reorienting the global reference frame so that it is approximately aligned with the scan direction during acquisition; or using Principal Component Analysis (PCA) to find the scan direction and realign the voxel array accordingly. The grey values of the original pixels in the 2D slices are then copied into the corresponding 3D positions.
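The PCA option can be sketched as follows (a self-contained NumPy illustration with synthetic probe positions; function and variable names are ours): rotating the point cloud into its principal-axes frame shrinks the axis-aligned bounding box, i.e. the voxel array to allocate.

```python
import numpy as np

def pca_align(points):
    """Rotate 3D points into their principal-axes frame, so that the
    axis-aligned bounding box (the voxel array) becomes as tight as possible."""
    centered = points - points.mean(axis=0)
    # eigenvectors of the 3x3 covariance matrix = principal directions
    w, V = np.linalg.eigh(np.cov(centered.T))
    return centered @ V, V

# toy check: an elongated point cloud (a "sweep") acquired obliquely
rng = np.random.RandomState(1)
raw = rng.rand(500, 3) * np.array([100.0, 5.0, 5.0])  # mm, along the scan
R = np.array([[0.8, -0.6, 0.0],
              [0.6,  0.8, 0.0],
              [0.0,  0.0, 1.0]])  # oblique orientation in the global frame
aligned, axes = pca_align(raw @ R.T)
vol_before = np.prod(np.ptp(raw @ R.T, axis=0))  # wasteful bounding box
vol_after = np.prod(np.ptp(aligned, axis=0))     # tight bounding box
```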
This copying is performed by an algorithm called Pixel Nearest Neighbour (PNN), which runs through each pixel in every image and fills the nearest voxel with the value of that pixel; in case of multiple contributions to the same voxel, the values are averaged. The code below performs this; each 2D scan is positioned in the 3D volume in a vectorized way.

```python
# x, y, z: arrays for the 3D coordinates of the pixels in image I
# idxV: unique ID for each voxel of the 3D voxel array
# V: 1D array containing grey values for the 3D voxel array
# contV: 1D array containing the current number of contributions per voxel
# I: 2D array containing US slice grey values
idxV = xyz2idx(x, y, z, xl, yl, zl).astype(np.int32)
V[idxV] = (contV[idxV] * V[idxV]) / (contV[idxV] + 1) + \
          I.ravel() / (contV[idxV] + 1)  # iterative avg
```

Only 2 outer loops exist: one over the DICOM file number and one over the scan number. After all the scans are correctly positioned in 3D space, gaps can occur in the voxel array when the voxel size is small compared to the distance between the acquired images (e.g. when the scanning velocity is significantly different from 0). Therefore, interpolation methods are applied to fill these empty voxels. To optimize this process, a robust method was also used, i.e. the convex hull (see Figure \[convhull\]), to restrict the gap-filling operation to the voxels contained between 2 consecutive slices. The quick-and-dirty way, known as VNN (Voxel Nearest Neighbour), consists of filling a gap with the closest voxel having an assigned grey value. We also implemented another (average cube) solution, which consists of the following steps:

- Create a cube with a side of 3 voxels, centered around the gap;
- Search for the minimum percentage of non-gaps inside the cube (100% = number of voxels in the cube);
- If that percentage is found, average the non-gap voxels (weighted by the Euclidean distances) inside the cube;
- If that percentage is not found, increment the cube side by 2 voxels (e.g. to 5);
- If the cube side is less than or equal to a maximum size, start again from point 2. Otherwise, stop and do not fill the gap.

The entire voxel array can be subdivided into N parallelepipedal blocks, and the gap filling is performed on one block at a time, to spare RAM. The bigger the number of blocks, the more iterations are needed; but the smaller the block size, the less RAM is used and the less time is spent per iteration. Finally, both the voxel-array scan silhouette (previously created with the wrapping convex hulls) and the grey-scale voxel array are exported to VTI files, after being converted to `vtk.vtkImageData`. These can be opened with software such as MeVisLab or ParaView for visualization and further processing.

Preliminary results \[preliminary-results\]
-------------------------------------------

The calibration quality assessments were 1.9 mm and 3.9 mm for the distance accuracy and the reconstruction precision, respectively. The average data processing time (calibration + reconstruction + gap filling) over 3 trials on a human calf, shown in Figure \[calf\], was 5.9 min, on a 16 GB RAM Intel i7 2.7 GHz machine.
[^1]: Corresponding author: <davide.monari@kuleuven.be>

[^2]: KULeuven

[^3]: Copyright©2014 Davide Monari et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. http://creativecommons.org/licenses/by/3.0/
--- abstract: 'Given a normalized Orlicz function $M$ we provide an easy formula for a distribution such that, if $X$ is a random variable distributed accordingly and $X_1,\ldots,X_n$ are independent copies of $X$, then $$\frac{1}{C_p} \|x\|_M \leq \mathbb E \|(x_iX_i)_{i=1}^n\|_p \leq C_p\|x\|_M,$$ where $C_p$ is a positive constant depending only on $p$. In case $p=2$ we need the function $t\mapsto tM'(t) - M(t)$ to be $2$-concave and as an application immediately obtain an embedding of the corresponding Orlicz spaces into $L_1[0,1]$. We also provide a general result replacing the $\ell_p$-norm by an arbitrary $N$-norm. This complements some deep results obtained by Gordon, Litvak, Schütt, and Werner in [@GLSW1]. We also prove a result in the spirit of [@GLSW1] which is of a simpler form and easier to apply. All results are true in the more general setting of Musielak-Orlicz spaces.' address: - 'Departamento de Matemáticas, Universidad de Murcia, Campus de Espinardo, 30100-Murcia, Spain' - 'Mathematisches Seminar, Christian Albrechts University Kiel, Ludewig-Meyn-Straße 4, 24098 Kiel, Germany' - 'Institute of Analysis, Johannes Kepler University Linz, Altenbergerstraße 69, 4040 Linz, Austria' - 'Institute of Analysis, Johannes Kepler University Linz, Altenbergerstraße 69, 4040 Linz, Austria' author: - 'David Alonso-Gutiérrez' - Sören Christensen - Markus Passenbrunner - Joscha Prochno bibliography: - 'distribution\_of\_random\_variables\_and\_orlicz\_function.bib' title: 'On the Distribution of Random variables corresponding to Musielak-Orlicz norms' --- [^1]

Introduction
============

In their outstanding work [@KS1], Kwapień and Schütt obtained beautiful and strong combinatorial inequalities in connection with Orlicz norms that were then used to study certain invariants of Banach spaces (see also [@KS2]).
The new tool not only allowed them to compute the positive projection constant of a finite-dimensional Orlicz space, but also led to a characterization of the symmetric sublattices of $\ell_1(c_0)$ and the finite-dimensional symmetric subspaces of $\ell_1$. The method was later used in [@IS] to determine $p$-absolutely summing norms and was extended by Raynaud and Schütt to infinite-dimensional Banach spaces in [@RS] (see also [@S2] for applications to Lorentz spaces). In some special cases, the combinatorial expressions were already considered by Gluskin in [@G] (see also [@S1]). Quite recently, in [@PS], the tools were generalized to obtain new results on the local structure of the classical Banach space $L_1$. In the great paper [@GLSW1], building upon the combinatorial results from [@KS1] and [@KS2], Gordon, Litvak, Schütt and Werner were able to obtain even more general results in the continuous setting. They proved that, if $N$ is an Orlicz function and $X_1,\ldots,X_n$ are independent copies of a random variable $X$, then $\mathbb E \| (x_iX_i)_{i=1}^n \|_N$ is of the order $\|x\|_M$ where $M$ depends on $N$ and the distribution of $X$. This result, of course, is already interesting from a purely probabilistic point of view and was later used by the authors in [@GLSW3] to obtain estimates for various parameters associated to the local theory of convex bodies. It also initiated further research and led to beautiful results on order statistics [@GLSW2; @GLSW]. Recently, in the series of papers [@AGP2; @AGP; @AGP3], these results were also successfully used to study geometric functionals corresponding to random polytopes. A natural question that arises is whether the converse is true, i.e., given Orlicz functions $M$ and $N$, can we provide a formula for a distribution so that, if $X_1,\ldots,X_n$ are independent copies of an accordingly distributed random variable $X$, then $\mathbb E \| (x_iX_i)_{i=1}^n \|_N$ is of the order $\|x\|_M$. 
This is one part of the motivation for our work, and we will answer this question in the affirmative. The “natural” candidate for the distribution is deduced from a new, simpler version of a result from [@GLSW1] that we prove here. In the special case of $N(t)=t^p$ we give very easy formulas for the distribution of the random variables depending on the Orlicz function $M$, provided $M$ satisfies a certain condition depending on the parameter $p$. For $p=2$, this condition amounts to the $2$-concavity of $t\mapsto tM'(t) - M(t)$. In his beautiful paper [@S] Schütt proved that, if $M$ is equivalent to a $2$-concave Orlicz function, then the spaces $\ell_M^n$, $n\in{\mathbb N}$ embed uniformly into $L_1$ (see also [@BDC] and [@P1]). The proof is quite technical and based on combinatorial inequalities, some of which first appeared in the joint work [@KS1; @KS2] with Kwapień. Given a $2$-concave Orlicz function $M$ with some additional properties, he provided an explicit formula to obtain a sequence $a_1,\ldots,a_n$ of positive real numbers so that for all $x\in{\mathbb R}^n$ $$c_1 \|x\|_M \leq \frac{1}{n!} \sum_{\pi \in \mathfrak{S}_n} \left( \sum_{i=1}^n |x_ia_{\pi(i)}|^2\right)^{1/2} \leq c_2 \|x\|_M,$$ where $\mathfrak{S}_n$ is the set of all permutations of the numbers $\{1,\ldots,n\}$ and $c_1,c_2$ are absolute constants (see Theorem 2 in [@S]). Khintchine’s inequality then implies that these Orlicz spaces embed uniformly into $L_1$. Unfortunately, the formula is rather complicated and it is non-trivial to calculate the Orlicz function. This, in fact, shall be the other part of our motivation. The converse result we obtain for $p=2$, where we need $t\mapsto tM'(t) - M(t)$ to be $2$-concave, immediately implies that these Orlicz spaces $\ell_M^n$, $n\in {\mathbb N}$ are uniformly isomorphic to subspaces of $L_1$. Although it seems we need a somewhat stronger assumption on $M$, the inversion formula we obtain is much simpler and easier to apply.
The result might also be useful in finding new and easily verifiable characterizations for more general classes of subspaces of $L^1$. We provide here two different approaches to prove the converse results (for $\ell_p$-norms and general $N$-norms) where in each one of them conditions on $M$ naturally appear. Even more, if $p=2$ and we do not assume the $2$-concavity of $t\mapsto tM'(t) - M(t)$, but only the equivalence of $\mathbb E \| (x_iX_i)_{i=1}^n \|_2$ and $\|x\|_M$, then it is not hard to see that $t\mapsto tM'(t) - M(t)$ already had to be $2$-concave (see Proposition \[thm:general\]). Therefore, it seems that the condition is natural and not “too far” from the $2$-concavity of $M$. Our main result is the following: \[main\] Let $1<p<\infty$ and $M\in\mathcal{C}^3$ be an Orlicz function with $M'(0)=0$ and $M''(T)=0$ for $T=M^{-1}(1)$. Assume the normalization $\int_0^\infty x{\,\mathrm{d}}M'(x)=1$ and that $M|_{[T,\infty)}$ is linear. Moreover, assume that for all $x>0$ $$f_X(x)=\Big(1-\frac{2}{p}\Big)\frac{1}{x^3}M''\Big(\frac{1}{x}\Big)-\frac{1}{px^4}M'''\Big(\frac{1}{x}\Big)\geq 0.$$ Then $f_X$ is a probability density and for all $x\in{\mathbb R}^n$, $$c_1(p-1)^{1/p}\|x\|_{M} \leq {\mathbb E}\|(x_i X_i)_{i=1}^n\|_p \leq c_2\|x\|_{M},$$ where $c_1,c_2$ are positive absolute constants and $X_1,\dots,X_n$ are iid with density $f_X$. If $M$ is not normalized, we can divide the function $f_X$ by $\int_0^\infty x{\,\mathrm{d}}M'(x)$ to obtain a probability density and the statement of the theorem is true with constants depending on $p$ and $M$. Due to the definition of the Orlicz norm, its value is uniquely determined by the values of the function $M$ on the interval $[0,M^{-1}(1)]$. Hence, it is no restriction to extend $M$ linearly. If $p=2$, this immediately yields the desired embedding of Orlicz spaces into $L_1$ (see Corollary \[embedding\]). In fact, we will prove the case $p=\infty$ first, which will then imply the result for arbitrary $\ell_p$-norms. 
Preliminaries and Notation {#Preliminaries} ========================== A convex function $M:[0,\infty)\to[0,\infty)$ where $M(0)=0$ and $M(t)>0$ for $t>0$ is called an *Orlicz function*. The $n$-dimensional *Orlicz space* $\ell_M^n$ is ${\mathbb R}^n$ equipped with the norm $$\label{def:orlicznorm} \Vert{x}\Vert_M = \inf \Big\{ \rho>0 \,:\, \sum_{i=1}^n M\left(|x_i|/\rho\right) \leq 1 \Big\}.$$ In case $M(t)=t^p$, $1\leq p<\infty$ we just have $\ell_M^n = \ell_p^n$, i.e., $\Vert\cdot\Vert_M=\Vert\cdot\Vert_p$. Given Orlicz functions $M_1,\dots,M_n$, we define the corresponding *Musielak-Orlicz function* as $\mathbb M = (M_1,\dots,M_n)$ and the $n$-dimensional *Musielak-Orlicz space* $\ell_{\mathbb M}^n$ is ${\mathbb R}^n$ equipped with the norm $$\Vert{x}\Vert_{\mathbb M} = \inf \Big\{ \rho>0 \,:\, \sum_{i=1}^n M_i\left(|x_i|/\rho\right) \leq 1 \Big\}.$$ If $M_i=M$ for all $i=1,\ldots,n$, then $\ell_{\mathbb M}^n = \ell_M^n$. We say that two Orlicz functions $M$ and $N$ are equivalent if there are positive constants $a$ and $b$ such that for all $t\geq0$ $$a^{-1}M( b^{-1}t) \leq N(t) \leq aM(bt).$$ If two Orlicz functions are equivalent so are their norms. An Orlicz function is said to be *$p$-concave* for some $1\leq p<\infty$, if $t\mapsto M(t^{1/p})$ is a concave function. We say that an Orlicz function $M$ is *normalized* if $$\int_0^\infty x{\,\mathrm{d}}M'(x)=1.$$ Note also that, if two Orlicz functions are equivalent in a neighborhood of zero, then the corresponding sequence spaces already coincide [@LT1 Proposition 4.a.5]. For a detailed and thorough introduction to the theory of Orlicz spaces we refer the reader to [@KR], [@RR] or [@LT1; @LT2] and to [@M] in the case of Musielak-Orlicz spaces. Let $X$ and $Y$ be isomorphic Banach spaces. We say that they are *$C$-isomorphic* if there is an isomorphism $T:X\rightarrow Y$ with $\|T\|\|T^{-1}\|\leq C$. 
We define the Banach-Mazur distance of $X$ and $Y$ by $$d(X,Y) = \inf\left\{ \|T\|\|T^{-1}\| \,:\, T\in L(X,Y) ~ \hbox{isomorphism} \right\}.$$ Let $(X_n)_n$ be a sequence of $n$-dimensional normed spaces and let $Z$ also be a normed space. If there exists a constant $C>0$ such that for all $n\in{\mathbb N}$ there exists a normed space $Y_n \subseteq Z$ with $\dim(Y_n)=n$ and $d(X_n,Y_n)\leq C$, then we say that $(X_n)_n$ *embeds uniformly* into $Z$. The beautiful monograph [@TJ] gives a detailed introduction to the concept of Banach-Mazur distances. We will use the notation $A\sim B$ to indicate the existence of two positive absolute constants $c_1,c_2$ such that $c_1A\leq B\leq c_2A$. Similarly, we define the symbol $\lesssim$. We write $\sim_p$, with some positive parameter $p$, to indicate that the constants $c_1$ and $c_2$ depend on $p$. $c_1, c_2, c, C,\dots$ will always denote positive absolute constants whose value may change from line to line. By $L_1$ we denote the $L_1$ space on the unit interval $[0,1]$ with Lebesgue measure. We write $f\in \mathcal C^k$ for some $k\in{\mathbb N}$ whenever the function $f$ is $k$ times continuously differentiable, and $\mathcal C^k(a,b)$ for $\mathcal C^k((a,b))$.

The following theorem was obtained in [@GLSW2] and provides a formula for the Orlicz function $M$, provided that we know the distribution of $X$:

([@GLSW2 Lemma 5.2]). \[thm:orlicz\] Let $X_1,\dots,X_n$ be iid integrable random variables. For all $s\geq 0$ define $$M(s)=\int_0^s \int_{1/t\leq |X_1|} |X_1|\,{\,\mathrm{d}}\mathbb P {\,\mathrm{d}}t.$$ Then, for all $x=(x_i)_{i=1}^n\in \mathbb{R}^n$, $$c_1\|x\|_M\leq \mathbb{E}\max_{1\leq i\leq n}|x_iX_i|\leq c_2\|x\|_M,$$ where $c_1,c_2$ are absolute constants independent of the distribution of $X_1$.
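As a purely numerical sanity check of this equivalence (our own illustration, not part of the argument), one can take $X$ uniform on $[0,1]$, compute $M$ numerically from its defining double integral, and compare the Orlicz norm of $x=(1,\dots,1)$ with a Monte Carlo estimate of $\mathbb E\max_{1\leq i\leq n}|x_iX_i|$:

```python
import numpy as np

# Sanity check of the max-equivalence for X uniform on [0,1].
# Tail integral h(t) = E[|X| ; |X| >= 1/t]  (vanishes for t <= 1).
def h(t):
    return 0.0 if t <= 1.0 else 0.5 * (1.0 - 1.0 / t ** 2)

def M(s, steps=1000):
    # M(s) = int_0^s h(t) dt, via the trapezoidal rule on [1, s]
    if s <= 1.0:
        return 0.0
    t = np.linspace(1.0, s, steps)
    vals = np.array([h(ti) for ti in t])
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t)))

n = 50
# Orlicz norm of x = (1,...,1): the rho solving n * M(1/rho) = 1 (bisection)
lo, hi = 1.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if n * M(mid) < 1.0:
        lo = mid
    else:
        hi = mid
rho = 1.0 / mid

# Monte Carlo estimate of E max_i |x_i X_i| (here: the max of n uniforms)
rng = np.random.RandomState(2)
emax = rng.rand(20000, n).max(axis=1).mean()
ratio = emax / rho  # stays between absolute constants, as the theorem asserts
```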
Obviously, the function $$\label{EQU Orlicz function M} M(s)=\int_0^s \int_{1/t\leq |X_1|} |X_1| \, {\,\mathrm{d}}\mathbb P \, {\,\mathrm{d}}t$$ is non-negative and convex, since $\int_{1/t\leq |X|}|X| \,d\mathbb P$ is increasing in $t$. Furthermore, we have that $M$ is continuous, differentiable and $M(0)=M'(0)=0$. Note that, in fact, Theorem \[thm:orlicz\] is true for Musielak-Orlicz spaces when we do not assume the random variables to be identically distributed: \[thm:orliczgeneral\] Let $X_1,\dots X_n$ be independent integrable random variables. For all $s\geq 0$ and all $j=1,\ldots,n$ define $$M_j(s)=\int_0^s \int_{1/t\leq |X_j|} |X_j|\,{\,\mathrm{d}}\mathbb P {\,\mathrm{d}}t.$$ Then, for all $x=(x_i)_{i=1}^n\in \mathbb{R}^n$, $$c_1\|x\|_{\mathbb M}\leq \mathbb{E}\max_{1\leq i\leq n}|x_iX_i|\leq c_2\|x\|_{\mathbb M},$$ where $c_1,c_2$ are absolute constants and $\mathbb M = (M_1,\dots,M_n)$. A proof in the case of averages over permutations can be found in [@P] and can be generalized to our setting by a straightforward adaption of the proof of Theorem \[thm:orlicz\]. Because of Theorem \[thm:orliczgeneral\], all results presented in this paper hold in the more general setting of Musielak-Orlicz spaces, but for notational convenience we state them only for Orlicz spaces. If $M$ is an Orlicz function such that $M\in \mathcal C^3$, then for $t\mapsto tM'(t)-M(t)$ to be $2$-concave is equivalent to $M'''\leq 0$. Therefore, and for the sake of convenience, we will later assume $M'''\leq 0$, but might still talk about the $2$-concavity of $t\mapsto tM'(t)-M(t)$ at the same time. We will also need a result from [@PR] about the generating distribution of $\ell_p$-norms. 
We recall that the density of a $\log \gamma_{1,p}$ distributed random variable $\xi$ with parameter $p>0$ is given by $$f_{\xi}(x) = px^{-p-1}\mathbbm 1_{[1,\infty)}(x).$$ Note also that for all $x>0$ $$\mathbb P \left(\xi \geq x \right) = \min(1,x^{-p}).$$

([@PR Theorem 3.1]). \[THM\_lp\_normen\] Let $p>1$ and $\xi_1,\ldots,\xi_n$ be iid copies of a $\log \gamma_{1,p}$ distributed random variable $\xi$. Then, for all $x\in{\mathbb R}^n$, $$c_1 \|x\|_{p} \leq \mathbb E \max_{1\leq i \leq n} | x_i\xi_i | \leq \frac{c_2}{(p-1)^{1/p}} \|x\|_{p},$$ where $c_1,c_2$ are positive absolute constants.

Recall the following well-known theorem about the existence of independent random variables corresponding to given distributions:

([@B Theorem 20.4]). \[thm\_measure\] Let $(\mu_j)_j$ be a finite or infinite sequence of probability measures on the real line. Then there exists an independent sequence of random variables $(\xi_j)_j$ defined on the probability space $([0,1],\mathfrak{B}_{\mathbb R},\lambda)$, with Borel $\sigma$-algebra $\mathfrak{B}_{\mathbb R}$ and Lebesgue measure $\lambda$, so that the distribution of $\xi_j$ is $\mu_j$.

A simple Representation Result {#sec:simple}
==============================

In this section we prove a result in the same spirit as Theorem \[thm:orlicz\], where we replace the $\ell_\infty$-norm by some $\ell_p$-norm for $1< p <\infty$. This is a special case of Theorem 1 in [@GLSW1] with $N(t)=t^p$. There it seems unclear how to determine the “precise” form of the Orlicz function that appears. Of course, this is somewhat unsatisfactory and, therefore, we provide a result that produces a “simple” representation of this Orlicz function. Observe also that the following result, which is a consequence of Theorems \[thm:orlicz\] and \[THM\_lp\_normen\], corresponds to the discrete results recently obtained in [@PS].

\[thm:orlicz\_p\_norm\] Let $1<p<\infty$ and $X_1,\ldots,X_n$ be iid integrable random variables.
For all $s\geq0$ define $$M(s) = \frac{p}{p-1}\int_0^s\left( \int_{|X_1| \leq \frac{1}{t}} t^{p-1} \left| X_1 \right|^p {\,\mathrm{d}}{\mathbb P}+ \int_{ |X_1| > 1/t}|X_1| {\,\mathrm{d}}\mathbb P \right){\,\mathrm{d}}t .$$ Then, for all $x\in{\mathbb R}^n$, $$c_1 (p-1)^{1/p} \| x \|_M \leq \mathbb E \| (x_iX_i)_{i=1}^n \|_p \leq c_2 \| x \|_M,$$ where $c_1,c_2,$ are positive absolute constants. Let $X_1,\dots,X_n$ be defined on $(\Omega_1,{\mathbb P}_1)$ and let $\xi_1,\dots,\xi_n$ be independent copies of a $\log\gamma_{1,p}$ distributed random variable $\xi$, say on $(\Omega_2,\mathbb P_2)$. Then, by Theorem \[THM\_lp\_normen\], $$\mathbb E_{\Omega_1} \|(x_iX_i)_{i=1}^n\|_p\lesssim \mathbb E_{\Omega_1} \mathbb E_{\Omega_2} \max_{1\leq i \leq n}|x_i X_i\xi_i| \lesssim (p-1)^{-1/p} \mathbb E_{\Omega_1} \|(x_iX_i)_{i=1}^n\|_p,$$ holds for all $x\in{\mathbb R}^n$. On the other hand, by Theorem \[thm:orlicz\], $$\mathbb E_{\Omega_1} \mathbb E_{\Omega_2} \max_{1\leq i \leq n}|x_i X_i\xi_i| \sim \|x\|_{M}$$ for all $x\in{\mathbb R}^n$, where $$M(s)=\int_0^s \int_{1/t \leq |X_1\xi|} |X_1\xi|\,{\,\mathrm{d}}\mathbb P {\,\mathrm{d}}t.$$ For $t>0$ and $\omega_1\in\Omega_1$ define $$I_{\omega_1} := \left\{\omega_2\in\Omega_2 \,:\, t |\xi(\omega_2) X_1(\omega_1)| \geq 1\right\}.$$ Now, we observe that $$\begin{aligned} M(s) & = \int_0^s \int_{\Omega_1} \int_{I_{\omega_1}} |X_1(\omega_1)\xi(\omega_2)|\,{\,\mathrm{d}}\mathbb P_2(\omega_2) {\,\mathrm{d}}\mathbb P_1(\omega_1) {\,\mathrm{d}}t \\ & = \int_0^s \int_{\Omega_1} |X_1(\omega_1)| \int_{I_{\omega_1}} |\xi(\omega_2)|\,{\,\mathrm{d}}\mathbb P_2(\omega_2) {\,\mathrm{d}}\mathbb P_1(\omega_1) {\,\mathrm{d}}t .\\\end{aligned}$$ Let us take a closer look at the inner integral. 
Fix $t>0$ and $\omega_1\in\Omega_1$ and recall that the density of $\xi$ is $$f_{\xi}(x) = px^{-p-1}\mathbbm 1_{[1,\infty)}(x).$$ Therefore, if $t|X_1(\omega_1)| \leq 1$, $$\int_{I_{\omega_1}} |\xi(\omega_2)|\,{\,\mathrm{d}}\mathbb P_2(\omega_2) = p \int_{\{z\,:\, zt|X_1(\omega_1)| \geq 1\}} z^{-p} {\,\mathrm{d}}z = \frac{p}{p-1} (t |X_1|)^{p-1}.$$ Now assume that $t|X_1(\omega_1)| \geq 1$. Then we get $$\int_{I_{\omega_1}} |\xi(\omega_2)|\,{\,\mathrm{d}}\mathbb P_2(\omega_2) = \mathbb E |\xi| = \frac{p}{p-1}.$$ Hence, by splitting the integral over $\Omega_1$, for fixed $t$ we have $$\begin{aligned} &&\int_{\Omega_1} \int_{I_{\omega_1}} |X_1(\omega_1)\xi(\omega_2)|\,{\,\mathrm{d}}\mathbb P_2(\omega_2) {\,\mathrm{d}}\mathbb P_1(\omega_1)\\ & = &\frac{p}{p-1} \int_{ |X_1| \leq 1/t} t^{p-1} \left| X_1 \right|^p{\,\mathrm{d}}\mathbb P_1(\omega_1) + \frac{p}{p-1} \int_{ |X_1| > 1/t}|X_1|{\,\mathrm{d}}\mathbb P_1(\omega_1).\end{aligned}$$ This implies the result. Note that by Fubini’s theorem, $$\begin{aligned} \int_0^s \int_{0}^{\frac{1}{t}} t^{p-1}|x|^p {\,\mathrm{d}}{\mathbb P}_{X_1}(x) {\,\mathrm{d}}t & = \int_{\frac{1}{s}}^\infty |x|^p\int_{0}^{|x|^{-1}} t^{p-1} {\,\mathrm{d}}t {\,\mathrm{d}}{\mathbb P}_{X_1}(x) \\ & = \frac{1}{p} {\mathbb P}\left( |X_1| \geq s^{-1} \right) \leq \frac{1}{p},\end{aligned}$$ and, hence, the limit case in Theorem \[thm:orlicz\_p\_norm\] for $p\to \infty$ coincides with Theorem \[thm:orlicz\]. 
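The comparison in Theorem \[THM\_lp\_normen\] can likewise be checked numerically; in the rough Monte Carlo sketch below (our own illustration, not part of the proof), the $\log\gamma_{1,p}$ variables are sampled by inversion as $U^{-1/p}$:

```python
import numpy as np

# Monte Carlo check of E max |x_i xi_i| ~ ||x||_p for xi_i log-gamma_{1,p}:
# P(xi >= x) = x^{-p} on [1, inf), so xi = U^{-1/p} with U uniform on (0,1).
p, n, trials = 3.0, 50, 4000
rng = np.random.RandomState(3)
xi = rng.rand(trials, n) ** (-1.0 / p)  # inverse-CDF sampling
x = np.ones(n)
emax = np.abs(x * xi).max(axis=1).mean()  # estimates E max_i |x_i xi_i|
lp = np.sum(np.abs(x) ** p) ** (1.0 / p)  # ||x||_p = n^(1/p)
ratio = emax / lp  # bounded above and below by constants depending on p
```

(For $p$ close to $1$ the maximum has a heavy tail and the plain Monte Carlo average converges slowly, consistent with the $(p-1)^{-1/p}$ factor in the theorem.)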
Observe also that Theorem \[thm:orlicz\_p\_norm\] provides a natural candidate for the probability density that appears in Theorem \[main\]: If the random variables $|X_1|,\dots,|X_n|$ have a density $f_X$, then $$M''(s) = ps^{p-2}\int_{0}^{s^{-1}} x^pf_X(x) {\,\mathrm{d}}x,$$ that is, $$\int_{0}^{s^{-1}} x^pf_X(x) {\,\mathrm{d}}x = \frac{1}{p}s^{2-p}M''(s).$$ Therefore, differentiating once again, $$f_X(s^{-1}) = \left( 1-\frac{2}{p}\right) s^3M''(s) - \frac{1}{p}s^4M'''(s).$$ In the following section we will prove Theorem \[main\] in the case $p=\infty$. We then reduce the case of general $p$ to the case $p=\infty$ in Section \[general\_p\]. The case of the $\ell_\infty$-norm {#sec:infinity} ================================== To obtain the case of $\ell_p$-norms it is enough to settle the question for the $\ell_\infty$-norm. We will give a short explanation of that fact: Assume that $N$ is an arbitrary Orlicz function and we know how to choose a distribution (depending on $N$) so that, if $\xi_1,\ldots,\xi_n$ are independent random variables distributed according to that law, then, for all $x=(x_i)_{i=1}^n\in{\mathbb R}^n$, $$\mathbb E \max_{1\leq i \leq n} \left| x_i \xi_i\right| \sim \| x\|_N.$$ Now, let $M$ be the normalized Orlicz function given in Theorem \[main\]. We want to find a distribution and independent random variables $X_1,\ldots,X_n$, defined on a measure space $(\Omega_1,{\mathbb P}_1)$ and distributed according to this law, such that $$\label{equ_ell_p_norm} \mathbb E_{\Omega_1} \| (x_i X_i)_{i=1}^n\|_p \sim_p \|x\|_M.$$ Of course, we can find a distribution and accordingly distributed independent random variables $Z_1,\ldots,Z_n$ so that $$\mathbb E \max_{1\leq i \leq n} \left| x_iZ_i \right| \sim \| x \|_M,$$ since we can just take $N=M$. 
On the other hand, observe that $$\mathbb E_{\Omega_1} \| (x_i X_i)_{i=1}^n\|_p \sim_p \mathbb E_{\Omega_1} \mathbb E_{\Omega_2} \max_{1\leq i \leq n} \left| x_iX_iY_i\right|,$$ where we get the distribution of the independent random variables $Y_1,\ldots,Y_n$, say on $(\Omega_2,{\mathbb P}_2)$, by choosing $N(t)=t^p$. So, for all $x=(x_i)_{i=1}^n \in{\mathbb R}^n$, $$\mathbb E \max_{1\leq i \leq n} \left| x_iZ_i \right| \sim_p \|x\|_M \sim \mathbb E_{\Omega_1} \mathbb E_{\Omega_2} \max_{1\leq i \leq n} \left| x_iX_iY_i\right|.$$ Therefore, to obtain (\[equ\_ell\_p\_norm\]), we just have to choose the distribution of $X_1,\ldots,X_n$ so that $X_1Y_1\stackrel{\mathcal D}{=} Z_1$. Of course, here the distribution of $Z$ and $Y$ is known. Before we continue, we observe that the transformation formula for integrals yields the following substitution rule for Stieltjes integrals: $$\label{eq:subst} \int_a^b f\circ u {\,\mathrm{d}}(F\circ u) = \int_{u(a)}^{u(b)} f {\,\mathrm{d}}F,$$ where $f$ is an arbitrary measurable function, $F$ is a non-decreasing function and $u$ is monotone on the interval $[a,b]$. The following result is the converse to Theorem \[thm:orlicz\]: \[PRO\_inverse\_maximum\] Let $M$ be a normalized Orlicz function with $M'(0)=0$. Let $X_1,\ldots,X_n$ be independent copies of a random variable $X$ with distribution $$\label{eq:distrX} \mathbb P(X\leq t)=\int_{[1/t,\infty)} s{\,\mathrm{d}}M'(s),\quad t> 0.$$ Then, for all $x=(x_i)_{i=1}^n\in \mathbb{R}^n$, $$c_1\|x\|_M\leq \mathbb{E}\max_{1\leq i\leq n}|x_iX_i|\leq c_2\|x\|_M,$$ where $c_1,c_2$ are constants independent of the Orlicz function $M$. We first observe that for an arbitrary random variable $X$ which is $\geq 0$ a.s., we have, by (\[eq:subst\]), $$F_X(t):=\mathbb P(X\leq t)=\int_{(0,t]} {\,\mathrm{d}}F_X(s)=-\int_{[1/t,\infty)} {\,\mathrm{d}}(F_X\circ u)(s),$$ where $u(s)=1/s$. 
If the distribution of $X$ is given by (\[eq:distrX\]), we obtain $${\,\mathrm{d}}(F_X\circ u)(s)=-s{\,\mathrm{d}}M'(s).$$ Now we obtain, again by (\[eq:subst\]) and this identity, $$\begin{aligned} \int_0^s \int_{[1/t,\infty)} x{\,\mathrm{d}}F_X(x){\,\mathrm{d}}t &= -\int_0^s \int_{(0,t]} \frac{1}{x} {\,\mathrm{d}}(F_X\circ u)(x){\,\mathrm{d}}t \\ &= \int_0^s \int_{(0,t]} {\,\mathrm{d}}M'(x){\,\mathrm{d}}t \\ &= M(s).\end{aligned}$$ The assertion of the theorem is now a consequence of Theorem \[thm:orlicz\]. The assumption that $M$ is normalized, i.e., $\int_0^\infty x\,{\,\mathrm{d}}M'(x)=1$, assures us that the constants do not depend on $M$. Note also that, as an immediate consequence of Proposition \[PRO\_inverse\_maximum\], by the integration by parts rule for Stieltjes integrals we obtain $$\label{equ_tail_distribution_function_of_X} \mathbb P\left( X > t\right) = \int_{0}^{\frac{1}{t}} s \, {\,\mathrm{d}}M'(s) = \frac{1}{t}M'\left(\frac{1}{t}\right) - M\left(\frac{1}{t}\right)$$ for any $t>0$. If $M$ is “sufficiently smooth”, we get that the density $f_X$ of $X$ is given by $$f_X(t)={t^{-3}}M''(t^{-1}).$$ To generate an $\ell_p$-norm in Proposition \[PRO\_inverse\_maximum\], i.e., to consider the case $M(t)=t^p$, one needs to pass to an equivalent Orlicz function so that the normalization condition is satisfied. The function $\widetilde M$ with $\widetilde M(t) = t^p$ on $[0, (p-1)^{-1/p}]$, which is then extended linearly, does the trick. The case of $\ell_p$-norms {#general_p} ========================== We will now prove the result which will then imply the main result, Theorem \[main\]. Of course, in the proposition we could also assume $M\in\mathcal C^3$, but $M\in\mathcal C^2$ so that $M''$ is absolutely continuous on each compact subinterval of $(0,\infty)$ is sufficient. \[thm:p\] Let $M\in\mathcal C^2(0,\infty)$ be a normalized Orlicz function and $M''$ be absolutely continuous on each compact subinterval of $(0,\infty)$. 
Assume that $M'(0)=0=M''(T)$ for $T=M^{-1}(1)$ and that $M|_{[T,\infty)}$ is linear. Let $1< p < \infty$ and $X, Y$ be two independent random variables distributed according to the laws $$\begin{aligned} {\mathbb P}(Y\geq y)&=\min (1,y^{-p})\quad\text{and } \\ {\mathbb P}(X\geq x)&=-M\Big(\frac{1}{x}\Big)+\frac{1}{x} M'\Big(\frac{1}{x}\Big)-\frac{1}{px^2}M''\Big(\frac{1}{x}\Big).\end{aligned}$$ Then the tail distribution function of $XY$ is $$\label{eq:probXY} {\mathbb P}(XY\geq z)=\frac{1}{z}M'\Big(\frac{1}{z}\Big)-M\Big(\frac{1}{z}\Big), \quad z>0.$$ First note that the density function of $X$ is given by $$\label{eq:densityXY} \begin{aligned} f_X(x)&=\Big(1-\frac{2}{p}\Big)\frac{1}{x^3}M''\Big(\frac{1}{x}\Big)-\frac{1}{px^4}M'''\Big(\frac{1}{x}\Big). \\ \end{aligned}$$ Inserting the expression for ${\mathbb P}(Y\geq y)$, we obtain $$\label{eq:prodxyintermed} \begin{aligned} {\mathbb P}(XY\geq z)&=\int \mathbbm{1}_{\{XY\geq z\}} {\,\mathrm{d}}{\mathbb P}=\int_0^\infty {\mathbb P}(Y\geq z/x) f_X(x){\,\mathrm{d}}x \\ &= \int_0^\infty \min(1,x^p/z^p)f_X(x){\,\mathrm{d}}x\\ &= {\mathbb P}(X\geq z)+z^{-p}\int_0^z x^p f_X(x){\,\mathrm{d}}x. \end{aligned}$$ Observe that, under the above assumptions and for $z\leq T^{-1}$, ${\mathbb P}(X\geq z)=1=z^{-1}M'(z^{-1})-M(z^{-1})$ and $f_X(z)=0$, since $\int_0^\infty x{\,\mathrm{d}}M'(x)=TM'(T)-M(T)=1$. This yields (\[eq:probXY\]) for $z\leq 1/T$. Thus we now assume $z>1/T$ and continue with calculating the integral $\int_0^z x^p f_X(x){\,\mathrm{d}}x$. We substitute $u=1/x$ and obtain $$\begin{aligned} \int_0^z x^p f_X(x){\,\mathrm{d}}x &= \int_{z^{-1}}^\infty u^{-p-2} f_X(u^{-1}){\,\mathrm{d}}u \\ &= \int_{z^{-1}}^T \Big(1-\frac{2}{p}\Big)u^{1-p}M''(u)-\frac{u^{2-p}}{p}M'''(u){\,\mathrm{d}}u.\end{aligned}$$ Integration by parts further yields $$\int_0^z x^p f_X(x){\,\mathrm{d}}x = -\frac{u^{2-p}}{p}M''(u)\Big|_{z^{-1}}^T=\frac{1}{p}z^{p-2} M''(z^{-1}),$$ since $M''(T)=0$. 
Combining equation (\[eq:prodxyintermed\]) with this result and the expression for the distribution of $X$, we obtain (\[eq:probXY\]) for $z>1/T$. Now we can finally prove our main theorem: Let $M$ be the given Orlicz function and $(X_i)_{i=1}^n$ the given random variables on a measure space $(\Omega_1,{\mathbb P}_1)$. First note that by Proposition \[PRO\_inverse\_maximum\] and the remark after it we get $$\label{eq:main1} \|x\|_M\sim {\mathbb E}\max_{1\leq i\leq n} |x_i Z_i|,$$ where ${\mathbb P}(Z\geq z)=z^{-1}M'(z^{-1})-M(z^{-1})$. Secondly, by Theorem \[THM\_lp\_normen\], $$\label{eq:main2} {\mathbb E}_{\Omega_1} \| (x_iX_i)_{i=1}^n \|_p \lesssim {\mathbb E}_{\Omega_1}{\mathbb E}_{\Omega_2} \max_{1\leq i\leq n} |x_iX_iY_i| \lesssim (p-1)^{-1/p} {\mathbb E}_{\Omega_1} \| (x_iX_i)_{i=1}^n \|_p $$ where the random variables $(Y_i)_{i=1}^n$, defined on $(\Omega_2,{\mathbb P}_2)$, are independent and $\log\gamma_{1,p}$-distributed. Since, by Proposition \[thm:p\], $X_1Y_1\stackrel{\mathcal D}{=} Z_1$, we combine (\[eq:main1\]) and (\[eq:main2\]) to obtain the assertion of the theorem. In the case $p=2$, we obtain the following corollary: \[cor\_p=2\] Let $M\in\mathcal{C}^3(0,\infty)$ be a normalized Orlicz function with $M'(0)=0$ and $M'''(x)\leq 0$ for all $x\geq 0$ and assume that $M''(M^{-1}(1))=0$. Then $$\label{equ_density_for_p=2} f_X(x)=-\frac{1}{2x^4} {M}'''\Big(\frac{1}{x}\Big)$$ is a probability density and for all $x\in{\mathbb R}^n$, $$c_1\|x\|_{M}\leq{\mathbb E}\|(x_i X_i)_{i=1}^n\|_2\leq c_2\|x\|_{M},$$ where $c_1,c_2$ are positive absolute constants and $X_1,\dots,X_n$ are iid with density $f_X$. Again, the normalization condition $\int_0^\infty y {\,\mathrm{d}}M'(y)=1$ assures that the constants do not depend on $M$ and, in fact, it is of the same form as the normalization condition in Theorem 2 from [@S]. Note also that in the proof of Proposition \[thm:p\] and its corollaries we need that $M''(T)=0$ for $T=M^{-1}(1)$. 
This, indeed, is no restriction, since Lemma \[lem:approx\] in Section \[SEC\_Appendix\] shows that for any $2$-concave Orlicz function we can assume that $M''(T)=0$; otherwise we pass to an equivalent Orlicz function which has this property. Recall also that every Orlicz function which satisfies $M'''\leq 0$ is already $2$-concave. The authors do not know whether, for an Orlicz function $M$, being $2$-concave is equivalent (up to equivalent Orlicz functions) to having a non-positive third derivative. Note that another proof of Corollary \[cor\_p=2\] via a Choquet-type representation theorem in the spirit of Lemma 7 in [@S] also yields the condition that the function $z\mapsto zM'\left(z\right) - M\left(z\right)$ has to be $2$-concave (or equivalently $M'''\leq 0$). Orlicz spaces that are isomorphic to subspaces of $L_1$ ======================================================= As we will see, it is an easy consequence of Corollary \[cor\_p=2\] that the sequence of Orlicz spaces $\ell_M^n$, $n\in{\mathbb N}$, where $t\mapsto tM'(t)-M(t)$ is $2$-concave, embeds uniformly into $L_1$. Although we need $t\mapsto tM'(t)-M(t)$ to be a $2$-concave function, which seems a bit stronger than assuming that $M$ is $2$-concave, the simplicity of the representation (\[equ\_density\_for\_p=2\]) of the density that we need in our embedding has a strong advantage over the representation in Theorem 2 in [@S], since it is much easier to handle. We obtain the following result: \[embedding\] Let $M$ be a normalized Orlicz function so that $M'(0)=0$ and $M'''\leq 0$. Then there exists a positive absolute constant $C$ (independent of $M$) such that for all $n\in{\mathbb N}$ there is a subspace $Y_n$ of $L_1$ with $\dim(Y_n)=n$ and $$d(\ell_M^n,Y_n) \leq C,$$ i.e., $(\ell_M^n)_n$ embeds uniformly into $L_1$. The proof is a simple consequence of Corollary \[cor\_p=2\], Khintchine’s inequality and Theorem \[thm\_measure\]. 
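Before the proof, here is a numerical illustration (ours) of the map $a\mapsto \sum_{i=1}^n a_i r_i X_i$ used in the proof below, for the concrete choice $M(t)=t^{3/2}$ (which satisfies $M'''\leq 0$, and for which $\|\cdot\|_M$ is exactly the $\ell_{3/2}$-norm). Plugging this $M$ into (\[equ\_density\_for\_p=2\]) gives $f_X(x)=\frac{3}{16}x^{-5/2}$ on $[1/4,\infty)$, i.e., the tail ${\mathbb P}(X>t)=\min(1,\frac18 t^{-3/2})$. Note that this $M$ is neither normalized nor satisfies $M''(M^{-1}(1))=0$, so the absolute constants of Corollary \[cor\_p=2\] do not literally apply; the Monte Carlo check below therefore only asserts a two-sided comparison with deliberately loose constants:

```python
import numpy as np
rng = np.random.default_rng(1)
n, reps = 30, 40000

def sample_X(size):
    # inverse transform for the tail P(X > t) = min(1, (1/8) t**(-3/2)),
    # which comes from f_X(x) = (3/16) x**(-5/2) on [1/4, inf)
    return (8.0 * rng.uniform(size=size)) ** (-2.0 / 3.0)

def psi_L1_norm(a, reps=reps):
    """Monte Carlo estimate of E| sum_i a_i eps_i X_i | ~ ||Psi_n(a)||_{L_1}."""
    eps = rng.choice([-1.0, 1.0], size=(reps, len(a)))   # Rademacher signs
    X = sample_X((reps, len(a)))
    return np.abs((a * eps * X).sum(axis=1)).mean()

for a in (np.eye(n)[0], np.ones(n), rng.uniform(0.2, 1.0, n)):
    ratio = psi_L1_norm(a) / np.linalg.norm(a, 1.5)      # ||a||_{3/2} = ||a||_M here
    assert 0.2 < ratio < 5.0   # two-sided comparability, loose constants
```

For a single coordinate $a=e_1$ the ratio is simply ${\mathbb E}X=3/4$; the heavy tail of $X$ makes the Monte Carlo average converge slowly, which is why only loose bounds are asserted.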
Given $n\in{\mathbb N}$, we let $\mu_1=\dots=\mu_n$ be the Rademacher distribution, that is, $$\mu_i(\{1\})=\mu_i(\{-1\})=1/2,\quad 1\leq i\leq n.$$ Additionally, we let $\mu_{n+1}=\dots = \mu_{2n}$ be the distribution of $X_i$ given in Corollary \[cor\_p=2\]. Then we apply Theorem \[thm\_measure\] to the finite sequence $(\mu_i)_{i=1}^{2n}$ of probability measures to get independent random variables $r_1,\dots,r_n,X_1,\dots,X_n$ defined on the unit interval $[0,1]$ such that the distribution of $r_i$ is $\mu_i$ and the distribution of $X_i$ is $\mu_{n+i}$ for all $1\leq i\leq n$. Then the asserted isomorphism is given by $$\Psi_n:\ell_M^n \to L_1[0,1], \quad a\mapsto \sum_{i=1}^n a_i r_i(\cdot) X_i(\cdot).$$ Thus, applying Khintchine’s inequality, for any $a=(a_i)_{i=1}^n\in{\mathbb R}^n$, $$\begin{aligned} \| \Psi_n(a) \|_{L_1} & = \int_0^1 \Big| \sum_{i=1}^n a_i r_i(t) X_i(t)\Big| {\,\mathrm{d}}t \\ & = \int_{{\mathbb R}^n} \int_{\{-1,1\}^n} \Big| \sum_{i=1}^n a_i \varepsilon_i x_i \Big| {\,\mathrm{d}}(\mu_1\otimes\dots\otimes\mu_n)(\varepsilon){\,\mathrm{d}}(\mu_{n+1}\otimes\cdots\otimes\mu_{2n})(x)\\ & \sim \int_{{\mathbb R}^n} \Big( \sum_{i=1}^n |a_i x_i|^2 \Big)^{1/2} {\,\mathrm{d}}(\mu_{n+1}\otimes\dots\otimes\mu_{2n})(x) \\ & = \int_{[0,1]} \Big( \sum_{i=1}^n |a_i X_i(t)|^2 \Big)^{1/2} {\,\mathrm{d}}t \\ & \sim \|a\|_M,\end{aligned}$$ where we used Corollary \[cor\_p=2\] in the last step. The general result ================== Following the ideas described in Section \[sec:infinity\], we now generalize our results to find an inequality of the form $$\frac{1}{C} \|x\|_M \leq \mathbb E \|(x_iX_i)_{i=1}^n\|_N \leq C \|x\|_M$$ for a general Orlicz function $N$. 
For each normalized Orlicz function $L$, we write $$\overline{F}_L(t)=\int_{0}^{1/t}s {\,\mathrm{d}}L'(s) = \frac{1}{t}L'\left(\frac{1}{t}\right) - L\left(\frac{1}{t}\right)$$ and call this function the tail distribution function associated to $L$, motivated by Proposition \[PRO\_inverse\_maximum\] and equation (\[equ\_tail\_distribution\_function\_of\_X\]). \[thm:general\] Let $M,N$ be normalized Orlicz functions with $M'(0)=N'(0)=0$. (i) If there exists a probability measure $\mu$ on $(0,\infty)$ such that $$\label{eq:mult_convolution} \overline F_M(t)=\int_{(0,\infty)} \overline F_N(t/x) {\,\mathrm{d}}\mu(x),$$ then, for all $x=(x_i)_{i=1}^n\in \mathbb{R}^n$, $$c_1\|x\|_M\leq \mathbb{E}\|(x_iX_i)_{i=1}^n\|_N\leq c_2\|x\|_M,$$ where $c_1,c_2$ are positive absolute constants and $X_1,\dots,X_n$ are iid random variables with distribution $\mu$. (ii) If there exist iid random variables $X_1,\dots,X_n$ with distribution $\mu$ on $(0,\infty)$ such that $$c_1\|x\|_M\leq \mathbb{E}\|(x_iX_i)_{i=1}^n\|_N\leq c_2\|x\|_M,$$ where $c_1,c_2$ are positive absolute constants, then there exists an Orlicz function $\widetilde{M}$ equivalent to $M$ such that $$\overline F_{\widetilde M}(t)=\int_{(0,\infty)} \overline F_N(t/x) {\,\mathrm{d}}\mu(x).$$ (i): Note that condition (\[eq:mult\_convolution\]) guarantees that we can follow the line of argument in the proof of Theorem \[main\]. Indeed, we choose independent sequences of iid random variables $(Z_1,\dots,Z_n)$ defined on $(\Omega_1,{\mathbb P}_1)$ and $(Y_1,\dots,Y_n)$ defined on $(\Omega_2,{\mathbb P}_2)$ with tail distribution functions $\overline F_M$ and $\overline F_N$, respectively. By Proposition \[PRO\_inverse\_maximum\] we have $$\|x\|_M\sim {\mathbb E}_{\Omega_1} \max_{1\leq i\leq n} |x_i Z_i| \quad \textrm{and} \quad\|x\|_N\sim {\mathbb E}_{\Omega_2} \max_{1\leq i\leq n} |x_i Y_i|$$ for all $(x_i)_{i=1}^n\in{\mathbb R}^n$. 
By (\[eq:mult\_convolution\]), $X_1Y_1\stackrel{\mathcal D}{=} Z_1$, since for all $t>0$ $$\label{eq:productdistribution} \begin{aligned} {\mathbb P}(Z_1> t)&=\overline F_M(t)=\int_{(0,\infty)} \overline F_N(t/x){\,\mathrm{d}}\mu(x) \\ &=\int_{(0,\infty)}{\mathbb P}(xY_1> t){\,\mathrm{d}}\mu(x)={\mathbb P}(X_1Y_1> t). \end{aligned}$$ Therefore, $$\begin{aligned} \|x\|_M &\sim {\mathbb E}_{\Omega_1} \max_{1\leq i\leq n} |x_i Z_i|={\mathbb E}_{\Omega}{\mathbb E}_{\Omega_2} \max_{1\leq i\leq n} |x_i X_iY_i|\\ &=\int_{\Omega}{\mathbb E}_{\Omega_2} \max_{1\leq i\leq n} |x_i X_i(\omega)Y_i|{\,\mathrm{d}}{\mathbb P}(\omega)\\ &\sim \int_{\Omega}\|(x_i X_i(\omega))_{i=1}^n\|_N{\,\mathrm{d}}{\mathbb P}(\omega) \\ &={\mathbb E}_{\Omega}\|(x_i X_i)_{i=1}^n\|_N.\end{aligned}$$ (ii): Assume that $$\begin{aligned} {\mathbb E}\|(x_i X_i)_{i=1}^n\|_N \sim \|x\|_M\end{aligned}$$ for iid random variables $X_1,\dots,X_n$ with distribution $\mu$. Define the tail distribution function $\overline{F}$ by $$\overline F(t)=\int_{(0,\infty)} \overline F_N(t/x) {\,\mathrm{d}}\mu(x)$$ and choose a sequence of iid random variables $(Z_1,\dots,Z_n)$ defined on $(\Omega_1,{\mathbb P}_1)$ with tail distribution function $\overline F$ and a sequence $(Y_1,\dots,Y_n)$, independent of $(X_1,\dots,X_n)$, defined on $(\Omega_2,{\mathbb P}_2)$ with tail distribution function $\overline F_N$. By construction, $Z_i$ has the same distribution as $X_iY_i$, $i=1,\dots,n$. 
Now define the Orlicz function $\widetilde{M}$ by $$\widetilde M(s)=\int_0^s \int_{1/t\leq |Z_1|} |Z_1|\,{\,\mathrm{d}}\mathbb P_1 {\,\mathrm{d}}t.$$ By Theorem \[thm:orlicz\], $\|x\|_{\widetilde M}\sim {\mathbb E}_{\Omega_1} \max_{1\leq i\leq n} |x_i Z_i|$ and, therefore, we obtain $$\begin{aligned} \|x\|_M&\sim {\mathbb E}_{\Omega}\|(x_i X_i)_{i=1}^n\|_N= \int_{\Omega}\|(x_i X_i(\omega))_{i=1}^n\|_N{\,\mathrm{d}}{\mathbb P}(\omega)\\ &\sim\int_{\Omega}{\mathbb E}_{\Omega_2} \max_{1\leq i\leq n} |x_i X_i(\omega)Y_i|{\,\mathrm{d}}{\mathbb P}(\omega)={\mathbb E}_{\Omega}{\mathbb E}_{\Omega_2} \max_{1\leq i\leq n} |x_i X_iY_i|\\ &={\mathbb E}_{\Omega_1} \max_{1\leq i\leq n} |x_i Z_i|\sim \|x\|_{\widetilde M}.\end{aligned}$$ Thus, $M$ and $\widetilde M$ are equivalent [@LT1 Proposition 4.a.5]. Condition (\[eq:mult\_convolution\]) seems hard to check for general Orlicz functions $M$ and $N$. However, in the special case that we have $N(t)=t^2$ on $[0,1]$, which is then extended linearly, condition (\[eq:mult\_convolution\]) is equivalent to the positivity of the function $f_X$ in (\[equ\_density\_for\_p=2\]). Indeed, $$\overline F_M(t)=\int_{(0,\infty)} \overline F_N(t/x){\,\mathrm{d}}\mu(x)=\int_{(0,\infty)} \min(1,x^2/t^2){\,\mathrm{d}}\mu(x).$$ Note that $$\int_{(0,\infty)} \min(1,x^2z^2){\,\mathrm{d}}\mu(x)=\overline F_M\left(1/z\right)=zM'\left(z\right) - M\left(z\right)$$ is obviously a $2$-concave function in $z$ as an average over such functions, in correspondence with the discussion before. 
On the other hand, Corollary \[cor\_p=2\] can be restated in the following form that shows that the converse is also true: if $z\mapsto zM'\left(z\right) - M\left(z\right)$ is $2$-concave under the conditions stated in Corollary \[cor\_p=2\], the tail distribution function $\overline F_M$ has a representation of the form (\[eq:mult\_convolution\]) and the distribution $\mu$ is explicitly given by the density $$f(x)=-\frac{1}{2x^4} {M}'''\Big(\frac{1}{x}\Big).$$ Appendix {#SEC_Appendix} ======== We provide some approximation results for Orlicz functions that we need in this paper and which might be interesting in further applications. Let $M\in \mathcal{C}^2(0,\infty)$ be an Orlicz function with $M'(0)=0$ and such that $M''$ is decreasing. Then $M$ is $2$-concave. Recall that $M$ is $2$-concave if and only if $xM''(x) \leq M'(x)$. For all $\varepsilon\in(0,x)$, there exists $\xi_\varepsilon\in (\varepsilon,x)$ such that $$M'(x)=M'(\varepsilon)+(x-\varepsilon)M''(\xi_\varepsilon).$$ Since $M''$ is decreasing, we get $$M'(x)\geq M'(\varepsilon)+(x-\varepsilon)M''(x),$$ and so, for $\varepsilon\rightarrow 0$, $M'(x)\geq xM''(x)$, which means that $M$ is $2$-concave. \[lem:approx\] Let $M\in \mathcal{C}^2(0,M^{-1}(1))$ be an Orlicz function that is linear to the right of $T:=M^{-1}(1)$. Then, for all constants $c>1$, there exists an Orlicz function $N$ such that 1. $N''(T) = 0$ 2. \[it:two\] $N(t) \leq M(t) \leq c N(t)$ for all $t\in[0,\infty)$. Additionally, if $M''$ is decreasing, we can choose $N$ such that $N''$ is decreasing. We let $\delta\in(0,1)$ and define $N$ as follows: We set $N(t)=M(t)$ for all $t\leq T(1-\delta)$ and we extend $N$ to $[T(1-\delta),T]$ such that $N''$ is smooth, decreasing, $N''(t)\leq M''(t)$ for $t\in [0,T)$ and $N''(T)=0$. For $t>T$, we define $N$ linearly with the same slope as $M$. We have to show property \[it:two\]. The inequality $N(t)\leq M(t)$ follows from the construction for all $t\in[0,\infty)$. 
The second inequality is trivial for $t\leq T(1-\delta)$ since for such $t$, $M(t)=N(t)$. Next, we explore the case $t\in [T(1-\delta),T]$. If we choose $t$ in this interval, by the above definition of $N$, $$\begin{aligned} 0 &\leq M(t)-N(t) \\ &= \int_{T(1-\delta)}^t \int_{T(1-\delta)}^s \big(M''(x)-N''(x)\big){\,\mathrm{d}}x{\,\mathrm{d}}s \\ &\leq T\delta^2 \max_{x\in[T(1-\delta),T]} \big(M''(x)-N''(x)\big) \\ &\leq T\delta^2 \max_{x\in[T(1-\delta),T]} M''(x).\end{aligned}$$ Now we choose $\delta$ such that $T\delta^2 \max_{x\in[T(1-\delta),T]} M''(x)\leq (c-1)M(T(1-\delta))$. This is possible, since $\max_{x\in[T(1-\delta),T]} M''(x)$ is an increasing function of $\delta$ and $M(T(1-\delta))$ is a decreasing function of $\delta$. Then we obtain for $t\in[T(1-\delta),T]$ $$\begin{aligned} M(t) &= N(t)+M(t)-N(t) \\ &\leq N(t)+(c-1)M(T(1-\delta)) \\ &= N(t)+(c-1)N(T(1-\delta)) \\ &\leq cN(t).\end{aligned}$$ This is property \[it:two\] for $t\in [T(1-\delta),T]$. Since for $t\geq T$, the difference $M(t)-N(t)$ is constant by definition of $N$, and the two Orlicz functions $M$ and $N$ are both increasing, the inequality $M(t)\leq cN(t)$ also holds for $t\geq T$ by the following simple calculation: $$\begin{aligned} M(t) &= N(t)+M(t)-N(t) \\ &= N(t)+M(T)-N(T) \\ &\leq N(t)+(c-1)N(T) \\ &\leq cN(t).\end{aligned}$$ This completes the proof. Figure \[figure1\] illustrates the choice of the equivalent Orlicz function in the proof of Lemma \[lem:approx\] which has the desired properties. (Figure \[figure1\]: sketch of the Orlicz function $M$ and the equivalent Orlicz function $N$, with the points $T(1-\delta)$, $T=T_M=M^{-1}(1)$ and $T_N=N^{-1}(1)$ marked on the horizontal axis and the level $1$ on the vertical axis.) Let $M$ and $N$ be as in Lemma \[lem:approx\]. 
In order to apply this lemma to Proposition \[thm:p\], we have to pass once again to an equivalent Orlicz function $\widetilde{N}$, a multiple of the function $N$ constructed in Lemma \[lem:approx\] (see Figure \[figure1\]), to assure $M^{-1}(1)=\widetilde{N}^{-1}(1)$ and, hence, that the function $\widetilde{N}$ is “smooth” up to the point $\widetilde{N}^{-1}(1)$. The last named author would like to thank Gideon Schechtman and Carsten Schütt for helpful discussions. [^1]: The first author is partially supported by MICINN project MTM2010-16679, MICINN-FEDER project MTM2009-10418 and “Programa de Ayudas a Grupos de Excelencia de la Región de Murcia”, Fundación Séneca, 04540/GERM/06. The third and fourth author are supported by the Austrian Science Fund, FWF project P23987 “Projection operators in Analysis and geometry of classical Banach spaces”.
--- abstract: | We study an optimal control problem in which both the objective function and the dynamic constraint contain an uncertain parameter. Since the distribution of this uncertain parameter is not exactly known, the objective function is taken as the worst-case expectation over a set of possible distributions of the uncertain parameter. This ambiguity set of distributions is, in turn, defined by the first two moments of the random variables involved. The optimal control is found by minimizing the worst-case expectation over all possible distributions in this set. If the distributions are discrete, the stochastic min-max optimal control problem can be converted into a conventional optimal control problem via duality, which is then approximated as a finite-dimensional optimization problem via the control parametrization. We derive necessary conditions of optimality and propose an algorithm to solve the resulting approximate optimization problem. The results for discrete probability distributions are then extended to the case of a one-dimensional continuous stochastic variable by applying the control parametrization methodology to the continuous stochastic variable, and the convergence results are derived. A numerical example is presented to illustrate the potential application of the proposed model and the effectiveness of the algorithm. [**AMS Subject Classification**]{} [   34H05 $\cdot$ 49M25 $\cdot$ 49M37 $\cdot$ 93C41 ]{} author: - Jianxiong Ye - Lei Wang - Changzhi Wu - Jie Sun - Kok Lay Teo - Xiangyu Wang date: 'Received: date / Accepted: date' title: 'A robust optimal control problem with moment constraints on distribution: theoretical analysis and an algorithm' --- Introduction {#intro} ============ Ideas to immunize optimization problems against perturbations in model parameters arose as early as the 1970s. 
A worst-case model for linear optimization such that constraints are satisfied under all possible perturbations of the model parameters was proposed in [@Soyster]. A common approach to solving this type of model is to transform the original uncertain optimization problem into a deterministic convex program. As a result, each feasible solution of the new program is feasible for all allowable realizations of the model parameters; the corresponding solution therefore tends to be rather conservative, and in many cases the transformed program may even be infeasible. For a detailed survey, see the recent monograph [@bental]. For traditional stochastic programming approaches, uncertainties are modeled as random variables with known distributions. In very few cases, analytic solutions are obtained (see, e.g., Birge and Louveaux [@ISP], Ruszczynski and Shapiro [@SP]). These approaches may not be always applicable in practice, as the exact distributions of the random variables are usually unknown. In the framework of robust optimization, uncertainties are usually modeled by uncertainty sets, which specify certain ranges for the random variables. The worst-case approach is used to handle the uncertainty. It is often computationally advantageous to use the “robust" formulation of the problem. However, the use of uncertainty sets as the possible supporting sets for the random variables is restrictive in practice; it leads to relatively conservative solutions. The recently developed “distributionally robust" optimization approach combines the philosophies of traditional stochastic and robust optimization – this approach does not assume uncertainty sets, but keeps using the worst-case methodology. Instead of requiring the shape and size of the support sets for the random variables, it assumes that the distributions of the random variables satisfy a set of constraints, often defined in terms of moments and supports. 
Since the first two moments can usually be estimated via statistical tools, the distributionally robust model appears to be more applicable in practice. Furthermore, since it takes the worst-case expected cost, it inherits computational advantages from robust optimization. Due to these advantages, distributionally robust optimization has attracted more and more attention in the operations research community [@Goh14; @Sim; @Sim07; @Sim09; @El; @Ye; @Zymler; @Mehrotra]. In this paper, we propose a novel optimal control model with an uncertain parameter whose exact distribution is unknown. However, it is assumed that the mean and the standard deviation of the uncertain parameter are known. The optimal control is found by minimizing the worst-case expectation with respect to all distributions in an “ambiguity set". Both the problem with a discrete probability distribution and that with a continuous probability distribution will be discussed. We first consider the case of a discrete probability distribution, in which the min-max optimal control problem is transformed into an equivalent finite dimensional minimization problem via duality. Then the necessary conditions of optimality are derived. The results for the case of a discrete probability distribution are then extended to the case of a one-dimensional continuous stochastic variable. The control parametrization methodology is applied to parameterise the continuous stochastic variable. Finally, an example is solved, showing the potential application of the proposed optimal control framework and the effectiveness of the algorithm. [Problem statement]{} {#sec:1} ===================== For simplicity, we only discuss optimal control of dynamical systems with a single uncertain parameter. However, the results can be directly extended to cases involving multiple independent uncertain parameters. 
To begin with, consider a system of ordinary differential equations with an uncertain parameter as follows: $$\begin{aligned} \label{s1} \left\{\begin{array}{ccc} \dot{x}(t)=&f(x,u,p)\\ x(0)=&x^0~~~~~~~~ \end{array} \right.\end{aligned}$$ where $x\in R^{n_x}$ is the state vector, $u\in R^{n_u}$ is the control vector function, and $p\in R$ is an uncertain parameter. In general, the parameter $p$ is regarded as uniquely determined. In reality, however, this hypothesis often does not hold, since the parameter $p$ is uncertain and subject to variability. The only reliable information is that the value of the parameter falls within a certain range and that its potential values follow some statistical distribution. Our interest focuses on the following distributionally robust optimal control problem. $$\begin{aligned} (\mbox{DROCP}):&&\inf_{u} \sup_{F}\,\,\,J(u,F)\triangleq\mathbb{E}_{F} h(x(t_f;u,p)) \nonumber \\ && {\rm s.t.}\,\,\,\dot{x}(t)=f(x,u,p), \ \ \ \ t\in [0,t_f],\ \ x(0)=x^0,\label{DR-SC} \\&&\ \ \ \ \ \ p\sim F\in \mathcal{F}(\mu,\sigma^2)=\{F:\mathbb{E}_{F}(p)=\mu, \mathbb{E}_{F}(p-\mu)^2=\sigma^2\} ,\label{DR-FC} %&&\ \ \ \ \ \ \mathbb{E}_{F}c(x(t_f;u,p))=0,\label{DR-TC} \\ &&\ \ \ \ \ \ u(t)\in U\subset R^{n_u}.\label{DR-UC}\end{aligned}$$ Here, $U$ is a compact and convex subset of $R^{n_u}$. The difference between Problem (DROCP) and the standard optimal control problem is that the parameter $p$ herein is considered as a stochastic variable with distribution $F$. The distribution $F$, however, is not exactly known. The only available knowledge is the mean and the standard deviation of the distribution $F$; they are denoted by $\mu$ and $\sigma$, respectively. The set of all such distributions is denoted by $\mathcal{F}(\mu,\sigma^2)$. Any measurable function defined on $[0,t_f]$ with values in $U$ is called an admissible control. Let $\mathcal{U}$ be the class of all such admissible controls. 
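A toy computation (ours, not part of the problem data above) illustrates why the worst case over $\mathcal{F}(\mu,\sigma^2)$ matters: for the scalar system $\dot{x}=px$, $x(0)=1$, with $h(x)=x$ and $t_f=1$, two distributions of $p$ with identical first two moments produce different expected terminal costs, so the first two moments alone do not pin down $\mathbb{E}_{F} h(x(t_f;u,p))$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def terminal_cost(p, tf=1.0):
    # dx/dt = p * x, x(0) = 1; h(x) = x, so h(x(tf)) = exp(p * tf)
    sol = solve_ivp(lambda t, x: p * x[0], (0.0, tf), [1.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Two discrete distributions of p with the same mean 0 and variance 1:
dist_A = ([-1.0, 1.0], [0.5, 0.5])
dist_B = ([-2.0, 0.0, 2.0], [0.125, 0.75, 0.125])

for vals, probs in (dist_A, dist_B):
    assert abs(np.dot(probs, vals)) < 1e-12                    # mean 0
    assert abs(np.dot(probs, np.square(vals)) - 1.0) < 1e-12   # variance 1

EA = sum(q * terminal_cost(p) for p, q in zip(*dist_A))  # cosh(1), approx 1.543
EB = sum(q * terminal_cost(p) for p, q in zip(*dist_B))  # approx 1.690
assert EB > EA + 0.1   # same moments, different expected costs
```

The worst-case formulation in (DROCP) guards against precisely this ambiguity.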
It is sufficient to discuss the objective function in Mayer form, because the problems in Bolza or Lagrange form can be transformed into this form by introducing a new variable. See, e.g., [@Bolty] for a detailed description. Throughout this paper, we make the following assumptions. - The functions $f:R^{n_x}\times U\times R\rightarrow R^{n_x}$ and $h:R^{n_x}\rightarrow R$ are at least continuously differentiable with respect to all their arguments.\ - For each fixed $p\in R$, there exist positive constants $L$ and $C$ such that the following inequality holds $$\|f(x,u,p)\|\leq L\|x\|+C,\ \ \forall x\in R^{n_x} \mbox{ and } u\in \mathcal{U}.$$ From classical differential equation theory (see, for example, Proposition 5.6.5 in [@Polark]), we recall that the system (\[s1\]) admits a unique solution, $x(t;u,p)$, corresponding to each $u\in\mathcal{U}$ and $p\sim F\in\mathcal{F}(\mu,\sigma^2)$. Problem (DROCP) can be roughly stated as: Find a control $u\in\mathcal{U}$ such that the worst-case expectation over all feasible distributions is minimized over $\mathcal{U}$. Obviously, Problem (DROCP) is a min-max optimal control problem. Distributionally robust optimal control problem with discrete distribution ========================================================================== In this section, we focus on the case of discrete distributions. In this case, we will reformulate the distributionally robust optimal control as an equivalent combined optimal control and optimal parameter selection problem by using a dual transformation. We will then develop an algorithm to solve the resulting problem based on the parametric sensitivity functions and the control parametrization method. Problem reformulation and optimality conditions ----------------------------------------------- Let $p^i$ be a possible value of the parameter $p$, and let $q_i$ be the corresponding probability, i.e., $\mathbb{P}(p=p^i)=q_i$, $i=1,2,\cdots,m$. 
We first investigate the inner $\sup$-optimization problem, in which the value of $u$ is fixed. In this context, there are $m$ possible system trajectories due to $m$ different values of the parameter $p$. Let $x^i(t;u,p^i)$ be the trajectory of system (\[s1\]) with $p=p^i$. When there is no confusion, $x^i(t;u,p^i)$ is written as $x^i$. Each possible trajectory yields a corresponding system cost $h(x^i(t_f;u,p^i))$. The inner subproblem is to evaluate the worst-case expectation over all possible distributions, which is given as follows: $$\begin{aligned} (\mbox{ISP})&&\sup_{q_i} \ \ \ \ \ \sum_{i=1}^m q_i h(x^i(t_f;u,p^i))\\ &&\mbox{s.t.} \ \ \ \ \ \sum_{i=1}^m q_i=1,\\ &&\ \ \ \ \ \ \ \ \ \sum_{i=1}^m q_ip^i=\mu,\\ &&\ \ \ \ \ \ \ \ \ \sum_{i=1}^m q_i{(p^i)}^2=\mu^2+\sigma^2,\\ &&\ \ \ \ \ \ \ \ \ q_i\geq0, \ \ \ i=1,...,m.\end{aligned}$$ Note that the only variables to be optimized in the above inner subproblem are $q_i$, $i=1,2,\cdots,m$. Hence, the constraints (\[DR-SC\]) and (\[DR-UC\]) are not present in Problem (ISP). In addition, Problem (ISP) is a linear program, and its dual is given as follows: $$\begin{aligned} (\mbox{Dual-ISP})&&\inf_{y} \ \ \ \ \ \ y^{\top}b\\ &&\mbox{s.t.} \ \ \ \ y^{\top}a^i\geq h(x^i(t_f;u,p^i)),~~i=1,2,\cdots,m.\end{aligned}$$ where $$\begin{aligned} &b:=[1,\mu,\mu^2+\sigma^2]^{\top}, \qquad y:=[y_{1},y_2,y_3]^{\top},\nonumber\\ &a^i:=[1,p^i,{(p^i)}^2]^{\top}, \qquad i=1,2,\cdots,m.\nonumber\end{aligned}$$ There is no duality gap between the inner subproblem and its dual problem, since the feasible set of Problem (ISP) is nonempty and bounded. 
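The pair (ISP)/(Dual-ISP) and the absence of a duality gap can be checked directly with an off-the-shelf LP solver. A small sketch (the scenario values $p^i$, the costs $h_i$ and the moment data below are our toy choices, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# scenario values of p, their terminal costs h_i, and the moment data
pvals = np.array([-1.0, 0.0, 1.0, 2.0])
h = np.array([1.2, 0.3, 0.8, 2.0])
mu, sigma2 = 0.5, 0.75
b = np.array([1.0, mu, mu**2 + sigma2])
A = np.vstack([np.ones_like(pvals), pvals, pvals**2])  # columns are a^i

# primal (ISP): maximize h^T q  <=>  minimize -h^T q over the moment set
primal = linprog(-h, A_eq=A, b_eq=b, bounds=[(0, None)] * len(pvals))
# dual (Dual-ISP): minimize b^T y subject to A^T y >= h, y free
dual = linprog(b, A_ub=-A.T, b_ub=-h, bounds=[(None, None)] * 3)

assert primal.status == 0 and dual.status == 0
assert abs(-primal.fun - dual.fun) < 1e-6   # no duality gap
```

The primal feasible set here is nonempty (e.g. $q=(0,0.75,0,0.25)$ matches both moments) and bounded, which is exactly the condition cited above for strong duality.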
Thus, the original Problem (DROCP) is equivalent to the following problem: $$\begin{aligned} (\mbox{Dual-DROCP}):&&\inf_{u,y}\,\,\, y^{\top}b \label{D1}\\ && {\rm s.t.}\,\,\,y^{\top}a^i\geq h(x^i(t_f;u,p^i)),\ \ i=1,2,\cdots,m, \label{D2}\\ &&\ \ \ \ \ \ \dot{x}^i(t)=f(x^i,u,p^i), \ \ \ \ t\in [0,t_f],\ i=1,2,\cdots,m,\ \ x^i(0)=x^0,\label{D3}\\ &&\ \ \ \ \ \ u\in\mathcal{U}.\end{aligned}$$ Problem (Dual-DROCP) can be regarded as a combined optimal control and optimal parameter selection problem, where $u$ is the control function and $y$ is a parameter vector to be optimized. Let $h_i:=h(x^i(t_f;u,p^i))$ and $f^i:=f(x^i,u,p^i)$. Adjoining system (\[D3\]) and the scalar inequality constraints (\[D2\]) to the cost function $y^{\top}b$ with multiplier functions $\lambda(t):=[\lambda_{i,j}(t)]_{m\times n_{x}}$ and multiplier vector $\theta:=[\theta_1,\theta_2,\cdots,\theta_m]^{\top}$ yields the Lagrangian of Problem (Dual-DROCP) as given below. $$\begin{aligned} \mathcal{L}(u,y)&=y^{\top}b-\sum\limits_{i=1}^m\theta_i(y^{\top}a^i-h(x^i(t_f;u,p^i)))-\sum\limits_{i=1}^{m}\int_{0}^{t_f}\lambda^i(t)[\dot{x}^i-f^i]dt\nonumber\\ &=y^{\top}b-\sum\limits_{i=1}^m\theta_i[y^{\top}a^i-h(x^i(t_f;u,p^i))]+\sum\limits_{i=1}^{m}\int_{0}^{t_f}\lambda^i(t)f^idt\nonumber\\ &~-\sum\limits_{i=1}^{m}\lambda^i(t_f)x^i(t_f)+\sum\limits_{i=1}^m\lambda^i(0)x^i(0)+\sum\limits_{i=1}^m\int_{0}^{t_f}\dot{\lambda}^i(t)x^i(t)dt,\nonumber\end{aligned}$$ where $\lambda^i(t):=[\lambda_{i,1}(t),\lambda_{i,2}(t),\cdots,\lambda_{i,n_{x}}(t)]$. Let $\tilde{y}=y+\epsilon\delta y$ and $\tilde{u}=u+\epsilon\delta u$. Then $$\begin{aligned} \triangle\mathcal{L}&=&\mathcal{L}(u+\epsilon \delta u,y+\epsilon \delta y)-\mathcal{L}(u,y)\\ &=&\epsilon\delta y^{\top}(b-\sum\limits_{i=1}^m\theta_i a^i)+\epsilon\sum\limits_{i=1}^m\theta_i\frac{\partial h_i}{\partial x^i}\delta x^i(t_f)+\epsilon\int_{0}^{t_f}\Big[\sum\limits_{i=1}^m\lambda^i(t)\Big(\frac{\partial f^i}{\partial x^i}\delta x^i(t)+\frac{\partial f^i}{\partial u}\delta u\Big)\Big]dt\\ &-&\epsilon\sum\limits_{i=1}^m\lambda^i(t_f)\delta x^i(t_f)+\epsilon\int_{0}^{t_f}\sum\limits_{i=1}^m\dot{\lambda}^i(t)\delta x^i(t)dt+ o(\epsilon).\end{aligned}$$ Based on the fundamental variational principle [@APPL], the necessary optimality conditions of Problem (Dual-DROCP) are given in the following theorem. \[opti\] Consider Problem (Dual-DROCP). Suppose that $u^*(t)\in U$ is an optimal control and $x^*(t)$ is the corresponding state. Then there exist costate functions $\lambda^i(t)=[\lambda_{i,1}(t),\lambda_{i,2}(t),\cdots,\lambda_{i,n_{x}}(t)]$, $i=1,2,\cdots,m$, and a multiplier vector $\theta=[\theta_1,\theta_2,\cdots,\theta_m]^{\top}$ with $\theta_i\geq 0$, $i=1,2,\cdots,m$, such that - $b-\sum\limits_{i=1}^m\theta_i a^i=0$; - $\dot{\lambda}^i(t)=-\lambda^i(t)\displaystyle\frac{\partial f^i}{\partial x^i}$ and the terminal condition $\lambda^i(t_f)=\theta_i\displaystyle\frac{\partial h_i}{\partial x^i}$, $\ i=1,2\cdots,m$; - $\sum\limits_{i=1}^m\lambda^i(t)\displaystyle\frac{\partial f^i}{\partial u}=0$; - $\theta_i\cdot(y^{\top}a^i-h(x^i(t_f;u,p^i)))=0$, $i=1,2,\cdots,m$. Note that the first condition states that $\theta$ satisfies the three moment constraints of Problem (ISP); in other words, the multiplier vector $\theta$ is itself a feasible distribution for the inner subproblem. The optimization algorithm -------------------------- Assume that $(u^*(t),y^*)$ is a solution of Problem (Dual-DROCP). Clearly, the optimal control function, $u^*(t)$, for Problem (Dual-DROCP) is also the optimal solution of Problem (DROCP). Then, the optimal distribution, $q^*$, can be obtained by solving Problem (ISP) with $u=u^*(t)$. The algorithm framework for the solution of Problem (DROCP) is presented as follows. - Step 1.
Solve Problem (Dual-DROCP) and denote the solution by $(u^*(t),y^*)$; - Step 2. For each possible parameter $p^i$, $i=1,2,\cdots,m$, compute the optimal trajectories, $x^i(t;u^*,p^i)$, and the corresponding costs $h(x^i(t_f;u^*,p^i))$; - Step 3. Compute the optimal solution $q^*$ of Problem (ISP) by using a linear programming solver. Note that the most robust optimal control $u^*(t)$ is obtained by solving Problem (Dual-DROCP) alone. For many practical problems, the corresponding “worst” distribution $q^*$ is also of interest: with it, we can estimate the distribution of the performance under the most robust optimal control and the corresponding distribution of the uncertain parameter. Problem (DROCP) is therefore solved completely by further carrying out Steps 2 and 3 of the above algorithm framework. Steps 2 and 3 can be carried out readily; thus, the remaining issue is how to solve Problem (Dual-DROCP). ### Control parametrization Let $0=t_0<t_1<t_2<\cdots<t_n=t_f$ be the partition grids of the time horizon $[0,t_f]$. Under the control parametrization framework, the control function $u(t)$ is approximated by a piecewise constant or piecewise linear function, where the heights of these approximating functions are decision variables. In fact, the control function can be approximated by a linear combination of any appropriate set of basis functions. Thus, Problem (Dual-DROCP) is approximated by a finite-dimensional optimization problem, where the coefficients of the basis functions are regarded as decision variables. In this paper, the control is approximated by a piecewise constant function of the following form: $$\begin{aligned} \label{uv} u^\iota(t)=\sum\limits_{k=1}^n v^{k}\chi_{I_{k}}(t), \ \ t\in[0,t_f],\end{aligned}$$ where $v^k=[v^k_1,v^k_2,\cdots,v^k_{n_u}]^{\top}\in U$, $I_k=[t_{k-1},t_k)$, $k=1,2,\cdots,n$, and $\chi_I$ denotes the characteristic function of $I$.
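The parametrization (\[uv\]) can be sketched in a few lines; the helper below is illustrative only (the function name and the use of a lookup via `searchsorted` are our choices, not from the paper):

```python
import numpy as np

def piecewise_constant_control(v, grid):
    """Return u(t) = sum_k v^k * chi_{[t_{k-1}, t_k)}(t), the piecewise-constant
    parametrization (uv). `v` has one row of control heights per subinterval;
    `grid` holds the partition points 0 = t_0 < t_1 < ... < t_n = t_f."""
    v = np.atleast_2d(np.asarray(v, dtype=float))
    grid = np.asarray(grid, dtype=float)

    def u(t):
        # searchsorted finds the k with t in [t_{k-1}, t_k); the clip keeps
        # the endpoint t = t_f inside the last subinterval.
        k = int(np.clip(np.searchsorted(grid, t, side='right') - 1, 0, len(v) - 1))
        return v[k]

    return u

# Three subintervals on [0, 3] with scalar control heights 0.1, 0.5, 0.2.
u = piecewise_constant_control([[0.1], [0.5], [0.2]], [0.0, 1.0, 2.0, 3.0])
```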
Define $v=[({v^1})^{\top},({v^2})^{\top},\cdots,({v^n})^{\top}]^{\top}$ and $\mathcal{V}=\prod\limits_{k=1}^n U$. Clearly, the control $u$ defined in the form of (\[uv\]) is in one-to-one correspondence with the $(n\times n_{u})$-dimensional control parameter vector $v$. Let $x(t;v,p^i)$ be the solution of system (\[s1\]) corresponding to $(v,p^i)$. With some abuse of notation, $x(t;v,p^i)$ is abbreviated as $x^{v,i}(t)$ or $x^{v,i}$ when no confusion can arise. Then, the parameterized version of Problem (Dual-DROCP) can be stated as given below: $$\begin{aligned} (\mbox{Discre-Dual-DROCP}):&&\inf_{v,y}\,\,\, y^{\top}b \label{DD1}\\ && {\rm s.t.}\,\,\,y^{\top}a^i\geq h(x^{v,i}(t_f)),\ \ i=1,2,\cdots,m, \label{DD2-new}\\ &&\ \ \ \ \ \ \dot{x}^i(t)=f(x^i,v,p^i), \ t\in [0,t_f],\ i=1,2,\cdots,m, \ x^i(0)=x^0, \label{DD3}\\ && \ \ \ \ \ \ v\in\mathcal{V}.\end{aligned}$$ ### Gradient formulas Problem (Discre-Dual-DROCP) is essentially a finite-dimensional optimization problem, which can be solved readily by various optimization techniques. In general, the values of the objective function and the constraint functions, together with their respective gradients, must be computed at each iteration of the optimization procedure. The gradient of the objective function is immediate, since the objective is a linear function of $y$. The gradients of the constraint functions can be evaluated by solving either the adjoint equations (see, for example, [@Teo1989]) or the sensitivity equations (see, for example, [@Ryan2012; @Feehery1998; @Rose2000]). In this paper, the method based on the sensitivity functions is used. The parametric sensitivity system and the gradient formulas are given in the following theorem. \[para\] Consider system (\[DD3\]). Let $x(t;v,p^i)$ be the solution and let $n_v=n_u\times n$ be the dimension of $v$.
Let $s^j(t;v,p^i)=[s_1^j(t;v,p^i),s_2^j(t;v,p^i),\cdots,s_{n_x}^j(t;v,p^i)]^{\top}$ be the parametric sensitivity function of system (\[DD3\]) with respect to $v_j$, i.e., $$\begin{aligned} \label{psf} s^j(t;v,p^i)=\frac{\partial x(t;v,p^i)}{\partial v_j}, \ \ j=1,2,\cdots,n_v.\end{aligned}$$ Then, $s^j(t;v,p^i)$ is the unique solution of the following differential equation system $$\begin{aligned} \label{ps} \left\{ \begin{array}{ccc} \displaystyle\frac{d s^j}{dt}&&=\displaystyle\frac{\partial f}{\partial x}(x^i,v,p^i)s^j+\displaystyle\frac{\partial f}{\partial u}(x^i,v,p^i)E_l\chi_{I_k}(t),\ \ t\in[0,t_f],\\ s^j(0)&&=0 ,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{array}\right.\end{aligned}$$ where $I_k:=[t_{k-1},t_k)$, $k=\lceil j/n_u\rceil$, $l=j-(k-1)n_u$, and $E_l$ is an $n_u$-dimensional column vector whose $l$-th component is one and all other components are zero. Furthermore, the gradients of the constraint functions $h(x^{v,i}(t_f))$, $i=1,2,\cdots,m$, with respect to $v$, are given by $$\begin{aligned} \nabla_v h(x^{v,i}(t_f))=\frac{\partial h}{\partial x}S(t_f),\end{aligned}$$ where $S:=[s^1,s^2,\cdots,s^{n_v}]$ is the $n_x\times n_v$ matrix whose columns are given by (\[ps\]). ### Algorithm procedure Problem (Discre-Dual-DROCP) differs from standard mathematical programming problems in that it involves the dynamic system (\[DD3\]) and the end-point constraints (\[DD2-new\]). The dynamic constraint (\[DD3\]), as well as the systems of differential equations of the parametric sensitivity functions, are solved by an ordinary differential equation (ODE) solver in each iteration of the optimization procedure. The end-point constraints (\[DD2-new\]) are handled as follows. Define $$\begin{aligned} g_i(x,v,y):=h(x^{v,i}(t_f))-y^{\top} a^i\end{aligned}$$ and $G_0({x},v,y):=\max\limits_{i}\max\{0,{g}_i({x},v,y)\}$.
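The sensitivity system (\[ps\]) can be verified on a toy scalar system for which the sensitivity is available in closed form. In the sketch below (our own illustrative example, not from the paper), we take $\dot{x}=-x+v$ with a single control parameter $v$ on one subinterval, so $\chi_{I_1}\equiv 1$; the sensitivity equation then reads $\dot{s}=-s+1$, $s(0)=0$, with exact solution $s(t)=1-e^{-t}$. State and sensitivity are integrated jointly with a hand-rolled RK4 scheme:

```python
import numpy as np

def rk4(rhs, z0, t0, t1, steps=1000):
    """Classical fourth-order Runge-Kutta integration of dz/dt = rhs(t, z)."""
    z, t = np.asarray(z0, dtype=float), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        k1 = rhs(t, z)
        k2 = rhs(t + dt / 2, z + dt / 2 * k1)
        k3 = rhs(t + dt / 2, z + dt / 2 * k2)
        k4 = rhs(t + dt, z + dt * k3)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return z

# Toy system dx/dt = -x + v: df/dx = -1, df/du = 1, so the sensitivity
# system (ps) is ds/dt = -s + 1 with s(0) = 0.
v, x0, tf = 0.7, 2.0, 1.5

def augmented(t, z):
    x, s = z
    return np.array([-x + v, -s + 1.0])

x_tf, s_tf = rk4(augmented, [x0, 0.0], 0.0, tf)

# Closed-form check: x(t) = x0 e^{-t} + v (1 - e^{-t}), hence dx/dv = 1 - e^{-t}.
s_exact = 1.0 - np.exp(-tf)
```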
Constraints (\[DD2-new\]) are equivalent to the following equality constraint: $$\begin{aligned} {G}_0({x},v,y)=0.\label{Gc}\end{aligned}$$ However, $G_0 ({x},v,y)$ is nonsmooth in $(v,y)$, and standard optimization routines have difficulties in handling this type of equality constraint. A widely used smoothing technique [@Ryan2009] is to approximate ${g} _i$ by $$\begin{aligned} {g} _i^{\epsilon}({x} ,v,y):=\left\{ \begin{array}{lll} 0,&\mbox{if}~~{g}_i({x} ,v,y)<-\epsilon,\\ \displaystyle\frac{({g} _i({x},v,y)+\epsilon)^2}{4\epsilon}, &\mbox{if}~~-\epsilon\leq {g}_i({x},v,y)\leq \epsilon,\\ {g}_i({x},v,y), &\mbox{if}~~{g}_i({x},v,y)>\epsilon. \end{array} \right.\end{aligned}$$ By using the quadratic penalty function, Problem (Discre-Dual-DROCP) is finally approximated by $$\begin{aligned} (\mbox{QP-Dual-DROCP}):&&\inf_{v,y}\,\,\, \mathcal{J}({x},v,y):=y^{\top}b+\frac{\varrho}{2}(G_\epsilon({x},v,y))^2 \label{LD1}\\ &&\ \ \ \ \ \ \dot{x}^i(t)=f(x^i,v,p^i), \ \ \ \ t\in [0,t_f],\ i=1,2,\cdots,m,\label{LD3}\\ &&\ \ \ \ \ \ x^i(0)=x^0, \label{LD4}\end{aligned}$$ where $G_\epsilon({x},v,y):=\sum\limits_{i=1}^{m}{g}_i^\epsilon({x},v,y)$ and $\varrho$ is the penalty parameter. The algorithm framework for the solution of Problem (QP-Dual-DROCP), which is constructed based on Algorithm 17.4 in [@Noceal2006], is stated as follows. **[Algorithm 3.1]{}** - Initialize:\ Choose an initial point $(v^0,y^0)$. Choose convergence tolerances $\eta_*$ and $\omega_*$. Choose positive constants $\bar{\varrho}$, $\alpha_1>1$, $\alpha_2<1$ and $\alpha_3<1$. Set $\varrho_0=\bar{\varrho}$, $\omega_0=1/\varrho_0, \eta_0=1/\varrho_0^{0.1}, k=0$;\ - Repeat\ - For $i=1,2,\cdots, m$, integrate system (\[LD3\])-(\[LD4\]) together with the parametric sensitivity systems (\[ps\]) forward in time from 0 to $t_f$.
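The smoothed constraint function $g_i^\epsilon$ above is straightforward to implement; the following sketch reproduces the three-branch formula and can be used to confirm that the approximation is continuous at $g=\pm\epsilon$ (the function name is ours):

```python
def g_eps(g, eps):
    """Smoothed approximation of max{0, g}: zero below -eps, a quadratic
    blend on [-eps, eps], and the identity above eps."""
    if g < -eps:
        return 0.0
    if g <= eps:
        return (g + eps) ** 2 / (4.0 * eps)
    return g
```

At $g=\epsilon$ the quadratic branch gives $(2\epsilon)^2/(4\epsilon)=\epsilon$, and at $g=-\epsilon$ it gives $0$, so both the value and the first derivative match across the breakpoints, which is exactly why standard smooth optimization routines can handle the penalized problem.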
- Evaluate the value of the merit function $\mathcal{J}$ and its gradients, denoted by $\mathcal{J}({x},v^k,y^k)$ and $\nabla\mathcal{J}({x},v^k,y^k)$, respectively. - If $\|P_\mathcal{V}[\nabla\mathcal{J}({x},v^k,y^k)]\|\leq \omega_k$, where $P_\mathcal{V}d$ is the partial projection of the vector $d\in R^{n_v+3}$ onto the rectangular box $\mathcal{V}=[v_*,v^*]$ at the current point $(v^k,y^k)$, defined by $$\begin{aligned} P_\mathcal{V}d=\left\{\begin{array}{lll} \min \{0,d_i\},\ \ \mbox{ if } i\leq n_v \mbox{ and } v_i=v_{i*},\\ d_i,\ ~\quad \ \qquad \ \mbox{ if } i\leq n_v \mbox{ and }v_i\in(v_{i*},v^{i*}),\mbox{ or } i>n_v, \ \ \mbox{ for all } i=1,2,\cdots,n_v+3,\\ \max\{0,d_i\},\ \ \mbox{ if } i\leq n_v \mbox{ and } v_i=v^{i*}, \end{array} \right.\end{aligned}$$ then go to (S4-1); otherwise, go to (S4-2). - If ${G}_\epsilon({x}^k,v^k,y^k)\leq \eta_k$, go to (S5-1); otherwise, go to (S5-3). - Use a line search method to find the next point $(v^{k+1},y^{k+1})$, replace $(v^k,y^k)$ by $(v^{k+1},y^{k+1})$, and go to (S1). - —Stopping criterion If ${G}_\epsilon({x}^k,v^k,y^k)\leq \eta_*$ and $\|P_\mathcal{V}[\nabla\mathcal{J}({x},v^k,y^k)]\|\leq \omega_*$, stop and record the approximate solution $(v^{k},y^k)$ obtained. Otherwise, go to (S5-3). - —Tighten tolerance Set $\eta_{k+1}:=\alpha_3\eta_k$ and go to (S5-3). - —Increase penalty parameter Set $\varrho_{k+1}:=\alpha_1 \varrho_{k}$, $\omega_{k+1}:=\alpha_2\omega_{k}$, $k:=k+1$, and go to (S1). Note that since ${x} $ is an intermediate variable depending on $v$ and $p^i$ rather than an independent variable, the merit function $\mathcal{J}$ can, in essence, be regarded as a function of $v$ and $y$. Thus, the gradient of $\mathcal{J}$ consists only of the partial derivatives of $\mathcal{J}$ with respect to $v$ and $y$ and is a vector of dimension $n_v+3$.
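The partial projection $P_\mathcal{V}$ used in the stopping test can be sketched as follows (the helper name and argument layout are our own; only the first $n_v$ components of the point are box-constrained, the trailing $y$ components pass through unchanged):

```python
import numpy as np

def project_gradient(d, z, lower, upper, n_v):
    """Partial projection P_V of a direction d at the point z: for each
    box-constrained component i <= n_v, keep min{0, d_i} at an active lower
    bound and max{0, d_i} at an active upper bound; all other components
    (interior v_i, or the unconstrained y part) are left untouched."""
    d = np.array(d, dtype=float)
    for i in range(n_v):
        if z[i] <= lower[i]:
            d[i] = min(0.0, d[i])
        elif z[i] >= upper[i]:
            d[i] = max(0.0, d[i])
    return d

# v has two components in the box [0, 1]^2; the third entry belongs to y.
at_bounds = project_gradient([1.0, -1.0, 2.0], [0.0, 1.0, 5.0],
                             [0.0, 0.0], [1.0, 1.0], n_v=2)
interior = project_gradient([1.0, -1.0, 2.0], [0.5, 0.5, 5.0],
                            [0.0, 0.0], [1.0, 1.0], n_v=2)
```

When the projected gradient has a small norm, no feasible descent direction remains, which is precisely the first-order stationarity test used in the algorithm.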
The case of continuous distributions ==================================== For the case of continuous distributions, the cost function $h(x(t_f;u,p))$ can be considered as a function of $u$ and $p$, because the state $x$ is only an intermediate variable depending on $u$ and $p$. For a fixed $u$, the inner subproblem is given as follows. $$\begin{aligned} (\mbox{CISP})&&\sup_{F} \ \ \ \ \ \int_{\mathcal{F}}h(x(t_f;u,p))dF(p)\nonumber\\ &&\mbox{s.t.} \ \ \ \ \ \int_{\mathcal{F}}dF(p)=1,\label{sup-p1}\\ &&\ \ \ \ \ \ \ \ \ \int_{\mathcal{F}}pdF(p)=\mu,\label{sup-p2}\\ &&\ \ \ \ \ \ \ \ \ \int_{\mathcal{F}}p^2dF(p)=\mu^2+\sigma^2,\label{sup-p3}\\ &&\ \ \ \ \ \ \ \ \ dF(p)\geq0.\label{sup-p4}\end{aligned}$$ To extend the results obtained for the case of discrete distributions in the previous section to the case of continuous distributions, we propose a scheme for the discretization of the continuous stochastic variable based on the control parametrization idea. Suppose that the uncertain parameter $p$ takes values in an interval $[p_l,p_u]$. Let $\psi:[p_l,p_u]\rightarrow[0,\infty)$ be an element of $\mathcal{F}(\mu,\sigma^2)$, i.e., $\psi$ is a potential probability density function of $p$ satisfying (\[sup-p2\]) and (\[sup-p3\]). Let $p_l=p_0<p_1<p_2<\cdots<p_m=p_u$ be a set of grid points on the interval $[p_l,p_u]$. Denote $[p_{i-1},p_i)$ by $I_i^p$, $i=1,2,\cdots,m-1$, and $[p_{m-1},p_m]$ by $I_m^p$. Let $\Delta p_i:=p_i-p_{i-1}$, and let $$\begin{aligned} \label{dp} \Delta p:=\max\limits_{i} \Delta p_i.\end{aligned}$$ Let $p_d^i$ be an arbitrary but fixed element chosen from $[p_{i-1},p_i)$. It is referred to as a characteristic element of this subinterval. When the uncertain parameter takes values in $[p_{i-1},p_i)$, it is approximated as $p_d^i$ in the system. As a result, the uncertain parameter interval $[p_l,p_u]$ is approximated by the finite set $\{p_d^i\}_{i=1}^m$.
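The discretization step can be sketched numerically: partition $[p_l,p_u]$, pick a characteristic element per subinterval (midpoints here, though any point works), and compute the masses $q_d^i$ by quadrature of the density over each subinterval. The function below is an illustrative sketch (names and the uniform test density are our own choices):

```python
import numpy as np

def discretize_density(psi, p_l, p_u, m, n_quad=1000):
    """Discretize a density psi on [p_l, p_u] into m characteristic elements
    p_d^i (midpoints) with masses q_d^i = integral of psi over [p_{i-1}, p_i),
    approximated by the trapezoidal rule on n_quad points per subinterval."""
    edges = np.linspace(p_l, p_u, m + 1)
    p_d = 0.5 * (edges[:-1] + edges[1:])   # midpoints as characteristic elements
    q_d = np.empty(m)
    for i in range(m):
        grid = np.linspace(edges[i], edges[i + 1], n_quad)
        vals = psi(grid)
        dg = grid[1] - grid[0]
        # Trapezoidal rule: dg * (sum of values minus half the endpoints).
        q_d[i] = dg * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return p_d, q_d

# Hypothetical density: uniform on [1.76, 2.64] (i.e. 0.8*2.2 to 1.2*2.2),
# which has mean 2.2 by symmetry.
width = 2.64 - 1.76
p_d, q_d = discretize_density(lambda p: np.ones_like(p) / width, 1.76, 2.64, m=10)
```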
Moreover, the probability $\mathbb{P}(p=p_d^i)$ is defined as $$\begin{aligned} \label{pmf} \mathbb{P}(p=p_d^i)=\int_{p_{i-1}}^{p_i}\psi(p)dp:=q_d^i, \ \ i=1,2,\cdots,m.\end{aligned}$$ Under this discretization of the continuous distribution, the cost function is approximated as follows: $$\begin{aligned} \label{gd} \int_{\mathcal{F}}h(x(t_f;u,p))dF(p)\thickapprox\sum\limits_{i=1}^m q_d^ih(x^i(t_f;u,p_d^i)).\end{aligned}$$ The same idea can be applied to the constraints. Thus, we can approximate the inner subproblem (CISP) by the following discrete-distribution problem $$\begin{aligned} (\mbox{DISP})&&\sup_{q_d^i} \ \ \ \ \ \sum\limits_{i=1}^m q_d^ih(x^i(t_f;u,p_d^i))\nonumber\\ &&\mbox{s.t.} \ \ \ \ \sum\limits_{i=1}^m q_d^i=1,\label{sup-q1}\\ &&\ \ \ \ \ \ \ \ \ \sum\limits_{i=1}^m p_d^i q_d^i=\mu,\label{sup-q2}\\ &&\ \ \ \ \ \ \ \ \ \sum\limits_{i=1}^m (p_d^{i})^2 q_d^i=\mu^2+\sigma^2,\label{sup-q3}\\ &&\ \ \ \ \ \ \ \ \ q_d^i\geq0, \ \ i=1,2,\cdots,m.\label{sup-q4}\end{aligned}$$ Note that Problem (DISP) has the same form as Problem (ISP) detailed in the previous section; that is, it is the inner subproblem of a distributionally robust optimal control problem with a discrete distribution. If the above discretization method is convergent, the solution of Problem (DROCP) with a continuous distribution can be obtained approximately by solving a sequence of problems with discrete distributions. Therefore, we only need to verify the convergence of the above discretization scheme, which we do by investigating the relationships between the cost functions and constraint functions of Problems (CISP) and (DISP). From (\[pmf\]), it is obvious that constraints (\[sup-p1\]) and (\[sup-q1\]) are consistent. Moreover, inequality (\[sup-p4\]) in Problem (CISP) implies inequality (\[sup-q4\]) in Problem (DISP). Therefore, we only need to evaluate the differences of the cost functions and the two moment constraints between Problems (CISP) and (DISP). Details are given in the following two theorems.
\[th-x\] Given $u\in\mathcal{U}$ and any $p\in I_i^p$, let $x(\cdot;p)$ and $x^i(\cdot;p_d^i)$ be, respectively, the solution of $$\begin{aligned} \left\{ \begin{array}{lll} \dot{x}=f(x,u,p),\\ x(0)=x^0, \end{array} \right.\ \ t\in[0,t_f]\end{aligned}$$ and the solution of $$\begin{aligned} \left\{ \begin{array}{lll} \dot{x}=f(x,u,p_d^i),\\ x(0)=x^0, \end{array} \right.\ \ t\in[0,t_f].\end{aligned}$$ Then, there exists a constant $L_1>0$, which is independent of $p$ and $p_d^i$, such that the inequality $$\begin{aligned} \|x(t;p)-x^i(t;p_d^i)\|\leq L_1\Delta p,\end{aligned}$$ holds for all $t\in[0,t_f]$ with $\Delta p$ defined in (\[dp\]). Given $u\in\mathcal{U}$ and any $p\in I_i^p$, the solutions satisfy $$x(t;p)=x^0+\int_0^{t}f(x(s;p),u(s),p)ds, \ \ \ \forall t\in[0,t_f],$$ and similarly for $x^i(t;p_d^i)$. It follows that $$\begin{aligned} \|x(t;p)-x^i(t;p_d^i)\|\leq\int_0^t\| f(x(s;p),u(s),p)- f(x^i(s;p_d^i),u(s),p_d^i)\|ds.\end{aligned}$$ By the linear growth assumption, all trajectories remain in a common compact set; since $f$ is at least continuously differentiable with respect to $x$ and $p$, it satisfies a Lipschitz condition in $(x,p)$ on this set, that is, there exist constants $L_x$ and $L_p$ such that $$\begin{aligned} \| f(x,u,p)- f(x',u,p')\|\leq L_x\|x-x'\|+L_p|p-p'|.\end{aligned}$$ Therefore, $$\begin{aligned} \|x(t;p)-x^i(t;p_d^i)\|\leq \int_0^t L_x\|x(s;p)-x^i(s;p_d^i)\|ds+L_pt_f\Delta p,\nonumber\end{aligned}$$ and Gronwall's inequality yields $$\begin{aligned} \|x(t;p)-x^i(t;p_d^i)\|\leq L_pt_fe^{L_xt_f}\Delta p=:L_1 \Delta p, \ \ \forall t\in[0,t_f],\nonumber\end{aligned}$$ where $\Delta p$ is defined in (\[dp\]). This completes the proof. \[th-conv\] Let $\psi(p)$ be a probability density function satisfying (\[sup-p2\]) and (\[sup-p3\]) and let $F$ be the corresponding distribution function.
Then, for any control $u\in\mathcal{U}$ and $\epsilon>0$, there exists $\delta>0$ such that the following inequalities - $\Big|\displaystyle\int_\mathcal{F}pdF(p)-\displaystyle\sum\limits_{i=1}^m p_d^i\cdot q_d^i\Big|=\Big|\displaystyle\int_{p_l}^{p_u} p\psi(p)dp-\displaystyle\sum\limits_{i=1}^m p_d^i\cdot q_d^i\Big|\leq\epsilon;$ - $\Big|\displaystyle\int_\mathcal{F}p^2dF(p)-\displaystyle\sum\limits_{i=1}^m(p_d^i)^2\cdot q_d^i\Big|=\Big|\displaystyle\int_{p_l}^{p_u} p^2\psi(p)dp-\displaystyle\sum\limits_{i=1}^m (p_d^i)^2\cdot q_d^i\Big|\leq\epsilon;$ - $\Big|\displaystyle\int_\mathcal{F} h(x(t_f;u,p))dF(p)-\displaystyle\sum\limits_{i=1}^m q_d^i h(x^i(t_f;u,p_d^i)) \Big|\leq \epsilon.$ hold provided the grid size $\Delta p\leq\delta$. \(a) The equality holds directly from the definition. Thus, it remains to prove the validity of the inequality. From (\[pmf\]), we have $$\begin{aligned} \Big|\displaystyle\int_{p_l}^{p_u} p\psi(p)dp-\displaystyle\sum\limits_{i=1}^m p_d^i\cdot q_d^i\Big|&=& \Big|\displaystyle\int_{p_l}^{p_u} p\psi(p)dp-\displaystyle\sum\limits_{i=1}^m \ p_d^i\int_{p_{i-1}}^{p_i}\psi(p)dp\Big|\\ &=&\Big|\displaystyle\int_{p_l}^{p_u} p\psi(p)dp-\displaystyle\sum\limits_{i=1}^m \int_{p_{i-1}}^{p_i}p_d^i\psi(p)dp\Big|\\ &=&\Big|\displaystyle\sum\limits_{i=1}^m\int_{p_{i-1}}^{p_i} p\psi(p)dp-\displaystyle\sum\limits_{i=1}^m \int_{p_{i-1}}^{p_i}p_d^i\psi(p)dp\Big|\\ &\leq&\sum\limits_{i=1}^m\int_{p_{i-1}}^{p_i}\Big|p-p_d^i\Big|\psi(p)dp\leq \sum\limits_{i=1}^m \Delta p\int_{p_{i-1}}^{p_i}\psi(p)dp=\Delta p\end{aligned}$$ \(b) Similar to the derivation given in (a), we obtain $$\begin{aligned} \Big|\displaystyle\int_{p_l}^{p_u} p^2\psi(p)dp-\displaystyle\sum\limits_{i=1}^m (p_d^i)^2\cdot q_d^i\Big|\leq\sum\limits_{i=1}^m\int_{p_{i-1}}^{p_i}\Big|p^2-(p_d^i)^2\Big|\psi(p)dp\leq 2p_u\Delta p\sum\limits_{i=1}^m \int_{p_{i-1}}^{p_i}\psi(p)dp=2p_u\Delta p\end{aligned}$$ \(c) Similarly, we have $$\begin{aligned} &&\Big|\displaystyle\int_{p_l}^{p_u} 
h(x(t_f;u,p))\psi(p)dp-\displaystyle\sum\limits_{i=1}^m q_d^i h(x^i(t_f;u,p_d^i))\Big|\nonumber\\ =&& \Big|\displaystyle\sum\limits_{i=1}^m \ \int_{p_{i-1}}^{p_i} \Big[h(x(t_f;u,p))-h(x^i(t_f;u,p_d^i))\Big]\psi(p)dp\Big|\nonumber\\ \leq&& \displaystyle\sum\limits_{i=1}^m \ \int_{p_{i-1}}^{p_i}\Big|h(x(t_f;u,p))-h(x^i(t_f;u,p_d^i))\Big|\psi(p)dp\label{c-1}\end{aligned}$$ Since $h$ is continuously differentiable in $x$, there exists, for any $\epsilon$, a $\delta_1>0$ such that $$\begin{aligned} \label{h-ineq} \Big|h(x)-h(x')\Big|\leq \epsilon, \ \ \ \mbox{if} \ \ \|x-x'\|\leq\delta_1.\end{aligned}$$ From Theorem \[th-x\], it follows that the following inequality $$\begin{aligned} \label{x-ineq} \|x(t_f;u,p)-x(t_f;u,p_d^i)\|\leq \delta_1, \ \ \forall p\in[p_{i-1},p_i)\end{aligned}$$ holds if $\Delta p\leq\displaystyle\frac{\delta_1}{L_1}$. Substitute (\[h-ineq\]) and (\[x-ineq\]) into (\[c-1\]). If $$\begin{aligned} \Delta p\leq \frac{\delta_1}{L_1},\nonumber\end{aligned}$$ then $$\begin{aligned} \Big|\displaystyle\int_{p_l}^{p_u} h(x(t_f;u,p))\psi(p)dp-\displaystyle\sum\limits_{i=1}^m q_d^i h(x^i(t_f;u,p_d^i))\Big|\leq\sum\limits_{i=1}^m\int_{p_{i-1}}^{p_i}\epsilon \psi(p)dp=\epsilon.\nonumber\end{aligned}$$ Let $\delta:=\min\{\epsilon,\displaystyle\frac{\epsilon}{2p_u},\frac{\delta_1}{L_1}\}$. Then, we conclude that inequalities (a), (b), (c) hold if $\Delta p\leq\delta$. Hence, the proof is completed. Theorem \[th-conv\] establishes the relationships between the cost functions and the constraints of Problem (CISP) and its discrete approximation, Problem (DISP). Note that Theorem \[th-conv\] is not related to the issue of local or global optima: it holds for all controls and all feasible probability density functions, and hence applies to both global and local optima.
Illustration example ==================== In this section, we choose an example to illustrate the application of the distributionally robust optimal control model and to test the performance of the proposed algorithm. The example concerns the distributionally robust optimal control of a microbial fed-batch process [@Ye2014] and is stated as follows. Let $X$ be the concentration of biomass (g/L), $S$ be the concentration of substrate (g/L) and $V$ be the volume of the solution (L). The control system of the fed-batch process is described by $$\begin{aligned} &&\dot{X}=(\mu_X-d_X)X,\\ &&\dot{S}=-q_SX+\frac{\rho_S-S}{V}u(t), \\ &&\dot{V}=u(t),\end{aligned}$$ where $u(t)$ is the input control of the substrate. $d_X$ is the specific decay rate of cells, and $\rho_S$ is the concentration of substrate in the feed medium. $\mu_X$ is the specific growth rate of biomass, and $q_S$ is the specific consumption rate of substrate, which are, respectively, expressed as $$\begin{aligned} &&\mu_X=\mu_m\frac{S}{S+K_S}(1-\frac{S}{S^*}),\label{eq-mu}\\ &&q_S=m_S+\frac{\mu_X}{Y_S}.\label{eq-qs}\end{aligned}$$ In (\[eq-mu\]), $\mu_m$ is the maximum specific growth rate, $K_S$ is the saturation constant, and $S^*$ is the critical concentration of the substrate above which cells cease to grow. In (\[eq-qs\]), $m_S$ and $Y_S$ are, respectively, the maintenance requirement of substrate and the maximum growth yield. The above system is a typical kinetic model used in microbial fermentation processes; see, e.g., Ye et al. [@Ye2014] and Zeng et al. [@Zeng1995]. In general, $m_S$ is regarded as constant during the whole fermentation process. However, it is well known that the maintenance consumption of substrate varies during different fermentation stages. Thus, we consider $m_S$ as an uncertain parameter in this work. Assume that the mean and the standard deviation of the uncertain parameter are $m_\mu$ and $m_\sigma$, respectively.
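The kinetics (\[eq-mu\])-(\[eq-qs\]) and the fed-batch vector field can be sketched as follows. The parameter values are those quoted in the numerical experiments below, except for $S^*$, whose value is not listed there; the value used here is a placeholder assumption for illustration only:

```python
import numpy as np

# Parameter values from the numerical experiments; S_star is NOT listed in
# the paper, so the value below is an illustrative placeholder assumption.
d_X, mu_m, K_S, Y_S, rho_S, S_star = 0.05, 2.7, 280.0, 0.082, 945.0, 2000.0

def fed_batch_rhs(x, u, m_S):
    """Vector field f(x, u, m_S) of the fed-batch model with x = [X, S, V]."""
    X, S, V = x
    mu_X = mu_m * S / (S + K_S) * (1.0 - S / S_star)  # specific growth rate (eq-mu)
    q_S = m_S + mu_X / Y_S                            # substrate consumption (eq-qs)
    return np.array([
        (mu_X - d_X) * X,                 # biomass growth
        -q_S * X + (rho_S - S) / V * u,   # substrate balance with feeding
        u,                                # volume increase
    ])

x0 = np.array([0.1, 20.0, 3.0])           # initial state [X, S, V] from the paper
rate = fed_batch_rhs(x0, u=0.01, m_S=2.2)  # m_S at its nominal mean m_mu
```

Such a right-hand side, replicated once per characteristic element $m_S^i$, is exactly the system integrated in each iteration of Algorithm 3.1.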
The problem is to control the input $u(t)$ such that the biomass at the terminal time $t_f$ is maximized. For convenience of presentation, let $x:=[x_1,x_2,x_3]^\top=[X,S,V]^{\top}$. Define the admissible set of controls as $\mathcal{U}:=\{u(t)|u_*\leq u(t)\leq u^*\}$ with $u_*$ and $u^*$ being the minimum and maximum input rates. Define $$\begin{aligned} f(x,u,m_S):=[(\mu_X-d_X)X,-q_SX+\frac{\rho_S-S}{V}u(t),u(t)]^{\top}.\nonumber\end{aligned}$$ By transforming the time interval \[0,$t_f$\] into $[0,1]$, the optimal control problem can be stated as follows: $$\begin{aligned} (\mbox{OCP}):&&\min_{u}\max_F\,\,\, \mathbb{E}_Fh(x(1;u,m_S)):=-x_1(1;u,m_S)\nonumber\\ && \ \mbox{s.t.} \ \ \dot{x}(t)=t_f f(x,u,m_S), \ \ \ \ t\in [0,1],\ x(0)=x^0,\label{ocp-s1}\\ &&\ \ \ \ \ \ \ m_S\sim F\in \mathcal{F}(m_\mu,m_\sigma^2)=\{F:\mathbb{E}_{F}(m_S)=m_\mu, \mathbb{E}_{F}(m_S-m_\mu)^2=m_\sigma^2\} , \label{ocp-pc}\\ &&\ \ \ \ \ \ \ u\in\mathcal{U}. \label{ocp2}\end{aligned}$$ In this numerical example, we set $x^0=[0.1,20,3]^\top$, $m_\mu=2.2$, $m_\sigma=0.2$, $m_S\in[0.8m_\mu,1.2m_\mu]$, $d_X=0.05$, $\mu_m=2.7$, $K_S=280$, $Y_S=0.082$, $\rho_S=945$, $u_*=0$, $u^*=0.04$, $t_f=25$ h. In Algorithm 3.1, even though Problem (Dual-DROCP) is linear with respect to $y$, we still optimize $y$ together with $u$ by using nonlinear optimization techniques. In the numerical experiments, the time horizon is equidistantly divided into 25 subintervals for the parameterization of the control $u$. Since microbial fermentation is a relatively slowly time-varying process, this partition is adequate. In the discretization of the continuous distribution of the uncertain parameter $m_S$, we choose ten characteristic elements $\{m_S^i\}_{i=1}^{10}$ over $[0.8m_\mu,1.2m_\mu]$, where $m_S^i=0.8m_\mu+\displaystyle\frac{i-1}{9}0.4m_\mu$, $i=1,2,\cdots,10$. A good initial guess of the decision variables is important to help ensure the convergence of the algorithm.
We use the following procedure to generate an initial guess: randomly generate a control $u$; for the fixed $u$, optimize the variable $y$ by a linear programming solver and compute the performance of (QP-Dual-DROCP); repeat the process $M=200$ times and take the pair $(u,y)$ with the best performance as the initial guess for a run of Algorithm 3.1. It is worth mentioning that the generated initial guess, if it exists, is a feasible solution of Problem (Dual-DROCP). Starting from the initial guess, the proposed algorithm, which is implemented in Matlab 7.0, is run on an Intel dual-core i5 machine (2.45 GHz). The height of the optimal control $u^*$ on each subinterval is listed in Table 1. Under the optimal input strategy and $m_S=m_S^i, i=1,2,\dots,10$, the trajectories of the biomass and the substrate are plotted in Figs. 1 and 2. After obtaining the optimal solution $(u^*,y^*)$ from Algorithm 3.1, we fix the control $u$ at $u^*$ and optimize $y$ again by solving Problem (Dual-ISP) with a linear programming solver. The resulting optimal solution is denoted by $y_{u^*}^*$. The values of the cost function at $(u^*,y^*)$ and $(u^*,y_{u^*}^*)$ are denoted as $\mathcal{J^*}$ and $\tilde{\mathcal{J}}^*$, respectively. We have $\mathcal{J^*}=-4.0232$ and $\tilde{\mathcal{J}}^*=-4.1217$. The difference between $\mathcal{J}^*$ and $\tilde{\mathcal{J}}^*$ is only 0.0985, which reflects the effectiveness of the proposed algorithm to some degree. On the other hand, we fix the control $u$ at $u^*$ and optimize $q$ directly by solving Problem (ISP) with a linear programming solver. The optimal solution of $q$ is $q^*=[ 0, 0, 0, 0.3223,0.5132, 0, 0, 0, 0, 0.1645]^\top$.
The terminal concentrations of biomass under the characteristic elements $\{m_S^i\}_{i=1}^{10}$ are $[4.1605, 4.1911, 4.1998, 4.1891, 4.1620, 4.1210,$ $ 4.0686, 4.0070, 3.9382, 3.8637].$ $t$ \[0,1\] \[1,2\] \[2,3\] \[3,4\] \[4,5\] \[5,6\] \[6,7\] \[7,8\] \[8,9\] \[9,10\] ----- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -- $u$ 0.0124 0.0291 0.0276 0.0093 0.0178 0.0137 0.0021 0.0075 0.0048 0.0106 $t$ \[10,11\] \[11,12\] \[12,13\] \[13,14\] \[14,15\] \[15,16\] \[16,17\] \[17,18\] \[18,19\] \[19,20\] $u$ 0.0042 0.0127 0.0041 0.0195 0.0167 0.0207 0.0203 0.0286 0.0108 0.0344 $t$ \[20,21\] \[21,22\] \[22,23\] \[23,24\] \[24,25\] $u$ 0.0343 0.0174 0.0383 0.0332 0.0261 : The optimal input control strategy.[]{data-label="table-ps-rs"} ![The concentrations of biomass under $u=u^*$ and $m_S$ varied from 0.8$m_\mu$ to 1.2$m_\mu$.[]{data-label="figure1"}](x1_final.eps){width="110.00000%"} ![The concentrations of substrate under $u=u^*$ and $m_S$ varied from 0.8$m_\mu$ to 1.2$m_\mu$.[]{data-label="figure2"}](x2_final.eps){width="110.00000%"} To illustrate the superiority of the optimal control strategy obtained from the proposed model, we simulate the system under a constant input $u(t)\equiv 0.01$. The trajectories of biomass and substrate with varied $m_S$ under this input strategy are shown in Figs. 3 and 4. A comparison of Fig. 1 and Fig. 3 reveals that not only is the terminal concentration of biomass under the optimal strategy significantly higher than that under the constant control input, but the variation of the biomass concentration is also much smaller. This shows that the system under the optimal control strategy maintains a good performance even in the “worst” case.
![The concentrations of biomass under $u=0.01$ and $m_S$ varied from 0.8$m_\mu$ to 1.2$m_\mu$.[]{data-label="figure3"}](x1_constant_u.eps){width="110.00000%"} ![The concentrations of substrate under $u=0.01$ and $m_S$ varied from 0.8$m_\mu$ to 1.2$m_\mu$.[]{data-label="figure4"}](x2_constant_u.eps){width="110.00000%"} Conclusion ========== This paper introduced an optimal control problem in which both the objective function and the dynamic constraint contain an uncertain parameter. Since the distribution of this uncertain parameter is not exactly known, the objective function is taken as the worst-case expectation over a set of possible distributions of the uncertain parameter. To minimize the worst-case expectation over all possible distributions in an ambiguity set, the stochastic optimal control problem is converted into a finite-dimensional optimization problem via duality and discretization. Necessary conditions of optimality were derived, and numerical results for an illustrative example were reported. The numerical results in Section 5 show the success of the proposed model in producing an optimal control strategy under which a good performance is achieved. They also show that the variation of the performance is small subject to changes in the value of the uncertain parameter; that is, the system is robust under the optimal control strategy obtained from the proposed model. Future work can proceed along two directions: modeling and algorithms. On the modeling side, more factors should be taken into account; for example, a further study could address how to introduce proper terminal constraints or path constraints into the model. On the algorithmic side, the current work transforms the proposed model into a combined optimal control and optimal parameter selection problem and solves it using nonlinear optimization techniques. However, the special structure of the problem has not been investigated in detail.
Problem (Dual-DROCP) is linear with respect to the optimization vector $y$ but nonlinear with respect to the control $u$. An alternating direction optimization technique could be used to handle these two kinds of optimization variables separately. For example, the control $u$ can be fixed first, and the optimal solution $y_u^*$ is easily obtained by solving a linear programming problem; then, the control $u$ is updated by some nonlinear optimization method. The procedure is repeated until a satisfactory pair $(u,y)$ is found. Some stochastic techniques, such as the particle swarm optimization (PSO) method, could also be combined with the alternating direction technique to update the control $u$ in the outer level of the optimization process. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the National Natural Science Foundation for the Youth of China (Grants 11301081, 11401073), China Postdoctoral Science Foundation (Grant No. 2014M552027), the Fundamental Research Funds for Central Universities in China (Grant DUT15LK25), and Provincial National Science Foundation of Fujian (Grant No. 2014J05001). [00]{} A. L. Soyster. Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 1973, 21(5): 1154-1157. A. Ben-Tal, L. El Ghaoui, A. Nemirovski. Robust Optimization. Princeton University Press, Princeton, NJ, 2009. J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer, 2011. A. Ruszczynski and A. Shapiro (eds.). Stochastic Programming: Handbook in Operations Research and Management Science. Elsevier Science, Amsterdam, 2003. J. Goh, M. Sim. Distributionally robust optimization and its tractable approximations. Oper. Res. 2010, 58(4-part-1): 902-917. M. Sim. Distributionally robust optimization: A marriage of robust optimization and stochastic programming. 3rd Nordic Optimization Symposium, March 13-14, 2009, Stockholm, Sweden. X. Chen, M. Sim, P. Sun.
A robust optimization perspective on stochastic programming. Oper. Res. 2007, 55(6): 1058-1071. W. Q. Chen, M. Sim. Goal-driven optimization. Oper. Res. 2009, 57(2): 342-357. L El Ghaoui, H. Lebret. Robust solutions to least-squares problems with uncertain data. SIAM J. Matrix Anal. A. 1997, 18(4): 1035-1064. D. Erick, and Y. Ye. Distributionally robust optimization under moment uncertainty with application to data-driven problems. Oper. Res. 2010, 58(3): 595-612. S Zymler, D Kuhn, B Rustem. Distributionally robust joint chance constraints with second-order moment information. Math. Program. 2013, 137(1-2): 167-198. S. Mehrotra, H. Zhang. Models and algorithms for distributionally robust least squares problems. Math. Program. Ser. A. 2013, 1-19. Vladimir G. Boltyanski and Alexander S. Poznyak. The Robust Maximum Principle-Theory and Applications. Springer Science & Business Media, 2011. E. Polak. Optimization algorithms and consisitent approximations. Springer-Verlag, New York, Inc., 1997. A.E. Bryson, Applied optimal control: optimization, estimation and control. CRC Press, 1975. K. L. Teo, C. J. Goh. A computational method for combined optimal parameter selection and optimal control problems with general constraints. J. Austral. Math. Soc. Ser. B. 1989(30): 350-364. R. Loxton, K. L. Teo, V. Rehbock. Robust Suboptimal Control of Nonlinear Systems. Appl. Math. Comput. 2011(217): 6566-6576 W.F. Feehery, P.I. Barton. Dynamic optimization with state variable path constraints. Computer Chem. Enging. 1998(22): 1241-1256. E. Rosenwasser and R. Yusupov. Sensitivity of Automatic Control Systems. Tom Kurfess(eds). CRC Press, 2000. R.C. Loxton, K.L. Teo, V. Rehbock, et al, Optimal control problems with a continuous inequality constraint on the state and the control. Automatica, 2009, 45(10): 2250-2257. Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2006. J. Ye, H. Xu, E. Feng, Z. Xiu. 
Optimization of a fed-batch bioreactor for 1,3-propanediol production using hybrid nonlinear optimal control. J. Process Contr. 24(2014): 1556-1569. A.P. Zeng, W. D. Deckwer. A kinetic model for substrate and energy consumption of microbial growth under substrate-sufficient conditions. Biotechnol. Prog. 11(1995): 71-79.
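The alternating procedure outlined in the conclusion can be illustrated on a toy problem. The sketch below is not the paper's algorithm: the objective, scenario costs and all names are made up for illustration. The objective is linear in the weight vector $y$ (on the probability simplex, so the inner linear program is solved at a vertex) and nonlinear in a scalar control $u$, which is updated by a finite-difference gradient step.

```python
# Toy alternating-direction sketch (hypothetical objective, not from the paper):
# J(u, y) = y^T g(u) + (u - 2)^2, linear in y, nonlinear in u.

def g(u):
    # hypothetical per-scenario costs, each smooth in the scalar control u
    return [(u - 1.0) ** 2, (u + 1.0) ** 2, 0.5 * u ** 2]

def inner_lp(u):
    # fix u: min_y y^T g(u) over the simplex -> all mass on the cheapest scenario
    costs = g(u)
    k = min(range(len(costs)), key=costs.__getitem__)
    y = [0.0] * len(costs)
    y[k] = 1.0
    return y

def objective(u, y):
    return sum(yi * gi for yi, gi in zip(y, g(u))) + (u - 2.0) ** 2

def alternate(u0=0.0, steps=300, lr=0.05, h=1e-6):
    u = u0
    for _ in range(steps):
        y = inner_lp(u)  # step 1: solve the linear subproblem in y
        # step 2: fix y and update u by a finite-difference gradient step
        grad = (objective(u + h, y) - objective(u - h, y)) / (2.0 * h)
        u -= lr * grad
    return u, inner_lp(u)
```

For this toy problem the iteration settles at $u = 1.5$ with all weight on the first scenario. In the actual (Dual-DROCP) setting, the inner step would be a linear program in $y$ and the outer step any nonlinear programming or PSO update of the discretized control.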
--- abstract: 'Using the Cosmic Origins Spectrograph onboard the [*Hubble Space Telescope*]{}, we have obtained high-resolution ultraviolet observations of GD 362 and PG 1225-079, two helium-dominated, externally-polluted white dwarfs. We determined or placed useful upper limits on the abundances of two key volatile elements, carbon and sulfur, in both stars; we also constrained the zinc abundance in PG 1225-079. In combination with previous optical data, we find strong evidence that each of these two white dwarfs has accreted a parent body that has evolved beyond primitive nebular condensation. The planetesimal accreted onto GD 362 had a bulk composition roughly similar to that of a mesosiderite meteorite based on a reduced chi-squared comparison with solar system objects; however, additional material is required to fully reproduce the observed mid-infrared spectrum for GD 362. No single meteorite can reproduce the unique abundance pattern observed in PG 1225-079; the best fit model requires a blend of ureilite and mesosiderite material. From a compiled sample of 9 well-studied polluted white dwarfs, we find evidence for both primitive planetesimals, which are a direct product from nebular condensation, as well as beyond-primitive planetesimals, whose final compositions were mainly determined by post-nebular processing.' author: - 'S. Xu(许偲艺), M. Jura, B. Klein, D. Koester, B. Zuckerman' bibliography: - 'apj-jour.bib' - 'Ref.bib' title: 'Two Beyond-Primitive Extrasolar Planetesimals' --- [UTF8]{}[gbsn]{} INTRODUCTION ============ Planetesimals are building blocks of planets and their formation is a key step towards planet formation. How do planetesimals form? What determines their bulk composition? To answer these questions, we start by examining our own solar system. The overall configuration of the solar system is that volatile-depleted, dry rocky objects are ubiquitous relatively close to the Sun while volatile-rich, icy objects are found beyond the snow line. 
This correlation between the volatile fraction and heliocentric distance can be explained by primitive nebular condensation: refractory elements condensed closer to the Sun while volatile elements could only be incorporated into planetesimals where the temperature was low enough. Many solar system objects have experienced some additional processing that changed their initial compositions. For example, it has been argued that a collision between a large asteroid and proto-Mercury stripped off most of Mercury’s silicate mantle, leaving it $\sim$70% iron by mass [@Benz1988]. Also, the “late veneer" has delivered a large amount of water and volatiles onto Earth [@Chyba1990]. Post-nebular processing, such as collisions, melting and differentiation, is important in redistributing the elements among solar system objects. Currently, the best way to measure the elemental compositions of planetesimals in the solar system is from meteorites, which are fragments from collisions among asteroids. Following @ONeillPlame2008, we classify all meteorites into two categories in this paper. (i) “Chondritic" is used to refer to chondrites, which are a direct product of nebular processing. Objects in this category are described as “primitive" planetesimals. (ii) “Non-chondritic" objects consist of achondrites, stony-iron meteorites and iron meteorites. Examples of their parent bodies include the Moon, Mars or asteroids that have experienced various amounts of post-nebular processing. Planetesimals in this category are considered to be “beyond-primitive". What about planetesimal formation in extrasolar planetary systems? High-resolution, high-sensitivity spectroscopic observations of externally-polluted white dwarfs are a powerful tool for determining the bulk elemental compositions of extrasolar planetesimals [@Jura2013].
Calculations show that minor planets can survive the red giant stage of a star and persist into the white dwarf phase with most of their internal water and volatiles intact [@Jura2008; @JuraXu2010]. Orbital perturbations from one or multiple planets can cause these planetesimals to stray into the tidal radius of the white dwarf and get tidally disrupted [@DebesSigurdsson2002; @Bonsor2011; @Debes2012a], sometimes producing a dust disk that emits mostly in the infrared [@Jura2003; @Kilic2006b; @VonHippel2007; @Farihi2009; @XuJura2012]. Eventually, all this planetary debris is accreted onto the central white dwarf and pollutes its otherwise pure hydrogen or helium atmosphere. The first comprehensive abundance measurement of an externally-polluted white dwarf was performed by @Zuckerman2007, who identified 15 elements heavier than helium in the atmosphere of GD 362, including Mg, Si and Fe, which are often called the “common elements" [@Larimer1988]. The disrupted object had a minimum mass $\sim$10$^{22}$ g, which is comparable to that of a massive solar system asteroid. Three years later, the abundances of eight heavy elements were determined in the atmosphere of GD 40, including all the major rock-forming elements – O, Mg, Si and Fe [@Klein2010]. Now there are many more high-resolution optical spectroscopic studies of externally-polluted white dwarfs \[e.g., @Klein2011 [@Melis2011; @Zuckerman2011; @Farihi2011a; @Dufour2012; @Vennes2010; @Vennes2011a]\]. However, optical spectroscopy of externally-polluted white dwarfs typically does not enable sensitive detection of highly-volatile elements, such as carbon, nitrogen and sulfur, which are key to understanding the thermal history of the system. Ultraviolet spectroscopy is complementary to optical observations in the determination of volatile abundances.
To date, there are four white dwarfs with both published high-resolution optical and ultraviolet measurements[^1]; we are beginning to accumulate an atlas of the compositions of extrasolar planetesimals. To zeroth order, we find that they are strikingly similar to meteorites in the solar system: (i) O, Mg, Si and Fe are always dominant and their sum is more than 85% of the accreted mass; (ii) volatile elements, especially C, are typically depleted by more than a factor of 10 compared to solar abundances[^2]. In this paper, we report ultraviolet spectroscopic observations of GD 362 and PG 1225-079 with the Cosmic Origins Spectrograph (COS) onboard the [*Hubble Space Telescope*]{} ([*HST*]{}), complementary to previous optical studies from the Keck High Resolution Echelle Spectrometer (HIRES) [@Zuckerman2007; @Klein2011]. PG 1225-079 has been observed with the low-resolution International Ultraviolet Explorer (IUE) [@Wolff2002]; there is no previous ultraviolet spectroscopy for GD 362. The rest of the paper is organized as follows. Data reduction is summarized in section 2 and atmospheric abundance determinations are reported in section 3. In section 4, we use a reduced chi-squared analysis to look for solar system analogs to the accreted parent bodies. The formation mechanisms of extrasolar planetesimals are assessed in section 5 and conclusions are given in section 6. In Appendix A, we report the [*Herschel*]{} Photodetecting Array Camera and Spectrometer (PACS) observation of GD 362. In Appendix B, we extend the reduced chi-squared analysis to two additional externally-polluted helium white dwarfs with both high-resolution optical and ultraviolet observations. OBSERVATIONS AND DATA REDUCTION =============================== GD 362 and PG 1225-079 were observed during [*HST*]{}/COS Cycle 18 under program 12290.
These two white dwarfs are too cool to be observed effectively with the G130M grating centering around 1300 [Å]{}, as was employed by @Jura2012 and @Gaensicke2012 for other hotter white dwarfs. Instead, the G185M grating was used with a central wavelength of 1921 [Å]{} and wavelength coverage of 1800 – 1840 [Å]{}, 1903 – 1940 [Å]{} and 2008 – 2044 [Å]{}. The spectral resolution was $\sim$18,000. Total exposure times were 7411 and 1805 sec for GD 362 and PG 1225-079, respectively. The raw data were processed using the standard pipeline CALCOS 2.13.6. The fluxes at 2030 [Å]{} are 2.9 $\times$ 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ [Å]{}$^{-1}$ and 1.5 $\times$ 10$^{-14}$ erg s$^{-1}$ cm$^{-2}$ [Å]{}$^{-1}$ for GD 362 and PG 1225-079, respectively, in approximate agreement with broadband NUV fluxes from the [*GALEX*]{} satellite. The signal-to-noise ratio (SNR) in the original un-smoothed spectrum was 6 for PG 1225-079 and 4 for GD 362. Following previous data reduction procedures [@Klein2010; @Klein2011; @Jura2012], for PG 1225-079, equivalent widths (EWs) of each spectral line were measured in the un-smoothed spectra by fitting a Voigt profile with three different nearby continuum intervals in IRAF. The EW uncertainty is calculated by adding the standard deviation of the three EWs and the average uncertainty from the profile fitting in quadrature. The EW upper limit is obtained by artificially inserting a spectral line with different abundance into the model and comparing with the data. We adopt a different method to measure the EW for C I 1930.9 [Å]{} in GD 362, as described in section 3.1. The measured values are listed in Tables \[Tab: LinesGD\] and \[Tab: LinesPG\] for GD 362 and PG 1225-079, respectively. The average Doppler shift relative to the Sun for PG 1225-079 is 42 $\pm$ 13 km s$^{-1}$, in essential agreement with the value 49 $\pm$ 3 km s$^{-1}$ derived from optical studies [@Klein2011]. 
The large velocity dispersion in the ultraviolet is due to the low SNR of the spectrum and the $\sim$ 15 km s$^{-1}$ uncertainty of COS (COS Instrument Handbook). For GD 362, we marginally detected C I 1930.9 [Å]{} and it has a Doppler shift of 48 km s$^{-1}$, in agreement with 49.3 $\pm$ 1.0 km s$^{-1}$ from the optical study [@Zuckerman2007]. [lccccccc]{}\ Ion & $\lambda$ & E$_{low}$ & EW & log n(Z)/n(He)\ & (Å) & (eV) & (mÅ) &\ C I & 1930.905 & 1.26 & 560 $^{+230}_{-158}$ $^a$ & -6.70 $\pm$ 0.30\ \ S I & 1807.311 & 0 & $\lesssim$ 900 & $\lesssim$ -6.70\ S I & 1820.341 & 0.049 & $\lesssim$ 710 & $\lesssim$ -6.40\ S & & & & $\lesssim$ -6.70\ \[Tab: LinesGD\] $^a$ This is measured from the model spectra, as described in section 3.1. [lcccccc]{}\ \ Ion & $\lambda$ & E$_{low}$ & EW & log n(Z)/n(He)\ & (Å) & (eV) & (mÅ) &\ C I & 1930.905 & 1.26 & 1600 $\pm$ 200 & -7.80 $\pm$ 0.10\ \ S I & 1807.311 & 0 & $\lesssim$ 170 & $\lesssim$ -9.50\ S I & 1820.341 & 0.049 & $\lesssim$ 150 & $\lesssim$ -9.30\ S & & & & $\lesssim$ -9.50\ \ Mg I & 2026.477$^a$ & 0 & 288 $\pm$ 100$^b$ & $\lesssim$ -7.60\ \ Si II & 1808.013 & 0 & 936 $\pm$ 109 & -7.44 $\pm$ 0.10\ Si II & 1816.928 & 0.04 & 1232 $\pm$ 145 & -7.46 $\pm$ 0.10\ Si & & & & -7.45 $\pm$ 0.10\ \ Fe II & 1925.987 & 2.52 & 192 $\pm$ 72 & -7.62 $\pm$ 0.28\ Fe II & 2011.347 & 2.58 & 309 $\pm$ 97 & -7.35 $\pm$ 0.24\ Fe II & 2019.429 & 1.96 & 211 $\pm$ 67 & -7.56 $\pm$ 0.24\ Fe II & 2021.402 & 1.67 & 181 $\pm$ 68 & -7.71 $\pm$ 0.27\ Fe II & 2033.061 & 2.03 & 311 $\pm$ 67 & -7.24 $\pm$ 0.17\ Fe II & 2041.345 & 1.964 & 215 $\pm$ 50 & -7.24 $\pm$ 0.18\ Fe & & & & -7.45 $\pm$ 0.23\ \ Zn II & 2026.136 & 0 & 288 $\pm$ 47$^b$ & $\lesssim$ -11.30\ \[Tab: LinesPG\] $^a$ The atomic parameters for this line are taken from @KelleherPodobedova2008.\ $^b$ Mg I 2026.5 [Å]{} and Zn II 2026.1 [Å]{} are blended and the reported EW is for the entire feature. 
ATMOSPHERIC ABUNDANCE DETERMINATIONS ==================================== Because we are most interested in the abundance of an element relative to other heavy elements and these ratios are not strongly dependent upon the stellar temperature and surface gravity [@Klein2011], we only adopt one set of stellar parameters as listed in Table \[Tab: Properties\] and compute the model spectra following @Koester2010. Atomic data are mostly taken from the Vienna Atomic Line Database [@Kupka1999]. The computed model atmosphere spectra were convolved with the COS NUV line spread function[^3]. The abundance of each element was derived by comparing the EW of each spectral line with the value derived from the model atmosphere, as shown in Figures \[Fig: GD\_C\]-\[Fig: PG\_Zn\] and Tables \[Tab: LinesGD\] and \[Tab: LinesPG\]. The final abundances, combining ultraviolet with optical observations, are given in Tables \[Tab: AbundanceGD\] and \[Tab: AbundancePG\] for GD 362 and PG 1225-079, respectively. Our results mostly agree with previous reports but have a higher accuracy. For PG 1225-079, we newly derive the abundances of carbon and silicon and have tentative detections of sulfur and zinc. The magnesium abundance is updated while the iron abundance agrees with previous optical results. Because the data are noisier for GD 362, we are only able to crudely constrain the abundance of carbon and sulfur. 
--------------------- --------------- -------- ------------------- ------ ------------------------- ---------- -- -- star M$_*$ T log g D log M$_{cvz}$/M$_*$$^a$ Ref (M$_{\odot}$) (K) (cm$^2$ s$^{-1}$) (pc) GD 362 0.72 10,540 8.24 51 -6.71 \(1) (2) PG 1225-079 0.58 10,800 8.00 26 -5.02 \(3) (4) \[Tab: Properties\] --------------------- --------------- -------- ------------------- ------ ------------------------- ---------- -- -- : Adopted Stellar Properties $^a$ Newly-derived mass of the convective zone (see section 4).\ [**References.**]{}[(1) @Kilic2008b; (2) @Zuckerman2007; (3) @Klein2011; (4) @Farihi2005.]{}\ ------- ---------------------- --------------- -------------------------------- -- -- Z log n(Z)/n(He)$^a$ t$_{set}$$^b$ $\dot{M}$(Z$_i$)$^c$ (10$^5$ yr) (g s$^{-1})$ H -1.14 $\pm$ 0.10 ... ... C$^*$ -6.70 $\pm$ 0.30 2.1 2.5 $\times$ 10$^7$ N $<$ -4.14 2.2 $<$ 9.0 $\times$ 10$^9$ O $<$ -5.14 2.2 $<$ 1.1 $\times$ 10$^9$ Na -7.79 $\pm$ 0.20 2.2 3.7 $\times$ 10$^6$ Mg -5.98 $\pm$ 0.25 2.2 2.5 $\times$ 10$^8$ Al -6.40 $\pm$ 0.20 1.6 1.5 $\times$ 10$^8$ Si -5.84 $\pm$ 0.30 1.2 7.2 $\times$ 10$^8$ S$^*$ $\lesssim$ -6.70$^d$ 0.79 $\lesssim$ 1.7 $\times$ 10$^8$ Ca -6.24 $\pm$ 0.10 0.99 5.1 $\times$ 10$^8$ Sc -10.19 $\pm$ 0.30 0.93 6.8 $\times$ 10$^4$ Ti -7.95 $\pm$ 0.10 0.94 1.2 $\times$ 10$^7$ V -8.74 $\pm$ 0.30 0.95 2.1 $\times$ 10$^6$ Cr -7.41 $\pm$ 0.10 1.0 4.3 $\times$ 10$^7$ Mn -7.47 $\pm$ 0.10 1.0 4.0 $\times$ 10$^7$ Fe -5.65 $\pm$ 0.10 1.1 2.5 $\times$ 10$^9$ Co -8.50 $\pm$ 0.40 0.99 4.1 $\times$ 10$^6$ Ni -7.07 $\pm$ 0.15 1.0 1.1 $\times$ 10$^8$ Cu -9.20 $\pm$ 0.40 0.83 1.1 $\times$ 10$^6$ Sr -10.42 $\pm$ 0.30 0.56 1.3 $\times$ 10$^5$ Total 4.4 $\times$ 10$^9$ ------- ---------------------- --------------- -------------------------------- -- -- : Atmospheric Abundances for GD 362\[Tab: AbundanceGD\] $^*$ New measurements from this paper. 
The rest are from @Zuckerman2007 but we reference abundances relative to He, the dominant element in GD 362’s atmosphere, rather than H, as presented in @Zuckerman2007. Consequently, there is a possible systematic offset up to 0.1 dex in all entries derived from that paper.\ $^a$ The final abundance of an element combining optical and ultraviolet data.\ $^b$ Newly-derived settling times in the convective zone (see section 4); they are typically a factor of 2-3 longer than previously-derived values in @Koester2009a.\ $^c$ Accretion rates calculated from Equation (1).\ $^d$ The equality sign corresponds to the red model fit shown in figures. -------- ------------------- ------------- -------------------------------- Z log n(Z)/n(He) t$_{set}$ $\dot{M}$(Z$_i$) (10$^6$ yr) (g s$^{-1})$ H -4.05 $\pm$ 0.10 ... ... C$^*$ -7.80 $\pm$ 0.10 5.5 3.1 $\times$ 10$^6$ O $<$ -5.54 4.5 $<$ 9.1$\times$ 10$^8$ Na $<$ -8.26 4.4 $<$ 2.6 $\times$ 10$^6$ Mg$^*$ -7.50 $\pm$ 0.20 4.8 1.4 $\times$ 10$^7$ Al $<$ -7.84 3.6 $<$ 9.5 $\times$ 10$^6$ Si$^*$ -7.45 $\pm$ 0.10 3.0 3.0 $\times$ 10$^7$ S$^*$ $\lesssim$ -9.50 1.7 $\lesssim$ 5.2 $\times$ 10$^5$ Ca -8.06 $\pm$ 0.03 1.9 1.6 $\times$ 10$^7$ Sc -11.29 $\pm$ 0.07 1.8 1.1 $\times$ 10$^4$ Ti -9.45 $\pm$ 0.02 1.8 8.3 $\times$ 10$^5$ V -10.41 $\pm$ 0.10 1.8 9.6 $\times$ 10$^4$ Cr -9.27 $\pm$ 0.06 1.9 1.3 $\times$ 10$^6$ Mn -9.79 $\pm$ 0.14 2.0 4.0 $\times$ 10$^5$ Fe -7.42 $\pm$ 0.07 2.1 9.0 $\times$ 10$^7$ Ni -8.76 $\pm$ 0.14 2.3 4.0 $\times$ 10$^6$ Zn$^*$ $\lesssim$ -11.30 2.2 $\lesssim$ 1.3 $\times$ 10$^4$ Sr $<$ -11.65 1.2 $<$ 1.4 $\times$ 10$^4$ Total 1.6 $\times$ 10$^8$ -------- ------------------- ------------- -------------------------------- : Atmospheric Abundances for PG 1225-079\[Tab: AbundancePG\] $^*$ New results from this paper. The rest are from @Klein2011.\ [**Notes.**]{} The columns are defined the same as Table \[Tab: AbundanceGD\]. 
Carbon ------ There is only one useful carbon line in the observed wavelength interval, C I 1930.9 [Å]{}, as shown in Figures \[Fig: GD\_C\] and \[Fig: PG\_C\]. Because it arises from an excited level, it cannot be contaminated by interstellar absorption. However, this line can be blended with Mn II 1931.4 [Å]{}. Fortunately, accurate Mn abundances have been determined for both stars from optical data [@Zuckerman2007; @Klein2011] and the predicted EW for Mn II 1931.4 [Å]{} is less than 50 m[Å]{} in the model spectrum. Considering that the measured EW of this feature is more than 500 m[Å]{} for both stars (see Tables \[Tab: LinesGD\] and \[Tab: LinesPG\]), we conclude that the line is dominated by C I 1930.9 [Å]{}. For PG 1225-079, our derived carbon abundance[^4] \[C\]/\[He\] = -7.80 $\pm$ 0.10 agrees with the IUE upper limit of -7.5 [@Wolff2002]. For GD 362, the largest uncertainty is from the low SNR of the data; the measured continuum flux is (3.1 $\pm$ 1.0) $\times$ 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ [Å]{}$^{-1}$. It is hard to measure the EW of C I 1930.9 [Å]{} directly from the noisy data. Instead, we computed model spectra with different carbon abundances to match the observed spectrum. In Figure \[Fig: GD\_C\], we present three best-fit models with \[C\]/\[He\] = -6.4, \[C\]/\[He\] = -6.7 and \[C\]/\[He\] = -7.0, with continuum fluxes of 4.1 $\times$ 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ [Å]{}$^{-1}$, 3.1 $\times$ 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ [Å]{}$^{-1}$ and 2.1 $\times$ 10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ [Å]{}$^{-1}$, respectively. The final abundance is \[C\]/\[He\] = -6.7 $\pm$ 0.3 and the EW reported in Table \[Tab: LinesGD\] is measured from the model spectra. Sulfur ------ There are two useful sulfur lines, S I 1807.3 [Å]{} and S I 1820.3 [Å]{}. However, at best, we have only a tentative detection of sulfur in each star. S I 1807.3 [Å]{}, the stronger line, is adjacent to Si II 1808.0 [Å]{}.
Fortunately, for GD 362, the silicon abundance is determined from previous optical data [@Zuckerman2007]; for PG 1225-079, other ultraviolet lines can be used to derive the silicon abundance (see section 3.4). The data and model atmosphere spectra for GD 362 and PG 1225-079 are presented in Figures \[Fig: GD\_S\] and \[Fig: PG\_S\], respectively. Considering the apparent match between the model and data for both S I lines, tentative sulfur abundances of -6.7 for GD 362 and -9.5 for PG 1225-079 can be assigned. Conservatively, these results are upper limits. Magnesium and Zinc ------------------ In PG 1225-079, Mg I 2026.4 [Å]{} and Zn II 2026.1 [Å]{} are heavily blended. As shown in Figure \[Fig: PG\_Zn\], our best-fit model, which matches the measured EW of the absorption feature, requires \[Mg\]/\[He\] = -7.6 and \[Zn\]/\[He\] = -11.3. These values are individually taken as upper limits due to the blending. However, the reported magnesium abundance is -7.27 $\pm$ 0.06 from the optical data [@Klein2011], which is largely based on three Mg lines, but the detections for two of these lines are only 2$\sigma$. @Wolff2002 reported \[Mg\]/\[He\] to be -7.6 $\pm$ 0.6 from the IUE data. Averaging these measurements, our final magnesium abundance is -7.50 $\pm$ 0.20. Because of the blending, the zinc abundance is only an upper limit. This provides the first stringent constraint on zinc in an extrasolar planetesimal. Silicon ------- In PG 1225-079, we measured two silicon lines, Si II 1808.0 [Å]{} and Si II 1816.9 [Å]{}, as shown in Figure \[Fig: PG\_S\]. Si II 1808.0 [Å]{} arises from the ground state and the photospheric line can be distorted by interstellar absorption. However, its measured EW is only 87 $\pm$ 11 m[Å]{} in $\zeta$ Oph, a star at a distance of 112 pc with a large amount of foreground interstellar gas [@Morton1975]. Considering that PG 1225-079 is only 26 pc away, it should have much less interstellar absorption.
The measured EW is 936 $\pm$ 109 m[Å]{} and we conclude that Si II 1808.0 [Å]{} is largely photospheric and essentially free from interstellar absorption. The shape of Si II 1808.0 [Å]{} in the model does not quite fit the data, but the measured EW, which is key to the abundance determination, agrees well with that of the model. Using these two Si II lines, we derive a final silicon abundance of -7.45 $\pm$ 0.10, consistent with, but much more precise than, the reported IUE abundance of -7.5 $\pm$ 0.5 [@Wolff2002] and the previous optical upper limit of -7.27 [@Klein2011]. Iron ---- In the COS data for PG 1225-079, there are six Fe II lines with EWs larger than 100 m[Å]{}. Four of them are shown in Figures \[Fig: PG\_C\] and \[Fig: PG\_Zn\]. We derived an iron abundance of -7.45 $\pm$ 0.23, in good agreement with the optical value of -7.42 $\pm$ 0.07, which is based on 28 high-SNR iron lines [@Klein2011]. Because the ultraviolet data are noisier, we adopt the optically-derived iron abundance. COMPARISON WITH SOLAR SYSTEM OBJECTS ==================================== Combined with previous data, we have now determined the abundances of 16 elements heavier than helium in the atmosphere of GD 362 and 11 heavy elements in PG 1225-079. However, the measured composition need not be identical to the composition of the accreted planetesimal because different elements gravitationally settle at different rates in a white dwarf atmosphere. Three major phases are proposed for a single accretion event: build-up, steady-state and decay [@Dupuis1993a; @Koester2009a]. Because an infrared excess is found for GD 362 and PG 1225-079 [@Becklin2005; @Kilic2005; @Farihi2010b], the accretion should be either in the build-up or steady-state phase. The timescale for the build-up stage is comparable to the settling times [@Koester2009a]; it is $\sim$ 10$^5$ yr for GD 362 and PG 1225-079 (see Tables \[Tab: AbundanceGD\] and \[Tab: AbundancePG\]).
The rest of the disk-host stage should all be under the steady-state approximation. The dust disk lifetime has been intensively studied for a few years but the values are still very uncertain, including 10$^5$ yr [@Farihi2009; @Rafikov2011b], 10$^6$ yr [@Rafikov2011a; @Girven2012; @Farihi2012b] and up to 10$^7$ yr [@Barber2012]. The true disk lifetime might have a range but it is likely to be longer than the settling times. Furthermore, @Zuckerman2010 suggested that the steady-state approximation is the dominant situation for white dwarf accretion events based on a study of helium-dominated stars; the settling times are only 0.1% of their cooling times, yet 30% of them show atmospheric pollution. GD 362 and PG 1225-079 are thus most likely in the steady state, and that is the main focus of this paper. In the steady-state model, the observed concentration of an element depends on the time it takes to sink out of the convective envelope. To derive the theoretical settling times and obtain an improved understanding of the uncertainties, we formulated several numerical experiments with the code for the envelope structure and corrected two errors found in our previous calculations of diffusion timescales. In the course of changing the equations describing element diffusion from the version in @Paquette1986 (Equation 4) to the one in @Pelletier1986 (Equation 5), which is more accurate in the case of electron degeneracy, one of us (D.K.) discovered an error in the former paper. A factor of $\rho^{1/3}$ is missing in the second alternative of Equation 21, which we had not noticed before. A rederivation of all our equations uncovered another error in our implementation of the contribution of thermal diffusion. These errors have only a very small effect in stars with relatively shallow convection zones, like the hydrogen-dominated white dwarfs.
However, for helium-dominated white dwarfs with T $<$ 15,000 K and a deep convection zone, the diffusion timescales can be longer by factors of 2-3 relative to our earlier calculations[^5]. The accretion rate $\dot{M}(Z_i)$ of an element Z$_i$ is calculated as [@Koester2009a] $$\dot{M}(Z_i) = \frac{M_{cvz} X(Z_i)}{t_{set}(Z_i)}$$ where M$_{cvz}$ is the mass of the convective envelope, X(Z$_i$) is the mass fraction of the element Z$_i$ relative to the dominant element in the atmosphere, either hydrogen or helium, and t$_{set}$(Z$_i$) is the settling time. A longer settling time corresponds to a lower diffusion flux. Fortunately, the relative timescales for different elements, which are important for the determination of the abundances in the accreted matter, change much less. For GD 362 and PG 1225-079, compared to previously published values, the settling times listed in Tables \[Tab: AbundanceGD\] and \[Tab: AbundancePG\] typically increase by factors of 2-3 while the mass of the convective zone is 0.13 dex smaller for GD 362 and 0.05 dex larger for PG 1225-079 (Table \[Tab: Properties\]). These corrections lead to total accretion rates that are smaller by a factor of 3 for both stars. The next step is to compare the composition of the accreted parent body with those of solar system objects. We choose the summed mass of all the major elements as the normalization factor so that the analysis is independent of the chemical property and abundance uncertainty of each individual element. However, one complication is that no oxygen lines are detected in either GD 362 or PG 1225-079 due to their low photospheric temperatures relative to other helium-dominated white dwarfs; only upper limits were obtained for this major element. Therefore, our approach is to compare the mass fraction of an element relative to the summed mass of the common elements Mg, Si and Fe. For solar system objects, we include 80 representative and well-analyzed meteorite samples mostly from @Nittler2004.
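As a concreteness check, Equation (1) can be evaluated directly from the tabulated quantities. The sketch below (the function name, atomic masses and unit conversions are ours, not code from the paper) reproduces the calcium accretion rate listed for GD 362, converting the number ratio n(Ca)/n(He) into the mass fraction X(Ca) via the atomic mass ratio.

```python
M_SUN = 1.989e33        # g
SEC_PER_YR = 3.156e7

def accretion_rate(m_cvz_g, n_z_over_he, m_z_amu, t_set_yr, m_he_amu=4.0026):
    """Equation (1): steady-state accretion rate of element Z in g/s.

    X(Z) is obtained from the number ratio n(Z)/n(He) via the atomic
    mass ratio m_Z/m_He.
    """
    x_z = n_z_over_he * m_z_amu / m_he_amu
    return m_cvz_g * x_z / (t_set_yr * SEC_PER_YR)

# Worked check with the GD 362 values adopted in the tables:
# log M_cvz/M_* = -6.71 with M_* = 0.72 M_sun, and for Ca
# log n(Ca)/n(He) = -6.24 with t_set = 0.99e5 yr.
m_cvz = 10**-6.71 * 0.72 * M_SUN
rate_ca = accretion_rate(m_cvz, 10**-6.24, 40.078, 0.99e5)
# rate_ca comes out near 5e8 g/s, consistent with the tabulated 5.1e8 g/s
```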
We also include the bulk composition of Earth from @Allegre2001 and an updated carbon abundance from @Marty2012. For our purpose, Earth appears to be chondritic and its bulk composition approaches CV chondrites even though Earth has experienced some post-nebular processing, such as differentiation and collisions. GD 362: Accretion from a Mesosiderite Analog? --------------------------------------------- In Figure \[Fig: GD362\], we compare the abundances of all 18 elements, including upper limits, of the accreted material in GD 362 with CI chondrites, which are the most primitive material in the solar system. The composition of CI chondrites is almost identical to that of the solar photosphere, with the exception of the volatile elements C and N, as well as H and the noble gases, which are depleted. The parent body accreted onto GD 362 looks nothing like a CI chondrite, as first pointed out in @Zuckerman2007. For the volatile elements, the mass fractions of C and S are depleted by at least factors of 7 and 3, respectively, relative to CI chondrites; refractory elements, such as V, Ca, Ti and Al, are all enhanced. Though oxygen is not detected in GD 362, its stringent upper limit can still provide useful insights. Following @Klein2010, we can calculate the required number of oxygen atoms to form oxides Z$_{p(Z)}$O$_{q(Z)}$ as $$n(O)=\sum_Z \frac{q(Z)}{p(Z)} n(Z)$$ Hydrogen is excluded here because GD 362 has an enormous amount and it might not be associated with the parent body or bodies currently in its atmosphere (see Appendix A). Under the steady-state approximation, \[O\]/\[He\] = -5.07 is required to form MgO, Al$_2$O$_3$, SiO$_2$ and CaO; this value is comparable to the observed oxygen upper limit of -5.14. However, Fe is the most abundant heavy element in the atmosphere of GD 362 and there is insufficient oxygen to tie it up in either FeO or Fe$_2$O$_3$.
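The steady-state oxygen budget quoted above can be reproduced from Equation (2) once each element's contribution is weighted by its settling time: in a steady state, it is the accretion fluxes, not the photospheric abundances, that must balance. A sketch using the GD 362 abundances and settling times tabulated above (the function name is ours, and the flux-weighting step is our reconstruction of the stated procedure):

```python
import math

# Oxide stoichiometry q(Z)/p(Z): one O per Mg in MgO, 1.5 per Al in Al2O3, etc.
OXIDES = {"Mg": 1.0, "Al": 1.5, "Si": 2.0, "Ca": 1.0}

# GD 362 photospheric abundances log n(Z)/n(He) and settling times (1e5 yr),
# taken from the abundance table above.
LOG_N = {"Mg": -5.98, "Al": -6.40, "Si": -5.84, "Ca": -6.24}
T_SET = {"Mg": 2.2, "Al": 1.6, "Si": 1.2, "Ca": 0.99, "O": 2.2}

def required_oxygen_steady_state():
    """Equation (2) in the steady state: the photospheric log n(O)/n(He)
    needed to bind Mg, Al, Si and Ca into oxides, with each element's
    abundance divided by its settling time to convert to a flux."""
    flux = sum(OXIDES[z] * 10**LOG_N[z] / T_SET[z] for z in OXIDES)
    return math.log10(flux * T_SET["O"])

# This evaluates to about -5.07, the value quoted in the text, to be
# compared with the observed upper limit of -5.14.
```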
Thus, most, if not all, the iron in the parent body is in metallic form, which is very different from CI chondrites, where most iron is in oxides [@Nittler2004]. @ONeillPlame2008 suggested that \[Mn\]/\[Na\] can be used as an indicator of post-nebular processing. For example, \[Mn\]/\[Na\] is -0.79 for all chondrites as well as the solar photosphere, while non-chondritic objects have a much higher value. Interestingly, \[Mn\]/\[Na\] is 0.65 $\pm$ 0.22 for GD 362, which is larger than -0.01 for Mars and 0.32 for the Moon [@ONeillPlame2008]. This suggests that the planetesimal accreted onto GD 362 is likely to be non-chondritic and to have experienced some post-nebular processing. @Zuckerman2007 compared the \[Na\]/\[Ca\] ratio in GD 362 with solar system objects and reached a similar conclusion; the accreted planetesimal was non-chondritic. The only other polluted white dwarf with both Mn and Na detections is WD J0738+1835, wherein \[Mn\]/\[Na\] = -0.54 $\pm$ 0.19 [@Dufour2012]; this agrees with the chondritic value within the uncertainties. To find the best solar system analog to the parent body accreted onto GD 362, we calculated a reduced chi-squared value for each object in our sample ($\chi^2_{red}$), defined as: $$\chi^2_{red}= \frac{1}{N} \sum ^N _{i=1} \frac{(M_{wd}(Z_i)-M_{mtr}(Z_i))^2}{\sigma_{wd}^2(Z_i)}$$ where N is the total number of elements considered in the analysis. M$_{wd}$(Z$_i$) and M$_{mtr}$(Z$_i$) represent the mass fraction of an element Z$_i$ relative to the summed mass of Mg, Si and Fe in the extrasolar planetesimal and solar system objects, respectively. $\sigma_{wd}$(Z$_i$) is the propagated uncertainty in mass fraction. For GD 362, we calculated $\chi^2_{red}$ for 11 heavy elements, C, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe and Ni, which have detections both in GD 362 and the meteorite sample[^6]. The results are shown in Figure \[Fig: GD\_chi\] for both steady-state and build-up approximations.
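Equation (3) is straightforward to implement. The sketch below uses made-up mass fractions purely for illustration; the function name is ours and the numbers are not from the paper's data.

```python
def reduced_chi2(m_wd, sigma_wd, m_mtr):
    """Equation (3): reduced chi-squared between the mass fractions
    (each relative to the summed Mg+Si+Fe mass) inferred for the white
    dwarf parent body and those of a candidate solar system object."""
    n = len(m_wd)
    return sum((mw - mm) ** 2 / s ** 2
               for mw, mm, s in zip(m_wd, m_mtr, sigma_wd)) / n

# Illustrative (made-up) mass fractions for three elements:
chi2 = reduced_chi2([0.20, 0.15, 0.50], [0.05, 0.04, 0.10], [0.22, 0.12, 0.55])
```

A value near 1 would indicate that the candidate matches the inferred composition to within the measurement uncertainties, which is how the best-fitting meteorite classes are identified in Figure \[Fig: GD\_chi\].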
There is no qualitative difference between these two models, and mesosiderites provide the best fit considering all 11 elements. In particular, the mesosiderite ALH 77219 can match the overall abundance pattern at the 95% confidence level. As shown in Figure \[Fig: GD362\], the abundances of individual elements agree within 2$\sigma$ between mesosiderites and the planetesimal accreted onto GD 362. Mesosiderites are a rare type of stony-iron meteorite with roughly equal amounts of silicates and metallic iron and nickel. One mystery about mesosiderites is that the Si-rich crust and the Fe, Ni-rich core materials are abundant, but the olivine-dominated, Mg-rich mantle seems to be missing. One model for the formation of mesosiderites is that a 200-400 km diameter asteroid with a molten core was nearly catastrophically disrupted by a 50-150 km diameter projectile 4.42-4.52 Gyr ago [@Scott2001]. The collision mixed the target’s molten core with its crustal material but excluded the large and hot mantle fragments. The planetesimal accreted onto GD 362 may have formed in a similar way. While mesosiderites may be a prototype for the planetesimal accreted onto GD 362, there are three major hurdles for this hypothesis to overcome. First, in the model of @Scott2001, only half of the original mass of a 200-400 km diameter asteroid was retained after the collision, and the final product contains only about 10% mesosiderite-like material by mass, equivalent to a 75-150 km diameter object. Mesosiderites that fall on Earth are only small fragments, and the 180 kg NWA 2924 is among the largest (Meteorite Bulletin Database[^7]). However, the parent body accreted onto GD 362 has a minimum mass of 2.7 $\times$ 10$^{22}$ g, $\sim$260 km in diameter for an assumed density of 3 g cm$^{-3}$. It is unclear whether the same kind of collision can produce a mesosiderite parent body this big.
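The $\sim$260 km figure follows directly from the quoted minimum mass and assumed density; a quick check:

```python
import math

def diameter_km(mass_g, density_g_cm3):
    """Diameter of a homogeneous sphere with the given mass and density."""
    volume = mass_g / density_g_cm3                           # cm^3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)  # cm
    return 2.0 * radius / 1.0e5                               # km

# Minimum mass 2.7e22 g and density 3 g/cm^3, as quoted in the text.
print(round(diameter_km(2.7e22, 3.0)))  # ~258 km, i.e. ~260 km
```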
Second, the mass fraction of hydrogen in mesosiderites is less than 0.2%, which cannot account for the 5 $\times$ 10$^{24}$ g of hydrogen in the atmosphere of GD 362. Possibly, hydrogen was accreted during earlier events and has remained in the atmosphere ever since (see Appendix A for more discussion). Third, GD 362 is currently accreting from its circumstellar disk, so the disk material should also resemble the composition of mesosiderites. However, the shape of the mid-infrared spectrum of a mesosiderite, which is dominated by a sharp peak at 9.13 $\mu$m with additional bands at 10.6 $\mu$m and 11.3 $\mu$m [@Morlok2012], cannot fully account for the broad 10 $\mu$m silicate emission feature observed for GD 362 [@Jura2007a]. This does not completely exclude the mesosiderite hypothesis, but emission from some additional material is required to fully reproduce the observed infrared spectrum of GD 362. Mesosiderites are a good candidate for the parent body accreted onto GD 362, but unresolved issues remain.

PG 1225-079: Accretion from a Planetesimal with No Single Solar System Analog
-----------------------------------------------------------------------------

In Figure \[Fig: PG1225\], we show a comparison of the mass fractions of 16 elements, including upper limits, between PG 1225-079 and CI chondrites. Though the carbon abundance approaches the chondritic value, the accreted planetesimal differs markedly from CI chondrites; the mass fraction of S is depleted by at least a factor of 40, while Zn is depleted by at least a factor of 8. In contrast, refractories, such as V, Ca, Ti and Sc, are all enhanced. The overall pattern of a relatively high carbon abundance together with enhanced mass fractions of refractory elements does not follow a single condensation sequence, and post-nebular processing is required.
As shown in Figure \[Fig: C-Si-S\], PG 1225-079 has a \[C\]/\[S\] value that is no smaller than the solar ratio, which is very different from other polluted white dwarfs and meteorites. Carbon and sulfur are among the most volatile elements that we can measure; their 50% condensation temperatures are 40 K and 655 K, respectively [@Lodders2003]. Most meteorites, as well as polluted white dwarfs, have a \[C\]/\[S\] ratio lower than the solar value, which can be explained by condensation at a temperature between 40 and 655 K, though this is not necessarily true for all of them. The only solar system analog to PG 1225-079 with a similarly high-carbon, low-sulfur pattern is the ureilites, a type of primitive achondrite. Ureilites are the second largest achondrite group, and it has been suggested that their high carbon abundance derives from a carbon-rich parent body, but the exact formation mechanism is not well understood [@Goodrich1992]. However, as can be seen in Figure \[Fig: PG1225\](a), ureilites fail to match the overall composition of the parent body accreted onto PG 1225-079. We performed a $\chi^2_{red}$ analysis between solar system objects and the accreted planetesimal in PG 1225-079, comparing 9 elements: C, Mg, Si, Ca, Ti, Cr, Mn, Fe and Ni[^8]. The result is shown in Figure \[Fig: PG\_chi\]. There is no single solar system object that can match all nine elements; the closest is a carbonaceous chondrite. Regardless, as shown in Figure \[Fig: PG1225\], the accreted abundances in PG 1225-079 are not at all identical to CI chondrites. The infrared excess around PG 1225-079 corresponds to $\sim$500 K dust [@Farihi2010b]; so far, only two white dwarfs are known to have such cool dust. The other 28 known disk-host stars all have $\sim$1000 K dust [@XuJura2012]. One hypothesis is that the inner disk region was recently impacted by another asteroid and all the material there was dissipated [@Farihi2010b; @Jura2008].
If that is the case, PG 1225-079 could be accreting from a blend of two planetesimals, rather than a single parent body. After testing different combinations of the 80 meteorites in our database, the best-fit model under the steady-state approximation consists of 30% ureilite North Haig and 70% mesosiderite Dyarrl Island by mass. This blend is also marked in Figure \[Fig: PG\_chi\]. A detailed abundance comparison is shown in Figure \[Fig: PG1225\](b); the abundances of S, Mn and Ca do not agree as well as the other elements but are all within 2$\sigma$. A possible scenario is that one extrasolar ureilite (mesosiderite) analog was first tidally disrupted and, more recently, a mesosiderite (ureilite) analog impacted the disk and was blended with the previous material.

ASSESSING THE FORMATION MECHANISMS OF EXTRASOLAR PLANETESIMALS
==============================================================

Having established that the parent bodies accreted onto GD 362 and PG 1225-079 are beyond primitive, we now extend our analysis to other extrasolar planetesimals. We are most interested in understanding the formation mechanisms of extrasolar planetesimals and whether these are dominated by nebular or post-nebular processing. @JuraXu2013 suggested that collisional rearrangement is important in determining the final composition of extrasolar planetesimals, based on the scatter in \[Mg\]/\[Ca\] ratios in 60 externally-polluted white dwarfs. Here, we compile a sample of well-studied externally-polluted white dwarfs with abundance determinations of at least 9 elements. There are 9 stars in total, as listed in Table \[Tab: WDs\], and we now assess the formation mechanism for each object.

  star            Dom.   Dust   Volatile       Intermediate        Refractory         Process               Ref
  --------------- ------ ------ -------------- ------------------- ------------------ --------------------- ------------
  GD 40           He     Y      C,S:,O,Mn,P    Cr,Si,Fe,Mg,Ni      Ca,Ti,Al           primitive             1,2
  WD J0738+1835   He     Y      O,Na,Mn        Cr,Si,Fe,Mg,Co,Ni   V,Ca,Ti,Al,Sc      primitive             3,4
  PG 0843+517     H      Y      C,S,O,P        Cr,Si,Fe,Mg,Ni      Al                 beyond-primitive(?)   5
  PG 1225-079     He     Y      C,Mn           Cr,Si,Fe,Mg,Ni      V,Ca,Ti,Sc         beyond-primitive      6,7
  NLTT 43806      H      N      Na             Cr,Si,Fe,Mg,Ni      Ca,Ti,Al           beyond-primitive      8
  GD 362          He     Y      C,Na,Cu,Mn     Cr,Si,Fe,Mg,Co,Ni   V,Sr,Ca,Ti,Al,Sc   beyond-primitive      7,9
  WD 1929+012     H      Y      C,S,O,Mn,P     Cr,Si,Fe,Mg,Ni      Ca,Al              ???                   5,10,11,12
  G241-6          He     N      S,O,Mn,P       Cr,Si,Fe,Mg,Ni      Ca,Ti              primitive             2,6,13
  HS 2253+8023    He     N      O,Mn           Cr,Si,Fe,Mg,Ni:     Ca,Ti              primitive             6

\[Tab: WDs\]

[**Note.**]{} This is a compiled sample of externally polluted white dwarfs with detections of at least 9 elements heavier than helium. Columns are defined as follows. “Dom" lists the dominant element in the atmosphere. “Dust" indicates whether a star has an infrared excess (“Y") or not (“N"). Following the classification scheme in @Lodders2003, “Volatile" lists the detected volatile elements, defined as having a 50% condensation temperature lower than 1290 K in a solar-system composition gas [@Lodders2003]; “Intermediate" lists the elements with a condensation temperature between 1290 and 1360 K, the same range as that of the common elements Si, Fe and Mg; “Refractory" elements have a 50% condensation temperature higher than 1360 K. The elements are ordered by increasing condensation temperature. “Process" shows our proposed dominant mechanism determining the final composition of the accreted extrasolar planetesimal (see section 5).\
[**References.**]{} (1) @Klein2010; (2) @Jura2012; (3) @Dufour2010; (4) @Dufour2012; (5) @Gaensicke2012; (6) @Klein2011; (7) this paper; (8) @Zuckerman2011; (9) @Zuckerman2007; (10) @Vennes2010; (11) @Vennes2011a; (12) @Melis2011; (13) @Zuckerman2010.
[*GD 40*]{}: As discussed in @Jura2012 and Appendix B, the overall abundance pattern in GD 40 matches carbonaceous chondrites and bulk Earth. Nebular condensation is sufficient to explain its observed composition.

[*WD J0738+1835*]{}: @Dufour2012 found a correlation between the abundance of an element and its condensation temperature: refractory elements are depleted while volatile elements are enhanced compared to bulk Earth. This indicates that the accreted planetesimal might have formed in a low-temperature environment under nebular condensation.

[*PG 0843+517*]{}: This star has the highest mass fraction of iron among all polluted white dwarfs. @Gaensicke2012 found that all core elements, including Fe, Ni, S and Cr, are enhanced relative to the values for bulk Earth, while the lithophile refractory Al is depleted. This star might be accreting from the core of a differentiated object. Nevertheless, considering that the uncertainty for each element is at least 0.2 dex, this conclusion is still preliminary.

[*PG 1225-079*]{}: As discussed in section 4.2, this star has a near-chondritic carbon abundance but also enhanced mass fractions of refractory elements relative to CI chondrites; the accreted planetesimal cannot have formed solely under nebular processing.

[*NLTT 43806*]{}: Compared to chondritic values, the accreted planetesimal is depleted in Fe and enhanced in Al. @Zuckerman2011 found that the best-fit model corresponds to “30% crust 70% upper mantle". With detections of 9 elements, the evidence is strong that NLTT 43806 has accreted the outer layers of a differentiated parent body.

[*GD 362*]{}: As discussed in section 4.1, mesosiderites are the best solar system analog to the accreted parent body, and post-nebular processing is required.

[*WD 1929+012*]{}: @Gaensicke2012 showed that this star has a high iron content. However, the situation is perplexing in that different analyses yield different stellar parameters and atmospheric abundances.
For example, both @Melis2011 and @Gaensicke2012 derived that \[Si\]/\[Fe\] is -0.25, but @Vennes2010 found that \[Si\]/\[Fe\] is 0.19. No final conclusion can be drawn until such discrepancies are resolved.

[*G241-6*]{}: This star is a near twin of GD 40, with a similar abundance pattern but without an infrared excess. One possible scenario is that G241-6 has accreted a planetesimal with a composition similar to that accreted onto GD 40 and is now at the beginning of a decay phase; all heavier elements appear to be depleted relative to GD 40 due to their short settling times [@Klein2011; @Jura2012]. As discussed in @Jura2012 and Appendix B, the overall abundances resemble those of chondrites and no post-nebular processing is required.

[*HS 2253+8023*]{}: @Klein2011 showed that the composition of its parent body agrees with bulk Earth, except for the enhanced calcium abundance. Nebular processing can produce the observed abundance pattern.

As summarized in Table \[Tab: WDs\], at least 4 out of the 9 white dwarfs have accreted planetesimals that can have formed under nebular processing, while post-nebular processing is required for another 3 of them. It should be noted that some objects we identify as primitive might still have undergone some post-nebular processing. For example, GD 40 has accreted a planetesimal with a composition similar to that of bulk Earth, whose overall abundance pattern is chondritic. However, it is still possible that the parent body was differentiated; when the entire object is accreted, the composition appears to be “chondritic". We can therefore only put an upper limit on the number of objects formed under nebular condensation. From this sample of 9 stars, we see that post-nebular processing appears to play an important role in determining the final abundances of extrasolar planetesimals; beyond-primitive planetesimals might be as common as primitive planetesimals.
In contrast, chondrites comprise more than 90% of all meteorites found on Earth by number (Meteorite Bulletin Database[^9]). This difference is not surprising: dynamical rearrangement of planetary systems around white dwarfs is expected to increase the frequency of collisions, giving extrasolar planetesimals more violent evolutionary histories and producing more beyond-primitive objects. So far, 19 elements heavier than helium, including C, S, O, Na, Cu, Mn, P, Cr, Si, Mg, Fe, Co, Ni, V, Sr, Ca, Ti, Al and Sc, have been detected in the atmospheres of polluted white dwarfs, as shown in Table \[Tab: WDs\]. In terms of mass fraction in the accreted planetesimal, the lowest detection limit is $\sim$5 ppm, for Sc in WD J0738+1835 [@Dufour2012]. Studying externally-polluted white dwarfs thus proves to be a very sensitive probe of the bulk compositions of extrasolar planetesimals.

CONCLUSIONS
===========

We present [*HST*]{}/COS ultraviolet observations of GD 362 and PG 1225-079, two heavily polluted helium-dominated white dwarfs. In GD 362, the mass fractions of carbon and sulfur are depleted by at least factors of 7 and 3, respectively, compared to CI chondrites. In PG 1225-079, a similar volatile depletion pattern is found: C by a factor of 2, S by at least a factor of 40 and Zn by at least a factor of 8. We provide good evidence for the presence of beyond-primitive extrasolar planetesimals:

1. Mesosiderites provide a good match to the composition of the parent body accreted onto GD 362. However, there are several unresolved issues with this hypothesis, especially the apparent difference between the mid-infrared spectrum of mesosiderites and that of the dust disk around GD 362; additional material is required.

2. No single meteorite can reproduce the abundance pattern in PG 1225-079. A blend of 30% North Haig ureilite and 70% Dyarrl Island mesosiderite provides a good fit to the overall composition.

3.
Spectroscopic observations of externally-polluted white dwarfs enable sensitive measurement of the bulk compositions of extrasolar planetesimals, including 19 heavy elements down to a mass fraction of 5 ppm. Based on a sample of 9 well-studied white dwarfs, we find that post-nebular processing is as important as nebular condensation in determining the compositions of extrasolar planetesimals. Support for program \# 12290 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work also has been partly supported by NSF grants to UCLA to study polluted white dwarfs. [**APPENDIX**]{} The [*Herschel*]{}/PACS Observation of GD 362 ============================================= While hydrogen is detected in some helium-dominated white dwarfs [@Voss2007], GD 362 has an anomalously large amount. The helium-to-hydrogen number ratio is 14 in its convective zone, corresponding to 5 $\times$ 10$^{24}$ g of hydrogen; this is lower than 7 $\times$ 10$^{24}$ g reported in @Jura2009b because the mass of the convective zone for GD 362 is 0.13 dex lower in the updated calculation (Table \[Tab: Properties\]). The origin of the hydrogen is a mystery. Unlike heavy elements which have short settling times compared to the white dwarf cooling age, hydrogen never sinks and can be accumulated over the entire cooling history of the star [@Bergeron2011; @JuraXu2012]. If GD 362 has always been a helium-dominated white dwarf and all this hydrogen is from accretion of tidally disrupted objects, it can either be one Callisto-size object or $\sim$100 Ceres-like asteroids [@Jura2009b]. In the latter case, likely there would be many more asteroids orbiting the star and mutual collisions among them would generate a cloud of cold dust. We were awarded 1.1 hours of [*Herschel*]{}/PACS [@Poglitsch2010] observation time to look for cold dust around GD 362. 
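The hydrogen mass quoted above follows directly from the He/H number ratio and the mass of the convective zone. In the sketch below, the convective-zone mass is back-derived for illustration (it is an assumed input, not a value taken from the paper's tables), and heavy elements are neglected.

```python
def hydrogen_mass(m_cvz_g, n_he_over_n_h):
    """Hydrogen mass in a convective zone of total mass m_cvz_g, given the
    He/H number ratio (atomic masses 1 and 4; heavy elements neglected)."""
    x_h = 1.0 / (1.0 + 4.0 * n_he_over_n_h)  # hydrogen mass fraction
    return x_h * m_cvz_g

# Convective-zone mass chosen for illustration (assumption, not a measured value):
m_cvz = 2.85e26  # g
print(f"{hydrogen_mass(m_cvz, 14):.1e}")  # ~5e24 g, as quoted in the text
```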
The “mini-scan map" mode was used to observe in the “blue" (85-125 $\mu$m) and “red" (125-210 $\mu$m) bands simultaneously, with a medium scan speed of 20$''$ s$^{-1}$ and a scan leg length of 4$'$. The scan map size is 345$''$ $\times$ 374$''$ and the repetition number is 25. Two different scan angles, 45 degrees and 135 degrees, were used, and the total integration time was 1200 sec. Data reduction was performed using HIPE (Herschel Interactive Processing Environment) on a combined mosaic of level 2 products from pipeline SPG 7.1.0. The pixel scale is 1$''$ pixel$^{-1}$ and 2$''$ pixel$^{-1}$ for the blue and red bands, respectively. Correcting for its proper motion, we expect GD 362 at $\alpha$ = 17:31:34.355, $\delta$ = +37:05:18.331 on the date of the observation. Because there is no detection, aperture photometry was performed at 25 locations within 5 pixels of the nominal position of GD 362. The aperture radius was 20$''$, with a sky annulus between 61$''$ and 70$''$. The background intensity was estimated using the median sky estimation algorithm ([*Herschel*]{} Data Analysis Guide[^10]). Aperture correction factors are 0.949 for blue and 0.897 for red (PACS Observer’s Manual[^11]). Based on the dispersion of the 25 measurements, the 3$\sigma$ upper limits are 5.1 mJy for blue and 5.6 mJy for red. What does this imply about the dust mass? GD 362 has shrunk in mass from 3 M$_{\odot}$ on the main sequence to its current mass of 0.72 M$_{\odot}$ [@Kilic2008b]. Consequently, asteroids initially at 3-5 AU are now orbiting at 13-21 AU. Currently, GD 362 has a stellar temperature of 10,540 K and a cooling age of $\sim$0.9 Gyr [@Farihi2009]. Extrapolating from white dwarf cooling models[^12] [@Bergeron2011], the stellar temperature of GD 362 has been lower than 20,000 K for 90% of its cooling time. We approximate the stellar luminosity by a time-averaged value of 0.01 L$_\odot$. Poynting-Robertson drag was able to remove particles smaller than 20 $\mu$m at a distance of 15 AU for a grain density of 3 g cm$^{-3}$.
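Given these stellar parameters, the blackbody equilibrium temperature of the surviving grains follows from $T_d = T_* \sqrt{R_*/(2D_{orb})}$. In the sketch below the white dwarf radius ($\sim$0.01 R$_\odot$, typical of a 0.72 M$_\odot$ white dwarf) is an assumed value, not taken from the paper.

```python
import math

R_SUN_CM = 6.96e10
AU_CM = 1.496e13

def dust_temperature(t_star_k, r_star_cm, d_orb_cm):
    """Equilibrium temperature of a blackbody grain: T_d = T_* sqrt(R_*/(2 D))."""
    return t_star_k * math.sqrt(r_star_cm / (2.0 * d_orb_cm))

t_star = 10540.0           # K, from the text
r_star = 0.01 * R_SUN_CM   # assumed white dwarf radius
for d_au in (13.0, 21.0):
    # roughly 14 K and 11 K, consistent with the range quoted in the text
    print(d_au, round(dust_temperature(t_star, r_star, d_au * AU_CM), 1))
```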
We therefore assume a dust particle radius of 20 $\mu$m in the putative asteroid belt orbiting GD 362. If the grains behave as blackbodies with negligible albedo, their temperature can be calculated as $$T_d = T_* \sqrt{\frac{R_*}{2D_{orb}}}$$ where T$_*$ and R$_*$ are the stellar temperature and radius, and D$_{orb}$ is the orbital distance. The dust temperature is 14-11 K between 13-21 AU. The mass of the dust disk is $$M_{d}=\frac{ F_{\nu} D_*^2}{\chi B_{\nu}(T)}$$ where D$_*$ is the distance to GD 362, 51 pc [@Kilic2008b], and $\chi$ is the dust opacity. For a particle radius of 20 $\mu$m, $\chi$ = 100 cm$^2$ g$^{-1}$ in the geometric optics limit. As shown in Figure \[Fig: MD\], the upper limit on the dust mass is between 10$^{25}$ g and 10$^{26}$ g at 13-21 AU; this mass is at least twice the hydrogen mass in the atmosphere of GD 362 and one order of magnitude larger than the mass of the solar system’s asteroid belt [@Krasinsky2002]. The upper limit is not stringent enough to rule out the hypothesis that the hydrogen in GD 362 comes from accretion of multiple asteroids. So, the large hydrogen abundance in GD 362 remains an unsolved puzzle.

Looking for Solar System Analogs to Extrasolar Planetesimals
============================================================

The $\chi^2_{red}$ analysis has proven to be an effective way to look for solar system analogs to the compositions of extrasolar planetesimals. Two other helium-dominated white dwarfs, GD 40 and G241-6, have reported volatile and refractory abundances from high-resolution optical and ultraviolet observations that are suitable for this kind of analysis[^13]. Updated settling times and accretion rates are listed in Table \[Tab: GD40G241-6\], while the mass of the convective zone stays the same. Since all the major elements are determined, we compare the mass fraction of each element relative to the sum of O, Mg, Si and Fe.
  ------- --------------- ------------------------- -------------------------
          t$_{set}$$^a$   $\dot{M}$(Z)$_{GD 40}$    $\dot{M}$(Z)$_{G241-6}$
  Z       (10$^6$ yr)     (g s$^{-1}$)              (g s$^{-1}$)
  C       1.1             2.2 $\times$ 10$^6$       $<$ 4.4 $\times$ 10$^5$
  N       1.1             $<$ 2.6 $\times$ 10$^5$   $<$ 2.1 $\times$ 10$^5$
  O       1.1             4.5 $\times$ 10$^8$       4.3 $\times$ 10$^8$
  Mg      1.2             1.7 $\times$ 10$^8$       1.5 $\times$ 10$^8$
  Al      1.2             1.4 $\times$ 10$^7$       $<$ 6.1 $\times$ 10$^6$
  Si      1.0             1.3 $\times$ 10$^8$       8.7 $\times$ 10$^7$
  P       0.79            1.1 $\times$ 10$^6$       4.7 $\times$ 10$^5$
  S       0.64            1.0 $\times$ 10$^7$:      5.6 $\times$ 10$^7$
  Cl      0.51            $<$ 8.0 $\times$ 10$^5$   $<$ 5.8 $\times$ 10$^5$
  Ca      0.51            1.3 $\times$ 10$^8$       5.1 $\times$ 10$^7$
  Ti      0.49            3.2 $\times$ 10$^6$       1.4 $\times$ 10$^6$
  Cr      0.53            6.4 $\times$ 10$^6$       4.5 $\times$ 10$^6$
  Mn      0.53            3.1 $\times$ 10$^6$       2.4 $\times$ 10$^6$
  Fe      0.56            4.4 $\times$ 10$^8$       2.0 $\times$ 10$^8$
  Ni      0.61            1.8 $\times$ 10$^7$       8.9 $\times$ 10$^6$
  Cu      0.58            $<$ 1.8 $\times$ 10$^5$   $<$ 1.8 $\times$ 10$^5$
  Ga      0.50            $<$ 2.9 $\times$ 10$^4$   $<$ 2.9 $\times$ 10$^4$
  Ge      0.43            $<$ 1.4 $\times$ 10$^5$   $<$ 1.4 $\times$ 10$^5$
  Total                   1.4 $\times$ 10$^9$       9.9 $\times$ 10$^8$
  ------- --------------- ------------------------- -------------------------

  : Updated Settling Times and Accretion Rates for GD 40 and G241-6

\[Tab: GD40G241-6\]

$^a$ This column applies to both GD 40 and G241-6 because their atmospheric conditions are similar.

The total accretion rate for GD 40 is a factor of 2 lower than the value derived in @Klein2010, but the relative abundances change much less. The result of the $\chi^2_{red}$ analysis is presented in Figure \[Fig: GD40\_chi\]. When all 13 detected elements are included, both carbonaceous chondrites and bulk Earth can match the composition at the 95% confidence level for both the steady-state and build-up approximations. The accreted planetesimal appears to be primitive and can have formed under nebular condensation, similar to what was concluded by @Jura2012.
The newly-derived total accretion rate for G241-6 is about a factor of 2 lower than previously reported [@Zuckerman2010]. The non-detection of an infrared excess and the slight depletion of the heavier elements suggest that it may be at the beginning of a decay phase [@XuJura2012; @Klein2011]. We assess both the steady-state and decay phases in the $\chi^2_{red}$ analysis; in the latter case, we assume that accretion stopped 0.6 $\times$ 10$^6$ yr ago, approximately one settling time for Fe, because its mass fraction is depleted by a factor of 2 relative to CI chondrites. The composition of the parent body is calculated following @Zuckerman2011 and Equation (5) in @Koester2009a. A fuller exploration of different time-varying models will be presented in the future, in the spirit of @JuraXu2012. As shown in Figure \[Fig: G241-6\_chi\], both carbonaceous chondrites and ordinary chondrites provide good matches to all 11 elements, including O, Mg, Si, P, S, Ca, Ti, Cr, Mn, Fe and Ni. However, the carbon upper limit in G241-6, which is not included in the $\chi^2_{red}$ analysis, is at least one order of magnitude lower than the abundances in most carbonaceous chondrites [@Jura2012]. Thus, ordinary chondrites are a more promising solar system analog to the parent body accreted onto G241-6, and nebular condensation is sufficient to produce the observed abundance pattern. The $\chi^2_{red}$ analysis for GD 40 and G241-6 confirms the previous results [@Jura2012]: the accreted extrasolar planetesimals can have formed under nebular condensation and their compositions resemble primitive chondrites in the solar system.

[^1]: The four white dwarfs are: GD 61 [@Desharnais2008; @Farihi2011a]; GD 40, G241-6 [@Klein2010; @Klein2011; @Zuckerman2010; @Jura2012] and WD 1929+012 [@Vennes2010; @Vennes2011a; @Melis2011; @Gaensicke2012].

[^2]: Very recently, @Koester2012 reported several white dwarfs with solar carbon-to-silicon ratios. However, the source of this pollution is unclear and more analysis is forthcoming.
[^3]: http://www.stsci.edu/hst/cos/performance/spectral\_resolution/nuv\_model\_lsf

[^4]: Here, log n(X)/n(Y) is abbreviated as \[X\]/\[Y\].

[^5]: Updated diffusion timescales can be obtained at http://www.astrophysik.uni-kiel.de/~koester/astrophysics/

[^6]: For a couple of meteorites with no reported carbon abundance, we compute the $\chi^2_{red}$ for the other 10 elements.

[^7]: http://www.lpi.usra.edu/meteor/

[^8]: Similar to the case of GD 362, for the meteorites with no reported carbon abundance, we only calculated $\chi^2_{red}$ for the other 8 elements.

[^9]: http://www.lpi.usra.edu/meteor/

[^10]: http://herschel.esac.esa.int/hcss-doc-8.0/print/howtos/howtos.pdf

[^11]: http://herschel.esac.esa.int/Docs/PACS/pdf/pacs\_om.pdf

[^12]: http://www.astro.umontreal.ca/~bergeron/CoolingModels/

[^13]: GD 61 also has high-resolution optical and ultraviolet observations [@Desharnais2008; @Farihi2011a]. However, with a total of 5 detected elements, it is hard to make a comparison using the $\chi^2_{red}$ analysis.
--- abstract: | Let $\mathbf{H}=(h_{ij})$ and $\mathbf{G}=(g_{ij})$ be two $m\times n$, $m\leq n$, random matrices, each with i.i.d complex zero-mean unit-variance Gaussian entries, with correlation between any two elements given by $\mathbb{E}[h_{ij}g_{pq}^\star]=\rho\,\delta_{ip}\delta_{jq}$ such that $|\rho|<1$, where ${}^\star$ denotes the complex conjugate and $\delta_{ij}$ is the Kronecker delta. Assume $\{s_k\}_{k=1}^m$ and $\{r_l\}_{l=1}^m$ are unordered singular values of $\mathbf{H}$ and $\mathbf{G}$, respectively, and $s$ and $r$ are randomly selected from $\{s_k\}_{k=1}^m$ and $\{r_l\}_{l=1}^m$, respectively. In this paper, exact analytical closed-form expressions are derived for the joint probability distribution function (PDF) of $\{s_k\}_{k=1}^m$ and $\{r_l\}_{l=1}^m$ using an Itzykson-Zuber-type integral, as well as the joint marginal PDF of $s$ and $r$, by a bi-orthogonal polynomial technique. These PDFs are of interest in multiple-input multiple-output (MIMO) wireless communication channels and systems. author: - 'Shuangquan Wang[^1]' - 'Ali Abdi${}^*$' bibliography: - 'IEEEabrv.bib' - 'IEEE\_sw27.bib' title: Joint Singular Value Distribution of Two Correlated Rectangular Gaussian Matrices and Its Application --- correlated complex random matrices, joint singular value distribution, bi-orthogonal polynomials 15A52, 15A18, 62E15, 33C45 Introduction ============ Random singular values have found numerous applications such as hypothesis testing and principal component analysis in statistics[@IEEE_sw27:MuirheadBook82], nuclear energy levels and level spacing in nuclear physics[@IEEE_sw27:MehtaBook04], and calculation of the multiple-input multiple-output (MIMO) channel capacity in wireless communications[@IEEE_sw27:Telatar99_MIMO_Cap]. The singular value distribution of a *single* Gaussian random matrix is given in[@IEEE_sw27:Shen01_SVD_PDF]. 
However, the joint singular value distribution of *correlated* Gaussian random matrices has received less attention so far, although it has important applications in wireless MIMO communications, e.g., the second-order statistics of the *eigen*-channels[@IEEE_sw27:Wang_EigenChannel_GlobeCom06] and of the instantaneous mutual information[@IEEE_sw27:Wang05_LCR_AFD_SISO_Capacity; @IEEE_sw27:Wang05_IT; @IEEE_sw27:NanZhang05]. To the best of our knowledge, correlated random matrices have been studied to some extent in [@IEEE_sw27:MehtaBook04; @IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices; @IEEE_sw27:Mehta98_Coupled_Hermitian_Matrix_Chain], where only Hermitian matrices were considered. Different from [@IEEE_sw27:MehtaBook04; @IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices; @IEEE_sw27:Mehta98_Coupled_Hermitian_Matrix_Chain], we consider the situation where the elements, with the same indices, of two rectangular complex Gaussian random matrices are correlated by a *complex* number, and derive exact analytical closed-form expressions for the joint PDF of their singular values. This paper is organized as follows. Section \[sec:problem\_desc\] introduces the two rectangular complex Gaussian random matrices. The joint PDF of the singular values is studied in Section \[sec:JPDF\_Singular\_Values\] using an Itzykson-Zuber-type integral. The joint marginal PDF of the singular values is derived in Section \[sec:Marginal\_JPDF\], and its application to wireless MIMO communications is presented in Section \[sec:Applications\]. Finally, concluding remarks are summarized in Section \[sec:Conclusion\].
[*Notation*]{}: $\cdot^{\dag}$ is reserved for matrix Hermitian, $\cdot^T$ for matrix transpose, $\cdot^{\star}$ for complex conjugate, ${\operatorname{tr}}[\cdot]$ for the trace of a matrix, $\jmath$ for $\sqrt{-1}$, $\mathbb{E}[\cdot]$ for mathematical expectation, $\mathbf{I}_m$ for the $m\times m$ identity matrix, $\otimes$ for the Kronecker product, and $\Re[\cdot]$ and $\Im[\cdot]$ for the real and imaginary parts of a complex number, respectively. In addition, $\diag(\mathbf{s})$ denotes a diagonal matrix with $\mathbf{s}$ on the main diagonal, $t\!\!\in\!\![m,n]$ implies that $t$, $m$, and $n$ are all integers such that $m\leq t\leq n$ with $m\leq n$, and $\det\left|x_{kl}\right|$ is the determinant of the matrix, where $x_{kl}$ resides on the $k^\mathrm{th}$ row and $l^\mathrm{th}$ column. Moreover, lower-case bold letters represent row vectors, whereas upper-case bold letters are used for matrices. Finally $\mathcal{C\,\!N}$ means complex normal, and ${\operatorname{vec}}(\cdot)$ stacks all the columns of its matrix argument into one tall column vector. Problem Description {#sec:problem_desc} =================== There are two $m\times n$ random matrices $\mathbf{H}=(h_{ij})$ and $\mathbf{G}=(g_{ij})$, $i\in[1,m]$, $j\in[1,n]$, each with i.i.d complex zero-mean unit-variance Gaussian entries, i.e., $\mathbb{E}[h_{ij}]=\mathbb{E}[g_{ij}]=0, \forall i, j$, $\mathbb{E}[h_{ij}h_{pq}^\star]=\mathbb{E}[g_{ij}g_{pq}^\star] =\delta_{ip}\delta_{jq}$, where the Kronecker symbol $\delta_{ij}$ is $1$ or $0$ when $i=j$ or $i\neq j$. Therefore $\mathbf{H}, \mathbf{G}\thicksim \mathcal{C\,\!N}(\mathbf{0}, \mathbf{I}_{mn})$. Moreover, the correlation among the two random matrices is given by $$\label{eq:autocorrelation} \mathbb{E}[h_{ij}g_{pq}^\star]=\rho\,\delta_{ip}\delta_{jq}, \quad \forall i,j,p,q,$$ where $\rho=|\rho|e^{\jmath\theta}$ is a complex number with $|\rho|<1$. Without loss of generality, we assume $m\leq n$ and set $\nu=n-m$. 
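Matrix pairs with the correlation structure in (\[eq:autocorrelation\]) are easy to simulate: draw $\mathbf{H}$ and an independent $\mathbf{W}$, both with i.i.d. $\mathcal{C\,\!N}(0,1)$ entries, and set $\mathbf{G}=\rho^\star\mathbf{H}+\sqrt{1-|\rho|^2}\,\mathbf{W}$, which gives $\mathbb{E}[h_{ij}g_{pq}^\star]=\rho\,\delta_{ip}\delta_{jq}$ while keeping $\mathbf{G}\thicksim\mathcal{C\,\!N}(\mathbf{0},\mathbf{I}_{mn})$. The Monte Carlo check below uses this standard construction; it is a numerical sanity check, not part of the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_pair(m, n, rho, num):
    """Draw `num` pairs (H, G) of m x n matrices with i.i.d. CN(0,1)
    entries and E[h_ij g_pq^*] = rho * delta_ip * delta_jq."""
    def cn(*shape):  # complex normal: zero mean, unit variance per entry
        return (rng.standard_normal(shape)
                + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)
    h = cn(num, m, n)
    g = np.conj(rho) * h + np.sqrt(1.0 - abs(rho) ** 2) * cn(num, m, n)
    return h, g

rho = 0.6 * np.exp(1j * 0.3)
h, g = correlated_pair(2, 3, rho, 100_000)
print(np.mean(h * np.conj(g)))   # empirical E[h g^*], close to rho
```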
Based on the singular value decomposition (SVD), $\mathbf{H}$ and $\mathbf{G}$ can be, respectively, diagonalized as[@IEEE_sw27:HuaBook63] $$\begin{aligned} \mathbf{H}&=\mathbf{U}\mathbf{S}\mathbf{V}^\dag,\label{eq:diagonalization_H}\\ \mathbf{G}&=\widetilde{\mathbf{U}}\mathbf{R} \widetilde{\mathbf{V}}^\dag,\label{eq:diagonalization_G}\end{aligned}$$ where $\mathbf{S}=\begin{bmatrix}\diag(\mathbf{s})\ \mathbf{0}\end{bmatrix}$ and $\mathbf{R}=\begin{bmatrix}\diag(\mathbf{r}) \ \mathbf{0}\end{bmatrix}$ with $\mathbf{s}=[s_1, s_2, \cdots, s_m]$ and $\mathbf{r}=[r_1, r_2, \cdots, r_m]$, respectively. We assume that the singular values of $\mathbf{G}$, $r_1, r_2, \cdots, r_m$, are unordered and the singular values of $\mathbf{H}$, $s_1, s_2, \cdots, s_m$, are also unordered. Now we would like to know the joint PDF of $\{r_l\}_{l=1}^m$ and $\{s_l\}_{l=1}^m$. Moreover, with $r$ randomly selected from $r_1, r_2, \cdots, r_m$, and $s$ randomly selected from $s_1, s_2, \cdots, s_m$, it is of interest to derive the joint PDF of $r$ and $s$ as well. These two PDFs are derived in Section \[sec:JPDF\_Singular\_Values\] and \[sec:Marginal\_JPDF\], respectively. 
Joint PDF of $\{s_l\}_{l=1}^m$ and $\{r_l\}_{l=1}^m$ {#sec:JPDF_Singular_Values} ==================================================== \[lem:jpdf\_H\_G\] For two correlated rectangular complex Gaussian random matrices, $\mathbf{H}, \mathbf{G}\thicksim \mathcal{C\,\!N}(\mathbf{0}, \mathbf{I}_{mn})$, with the correlation between $\mathbf{H}$ and $\mathbf{G}$ given by (\[eq:autocorrelation\]), the joint PDF of $\mathbf{H}$ and $\mathbf{G}$ is given by $$\label{eq:jpdf_H_G} p(\mathbf{H}, \mathbf{G})=\frac{1}{\pi^{2mn}\left(1-|\rho|^2\right)^{mn}} \exp\left[-\frac{{\operatorname{tr}}\!\left(\mathbf{H}\mathbf{H}^\dag+ \mathbf{G}\mathbf{G}^\dag-\rho^\star\mathbf{H}\mathbf{G}^\dag- \rho\mathbf{G}\mathbf{H}^\dag\right)}{1-|\rho|^2}\right].$$ We set $\mathbf{h}={\operatorname{vec}}(\mathbf{H})$, $\mathbf{g}={\operatorname{vec}}(\mathbf{G})$, and $\mathbf{x}=[\mathbf{h}^T\ \mathbf{g}^T]^T$. Based on $\mathbf{H}, \mathbf{G}\thicksim \mathcal{C\,\!N}(\mathbf{0}, \mathbf{I}_{mn})$ and (\[eq:autocorrelation\]), we have the mean and covariance matrix of $\mathbf{x}$ as $\mathbb{E}[\mathbf{x}]=\mathbf{0}$ and $\Sigma_\mathbf{x}=\Sigma_\tau\otimes\mathbf{I}_{mn}$ with $\Sigma_\tau=\left[\begin{smallmatrix}1& \rho\\\rho^\star& 1\end{smallmatrix}\right]$, respectively. Therefore the PDF of $\mathbf{x}$ is given by[@IEEE_sw27:James64_MatrixVariate] $$\label{eq:pdf_x} p(\mathbf{x})=\frac{1}{\pi^{2mn}\det|\Sigma_\mathbf{x}|} \exp\left(-\mathbf{x}^\dag\Sigma_\mathbf{x}^{-1}\mathbf{x}\right),$$ where $\det|\Sigma_\mathbf{x}|= \left(\det|\Sigma_\tau|\right)^{mn}=\left(1-|\rho|^2\right)^{mn}$. With $\Sigma_\tau^{-1}=\frac{1}{1-|\rho|^2}\left[\begin{smallmatrix}1& -\rho\\-\rho^\star& 1\end{smallmatrix}\right]$, we obtain $\Sigma_\mathbf{x}^{-1}=\Sigma_\tau^{-1}\otimes\mathbf{I}_{mn}= \frac{1}{1-|\rho|^2}\left[\begin{smallmatrix}\mathbf{I}_{mn}& -\rho\mathbf{I}_{mn}\\-\rho^\star\mathbf{I}_{mn}& \mathbf{I}_{mn}\end{smallmatrix}\right]$. 
Therefore $\mathbf{x}^\dag\Sigma_\mathbf{x}^{-1}\mathbf{x}$ in (\[eq:pdf\_x\]) can be rewritten as $$\label{eq:trace} \begin{split} \mathbf{x}^\dag\Sigma_\mathbf{x}^{-1}\mathbf{x} &={\operatorname{tr}}\left(\Sigma_\mathbf{x}^{-1}\mathbf{x}\mathbf{x}^\dag\right) ={\operatorname{tr}}\left(\frac{1}{1-|\rho|^2}\left[\begin{smallmatrix}\mathbf{I}_{mn}& -\rho\mathbf{I}_{mn}\\-\rho^\star\mathbf{I}_{mn}& \mathbf{I}_{mn}\end{smallmatrix}\right]\left[\begin{smallmatrix} \mathbf{h}\mathbf{h}^\dag& \mathbf{h}\mathbf{g}^\dag\\ \mathbf{g}\mathbf{h}^\dag& \mathbf{g}\mathbf{g}^\dag\end{smallmatrix}\right]\right),\\ &=\frac{{\operatorname{tr}}\left(\mathbf{h}\mathbf{h}^\dag +\mathbf{g}\mathbf{g}^\dag-\rho^\star\mathbf{h}\mathbf{g}^\dag -\rho\mathbf{g}\mathbf{h}^\dag\right)}{1-|\rho|^2} =\frac{{\operatorname{tr}}\!\left(\mathbf{H}\mathbf{H}^\dag+ \mathbf{G}\mathbf{G}^\dag-\rho^\star\mathbf{H}\mathbf{G}^\dag- \rho\mathbf{G}\mathbf{H}^\dag\right)}{1-|\rho|^2}, \end{split}$$ where ${\operatorname{tr}}\left(\mathbf{A}\mathbf{B}^\dag\right)= {\operatorname{vec}}(\mathbf{B})^\dag{\operatorname{vec}}(\mathbf{A}) ={\operatorname{tr}}\left[{\operatorname{vec}}(\mathbf{A}) {\operatorname{vec}}(\mathbf{B})^\dag\right]$[@IEEE_sw27:GuptaBook99] is used in the last “=” of (\[eq:trace\]). Substitution of (\[eq:trace\]) into (\[eq:pdf\_x\]) leads to (\[eq:jpdf\_H\_G\]). 
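Both matrix identities used in this proof, the Kronecker-product inversion behind $\Sigma_\mathbf{x}^{-1}$ and the trace/vec rule applied in the last step of (\[eq:trace\]), are easy to confirm numerically; the following is a small sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
rho = 0.6 * np.exp(0.3j)

# (1) Kronecker structure: (Sigma_tau (x) I)^{-1} = Sigma_tau^{-1} (x) I,
#     and det(Sigma_tau (x) I_{mn}) = det(Sigma_tau)^{mn} = (1-|rho|^2)^{mn}.
Sigma_tau = np.array([[1, rho], [np.conj(rho), 1]])
Sigma_x = np.kron(Sigma_tau, np.eye(m * n))
inv_direct = np.linalg.inv(Sigma_x)
inv_kron = np.kron(np.linalg.inv(Sigma_tau), np.eye(m * n))
det_Sigma_x = np.linalg.det(Sigma_x)

# (2) Trace/vec rule: tr(A B^dag) = vec(B)^dag vec(A) = tr[vec(A) vec(B)^dag].
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
vec = lambda M: M.reshape(-1, 1, order='F')   # stack the columns
t1 = np.trace(A @ B.conj().T)
t2 = (vec(B).conj().T @ vec(A))[0, 0]
t3 = np.trace(vec(A) @ vec(B).conj().T)
```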
From (\[eq:diagonalization\_H\]), we know that the unitary matrix pair $(\mathbf{U}, \mathbf{V})$ parameterizes the coset space $\mathcal{U}(m)\times\mathcal{U}(n)/\left[\mathcal{U}(1)\right]^m$, where $\mathcal{U}(p)$ is the unitary group of order $p$, and the integration measure, $d[\mathbf{H}]=\prod_{i=1}^m\prod_{j=1}^nd\left[\Re{h_{ij}}\right] d\left[\Im{h_{ij}}\right]$, can be represented by[@IEEE_sw27:Jackson96] $$\label{eq:dH} d[\mathbf{H}]=\Omega J(\mathbf{s})d[\mathbf{s}]d\mu(\mathbf{U}, \mathbf{V}),$$ where $J(\mathbf{s})=\triangle^2(\mathbf{s}^2) \prod_{k=1}^ms_k^{2\nu+1}$ with the $m$-dimensional Vandermonde determinant $\triangle(\mathbf{s}^2)=\det\left|s_k^{2(l-1)}\right| =\prod_{k>l}(s^2_k-s^2_l)$ and $\triangle^2(\cdot)=\left[\triangle(\cdot)\right]^2$, $d[\mathbf{s}]=\prod_{l=1}^m ds_l$, $d\mu(\mathbf{U}, \mathbf{V})$ is the Haar measure of $\mathcal{U}(m)\times\mathcal{U}(n)/ \left[\mathcal{U}(1)\right]^m$[@IEEE_sw27:Jackson96], and the constant $\Omega$ is given by[@IEEE_sw27:Nagao91; @IEEE_sw27:Jackson96] $$\label{eq:Omega} \Omega=\frac{2^m\pi^{mn}}{\prod_{j=1}^mj!(j+\nu-1)!} =\frac{2^m\pi^{mn}}{m!\prod_{j=0}^{m-1}j!(j+\nu)!}.$$ Similarly, we have $$\label{eq:dG} d[\mathbf{G}]=\Omega J(\mathbf{r})d[\mathbf{r}]d\mu(\widetilde{\mathbf{U}}, \widetilde{\mathbf{V}}),$$ where $J(\mathbf{r})=\triangle^2(\mathbf{r}^2) \prod_{k=1}^mr_k^{2\nu+1}$ with the $m$-dimensional Vandermonde determinant $\triangle(\mathbf{r}^2)=\det \left|r_k^{2(l-1)}\right| =\prod_{k>l}(r^2_k-r^2_l)$ and $d[\mathbf{r}]=\prod_{l=1}^m dr_l$. In order to obtain the joint PDF of $\{r_l\}_{l=1}^m$ and $\{s_l\}_{l=1}^m$, we need the following proposition. 
\[pro:Itzykson-Zuber-Integral\] $$\begin{split} &\hspace{1em}\int d\mu(\mathbf{U},\mathbf{V})\exp\left\{-\frac{{\operatorname{tr}}\left[(\mathbf{H}-\mathbf{G}) (\mathbf{H}-\mathbf{G})^\dag\right]}{t}\right\}\\ &=\frac{2^m\pi^{mn}t^{mn-m} \det\left|\exp\left(-\frac{s^2_k+r^2_l}{t}\right) I_\nu\!\!\left(\frac{2s_kr_l}{t}\right)\right|} {m!\Omega\triangle(\mathbf{s}^2) \triangle(\mathbf{r}^2)\prod_{k=1}^m (s_kr_k)^\nu}, \end{split}$$ where $\Omega$ is given by (\[eq:Omega\]) and $I_k(z)=\frac{1}{\pi}\!\int_0^\pi e^{z\cos\theta} \cos(k\theta)\text{d}\theta$ is the $k^\text{th}$ order modified Bessel function of the first kind. \[th:jpdf\_singular\_values\] The joint PDF of the singular values of $\mathbf{H}$ and $\mathbf{G}$ is given by $$\label{eq:jpdf_singular_values} p(\mathbf{s}, \mathbf{r}) =\frac{\exp\left(-\frac{\sum_{k=1}^ms^2_k+r^2_k} {1-|\rho|^2}\right)\triangle(\mathbf{s}^2) \triangle(\mathbf{r}^2)\prod_{k=1}^m (s_kr_k)^{\nu+1} \det\left| I_\nu\!\!\left(\frac{2|\rho|s_kr_l}{1-|\rho|^2}\right)\right|} {2^{-2m}m!m!\prod_{j=0}^{m-1}j!(j+\nu)!|\rho|^{mn-m}(1-|\rho|^2)^m}.$$ By combining (\[eq:jpdf\_H\_G\]) with (\[eq:dH\]) and (\[eq:dG\]), we obtain $$\label{eq:jpdf_singular_values_derivation} p(\mathbf{s}, \mathbf{r})= \frac{\Omega^2 J(\mathbf{s})J(\mathbf{r})}{\pi^{2mn}(1-|\rho|^2)^{mn}} \Phi(\mathbf{s}, \mathbf{r}),$$ where $$\label{eq:Phi_s_r} \begin{split} \Phi(\mathbf{s}, \mathbf{r})\!&=\!\int\!\! d\mu(\widetilde{\mathbf{U}},\widetilde{\mathbf{V}})\!\int\!\! d\mu(\mathbf{U},\mathbf{V})\exp\left[-\frac{{\operatorname{tr}}\!\left( \mathbf{H}\mathbf{H}^\dag \!+\!\mathbf{G}\mathbf{G}^\dag\!-\!\rho^\star\mathbf{H}\mathbf{G}^\dag\!-\! \rho\mathbf{G}\mathbf{H}^\dag\right)}{1-|\rho|^2}\right],\\ \!&=\!\int\!\! d\mu(\widetilde{\mathbf{U}},\widetilde{\mathbf{V}})\!\!\int\!\! d\mu(\mathbf{U},\mathbf{V})\exp\left\{\!-\frac{{\operatorname{tr}}\left[(\mathbf{H}\!-\!\rho\mathbf{G}) (\mathbf{H}\!-\!\rho\mathbf{G})^\dag\right]}{1-|\rho|^2}\!-\! 
{\operatorname{tr}}(\mathbf{G}\mathbf{G}^\dag)\!\right\}\!,\\ \!&=\!\int\!\! d\mu(\widetilde{\mathbf{U}},\widetilde{\mathbf{V}}) \,e^{-{\operatorname{tr}}(\mathbf{G}\mathbf{G}^\dag)}\!\int\!\! d\mu(\mathbf{U},\mathbf{V})\exp\left\{\!-\frac{{\operatorname{tr}}\left[(\mathbf{H}-\rho\mathbf{G}) (\mathbf{H}-\rho\mathbf{G})^\dag\right]}{1-|\rho|^2}\!\right\},\\ \!&=\!\int\!\! d\mu(\widetilde{\mathbf{U}},\widetilde{\mathbf{V}}) \frac{e^{-\sum_{k=1}^mr_k^2}(1-|\rho|^2)^{mn-m} \det\left|e^{-\frac{s^2_k+|\rho|^2r^2_l}{1-|\rho|^2}} I_\nu\!\!\left(\frac{2|\rho|s_kr_l}{1-|\rho|^2}\right)\right|} {2^{-m}m!\pi^{-mn}\Omega\triangle(\mathbf{s}^2) \triangle(|\rho|^2\mathbf{r}^2)\prod_{k=1}^m (|\rho|s_kr_k)^\nu},\\ \!&=\!\frac{(1-|\rho|^2)^{mn-m}\exp\left(-\frac{\sum_{k=1}^ms^2_k+r^2_k} {1-|\rho|^2}\right) \det\left| I_\nu\!\!\left(\frac{2|\rho|s_kr_l}{1-|\rho|^2}\right)\right|} {2^{-m}m!\pi^{-mn}\Omega|\rho|^{mn-m}\triangle(\mathbf{s}^2) \triangle(\mathbf{r}^2)\prod_{k=1}^m (s_kr_k)^\nu}. \end{split}$$ The derivations of the second and third lines of (\[eq:Phi\_s\_r\]) are straightforward. The fourth line comes from $$\rho\mathbf{G}=\widehat{\mathbf{U}}\widehat{\mathbf{R}} \widehat{\mathbf{V}}^\dag$$ with $\widehat{\mathbf{R}}=|\rho|\mathbf{R}$, and Proposition \[pro:Itzykson-Zuber-Integral\] with the replacements $t\rightarrow1-|\rho|^2$ and $\mathbf{G}\rightarrow\rho\mathbf{G}$. The last line is based on the convention that $\int\! d\mu(\widetilde{\mathbf{U}},\widetilde{\mathbf{V}})=1$[@IEEE_sw27:Jackson96]. Plugging (\[eq:Omega\]) and the last line of (\[eq:Phi\_s\_r\]) into (\[eq:jpdf\_singular\_values\_derivation\]), we obtain (\[eq:jpdf\_singular\_values\]).
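The theorem can be sanity-checked numerically in the simplest case $m=1$ (so $\nu=n-1$, both Vandermonde factors equal $1$, and the determinant is a single Bessel factor). The reduction of (\[eq:jpdf\_singular\_values\]) below, and the closed-form moment $\mathbb{E}[s^2r^2]=n^2+n|\rho|^2$ it is tested against, are elementary computations of ours, not taken from the text:

```python
import math
import numpy as np
from scipy.integrate import dblquad
from scipy.special import iv

rng = np.random.default_rng(3)
n = 3
nu = n - 1            # m = 1
rho = 0.5             # the PDF only depends on |rho|
c = 1 - rho ** 2

def p_sr(s, r):
    """m = 1 reduction of the joint PDF of the singular values s and r."""
    return (4.0 * (s * r) ** (nu + 1) * np.exp(-(s ** 2 + r ** 2) / c)
            * iv(nu, 2 * rho * s * r / c) / (math.factorial(nu) * rho ** nu * c))

# Normalization and E[s^2 r^2] from the density (tails decay very fast):
norm = dblquad(lambda r, s: p_sr(s, r), 0, 8, 0, 8)[0]
moment_pdf = dblquad(lambda r, s: s ** 2 * r ** 2 * p_sr(s, r), 0, 8, 0, 8)[0]

# The same moment by Monte Carlo over correlated 1 x n Gaussian rows:
N = 200000
h = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
w = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
g = rho * h + np.sqrt(c) * w      # realizes E[h g*] = rho entrywise
moment_mc = (np.sum(np.abs(h) ** 2, axis=1) * np.sum(np.abs(g) ** 2, axis=1)).mean()
# Both moments should approach n**2 + n * rho**2 = 9.75 for these parameters.
```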
By relating the eigenvalues of $\mathbf{G}\mathbf{G}^\dag$ to the singular values of $\mathbf{G}$ through $\alpha_l=r_l^2$, $l\in[1,m]$, and the eigenvalues of $\mathbf{H}\mathbf{H}^\dag$ to the singular values of $\mathbf{H}$ through $\beta_l=s_l^2$, $l\in[1,m]$, we can derive the joint PDF of $\mbox{\boldmath{$\alpha$}}=\left[\alpha_1, \alpha_2, \cdots, \alpha_m\right]$ and $\mbox{\boldmath{$\beta$}}=\left[\beta_1, \beta_2, \cdots, \beta_m\right]$, presented in the following corollary. \[coro:jpdf\_eigenvalues\] The joint PDF of the unordered eigenvalues of $\mathbf{H}\mathbf{H}^\dag$ and $\mathbf{G}\mathbf{G}^\dag$ is $$\label{eq:jpdf_eig_values} p(\mbox{\boldmath{$\beta$}}, \mbox{\boldmath{$\alpha$}})=\frac{\exp \left(-\frac{\sum_{k=1}^m\beta_k+\alpha_k} {1-|\rho|^2}\right)\triangle(\mbox{\boldmath{$\beta$}}) \triangle(\mbox{\boldmath{$\alpha$}})\prod_{k=1}^m (\sqrt{\beta_k\alpha_k})^\nu \det\left| I_\nu\!\!\left(\frac{2|\rho|\sqrt{\beta_k\alpha_l}}{1-|\rho|^2}\right)\right|} {m!m!\prod_{j=0}^{m-1}j!(j+\nu)!|\rho|^{mn-m}(1-|\rho|^2)^m},$$ where $m$-dimensional Vandermonde determinants are defined by $\triangle(\mbox{\boldmath{$\beta$}})=\det \left|\beta_k^{l-1}\right|=\prod_{k>l}(\beta_k-\beta_l)$ and $\triangle(\mbox{\boldmath{$\alpha$}})=\det \left|\alpha_k^{l-1}\right|=\prod_{k>l}(\alpha_k-\alpha_l)$. It is straightforward to obtain (\[eq:jpdf\_eig\_values\]) from (\[eq:jpdf\_singular\_values\]) by $2m$ one-to-one nonlinear mappings. Joint Marginal PDF {#sec:Marginal_JPDF} ================== In this section, with $\beta=s^2$ and $\alpha=r^2$, we calculate the joint marginal PDF of $\beta$ and $\alpha$, $p(\beta, \alpha)$, using the techniques and results presented in [@IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices; @IEEE_sw27:Mehta98_Coupled_Hermitian_Matrix_Chain]. Then the joint PDF of $s$ and $r$, $p(s,r)$, is easily derived. 
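The Vandermonde factors entering (\[eq:jpdf\_eig\_values\]) can be checked directly against their determinant definition (a small numerical sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4
beta = rng.uniform(0.1, 5.0, size=m)

# Matrix with entry beta_k^(l-1) in row k, column l (k, l = 1..m):
V = np.vander(beta, m, increasing=True)
det_form = np.linalg.det(V)

# Product form: prod over k > l of (beta_k - beta_l)
prod_form = 1.0
for k in range(m):
    for l in range(k):
        prod_form *= beta[k] - beta[l]
```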
If the polynomials $P_k(\beta)$ and $Q_l(\alpha)$ satisfy $\int w(\beta, \alpha)P_k(\beta)Q_l(\alpha)d\beta d\alpha=\delta_{kl}$, then $P_k(\beta)$ and $Q_l(\alpha)$ are called bi-orthogonal polynomials associated with the weight function $w(\beta, \alpha)$[@IEEE_sw27:MehtaBook04]. With this definition, we have the following lemma. \[lem:jpdf\_by\_bi-orthonomal\_weight\_function\] There exist bi-orthogonal polynomials, $P_k(\beta)$ and $Q_l(\alpha)$, and a weight function, $w(\beta, \alpha)$, which reduce (\[eq:jpdf\_eig\_values\]) to the following form $$\label{eq:jpdf_by_bi-orthonomal_weight_function} p(\mbox{\boldmath{$\beta$}}, \mbox{\boldmath{$\alpha$}})=C_1\,\det|P_{k-1}(\beta_l)| \det|w(\beta_k, \alpha_l)| \det|Q_{k-1}(\alpha_l)|,$$ where $C_1$ is a normalization constant. In this paper, $\nu$ is a non-negative integer. Using the Hille-Hardy formula[@IEEE_sw27:BeckmannBook73 pp. 185, (46)] $$\label{eq:Hille-Hardy-Formula} \sum_{k=0}^\infty\frac{k!z^k}{(k+\nu)!}L_k^\nu(x)L_k^\nu(y) =\frac{(xyz)^{-\frac{\nu}{2}}}{1-z}\exp\left(-z\frac{x+y}{1-z}\right) I_\nu\!\!\left(\frac{2\sqrt{xyz}}{1-z}\right), |z|<1,$$ with $L_k^\nu(x)=\frac{1}{k!} e^xx^{-\nu}\frac{d^k}{dx^k}(e^{-x}x^{k+\nu})$ the associated Laguerre polynomial, we can rewrite (\[eq:jpdf\_eig\_values\]) as $$\label{eq:jpdf_eig_values_Laguerre} p(\mbox{\boldmath{$\beta$}}, \mbox{\boldmath{$\alpha$}})=\frac{\triangle(\mbox{\boldmath{$\beta$}}) \triangle(\mbox{\boldmath{$\alpha$}})\det\left| \beta_k^\nu e^{-\beta_k}\alpha_l^\nu e^{-\alpha_l}\sum_{j=0}^\infty \frac{j!|\rho|^{2j}L_j^\nu(\beta_k)L_j^\nu(\alpha_l)}{(j+\nu)!}\right|} {m!m!\prod_{j=0}^{m-1}j!(j+\nu)!|\rho|^{m(m-1)}}.$$ We set the weight function, $w(\beta, \alpha)$, as $$\label{eq:weight_function_w_beta_alpha} \begin{split} w(\beta, \alpha)&=\beta^\nu\alpha^\nu e^{-(\beta+\alpha)}\sum_{j=0}^\infty \frac{j!|\rho|^{2j}L_j^\nu(\beta)L_j^\nu(\alpha)}{(j+\nu)!},\\ &=\frac{(\beta\alpha)^\frac{\nu}{2} e^{-\frac{\beta+\alpha}{1-|\rho|^2}}
I_\nu\!\!\left(\frac{2|\rho|\sqrt{\beta\alpha}} {1-|\rho|^2}\right)}{(1-|\rho|^2)|\rho|^\nu}. \end{split}$$ It is easy to check that the corresponding bi-orthogonal polynomials are given by $$\begin{aligned} P_k(\beta) &=\sqrt{\frac{k!}{(k+\nu)!}}|\rho|^{-k}L_k^\nu(\beta), \label{eq:Bi-orthogonal_Poly_P}\\ Q_l(\alpha) &=\sqrt{\frac{l!}{(l+\nu)!}}|\rho|^{-l}L_l^\nu(\alpha), \label{eq:Bi-orthogonal_Poly_Q}\end{aligned}$$ using the following integral equality[@IEEE_sw27:BeckmannBook73 pp. 267, 7.414.3] $$\label{eq:Orthorgonality_Laguerre_Poly} \int_0^\infty e^{-x}x^\nu L_k^\nu(x)L_l^\nu(x)\,dx =\frac{(k+\nu)!}{k!}\delta_{kl}.$$ Moreover, adding multiples of lower-order rows does not change the determinant of the Vandermonde matrix, so each of the rows can be expressed in terms of the orthogonal polynomials associated with the weight function $w(\beta, \alpha)$. Therefore the two $m$-dimensional Vandermonde determinants, $\triangle(\mbox{\boldmath{$\beta$}})$ and $\triangle(\mbox{\boldmath{$\alpha$}})$, can be represented as $$\begin{aligned} \triangle(\mbox{\boldmath{$\beta$}}) &=\det\left|\beta_k^{l-1}\right| =C_2\det\left|P_{k-1}(\beta_l)\right|,\label{eq:Vandermode_det_P}\\ \triangle(\mbox{\boldmath{$\alpha$}}) &=\det\left|\alpha_k^{l-1}\right| =C_3\det\left|Q_{k-1}(\alpha_l)\right|,\label{eq:Vandermode_det_Q}\end{aligned}$$ where we use the fact that the matrix transpose does not change the determinant, i.e., $\det\left|P_{l-1}(\beta_k)\right| =\det\left|P_{k-1}(\beta_l)\right|$ and $\det\left|Q_{l-1}(\alpha_k)\right| =\det\left|Q_{k-1}(\alpha_l)\right|$. The coefficient of $x^k$ in $L_k^\nu(x)$ is $\frac{(-1)^k}{k!}$; hence the coefficient of $x^k$ in $P_k(x)$ is $(-1)^k|\rho|^{-k}\frac{1}{\sqrt{k!(k+\nu)!}}$, and therefore we have $C_2=\prod_{j=0}^{m-1}\!(-1)^j|\rho|^j\sqrt{j!(j\!+\!\nu)!}= (-1)^{\frac{m(m-1)}{2}}\\\times\sqrt{|\rho|^{m(m-1)}\!
\prod_{j=0}^{m-1}\!j!(j\!+\!\nu)!}$, obtained by plugging (\[eq:Bi-orthogonal\_Poly\_P\]) into (\[eq:Vandermode\_det\_P\]). Similarly, substitution of (\[eq:Bi-orthogonal\_Poly\_Q\]) into (\[eq:Vandermode\_det\_Q\]) gives $C_3=C_2$. Now the product of (\[eq:Vandermode\_det\_P\]) and (\[eq:Vandermode\_det\_Q\]) results in $$\label{eq:Vandermode_det_relationship} \triangle(\mbox{\boldmath{$\beta$}})\triangle(\mbox{\boldmath{$\alpha$}}) =|\rho|^{m(m-1)}\prod_{j=0}^{m-1}j!(j+\nu)!\det|P_{k-1}(\beta_l)| \det|Q_{k-1}(\alpha_l)|.$$ Based on (\[eq:weight\_function\_w\_beta\_alpha\]) and (\[eq:Vandermode\_det\_relationship\]), one can see that (\[eq:jpdf\_eig\_values\_Laguerre\]) is equal to (\[eq:jpdf\_by\_bi-orthonomal\_weight\_function\]) with $C_1=\frac{1}{m!m!}$. \[th:jpdf\_beta\_alpha\] The joint PDF of $\beta$ and $\alpha$ is given by $$\begin{gathered} \label{eq:jpdf_beta_alpha} p(\beta, \alpha)=\frac{(\beta\alpha)^\frac{\nu}{2} e^{-\frac{\beta+\alpha}{1-|\rho|^2}}I_\nu\!\!\left(\frac{2|\rho|\sqrt{\beta\alpha}} {1-|\rho|^2}\right)}{m^2(1-|\rho|^2)|\rho|^\nu} \sum_{k=0}^{m-1}\frac{k!}{(k+\nu)!}\frac{L_k^\nu(\beta)L_k^\nu(\alpha)} {|\rho|^{2k}}\\ +\frac{(\beta\alpha)^\nu e^{-(\beta+\alpha)}}{m^2}\sum_{0\leq k<l}^{m-1}\bigg\{\frac{k!l!}{(k+\nu)!(l+\nu)!} \Big\{\left[L_k^\nu(\beta)L_l^\nu(\alpha)\right]^2 +\left[L_l^\nu(\beta)L_k^\nu(\alpha)\right]^2\\ -\left[|\rho|^{2(l-k)}+|\rho|^{2(k-l)}\right] L_k^\nu(\beta)L_l^\nu(\beta)L_k^\nu(\alpha)L_l^\nu(\alpha)\Big\}\bigg\}.\end{gathered}$$ Based on Lemma \[lem:jpdf\_by\_bi-orthonomal\_weight\_function\], and the results presented in [@IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices (3.7)] [@IEEE_sw27:Mehta98_Coupled_Hermitian_Matrix_Chain], $p(\beta, \alpha)$ can be expressed as $$\label{eq:jpdf_beta_alpha_general} m^2\,p(\beta, \alpha)=w(\beta, \alpha)\sum_{k=0}^{m-1}P_k(\beta)Q_k(\alpha)+\sum_{0\leq k<l}^{m-1}\det\!\left|\!\!\! 
\begin{array}{cc} P_k(\beta) & \!\!\overline{P}_k(\alpha) \\ P_l(\beta) & \!\!\overline{P}_l(\alpha) \\ \end{array} \!\!\!\right| \!\det\!\left|\!\!\! \begin{array}{cc} \overline{Q}_k(\beta) & \!\!Q_k(\alpha) \\ \overline{Q}_l(\beta) & \!\!Q_l(\alpha) \\ \end{array} \!\!\!\right|,$$ where $P_k(x)$ and $Q_k(x)$ are defined in (\[eq:Bi-orthogonal\_Poly\_P\]) and (\[eq:Bi-orthogonal\_Poly\_Q\]), respectively, the weight function is presented in (\[eq:weight\_function\_w\_beta\_alpha\]), and $\overline{P}_k(\alpha)$ and $\overline{Q}_l(\beta)$ are similarly defined as [@IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices] $$\begin{aligned} \overline{P}_k(\alpha) &=\int P_k(\beta)w(\beta, \alpha)d\beta =\sqrt{\frac{k!}{(k+\nu)!}}\alpha^\nu e^{-\alpha}|\rho|^kL_k^\nu(\alpha), \label{eq:Weighted_Bi-orthogonal_Poly_P}\\ \overline{Q}_l(\beta) &=\int Q_l(\alpha)w(\beta, \alpha)d\alpha =\sqrt{\frac{l!}{(l+\nu)!}}\beta^\nu e^{-\beta}|\rho|^lL_l^\nu(\beta).\label{eq:Weighted_Bi-orthogonal_Poly_Q}\end{aligned}$$ Plugging (\[eq:weight\_function\_w\_beta\_alpha\]), (\[eq:Bi-orthogonal\_Poly\_P\]), (\[eq:Bi-orthogonal\_Poly\_Q\]), (\[eq:Weighted\_Bi-orthogonal\_Poly\_P\]) and (\[eq:Weighted\_Bi-orthogonal\_Poly\_Q\]) into (\[eq:jpdf\_beta\_alpha\_general\]), we arrive at (\[eq:jpdf\_beta\_alpha\]). It is straightforward to obtain the joint PDF of $s$ and $r$ from (\[eq:jpdf\_beta\_alpha\]), according to these one-to-one mappings $s=\sqrt{\beta}$ and $r=\sqrt{\alpha}$. The joint PDF in (\[eq:jpdf\_beta\_alpha\]) includes many existing PDF’s as special cases. - By integration over $\beta$, (\[eq:jpdf\_beta\_alpha\]) reduces to the marginal PDF $$\label{eq:PDF_alpha} p(\alpha)=\frac{1}{m}\sum_{k=0}^{m-1}\frac{k!}{(k+\nu)!} \left[L_k^\nu(\alpha)\right]^2\alpha^\nu e^{-\alpha},$$ which is the same as the PDF presented in [@IEEE_sw27:Telatar99_MIMO_Cap]. 
When $m=1$, (\[eq:PDF\_alpha\]) further reduces to $$\label{eq:PDF_alpha_m=1} p(\alpha)=\frac{1}{(n-1)!}\alpha^{n-1} e^{-\alpha},$$ which is the $\chi^2$ distribution with $2n$ degrees of freedom[@IEEE_sw27:SimonBook02 (2.32)]. - With $m=1$, (\[eq:jpdf\_beta\_alpha\]) reduces to[@IEEE_sw27:Wang06_Corr_Coeff_MRC], $$\label{eq:jpdf_alpha_beta_MRC} p(\alpha,\beta)=\frac{(\alpha \beta)^\frac{n-1}{2}\exp\left(\!-\frac{\alpha+\beta}{1-|\rho|^2}\!\right) I_{n-1}\!\left(\frac{2|\rho|\sqrt{\alpha\beta}}{1-|\rho|^2}\right)} {(n-1)!\left(1-|\rho|^2\right)|\rho|^{n-1}}.$$ Furthermore, when $n=1$, (\[eq:jpdf\_alpha\_beta\_MRC\]) simplifies to $$\label{eq:jpdf_alpha_beta_SISO} p(\alpha, \beta)=\frac{1}{1-|\rho|^2}\exp\left(\!-\frac{\alpha+\beta} {1-|\rho|^2}\!\right) I_0\!\left(\frac{2|\rho|\sqrt{\alpha\beta}}{1-|\rho|^2}\right),$$ which is identical to (8-103)[@IEEE_sw27:DavenportBook87 pp. 163], after two one-to-one nonlinear transformations. For the application discussed in section \[sec:Applications\], we need the joint marginal PDF of $\phi$ and $\varphi$, $p(\phi,\varphi)$, where $\phi$ and $\varphi$ are randomly selected from $\{\alpha_k\}_{k=1}^m$, $m\geq2$. Using the technique in [@IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices; @IEEE_sw27:Mehta98_Coupled_Hermitian_Matrix_Chain], we have the following theorem. \[th:jpdf\_phi\_varphi\] If $\phi$ and $\varphi$ are randomly selected from $\{\alpha_k\}_{k=1}^m$, their joint PDF is given by $$\label{eq:jpdf_phi_varphi} p(\phi,\varphi)\!=\!\frac{(\phi\varphi)^\nu e^{-(\phi+\varphi)}}{m(m-1)}\!\sum_{\begin{subarray}{c}k,l=0\\k\neq l\end{subarray}}^{m-1}\!\frac{k!l!}{(k+\nu)!(l+\nu)!} \!\left\{\!\left[L_k^\nu(\phi)L_l^\nu(\varphi)\right]^2 \!-\!L_k^\nu(\phi)L_l^\nu(\phi)L_k^\nu(\varphi)L_l^\nu(\varphi)\!\right\}\!.$$ According to (1.6) and (2.14) in [@IEEE_sw27:Mehta94_Two_Coupled_Hermitian_Matrices] we have $$p(\phi,\varphi)=\frac{1}{m(m-1)}\det\!\left|\!\!\! 
\begin{array}{cc} K(\phi, \phi) & K(\phi, \varphi) \\ K(\varphi, \phi) & K(\varphi, \varphi) \\ \end{array} \!\!\!\right|,$$ where $K(x_1,x_2)=\sum_{k=0}^{m-1}P_k(x_1)\overline{Q}_k(x_2)$. With $P_k(x_1)$ in (\[eq:Bi-orthogonal\_Poly\_P\]) and $\overline{Q}_k(x_2)$ in (\[eq:Weighted\_Bi-orthogonal\_Poly\_Q\]), we obtain (\[eq:jpdf\_phi\_varphi\]) after some simple algebraic manipulations. Application to Wireless MIMO Communication Systems {#sec:Applications} ================================================== For an $N_{\!R}\times N_{\!T}$ MIMO time-varying Rayleigh flat fading channel[@IEEE_sw27:TseBook] with $N_{\!T}$ transmitters and $N_{\!R}$ receivers, the channel impulse response at time instant $t$ is given by $$\label{eq:H(t)} \mathbf{H}(t)=\begin{bmatrix} h_{1,1}(t) &\cdots & h_{1,N_{\!T}}(t) \\ \vdots & \ddots & \vdots \\ h_{N_{\!R},1}(t) & \cdots & h_{N_{\!R},N_{\!T}}(t) \\ \end{bmatrix}\!.$$ We assume all the $N_{\!R}N_{\!T}$ sub-channels in the MIMO system, $\left\{h_{i,j}(t)\right\}_{(i=1,j=1)}^{(N_{\!R},N_{\!T})}$ are i.i.d., with the same temporal correlation coefficient, i.e., $$\label{eq:Channel_Corr_Assumption} \mathbb{E}[h_{ij}(t)h_{pq}^\star(t-\tau)]=\delta_{ip}\delta_{jq} \rho_h(\tau),$$ where $\rho_h(\tau)=J_0(2\pi f_{\!D}\tau)$[@IEEE_sw27:JakesBook94] in isotropic scattering environments[^2], with $J_0(x)=I_0(-\jmath x)$[@IEEE_sw27:RyzhikBook_5th pp. 961, 8.406.3] and $f_{\!D}$ is the maximum Doppler frequency shift. We set $n=\max(N_{\!R},N_{\!T})$ and $m=\min(N_{\!R},N_{\!T})$. According to (\[eq:diagonalization\_H\]), $\mathbf{H}(t)$ can be diagonalized as $$\label{eq:H(t)_SVD} \mathbf{H}(t)=\mathbf{U}(t)\mathbf{S}(t)\mathbf{V}^\dag(t),$$ where $\mathbf{S}(t)=\begin{bmatrix}\diag(\mathbf{s}(t))\ \mathbf{0}\end{bmatrix}$ with $\mathbf{s}(t)=[s_1(t), s_2(t), \cdots, s_m(t)]$ for $N_{\!R}\leq N_{\!T}$, and $\mathbf{S}(t)=\begin{bmatrix}\diag(\mathbf{s}(t))\\ \mathbf{0}\end{bmatrix}$ for $N_{\!R}>N_{\!T}$. 
Therefore the MIMO channel, $\mathbf{H}(t)$, is decomposed into $m$ identically distributed *eigen*-channels $\lambda_k(t)=s_k^2(t)$, $k\in[1,m]$, by SVD. In wireless MIMO communication systems, we are interested in the correlation coefficient between any two *eigen*-channels, which is defined by $$\rho_{k,l}(\tau)=\frac{\mathbb{E}\left[\lambda_k(t) \lambda_l(t-\tau)\right]-\mathbb{E}\left[\lambda_k(t)\right] \mathbb{E}\left[\lambda_l(t)\right]} {\sqrt{\mathbb{E}\left[\lambda_k^2(t)\right] -\left\{\mathbb{E}\left[\lambda_k(t)\right]\right\}^2} \sqrt{\mathbb{E}\left[\lambda_l^2(t)\right] -\left\{\mathbb{E}\left[\lambda_l(t)\right]\right\}^2}}.$$ For simplicity, in this paper we only consider a $2\times2$ MIMO system, $N_{\!R}=N_{\!T}=2$, where the correlation coefficient, $\rho_{k,l}(\tau)$, can be shown to be $$\label{eq:Coeff_Eigen-Channel} \rho_{k,l}(\tau)=\begin{cases} 1-\frac{3}{2}\left(1-\delta_{kl}\right),&\tau=0,\\ \frac{|\rho_h(\tau)|^2}{4}=\frac{J_0^2\left(2\pi f_{\!D}\tau\right)}{4}, &\tau\neq0, \end{cases}\quad k,l=1,2,$$ with $J^2_0(\cdot)=\left[J_0(\cdot)\right]^2$. To derive (\[eq:Coeff\_Eigen-Channel\]), we note that for $\tau=0$ and $k=l$, $\rho_{k,l}(0)=1$ by the definition of the correlation coefficient. Since $m=2$, for any *eigen*-channel at the time instant $t$, it is easy to show that the mean value of $\lambda_k(t)$ is $\mathbb{E}\left[\lambda_k(t)\right]=2$, $\forall k$, and the second moment of $\lambda_k(t)$ is $\mathbb{E}\left[\lambda^2_k(t)\right]=8$, $\forall k$, using the PDF in (\[eq:PDF\_alpha\]). For $\tau=0$ and $k\neq l$, we obtain $\mathbb{E}\left[\lambda_k(t) \lambda_l(t)\right]=2$ by (\[eq:jpdf\_phi\_varphi\]), hence $\rho_{k,l}(0)=-\frac{1}{2}$, $\forall k\neq l$. For $\tau\neq0$ and $\forall k,l$, it is not difficult to show that $\mathbb{E}\left[\lambda_k(t) \lambda_l(t-\tau)\right]=4+|\rho_h(\tau)|^2$ using (\[eq:jpdf\_beta\_alpha\]), which gives the second line in (\[eq:Coeff\_Eigen-Channel\]).
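The moments quoted above ($\mathbb{E}[\lambda_k]=2$, $\mathbb{E}[\lambda_k^2]=8$, and $\mathbb{E}[\lambda_k\lambda_l]=2$ for $k\neq l$, hence $\rho_{k,l}(0)=-1/2$) can be verified by Monte Carlo without ever ordering the eigenvalues, using the permutation-invariant statistics ${\operatorname{tr}}(\mathbf{H}\mathbf{H}^\dag)=\lambda_1+\lambda_2$ and $\det(\mathbf{H}\mathbf{H}^\dag)=\lambda_1\lambda_2$ (a sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200000
H = (rng.standard_normal((N, 2, 2)) + 1j * rng.standard_normal((N, 2, 2))) / np.sqrt(2)

lam_sum = np.einsum('nij,nij->n', H, H.conj()).real    # tr(H H^dag) = lam1 + lam2
lam_prod = np.abs(np.linalg.det(H)) ** 2               # det(H H^dag) = lam1 * lam2
lam_sq_sum = lam_sum ** 2 - 2 * lam_prod               # lam1^2 + lam2^2

E_lam = lam_sum.mean() / 2        # should approach E[lam_k]       = 2
E_lam2 = lam_sq_sum.mean() / 2    # should approach E[lam_k^2]     = 8
E_cross = lam_prod.mean()         # should approach E[lam_k lam_l] = 2 (k != l)
rho_00 = (E_cross - E_lam ** 2) / (E_lam2 - E_lam ** 2)   # should approach -1/2
```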
![The channel correlation coefficient, $\rho_h(\tau)$, and correlation coefficient of any two eigen-channels, $\rho_{k,l}(\tau)$, in a $2\times2$ MIMO system with Clarke’s correlation model. Note that the sampling period, $T_{\!s}$, is $\frac{1}{1000f_{\!D}}$ in the Monte Carlo simulations, so the first non-zero $\tau$ is $T_{\!s}$, i.e., $\frac{1}{1000f_{\!D}}$, which corresponds to $f_{\!D}\tau=\frac{1}{1000}$ on the horizontal axis.[]{data-label="fig:CrossCoeff_Iso"}](2X2_CrossCoeff_Iso_Near0.eps){width=".9\linewidth"} Monte Carlo simulations are performed to verify the result in (\[eq:Coeff\_Eigen-Channel\]). In all simulations[^3], the maximum Doppler frequency $f_{\!D}$ is set to $1$ Hz, and the sampling period, $T_{\!s}$, is equal to $\frac{1}{1000f_{\!D}}$. The simulation results are shown in [Fig]{}. \[fig:CrossCoeff\_Iso\], where the upper figure shows the channel correlation coefficient $\rho_h(\tau)=J_0\left(2\pi f_{\!D}\tau\right)$, Clarke’s correlation model, whereas the lower figure presents the correlation coefficient between any two *eigen*-channels or for any individual eigen-channel, Eq. (\[eq:Coeff\_Eigen-Channel\]). Since $J_0(2\pi f_{\!D}\tau)$ is an even function of $\tau$, the correlation coefficients are plotted for $\tau\geq0$. In all figures, “Simu.” indicates that a curve is obtained by Monte Carlo simulations, whereas “Theo.” denotes the theoretical result. From [Fig]{}. \[fig:CrossCoeff\_Iso\] we can conclude that the new theoretical result in (\[eq:Coeff\_Eigen-Channel\]) agrees very well with the simulations. Conclusion {#sec:Conclusion} ========== In this paper, the joint distribution of the singular values of two correlated rectangular complex Gaussian random matrices is derived, as well as the joint marginal distribution. The derived distributions play an important role in the analysis and design of wireless MIMO communication systems.
As an example, the correlation coefficient of any two *eigen*-channels of a $2\times2$ MIMO system is obtained and verified by the Monte Carlo simulations in this paper. [^1]: Center for Wireless Communications and Signal Processing Research (CWCSPR), Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102 ([sw27@njit.edu]{}, [ali.abdi@njit.edu]{}). [^2]: In the non-isotropic scattering environment, $\rho_h(\tau)$, in general, is a complex-value function[@IEEE_sw27:Wang05_LCR_AFD_SISO_Capacity; @IEEE_sw27:Wang05_IT], and $|\rho_h(\tau)|$ indicates its amplitude at the time delay $\tau$. [^3]: The spectral method[@IEEE_sw27:Acolatse03] is used to generate the MIMO channels.
--- author: - Fabien Nugier bibliography: - 'bibliography.bib' title: 'From GLC to double-null coordinates and illustration with static black holes' ---   Introduction {#SecIntro} ============ Physical coordinates have a long history in cosmology. From Temple’s “optical co-ordinates" derived in 1938 [@1938RSPSA.168..122T], whose initial motivation was to introduce “some new systems of normal co-ordinates which are especially adapted to the discussion of problems of astronomical optics", to Saunders’ “observational coordinates" in 1968 [@saunders_observations_1968; @saunders_observations_1969] and their revival with Maartens’ work in 1980 [@Maartens1; @Maartens2] (which led to applications in cosmography [@1985PhR...124..315E]), the idea of using physical coordinates directly related to observable quantities has concerned the scientific community for quite some time. Astrophysics and cosmology are indeed two fields in which our local observer position complicates our understanding of the physics. On the other hand, if one wants to address questions without relying on strong philosophical assumptions (such as the cosmological principle), the use of observation-adapted systems of coordinates can be a very good alternative. The recent years have not been without efforts toward the goal of using coordinates directly adapted to what we measure. Observation-adapted schemes have been employed in simulations [@Bester:2013fya; @Bester:2015gla] in order to apply the “observational cosmology programme” [@1985PhR...124..315E] in the restricted case of a spherically symmetric dust universe. Independently of observational motivations, the so-called geodesic light-cone (GLC) coordinates [@P1] were first introduced in the context of the averaging problem in cosmology [@Li:2008yj; @Rasanen:2008be; @Kolb:2009hn; @Buchert:2011yu; @Clarkson:2011zq; @Buchert:2011sx].
They were nevertheless later employed to address tangible issues in cosmology, such as computing the effect of the large scale structure on the luminosity distance-redshift relation [@P2; @P3; @P4; @P5; @Marozzi:2014kua; @Fanizza:2015swa], number counts of galaxies [@DiDio:2014lka], lensing [@Fanizza:2013doa; @P6], and the propagation of ultra-relativistic particles [@Fanizza:2015gdn]. They were also tested on toy models such as the Lemaître-Tolman-Bondi [@P6] and Bianchi I spacetimes [@P7]. We propose in the present paper another system of coordinates, close to the GLC coordinates but using two null-like coordinates instead of one null and one time-like coordinate. This system, *nicknamed* here “double light cone" (DLC) coordinates for convenience, shares the same nice properties as GLC. *We also show that these coordinates are equivalent to the “double-null” coordinates of Brady, Droz, Israel and Morsink (1995)* (*hence the nickname DLC, in reference to both double-null and GLC*) [@Brady:1995na]. As our system of coordinates also carries some residual gauge freedom, we explain how to fix it. This paper hence adds to the interest of GLC coordinates by showing their compatibility with double-null coordinates. We also propose an illustration of these coordinates in the spirit of Temple’s motivational sentence (i.e. for astrophysical objects), describing static black holes and trajectories around them, and comment on the propagation of ultra-relativistic particles. The structure of this paper is as follows. In Sec. \[SecGLC\] we recall facts about GLC coordinates and their most interesting properties. In Sec. \[SecDLC\] we introduce the “new" double light-cone coordinates (*again renamed for convenience*), and study their properties in comparison to the GLC ones. Sec. \[SecDN\] is devoted to showing that DLC coordinates are equivalent to the double-null coordinates and to studying their gauge fixing. In Sec.
\[SecDLCBH\] we illustrate these coordinates first by describing static black holes (Schwarzschild and Reissner-Nordström), and second by deriving particle and photon trajectories around them. We finally comment on the time-of-flight difference between two ultra-relativistic particles in Sec. \[SecCommentURP\], draw some conclusions in Sec. \[SecConclusion\], and address some technical points in Apps. \[AppSecondDLCDerivation\] to \[AppZforStaticBH\]. Recalling Geodesic Light-Cone (GLC) coordinates {#SecGLC} =============================================== We give a short introduction to GLC coordinates and present some of their basic properties, mainly for comparison with the double light-cone coordinates presented in Sec. \[SecDLC\]. General definitions {#SecGLC:GenDef} ------------------- The geodesic light-cone (GLC) coordinates $(\tau,w,{\underline{\theta}}^a)$ ($a=1,2$) [@P1] form a system of four coordinates centered on a fundamental (or “geodesic") observer worldline. In detail, $\tau$ is the proper time of this observer in geodetic motion and $w$ is a null coordinate setting the past light cones centered on this observer. Finally, the angles ${\underline{\theta}}^a = (\theta,\phi)$ parameterize a topological 2-sphere $\Sigma(\tau,w)$ embedded into the intersection of the $\tau = {\mathrm{cst}}$ and $w = {\mathrm{cst}}$ hypersurfaces. The line element in the GLC coordinates is given by [@P1; @P7]: \[GLCds2\] $$\mathrm{d}s_{{\mathrm{GLC}}}^2 = \Upsilon^2\, \mathrm{d}w^2 - 2\Upsilon\, \mathrm{d}w\, \mathrm{d}\tau + \gamma_{ab}\left(\mathrm{d}{\underline{\theta}}^a - U^a \mathrm{d}w\right)\left(\mathrm{d}{\underline{\theta}}^b - U^b \mathrm{d}w\right)\;,$$ involving 6 arbitrary functions ($\Upsilon$, $U^a$, and the symmetric $\gamma_{ab}$) of $\tau$, $w$, and ${\underline{\theta}}^a$. These coordinates are hence perfectly general but contain a residual gauge freedom that can be fixed by simple conditions [@Fanizza:2013doa; @P7] (see also Sec. \[subsecDN:gaugefixing\]).
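As a cross-check, one can read the metric components off the line element above and verify symbolically that the inverse metric takes the simple block form given in the following matrix expression. This is a sketch of ours with sympy, for two angular coordinates and generic metric functions:

```python
import sympy as sp

# All symbol names are ours: Upsilon, the shift U^a, and a generic
# symmetric angular metric gamma_ab (a, b = 1, 2).
Ups, U1, U2 = sp.symbols('Upsilon U1 U2', positive=True)
g11, g12, g22 = sp.symbols('gamma11 gamma12 gamma22')
gam = sp.Matrix([[g11, g12], [g12, g22]])
U = sp.Matrix([U1, U2])            # U^a (upper index)
Ul = gam * U                       # U_a = gamma_ab U^b (lower index)
U2sq = (U.T * gam * U)[0, 0]       # U^2 = gamma_ab U^a U^b

# g_{mu nu} in coordinates (tau, w, theta^1, theta^2), read off the line element:
g = sp.Matrix([
    [0,     -Ups,            0,       0],
    [-Ups,   Ups**2 + U2sq, -Ul[0],  -Ul[1]],
    [0,     -Ul[0],          g11,     g12],
    [0,     -Ul[1],          g12,     g22]])

# Claimed simple block form of the inverse metric:
gi = gam.inv()
claimed = sp.Matrix([
    [-1,       -1/Ups,  -U1/Ups,   -U2/Ups],
    [-1/Ups,    0,       0,         0],
    [-U1/Ups,   0,       gi[0, 0],  gi[0, 1]],
    [-U2/Ups,   0,       gi[1, 0],  gi[1, 1]]])

diff = sp.simplify(g.inv() - claimed)
assert all(sp.simplify(e) == 0 for e in diff)
```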
The metric and its inverse, in GLC coordinates $(\tau,w,{\underline{\theta}}^a)$, can thus be written as: \[GLCmetric\] $$g^{{\rm GLC}}_{\mu\nu} = \begin{pmatrix} 0 & -{\Upsilon}& 0 \\ -{\Upsilon}& {\Upsilon}^2 + U^2 & -U_b \\ 0^{\rm T} & -U_a^{\rm T} & \gamma_{ab} \end{pmatrix}\,, \qquad g_{{\rm GLC}}^{\mu\nu} = \begin{pmatrix} -1 & -{\Upsilon}^{-1} & -U^b/{\Upsilon}\\ -{\Upsilon}^{-1} & 0 & 0 \\ -(U^a)^{\rm T}/{\Upsilon}& 0^{\rm T} & \gamma^{ab} \end{pmatrix}\,,$$ where we dropped the tildes on top of angles (as in Ref. [@P7]) and underlined them, differently from the notation usually employed in the “GLC literature" (Refs. [@P1; @P2; @P3; @P4; @P5; @P6; @Marozzi:2014kua; @Fanizza:2015swa; @Fanizza:2013doa; @DiDio:2014lka]). When needed, we will denote by $\bar\theta^a$ the homogeneous angles, i.e. the angles in a homogeneous spacetime. Interesting properties {#SecGLC:IntProp} ---------------------- There are several advantages of using the GLC coordinates. First, they make light propagation very simple. Indeed, photons propagate with $(w,{\underline{\theta}}^a) = \vec{{\mathrm{cst}}}$ and we can define their covariant 4-momentum as $k_\mu = \pa_\mu w$, giving the contravariant $k^\mu = g_{{\rm GLC}}^{\mu\nu} k_\nu = g_{{\rm GLC}}^{\mu w} = -\delta^\mu_{\tau} / {\Upsilon}$. A direct consequence is that the geodesic equation becomes trivial in these coordinates: k\^\_ k\^= 0 \^\_ = \^\_ , which is confirmed from a direct calculation of the Christoffel symbols [@Fanizza:2013doa]. This simplicity translates into other quantities. The redshift of a source is for example given in terms of the metric function ${\Upsilon}$: \[redshift\] $$1+z_s = \frac{{\Upsilon}_{\text{o}}}{{\Upsilon}_s}\,,$$ extending the homogeneous relation $1+z_s = a_{\text{o}}/ a_s$ ($a$ the scale factor) to the inhomogeneous regime. Similarly, the angular distance to a source located on the observer’s past light cone is: \[AngDist\] d\_A = (\_[ab]{}) =   , depending solely on the (source-located) $\gamma_{ab}$ part of the metric describing the geometry in $\Sigma(\tau,w)$. It assumes a homogeneous neighborhood for the observer (see Eq. 
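The matrix form of $g^{{\rm GLC}}_{\mu\nu}$ and its inverse can be checked with a computer algebra system. The following sketch (ours, not part of the original derivation) verifies the inversion in SymPy for an illustrative diagonal $\gamma_{ab}$:

```python
import sympy as sp

# Illustrative check of the GLC metric inverse, assuming a diagonal gamma_ab.
Ups, U1, U2, g1, g2 = sp.symbols('Upsilon U1 U2 gamma1 gamma2', positive=True)

gamma = sp.diag(g1, g2)            # gamma_ab
U_up = sp.Matrix([U1, U2])         # U^a
U_dn = gamma * U_up                # U_a = gamma_ab U^b
U_sq = (U_up.T * U_dn)[0]          # U^2 = gamma_ab U^a U^b

# GLC metric in coordinates (tau, w, theta^1, theta^2)
g = sp.zeros(4, 4)
g[0, 1] = g[1, 0] = -Ups
g[1, 1] = Ups**2 + U_sq
g[1, 2] = g[2, 1] = -U_dn[0]
g[1, 3] = g[3, 1] = -U_dn[1]
g[2, 2], g[3, 3] = g1, g2

# Inverse metric as claimed in the text
ginv = sp.zeros(4, 4)
ginv[0, 0] = -1
ginv[0, 1] = ginv[1, 0] = -1/Ups
ginv[0, 2] = ginv[2, 0] = -U1/Ups
ginv[0, 3] = ginv[3, 0] = -U2/Ups
ginv[2, 2], ginv[3, 3] = 1/g1, 1/g2

assert sp.simplify(g * ginv - sp.eye(4)) == sp.zeros(4, 4)
```

The same script can be rerun with a non-diagonal $\gamma_{ab}$ at the price of slower simplifications.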
[(\[dAandMu\])]{} otherwise). Other advantages of GLC can be found by studying lensing from the viewpoint of the Jacobi formalism. In that case one starts with the geodesic deviation eq. (GDE): \_\^2 \^= R\^\_[ ]{} k\^k\^\^ \_/k\^\_ , $\lambda$ an affine parameter along the photon path starting at a source $S$ and ending at an observer $O$, and $\xi^\mu$ an orthogonal displacement with respect to the rays led by $k^\mu$. We project the GDE on the Sachs basis $\{ {\hat{s}}^\mu_A \}_{A=1,2}$ (two zweibeins with flat index $A=1,2$) satisfying: \[eq:Sachs\] & g\_ \^\_A \^\_B = \_[AB]{}   ,   \^\_A u\_= 0   ,   \^\_A k\_= 0   ,\ & \^\_\_\^\_A = 0         \^\_=\^\_--   . with $u_\mu \equiv \partial_\mu \tau$ the peculiar velocity of the comoving fluid ($S$ and $O$ comoving too), $\Pi^\mu_\nu$ a “screen" projector orthogonal to two 4-vectors: \^\_ u\_= 0 , \^\_ n\_= 0 n\_u\_+( u\^k\_)\^[-1]{} k\_ . ![Illustration of the Jacobi map formalism in the presence of a lens. $(0,0)$ denotes the origin of angles in the sky. We present two neighbor light rays (red and black) going from $S$ to $O$. The lens is in orange.[]{data-label="fig:JacobiScheme"}](Jacobi-Illustration.pdf){width="9cm"} We define the Jacobi map $J^A_B$ from the relation between the observed sky angle ${\bar{\theta}}_{\text{o}}^A$ and the screen displacement $\xi^A \equiv \xi^\mu \, {\hat{s}}^A_\mu$ (see Fig. \[fig:JacobiScheme\]): \^A() = J\^A\_B(,\_) [|]{}\_\^A . Projected quantities $\xi^A$ and $R^A_B \equiv R_{\alpha\beta\nu\mu}k^\alpha k^\nu {\hat{s}}^\beta_B {\hat{s}}^\mu_A$ (optical tidal matrix) bring us the Jacobi equation and its two initial conditions (see e.g. Refs. [@Fleury:2013sna; @Fanizza:2013doa]): \[EqEvolJAB\] & J\^A\_B (,\_) = R\^A\_C () J\^C\_B (,\_)   ,\ \[eq:initialConditions\] & J\^A\_B(\_,\_) = 0         J\^A\_B (\_,\_) = (k\^u\_)\_ \^A\_B   . 
The (unlensed or “real”) angular position of the source $\bar\theta^A_s$ and the observed lensed position $\bar\theta^A_{\text{o}}$ (of the image) are given by: |\^A\_s=(\^A / |d\_A)\_s , |\^A\_=( k\^\_\^A / k\^u\_)\_, where $\bar{d}_{A}$ is the angular distance in the homogeneous and isotropic background our model refers to. This allows us to define the so-called amplification matrix as: \[DefinitionA\] [A]{}\^A\_B = = ( [cc]{} 1 - - \_1 & - \_2 +\ - \_2 - & 1 - + \_1 )\ defining the lensing quantities $\kappa$ (convergence), $\hat\omega$ (vorticity) and $|\hat\gamma| \equiv \sqrt{(\hat\gamma_1)^2 + (\hat\gamma_2)^2}$ (shear). In GLC coordinates, the zweibeins are written as $\hat{s}^\mu_A=(\hat{s}^\tau_A,0,\hat{s}^a_A)$ and we have $u_\mu \propto \partial_\mu \tau = \delta_{\mu}^{\tau}$ leading to $u^\mu = - \delta^\mu_\tau - \frac{1}{{\Upsilon}} \delta^\mu_w -\frac{U^a}{{\Upsilon}} \delta^\mu_a$ (for equality) and $k_\mu = \partial_\mu w$ leading to $k^\mu \equiv - {\Upsilon}^{-1}\delta^\mu_\tau$. The screen projector can thus be written as: \^\_= \^\_- \^\_\_\^- \^\_w \_\^w - U\^a \^\_a \_\^w and we notice that the screen projector has no dependence from ${\Upsilon}$ or $\gamma_{ab}$. Second, the solution to Eqs. [(\[EqEvolJAB\])]{} and [(\[eq:initialConditions\])]{} is: \[JABandCaB\] J\^A\_B(,\_) = \_a\^A()\_\^B\_b(\_) where $(\ldots)^{{\mbox{\Large $\cdot$}}} \equiv \partial_\tau (\ldots)$. The angular distance, given by d\_A(\_s) = , and the magnification $\mu \equiv 1/(\det{\mathcal A})$, become: \[dAandMu\] d\_A = ,= ()\^2 =   , involving $\bar{d}_A$ and $\Phi$ ($\bar{\Phi}$) the flux in the in(homogeneous) geometry. The homogeneous distance can be chosen as $\bar{d}_A = a(\tau) r$, with $r \equiv w - \int {\mathrm{d}}\tau / a(\tau)$ measured from the observer (as in Refs. [@P1; @P2; @P3; @P4; @P5; @P6; @Marozzi:2014kua; @Fanizza:2015swa; @Fanizza:2013doa; @DiDio:2014lka]), but that is not the only possible choice (see Ref. [@P7]). Eq. 
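The homogeneous choice $\bar{d}_A = a(\tau)\, r$ with $r \equiv w - \int {\mathrm{d}}\tau / a$ can be made explicit with a short symbolic check (ours): in the FLRW limit ${\Upsilon}= a$, $U^a = 0$, and $w = \eta + r$ ($\eta$ the conformal time, ${\mathrm{d}}\tau = a\, {\mathrm{d}}\eta$), the $(\tau,w)$ sector of the GLC line element reduces to the conformal FLRW radial line element. Differentials are treated as formal symbols:

```python
import sympy as sp

# Homogeneous limit of GLC: Upsilon = a(tau), U^a = 0, w = eta + r
a, deta, dr = sp.symbols('a d_eta d_r')   # scale factor and formal differentials
dtau = a * deta                           # d tau = a d eta (proper vs conformal time)
dw = deta + dr                            # past light cone label: w = eta + r

glc_radial = a**2 * dw**2 - 2 * a * dw * dtau   # (tau, w) sector of the GLC line element
flrw_radial = a**2 * dr**2 - dtau**2            # conformal FLRW radial sector

assert sp.expand(glc_radial - flrw_radial) == 0
```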
[(\[dAandMu\])]{} simplifies to Eq. [(\[AngDist\])]{} when considering a homogeneous neighborhood for $O$. [^1] Expressions for the zweibeins can be obtained in the GLC coordinates [@P6], but it is more convenient to compute the squared lensing quantities, combined with ${\hat{s}}^A_a {\hat{s}}^A_b=\gamma_{ab}$ and $\epsilon_{AB}\, {\hat{s}}^A_a {\hat{s}}^B_b=\sqrt{\gamma}\,\epsilon_{ab}$ ($\epsilon$ the anti-symmetric symbol), to get: \[LensingCombinationsInGLC\] ( [c]{} ( 1-)\^2+\^2\ \_1\^2+\_2\^2 ) = ( )\^2 ( \_\^[ad]{} )   . Hence all lensing quantities are expressed with only three metric functions (of $\gamma_{ab}$), showing the great advantage of working in GLC coordinates. Introducing Double Light-Cone (DLC) coordinates {#SecDLC} =============================================== Let us consider an observer and his/her worldline ${\mathscr{L}}_{\text{o}}$ in a 4-dimensional Minkowski spacetime. At any given time, this observer can define a past light cone by the use of one null coordinate $w_v$ and a future light cone with another null coordinate $w_u$. There are several choices that the observer can make to define these values locally; a possible one is his/her proper time (e.g. $w_v = w_u = \tau_{\text{o}}$ [@P7]) or a function of it. If one considers two surfaces $w_v = {\mathrm{cst}}$ and $w_u = {\mathrm{cst}}$ such that $T_u$ (the tip of the $w_u = {\mathrm{cst}}$ cone) is in the past of $T_v$ (the tip of the $w_v = {\mathrm{cst}}$ cone) and along ${\mathscr{L}}_{\text{o}}$, we then have an intersection of the two cones that we can denote as $\Sigma(w_u,w_v)$, a topological sphere on which we can define two angular coordinates $\theta^a$ ($a=1,2$). This is true unless the null (past and future) cones have some caustics, which is not considered here. 
Metric form {#SecDLC:Metric} ----------- Let us temporarily call $x^\mu \equiv (\tau,w,{\underline{\theta}}^a)$ the GLC coordinates and call $y^\mu \equiv (w_u,w_v,\theta^a)$ the new system of coordinates that we wish to satisfy the above-mentioned properties. Hence we shall now refer to these coordinates as *double light-cone* (DLC) coordinates. The general transformation of coordinates between them is given by the relation: \[eq:TransfoCoordinates\] g\_\^[[DLC]{}]{}(y) = g\_\^[[GLC]{}]{}(x) . We choose to impose $w = w_v$ as we want the DLC past light cone to match with the GLC one[^2], so $\partial w / \partial w_v = 1$. We also want the new coordinate $w_u$ to be independent from $w_v$ and thus require that $\partial w / \partial w_u = 0$. Because in GLC we have ${\underline{\theta}}^a$ independent from $w$, we also impose $\partial {\underline{\theta}}^a / \partial w_v = 0$. This being said, one finds that the DLC metric has the following components: { [ccl]{} \[TransformationGLCtoDLC\] g\^[[DLC]{}]{}\_[w\_u w\_u]{} &=& \_[ab]{} ,\ g\^[[DLC]{}]{}\_[w\_v w\_v]{} &=& (\^2 + U\^2) - 2 ,\ g\^[[DLC]{}]{}\_[w\_u w\_v]{} &=& - - U\_a ,\ g\^[[DLC]{}]{}\_[w\_u a]{} &=& \_[bc]{} ,\ g\^[[DLC]{}]{}\_[w\_v a]{} &=& -( + ) + (\^2 + U\^2) - U\_b ,\ g\^[[DLC]{}]{}\_[ab]{} &=& -2 ( + ) + (\^2 + U\^2)\ & & - 2 U\_c ( + ) + \_[cd]{} . . We can further ask that light rays are independent from the future light-cone coordinate $w_u$. This translates into $\partial {\underline{\theta}}^a / \partial w_u = 0$ and thus gives: \[DLCmetricTMP\] g\^[[DLC]{}]{}\_ = ( [ccc]{} 0 & g\^[[DLC]{}]{}\_[w\_u w\_v]{} &\ g\^[[DLC]{}]{}\_[w\_u w\_v]{} & g\^[[DLC]{}]{}\_[w\_v w\_v]{} & g\^[[DLC]{}]{}\_[w\_v a]{}\ 0\^[T]{} & (g\^[[DLC]{}]{}\_[w\_v a]{})\^T & g\^[[DLC]{}]{}\_[ab]{}\ ) { [ccc]{} g\^[[DLC]{}]{}\_[w\_u w\_v]{} = -   ,\ g\^[[DLC]{}]{}\_[w\_v w\_v]{} = (\^2 + U\^2) - 2   , . and the angular components $g^{{\rm DLC}}_{w_v a}$ and $g^{{\rm DLC}}_{ab}$ are unchanged with respect to Eq. 
[(\[TransformationGLCtoDLC\])]{}. Imposing now that the angles in DLC are equal to the ones of GLC (as it is allowed by the residual gauge freedom on $\Sigma(w_u,w_v)$, see Sec. \[subsecDN:gaugefixing\]), we have $\partial {\underline{\theta}}^a / \partial \theta^b \equiv \delta^a_b$ and we further impose that $\partial w / \partial \theta^a = 0$ to get: g\^[[DLC]{}]{}\_[ab]{} = \_[ab]{} ,g\^[[DLC]{}]{}\_[w\_v a]{} = - U\_a - - \_a . Taking the inverse of $g^{{\rm DLC}}_{\mu\nu}$ we obtain: \[DLCinvmetricTMP\] g\_[[DLC]{}]{}\^ = ( [ccc]{} g\_[[DLC]{}]{}\^[w\_u w\_u]{} & -2/\^2 & -2\^b/\^2\ -2/\^2 & 0 &\ -2(\^a)\^T/\^2 & \^[ T]{} & \^[ab]{} )   , where we have introduced ${\widetilde{U}}^a$ and ${\widetilde{\Upsilon}}$ such that: \[eq:tUtUps\] \^a \^[ab]{} \_b = U\^a + \^[ab]{} , = . We can also verify that: \[eq:gUwuUwu\] g\_[[DLC]{}]{}\^[w\_u w\_u]{} = and we can see from Eq. [(\[DLCinvmetricTMP\])]{} that the only condition to make $w_u$ null is given by $g_{{\rm DLC}}^{w_u w_u} = 0$. This condition and the definition of ${\widetilde{\Upsilon}}$ are equivalent to the following conditions on $\tau$: \[AnsatzTau\] = , = - \^a + \^[ab]{} . As we can see these conditions are not trivial and they define ${\widetilde{\Upsilon}}$ and ${\widetilde{U}}^a$ in a particular way. Once they are satisfied, we get the inverse metric: \[DLCinvmetric\] g\_[[DLC]{}]{}\^ = ( [ccc]{} 0 & -2/\^2 & -2\^b/\^2\ -2/\^2 & 0 &\ -2(\^a)\^T/\^2 & \^[ T]{} & \^[ab]{} )   , and the direct metric is: \[DLCmetric\] g\^[[DLC]{}]{}\_ = ( [ccc]{} 0 & - \^2/2 &\ -\^2/2 & \^2 & -\_b\ 0\^[T]{} &-\_a\^T & \_[ab]{}\ )   , where we can appreciate the separation of ${\widetilde{\Upsilon}}^2$ and ${\widetilde{U}}^2$ in comparison with GLC. It is also important to notice that ${\Upsilon}$ and $U^a$ disappeared from the metric, being replaced only by $\tilde {\Upsilon}$ and ${\widetilde{U}}^a$. 
The line element in DLC coordinates $(w_u,w_v,\theta^a)$ has the following form: \[DLCds2\] $${\mathrm{d}}s_{{\rm DLC}}^2 = -{\widetilde{\Upsilon}}^2\, {\mathrm{d}}w_u\, {\mathrm{d}}w_v + \gamma_{ab}\, ({\mathrm{d}}\theta^a-{\widetilde{U}}^a {\mathrm{d}}w_v)({\mathrm{d}}\theta^b-{\widetilde{U}}^b {\mathrm{d}}w_v)\,,$$ where we can notice that ${\mathrm{d}}\theta^a$ and ${\widetilde{\Upsilon}}$ (as well as ${\Upsilon}$) are dimensionless quantities, ${\mathrm{d}}w_u$ and ${\mathrm{d}}w_v$ have the dimension of a distance (assuming the speed of light $c =1$), while $\gamma_{ab}$ has the dimension of a squared distance and $U^a$ that of an inverse distance. To summarize, we have computed here the DLC metric from the GLC one, introducing simplifying relations along the way until a double-null formulation was reached. A different derivation, based on the transformation of coordinates, is also possible. We show this derivation in App. \[AppSecondDLCDerivation1\] and find that the two approaches are equivalent. More importantly, we can show that $w_u$ has a well-defined expression in terms of GLC coordinates, and thus that GLC and DLC coordinates are perfectly consistent with each other. This derivation, made order by order in a perturbed FLRW geometry, is slightly technical and hence reported in App. \[AppSecondDLCDerivation2\]. We also sketch the perturbative transformation of coordinates in the Newtonian gauge in App. \[AppDLCNearFLRW\]. Overall, we found here that the DLC coordinates replace the geodesic-observer proper time $\tau$ of GLC coordinates (see Fig. \[fig:GLC\_coordinates\]) by a null coordinate $w_u$, with the consequence of redefining the functions ${\Upsilon}$ into ${\widetilde{\Upsilon}}$ and $U^a$ into ${\widetilde{U}}^a$. As for the other quantities – $\gamma_{ab}$, $w \equiv w_v$ and $\theta^a$ – they keep the exact same definitions between the two sets of coordinates. Finally, the $w_v = {\mathrm{cst}}$ and $w_u = {\mathrm{cst}}$ hypersurfaces respectively correspond to the past and future light cones intersecting on the 2-sphere $\Sigma(w_u,w_v)$, as illustrated in Fig. \[fig:DLC\_coordinates\]. 
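The DLC metric of Eq. [(\[DLCmetric\])]{} and the inverse of Eq. [(\[DLCinvmetric\])]{} can be verified in the same spirit as for GLC; the sketch below (ours, with an illustrative diagonal $\gamma_{ab}$) also checks that the photon momentum $k_\mu = \delta_\mu^{w_v}$ is null, with $k^\mu = -(2/{\widetilde{\Upsilon}}^2)\,\delta^\mu_{w_u}$:

```python
import sympy as sp

# Illustrative check of the DLC metric inverse (diagonal gamma_ab assumed)
tUps, tU1, tU2, g1, g2 = sp.symbols('tUpsilon tU1 tU2 gamma1 gamma2', positive=True)

gamma = sp.diag(g1, g2)             # gamma_ab
tU_up = sp.Matrix([tU1, tU2])       # tU^a
tU_dn = gamma * tU_up               # tU_a = gamma_ab tU^b
tU_sq = (tU_up.T * tU_dn)[0]        # tU^2 = gamma_ab tU^a tU^b

# DLC metric in coordinates (w_u, w_v, theta^1, theta^2)
g = sp.zeros(4, 4)
g[0, 1] = g[1, 0] = -tUps**2 / 2
g[1, 1] = tU_sq
g[1, 2] = g[2, 1] = -tU_dn[0]
g[1, 3] = g[3, 1] = -tU_dn[1]
g[2, 2], g[3, 3] = g1, g2

# Inverse metric as claimed in the text
ginv = sp.zeros(4, 4)
ginv[0, 1] = ginv[1, 0] = -2 / tUps**2
ginv[0, 2] = ginv[2, 0] = -2 * tU1 / tUps**2
ginv[0, 3] = ginv[3, 0] = -2 * tU2 / tUps**2
ginv[2, 2], ginv[3, 3] = 1/g1, 1/g2

assert sp.simplify(g * ginv - sp.eye(4)) == sp.zeros(4, 4)

# Photon momentum k_mu = delta_mu^{w_v}: raising the index and checking nullity
k_dn = sp.Matrix([0, 1, 0, 0])
k_up = ginv * k_dn
assert k_up == sp.Matrix([-2 / tUps**2, 0, 0, 0])
assert sp.simplify((k_dn.T * k_up)[0]) == 0
```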
[0.45]{} ![\[fig:GLCDLC\_coordinates\] Illustration of GLC (left) and DLC (right) coordinates.](GLC_coordinates.pdf "fig:"){width="7cm"} [0.45]{} ![\[fig:GLCDLC\_coordinates\] Illustration of GLC (left) and DLC (right) coordinates.](DLC_coordinates.pdf "fig:"){width="7cm"} \ Simple quantities {#SecDLC:SimpQuant} ----------------- We can now derive simple physical quantities directly from these new coordinates, in order to make use of them later. The photon momentum 4-vector, for example, is defined as: \[DLCphotonMomentum\] k\_\_w\_v = \_\^[w\_v]{} k\^= - \^\_[w\_u]{}   , while the observer velocity, defined as in GLC coordinates $u_\mu \equiv \partial_\mu \tau$ and using Eqs. [(\[eq:tUtUps\])]{} and [(\[AnsatzTau\])]{}, is found to be: \[eq:DLCumu1\] u\_ \_\^[w\_u]{} + ( ) \_\^[w\_v]{} + ( ) \^a\_. This implies that: \[eq:DLCumu2\] u\^= - \^\_[w\_u]{} - \^\_[w\_v]{}- \^\_a , where we can see that the components $u^{w_v}$ and $u^a$ are identical to GLC. It is interesting to notice that because the observer peculiar velocity is here defined from the GLC coordinates condition $u_\mu \equiv \partial_\mu \tau$, its explicit form in DLC coordinates depends on both GLC and DLC functions ${\Upsilon}$, ${\widetilde{\Upsilon}}$, $U^a$, and ${\widetilde{U}}^a$. Also, $\tau$ having the dimension of a distance, we see that $u_{w_u}$ and $u_{w_v}$ are dimensionless while $u_a$ has the dimension of a distance. The photon momentum and the observer 4-velocity lead to the product $k_\mu u^\mu = - 1 / {\Upsilon}$ and the redshift expression: 1+z\_s = where ${\text{o}}$ and $s$ denote an observer $O$ (i.e. not redshifted) and a source $S$ belonging to the same past null ray. One can notice that ${\widetilde{\Upsilon}}$ has disappeared from $k_\mu u^\mu$, hence the result, to give an expression identical to the one in GLC. 
Extra physical relations {#SecDLC:ExtraRel} ------------------------ From the last subsection we can see that the null geodesic equation is non-trivial only for $\mu = w_u$ and gives: \[eq:NGEinDLC\] $$k^\nu \nabla_\nu k^\mu = k^\nu \partial_\nu k^\mu + \Gamma^\mu_{\nu\lambda}\, k^\nu k^\lambda = 0 \quad \Rightarrow \quad \Gamma^\mu_{w_u w_u} = 2\, \partial_{w_u}\!\left( \ln {\widetilde{\Upsilon}}\right) \delta^\mu_{w_u}\,.$$ This is an interesting relation that we can check by a direct computation of the Christoffel symbols, as presented in App. \[AppChristoffel\]. On the other hand, in GLC we have $\tau$, which is the proper time of the observer defining a geodesic flow. We can preserve this property by imposing some conditions between the GLC and DLC metric functions. Indeed, the geodesic flow is defined by $g_{{\rm GLC}}^{\tau\tau} = -1$, thus: $$u^\nu \nabla_\nu u_\mu = 0\,,$$ and using Eqs. [(\[eq:DLCumu1\])]{} and [(\[eq:DLCumu2\])]{} we find the following evolution equations to be satisfied: = , \_a = U\_a , (\^2 - U\^2) = , with: ( )\^2 \_[w\_u]{} + \_[w\_v]{} + U\^a \_a . Finally, if we require the null energy condition to be satisfied through Einstein's equations [@Parikh:2015wae], we have: \[eq:nullenergycondition\] $$T_{\mu\nu}\, k^\mu k^\nu \geq 0 \quad \Leftrightarrow \quad R_{\mu\nu}\, k^\mu k^\nu \geq 0 \quad \Leftrightarrow \quad R_{w_u w_u} \geq 0\,.$$ The component $R_{w_u w_u}$, expressed in DLC coordinates, is shown in App. \[AppChristoffel\]. Sachs vectors {#SecDLC:SachsVec} ------------- With a view on lensing, one can introduce the Sachs basis defined in Eq. [(\[eq:Sachs\])]{} and show that the explicit expression of the screen projector $\Pi^\mu_\nu$ in DLC coordinates is: \^\_= \^\_- \^\_[w\_u]{} \_\^[w\_u]{} - \^\_[w\_v]{} \_\^[w\_v]{} + 2 \^\_[w\_u]{} \_\^[w\_v]{} - 2 \^\_[w\_u]{} \_\^a - U\^a \^\_a \_\^[w\_v]{}   . It is interesting to notice that in DLC coordinates the screen projector relies mostly on its angular part and the metric functions $U^a$ and ${\widetilde{U}}^a$. It also has a very simple expression when ${\widetilde{U}}^a = U^a = 0$, as is the case for a spherically symmetric geometry. We can check explicitly from Eqs. 
[(\[DLCphotonMomentum\])]{}, [(\[eq:DLCumu1\])]{} and [(\[eq:DLCumu2\])]{} that: \^\_     k\_   k\^, \^\_     u\_   u\^, or any of their combinations. This is an interesting property revealing that the screen projector is orthogonal to the photon momentum and to the geodesic observer peculiar velocity, as expected for such a quantity. Writing down the conditions of Eq. [(\[eq:Sachs\])]{}, we find the relations satisfied by the Sachs vectors: \^[w\_u]{}\_A = - 2 ( ) \^a\_A ,\^[w\_v]{}\_A = 0 ,\_[ab]{} \^a\_A \^b\_B = \_[AB]{} ,\_\^[a]{}\_A = 0 , with $\lambda$ an affine parameter along the photon light ray. Because ${\hat{s}}^\mu_A$ are defined orthogonal to $k_\mu$, they define a screen for the future light rays that the observer can emit. As $\hat{s}^a_A$ are constant over the propagation, for which $\lambda = w_v$ is also a possible choice, we have that the evolution of ${\hat{s}}^{w_u}_A$ is only determined by $({\widetilde{U}}^a - U^a) / {\widetilde{\Upsilon}}^2$. On the other hand, the covariant Sachs vector ${\hat{s}}^A_\mu = g_{\mu\nu}^{{\rm DLC}} {\hat{s}}^\nu_A$ is orthogonal to $k^\mu$ and thus defines a screen for past light rays received by the observer, for which we can choose $\lambda = w_u$ (like in Fig. \[fig:JacobiScheme\]). We have the components: \_[w\_u]{}\^A = 0 ,\_[w\_v]{}\^A = - \^[w\_u]{}\_A - \_a \^a\_A ,\_[a]{}\^A = \_[ab]{} \^b\_A . Let us finally notice that for ${\widetilde{U}}^a = U^a$ the Sachs vectors ${\hat{s}}^{\mu}_A$ are only expressed in terms of their angular components and are hence constant between the different spheres embedded in the past and future light cones. This is also true for ${\hat{s}}_{\mu}^A$ when the extra condition ${\widetilde{U}}^a = 0$ is imposed (as it is for a spherically symmetric geometry). These properties indicate that DLC coordinates may be better adapted than GLC for some specific physical applications. 
Lensing quantities {#SecDLC:LensQuant} ------------------ No significant changes happen for lensing quantities when we use the DLC coordinates. Indeed, the Jacobi map formalism leading to their expression does not depend on a particular system of coordinates [@P6]. On the other hand, the Jacobi map of Eq. [(\[EqEvolJAB\])]{} does depend on an affine parameter $\lambda$. This affine parameter can be chosen in different ways[^3], but one can show that $\lambda = \alpha \tau + \beta$ with $\alpha \neq 0$. Hence we obtain the lensing quantities following the same procedure as before, using the definition of the amplification matrix given in Eq. [(\[DefinitionA\])]{} with the Jacobi map that did not change (still given by Eq. [(\[JABandCaB\])]{}), and we get exactly like in GLC that the lensing quantities are given by Eq. [(\[LensingCombinationsInGLC\])]{}. Nevertheless, we should recall that $\gamma_{ab} = \hat{s}_a^A \hat{s}_b^A$ and the zweibeins take a different form in DLC coordinates with respect to GLC, so calculations may be simpler in some specific cases if we employ DLC coordinates. Double-null coordinates and gauge fixing {#SecDN} ======================================== Here we compare the DLC coordinates with the well-known double-null coordinates of Ref. [@Brady:1995na], describing the (2+2)-splitting of a 4-dimensional spacetime in terms of two null-like hypersurfaces and two spacelike surfaces at their intersections. In the DLC case, we have the two null hypersurfaces corresponding respectively to the past and future light cones centered on the observer worldline, denoted by $w^A = (w_u,w_v)$. We then shortly address the extra gauge fixing conditions that can be imposed to the DLC coordinates. DLC coordinates are double-null coordinates {#subsecDN:Israel} ------------------------------------------- According to Ref. 
[@Brady:1995na] we can define generators $\ell^{(A)}$ ($A=0,1$) for the two null hypersurfaces ${\mathcal V}^A$ defined by $w^A = {\mathrm{cst}}$. These 4-vectors are proportional to the gradient of $w^A$ and can be defined as[^4]: \_\^[(A)]{} = e\^[|]{} \_w\^A , which, associated with $g^{\alpha\beta} \partial_\alpha w^A \partial_\beta w^B = e^{-\bar{\lambda}} \eta^{AB}$, give the relation: \_[(A)]{} \^[(B)]{} = g\^ \_[AC]{} \_\^[(C)]{} \_\^[(B)]{} = e\^[|]{} \_A\^B , with $\eta_{AB} \equiv \mbox{anti-diag}(-1,-1)$. For DLC, i.e. with $g^{\alpha\beta} = g^{\alpha\beta}_{\rm DLC}$, we easily find that: \[eq:barlambda\] | = ( \^2 / 2 ) , and we can show that: \_[(1)]{}\^= \^\_[w\_u]{} , \_[(2)]{}\^= \^\_[w\_v]{} + \^a \^\_a . The other two vectors tangent to any embedded spatial surface $\Sigma$ at the intersection of ${\mathcal V}^0$ and ${\mathcal V}^1$ can be chosen as $e^\alpha_{(a)} = \delta^\alpha_a$ ($a=2,3$). These vectors satisfy the relations $g_{ab}^{\rm DLC} \equiv \gamma_{ab} = e_{(a)} \cdot e_{(b)}$ (metric inside $\Sigma$) and $\ell^{(A)} \cdot e_{(a)} = 0 \quad \forall A=0,1 ~;~ a = 2,3$ (orthogonality with the null hypersurface generators), as expected. In general, the foliation of the 4-dimensional spacetime is given by an embedding relation $x^\alpha = x^\alpha(w^A,\theta^a)$; here we choose the DLC embedding, which is trivially $x^\alpha = (w^A,\theta^a)$. This choice breaks the manifest 4- and 2-dimensional covariance of the equations but guarantees that angles remain constant along both sets of generators $\ell^{(A)}$. With these simple quantities at hand, we can derive the line element in the double-null coordinates $x^\alpha$ and compare it to the DLC one. Indeed, using the DLC metric and the relation: \[eq:dxalpha\] x\^= \^\_[(A)]{} w\^A + (s\^a\_A w\^A + \^a)e\^\_[(a)]{} , in which we introduced the shift vector $s^a_A$ (see Ref. 
[@Brady:1995na]), we get: x\^0 = w\_u , x\^1 = w\_v , x\^a = s\^a\_[w\_u]{} w\_u + (s\^a\_[w\_v]{} + \^a) w\_v + \^a , and these total derivatives can be used in ${\mathrm{d}}s^2 = g_{\alpha\beta}^{\rm DLC} {\mathrm{d}}x^\alpha {\mathrm{d}}x^\beta$ to bring the identities: \[eq:shiftvectcorresp\] s\^a\_[w\_u]{} = 0 , s\^a\_[w\_v]{} = - \^a . This shows that the DLC metric functions ${\widetilde{U}}^a$ can be interpreted as a shift vector in the (2+2) decomposition. Reasoning only in the double-null coordinates system, we find from the orthonormality conditions of $\ell^{(A)}$ and $e_{(a)}$ that: g\_ = e\^[-]{} \_[AB]{} \_\^[(A)]{} \_\^[(B)]{} + g\_[ab]{} e\_\^[(a)]{} e\_\^[(b)]{} . Combined with Eq. [(\[eq:dxalpha\])]{}, this leads to the line element in the double-null coordinates: s\^2 = g\_ x\^x\^= e\^ \_[AB]{} w\^A w\^B + g\_[ab]{} (\^a + s\^a\_A w\^A) (\^b + s\^b\_B w\^B) , which directly gives the DLC line element of Eq. [(\[DLCds2\])]{} once we use Eqs. [(\[eq:barlambda\])]{}, [(\[eq:shiftvectcorresp\])]{}, and $g_{ab} = \gamma_{ab}$. *This shows that the DLC coordinates correspond to a gauge fixing of the double-null coordinates*. More generally, we have proved that GLC coordinates are compatible with the well-known double-null coordinates under the simple transformation of Sec. \[SecDLC:Metric\]. Gauge fixing of DLC coordinates {#subsecDN:gaugefixing} ------------------------------- The DLC coordinates are general and gauge fixed from the six metric functions composing it. Nevertheless, some residual gauge freedoms remain. We are now going to analyse these extra gauge freedoms and explain how to fix them. In fact, the derivations presented here are very close to Sec. 2.3 of Ref. [@P7], due to the fact that $(w_v,\theta^a)$ in DLC directly translate into $(w,\theta^a)$ in GLC coordinates. Hence $w_u$ plays in calculations almost the same role as $\tau$. 
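As a cross-check of this equivalence (ours, not part of the original derivation), the two line elements can be compared symbolically, using a diagonal $\gamma_{ab}$ and the identifications $e^{\bar\lambda} = {\widetilde{\Upsilon}}^2/2$, $s^a_{w_u} = 0$, $s^a_{w_v} = -{\widetilde{U}}^a$, $g_{ab} = \gamma_{ab}$; differentials are again treated as formal symbols:

```python
import sympy as sp

# Check that the (2+2) double-null line element reproduces the DLC one once
# e^{lambda_bar} = tUps^2/2, s^a_{w_u} = 0, s^a_{w_v} = -tU^a and g_ab = gamma_ab.
tUps, g1, g2, tU1, tU2 = sp.symbols('tUpsilon gamma1 gamma2 tU1 tU2')
dwu, dwv, dth1, dth2 = sp.symbols('dw_u dw_v dth1 dth2')  # formal differentials

e_lam = tUps**2 / 2
s1, s2 = -tU1, -tU2   # shift vector components s^a_{w_v}

# eta_AB = anti-diag(-1,-1) gives eta_AB dw^A dw^B = -2 dw_u dw_v
double_null = e_lam * (-2 * dwu * dwv) \
    + g1 * (dth1 + s1 * dwv)**2 + g2 * (dth2 + s2 * dwv)**2

dlc = -tUps**2 * dwu * dwv \
    + g1 * (dth1 - tU1 * dwv)**2 + g2 * (dth2 - tU2 * dwv)**2

assert sp.expand(double_null - dlc) == 0
```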
#### Relabeling light cones: The DLC metric is invariant under the relabeling of light cones, $w_u \rightarrow w_u'(w_u)$ and $w_v \rightarrow w_v'(w_v)$, assuming the metric functions ${\widetilde{\Upsilon}}$ and ${\widetilde{U}}^a$ transform as: ’ = , \_a \_a’ = \_a . The dependence on both null coordinates for ${\widetilde{\Upsilon}}$ is justified by the different role played by ${\widetilde{\Upsilon}}$ with respect to ${\Upsilon}$ (whose transformation in GLC only depends on $w$), and we can understand this difference by looking at Eqs. [(\[AnsatzTau\])]{}. As in GLC, we can use this gauge freedom to fix a condition on the observer, namely ${\widetilde{\Upsilon}}({\mathscr{L}}_{\text{o}}) = 1$. By analogy, once this gauge fixing is done, we can say that we are working in the *temporal gauge* (see, however, the remark at the end of this section). #### Relabeling light rays: Light rays can also be relabeled when going from one sphere $\Sigma(w_u,w_v)$ to another $\Sigma(w_u',w_v')$. According to the choice made in defining the DLC coordinates, namely that the angular part of the metric is only related to the past light-cone coordinate $w_v$, such a relabeling is equivalent to the transformation $\theta^a \rightarrow {\varphi}^a (w_v,\theta^a)$ and the DLC metric is invariant if $\gamma^{ab}$ and ${\widetilde{U}}^a$ follow the transformation: \^[ab]{} ’\^[ab]{} = \^[cd]{} \_c \^a \_d \^b , \^a ’\^a = \^c \_c \^a - \_[w\_v]{} \^a . The check of this invariance is exactly the same as in GLC, since $w_u$ does not play a role in it. We can thus use it like in GLC, imposing ${\widetilde{U}}^a({\mathscr{L}}_{\text{o}}) = 0$, hence defining the *photocomoving gauge*. The further requirements that $\theta^a$ are regular spherical angles at the observer and that the observer is non-rotating give the already GLC-defined *non-rotating observational gauge*. 
#### Reparameterizing light rays: We have already derived the photon covariant momentum $k_\mu = \delta_\mu^{w_v}$ and its contravariant form $k^\mu = - (2 / {\widetilde{\Upsilon}}^2) \delta^\mu_{w_u}$. Assuming a more general form $k_\mu = k_{w_v} (\pa_\mu w_v)$ and $k^\mu \propto \delta^\mu_{w_u}$, we can show that the geodesic equation $k^\nu \nabla_\nu k^\mu = 0$ imposes $\pa_{w_u} k_{w_v} = 0$ (exactly as $\pa_\tau k_w = 0$ in GLC). In a similar manner, keeping the same parameterization from one light ray to another leads to $\pa_a k_{w_v} = 0$. So $k_{w_v} = k_{w_v} (w_v)$ (*isotropic affine parameterization*) and we can show that $k_{w_v}^{\text{o}}= - \omega_{\text{o}}{\Upsilon}_{\text{o}}$ as in GLC, with $\omega_{\text{o}}= - (u_\mu k^\mu)_{\text{o}}$ the pulsation of the photon evaluated at the observer. This exact similarity with GLC, despite $u_\mu$ being now given by Eq. [(\[eq:DLCumu1\])]{}, is related to $g^{w_v w_v}_{{\rm DLC}} = 0$. Imposing the *static affine parameterization*, namely that the relation $\delta x^\mu = k^\mu \delta \lambda = - (2 k_{w_v}/{\widetilde{\Upsilon}}^2_{\text{o}}) \delta \lambda \, \delta^\mu_{w_u}$ is independent from $w_v$ (here again $\lambda$ is the affine parameter of photon trajectories), results in the condition $\partial_{w_v} \left( k_{w_v} / {\widetilde{\Upsilon}}^2_{\text{o}}\right) = - ({\Upsilon}_{\text{o}}/ {\widetilde{\Upsilon}}^2_{\text{o}}) \, \partial_{w_v} \omega_{\text{o}}= 0$. We thus have that $k_{w_v}$ is a pure constant that we can set to one as already used in Sec. \[SecDLC:SimpQuant\] on DLC properties. #### Conformal transformations: Finally the DLC coordinates are also invariant under conformal transformations $g_{\mu\nu}^{\rm DLC} \rightarrow (g_{\mu\nu}^{\rm DLC})' = \Omega^{-2} g_{\mu\nu}^{\rm DLC}$, assuming the coordinates and metric functions change as: & w\_u’ = w\_u , w\_v’ = w\_v , (\^a)’ = \^a ,\ & ()’ = \^[-1]{} , \_[ab]{}’ = \^[-2]{} \_[ab]{} , (\^a)’ = \^a . 
As always, conformal transformations do not affect the photon trajectories. #### Remarks on the observer and gauges: It was shown in Ref. [@P1] that for a geodesic observer with peculiar velocity $\hat{n}^\mu \equiv - u^\mu$ (with $u_\mu = \partial_\mu \tau$), the GLC coordinates near the observer vary as $\Delta x^\mu = \hat{n}^\mu \Delta \tau \equiv - u^\mu \Delta \tau$, where $u^\mu$ and the variations of coordinates are evaluated on the observer worldline ${\mathscr{L}}_{\text{o}}$. Using DLC coordinates and Eq. [(\[eq:DLCumu2\])]{}, this leads to the relations: \[eq:Deltas\] w\_u = ,w\_v = ,\^a = . The first and second equalities are related to the relabeling of light cones and we can notice that the second and third are identical to the GLC case [@P1]. If we now require *consistency conditions between GLC and DLC observers*, we can impose: ${\widetilde{\Upsilon}}_{\text{o}}= {\Upsilon}_{\text{o}}$ and ${\widetilde{U}}^a_{\text{o}}= U^a_{\text{o}}$. We then see that the temporal gauge requires ${\Upsilon}_{\text{o}}= 1$ and the photocomoving gauge imposes $U^a_{\text{o}}= 0$ (and so $\Delta \theta^a = 0$, as the observer sees isotropy locally). This also means, under these choices, that $\partial_{w_u} \tau = \partial_{w_v} \tau$ on the observer’s worldline (as supported by Eqs. [(\[eq:Appdtaudwu\])]{} to [(\[eq:Appdwudw\])]{}). Static black holes in DLC coordinates {#SecDLCBH} ===================================== Black holes have already been studied within double-null coordinates [@Eilon:2015axa]. We propose here to study static black holes with DLC coordinates in order to check their consistency, understand these coordinates better, and show that GLC coordinates can be used for astrophysical objects. Static black holes, simple transformation {#SecDLCBH:StatBH} ----------------------------------------- As an illustrative exercise we can consider a static black hole described by the metric: \[GenSchBH\] $${\mathrm{d}}s_{\rm stat.}^2 = - N^2\, {\mathrm{d}}t^2 + N^{-2}\, {\mathrm{d}}r^2 + r^2\, {\mathrm{d}}\Omega^2\,.$$ 
One can introduce two null-like coordinates $(u,v)$ satisfying the differential relations [@Hwang:2011mn]: \[DiffrAndt\] r = r\_[,u]{} u + r\_[,v]{} v , t = (- + ) . This leads to an equivalent formulation of the line element in terms of double-null coordinates: \[GenSchBHnullcoord\] $${\mathrm{d}}s_{\rm stat.}^2 = - N^2\, {\mathrm{d}}u\, {\mathrm{d}}v + r(u,v)^2\, {\mathrm{d}}\Omega^2\,.$$ To be more explicit we can choose $N^2 = 1 - \frac{2 G M}{r}$ and we have a Schwarzschild black hole metric. As for the two null coordinates $u$ and $v$, they are then respectively called the outgoing and ingoing Eddington-Finkelstein coordinates: \[TransfoCoordDLCstaticBH\] $$u = t - r^\ast\,, \qquad v = t + r^\ast\,, \qquad r^\ast = r + 2 G M \ln \left| \frac{r}{2GM} - 1 \right|\,,$$ where $r^\ast$ is the tortoise coordinate. It is then easy to check that Eq. [(\[GenSchBH\])]{}, with Eq. [(\[TransfoCoordDLCstaticBH\])]{}, indeed gives Eq. [(\[GenSchBHnullcoord\])]{}. We can thus compare the form of Eq. [(\[GenSchBHnullcoord\])]{} with the DLC metric of Eq. [(\[DLCds2\])]{}, using that ${\widetilde{U}}^a = 0$ in a spherically symmetric case and assuming the light cones to be centered on $r=0$ (the observer’s worldline here is the worldline of the black hole center). The identification of the various metric elements is then obvious, giving: \[DLCfunctionsStaticBH\] $$w_u = u\,, \qquad w_v = v\,, \qquad \theta^a = \bar\theta^a\,,$$ $${\widetilde{\Upsilon}}= N_{\rm Sch.} = \sqrt{1 - \frac{2GM}{r}}\,, \qquad {\widetilde{U}}^a = 0\,, \qquad \gamma_{ab} = r^2(u,v)\, \delta_{ab}\,,$$ with $\delta_{ab} = {\rm diag}( 1 , \sin^2 \theta )$ in spherical coordinates. Let us finally comment that the explicit expression of $r(u,v)$ requires inverting the following equality: $$r + 2 G M \ln \left| \frac{r}{2GM} - 1 \right| = \frac{v - u}{2}\,.$$ The case of a static Reissner-Nordström (charged) black hole is not more complicated. It is simply given by another choice of $N$, which is $N^2 = 1 - \frac{2 G M}{r} + \frac{Q^2}{r^2}$. We thus have a perfect description of it within the DLC coordinates with: $${\widetilde{\Upsilon}}= N_{Q} = \sqrt{1 - \frac{2 G M}{r} + \frac{Q^2}{r^2}}\,, \qquad {\widetilde{U}}^a = 0\,, \qquad \gamma_{ab} = r^2(u,v)\, \delta_{ab}\,.$$ 
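In practice, the inversion giving $r(u,v)$ has no closed form and must be done numerically. A minimal bisection sketch for the exterior region $r > 2GM$ (function names are ours), exploiting that the tortoise coordinate is monotonic there:

```python
import math

def r_star(r, GM=1.0):
    """Tortoise coordinate r* = r + 2GM ln|r/(2GM) - 1| (Schwarzschild)."""
    return r + 2.0 * GM * math.log(abs(r / (2.0 * GM) - 1.0))

def r_of_uv(u, v, GM=1.0, r_max=1.0e8, tol=1.0e-12):
    """Invert r*(r) = (v - u)/2 for r > 2GM by bisection (r* is monotonic there)."""
    target = 0.5 * (v - u)
    lo, hi = 2.0 * GM * (1.0 + 1.0e-14), r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r_star(mid, GM) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: a point at r = 10 (GM = 1, t = 0) lies on u = -r*, v = +r*
rs = r_star(10.0)
assert abs(r_of_uv(-rs, rs) - 10.0) < 1.0e-6
```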
Though we considered two particular cases here, this identification is correct for any static black hole, as proved in App. \[AppZforStaticBH\]. Redshift for static black holes {#SecDLCBH:RedshiftStatBH} ------------------------------- We now present some considerations on the observer’s proper time and the redshift. We can show (see App. \[AppZforStaticBH\]) that Eq. [(\[AnsatzTau\])]{} for a static black hole simplifies as: \[TauDerivN\] \_[w\_u]{} = \_[w\_v]{} = N=N(r(u,v)) . We consider the specific case of a Schwarzschild metric and study the relationship between $\tau$ and $t$. We have $w_u = u = t - r^\ast$ and $w_v = v = t + r^\ast$ which, combined with Eq. [(\[TauDerivN\])]{}, lead to: \_t = N\_[Sch.]{} , \_[r]{} = 0 = N\_[Sch.]{} t = t , where we used that $r$ and $t$ are two independent coordinates. This relation is well known in the literature and relates the coordinate time $t$ to the proper time $\tau$ of a static observer in geodesic motion. Hence $\tau$ again defines a geodesic flow, as in GLC. We can now take a look at the redshift in Schwarzschild geometry and use $k^\mu u_\mu = - 1 / {\Upsilon}\equiv - \omega$ (photon pulsation). We then directly get the well-known relation: \[eq:RedshitStatic1\] 1+z\_s = = = . One can easily guess that this relation holds for any type of static black hole, according to the relations $1 + z_s = {\Upsilon}_{\text{o}}/ {\Upsilon}_s$ and ${\Upsilon}= N$. We prove that it is indeed true for any static black hole in App. \[AppZforStaticBH\]. Hence for the charged black hole we can also write: \[eq:RedshitStatic2\] 1+z\_s = = = . This subsection has shown that black holes can be described very conveniently in the DLC coordinates. We expect this to remain true for the general case of rotating black holes, but this more technical and complicated example is left for future work.
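The relation $1+z_s = N_{\text{o}}/N_s$ is straightforward to evaluate for both lapse functions considered above; a small numerical sketch (helper names are ours, $G = c = 1$, and the charge value below is an arbitrary illustration):

```python
import math

def lapse_schwarzschild(r, GM=1.0):
    """N(r) with N^2 = 1 - 2GM/r."""
    return math.sqrt(1.0 - 2.0 * GM / r)

def lapse_reissner_nordstrom(r, GM=1.0, Q=0.5):
    """N(r) with N^2 = 1 - 2GM/r + Q^2/r^2."""
    return math.sqrt(1.0 - 2.0 * GM / r + Q**2 / r**2)

def redshift(lapse, r_source, r_observer):
    """1 + z_s = N(r_obs) / N(r_src), valid for any static black hole."""
    return lapse(r_observer) / lapse(r_source) - 1.0
```

For instance, a photon emitted at $r_s = 4GM$ and received far away is redshifted by $z_s \simeq \sqrt{2} - 1$ in the Schwarzschild case; the same emission radius around a charged hole gives a smaller redshift, as the charge term weakens the potential well.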
Trajectories near static black holes {#SecTrajBH:static} ------------------------------------ We consider here the trajectories of massive particles and photons around static black holes. More precisely, we start with the trajectory equations in DLC coordinates but quickly go back to $(t,r,\theta,\phi)$ coordinates in order to recover their usual form of Ref. [@schutz1985first]. We also show for photon trajectories the consequence of the relation $k_{\mu} = \partial_{\mu} w_v$. ### Massive particles {#SecTrajBH:staticSub1} Let us start with a relativistic particle of mass $m$ and energy $E$. One can find the trajectory of the particle from its mass-shell condition. We have: p\_p\^= - m\^2 , which after considering the DLC metric and its reduced form for static black holes (see Eq. [(\[DLCfunctionsStaticBH\])]{}) becomes the trajectory equation: - \^2 p\^[w\_u]{} p\^[w\_v]{} + \_[ab]{} p\^a p\^b = - m\^2 . Using that $\gamma_{ab} = r^2 {\rm diag}(1 , \sin^2 \theta)$ and the symmetry of the problem, which allows us to take $p^\theta = 0$, $\theta = \pi / 2$ (i.e. working in the equatorial plane), we get: \_[ab]{} p\^a p\^b = , where $L$ is the particle’s angular momentum defined as $L \equiv p_{\phi} / m$. We thus have: p\^[w\_u]{} p\^[w\_v]{} - ()\^2 = ()\^2 . This is a very simple expression for the trajectory of a relativistic particle, which we can relate to the usual one expressed in terms of the $(t,r,\theta,\phi)$ coordinates of the static black hole metric. Indeed, the first two components of the particle’s momentum can be written as: p\^[w\_u]{} = m , p\^[w\_v]{} = m , with $\tilde t$ the proper time of the particle along the trajectory, and using the transformation of coordinates presented in Eq. [(\[TransfoCoordDLCstaticBH\])]{} we can show (for $r>2GM$) that they are equivalent to: \[DLCParticleMomentum\] p\^[w\_u]{} = ( E - ) , p\^[w\_v]{} = ( E + ) .
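With ${\widetilde{\Upsilon}}= N$, the momenta above satisfy the mass-shell relation $N^2 p^{w_u} p^{w_v} - (mL/r)^2 = m^2$, equivalent to the radial equation $(\mathrm{d}r/\mathrm{d}\tilde t)^2 = E^2 - N^2(1 + L^2/r^2)$. A numerical sketch for Schwarzschild (helper names are ours; the circular-orbit formulas are the standard textbook ones, quoted here as an assumption, with $G = c = 1$):

```python
import math

def radial_potential(r, E, L, GM=1.0):
    """(dr/dt~)^2 = E^2 - N^2 (1 + L^2/r^2), with N^2 = 1 - 2GM/r."""
    return E**2 - (1.0 - 2.0 * GM / r) * (1.0 + L**2 / r**2)

def mass_shell_residual(r, E, L, GM=1.0, m=1.0):
    """N^2 p^{w_u} p^{w_v} - (m L / r)^2 - m^2, built from the DLC momenta."""
    N2 = 1.0 - 2.0 * GM / r
    rdot = math.sqrt(max(radial_potential(r, E, L, GM), 0.0))
    p_u = m * (E - rdot) / N2   # p^{w_u}
    p_v = m * (E + rdot) / N2   # p^{w_v}
    return N2 * p_u * p_v - (m * L / r) ** 2 - m**2

def circular_orbit(r, GM=1.0):
    """Standard Schwarzschild circular geodesic (per unit mass), r > 3GM:
    E^2 = (r-2GM)^2 / (r (r-3GM)),  L^2 = GM r^2 / (r-3GM)."""
    return (math.sqrt((r - 2.0 * GM) ** 2 / (r * (r - 3.0 * GM))),
            math.sqrt(GM * r**2 / (r - 3.0 * GM)))
```

At the innermost stable circular orbit, $r = 6GM$, these formulas give $E^2 = 8/9$ and $L = 2\sqrt{3}\, GM$, and the radial potential vanishes there, as expected.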
The energy $E$ is given by $E = N^2 {\mathrm{d}}t / {\mathrm{d}}\tilde t$ as $p^t \equiv m \, {\mathrm{d}}t / {\mathrm{d}}\tilde t$ and $E \equiv - p_t / m$ (i.e. $E$ is related to the $0^{\rm th}$ component of the momentum in $(t,r,\theta,\phi)$ coordinates). These results hold for a Schwarzschild black hole as well as for any static black hole. It is thus possible to simplify our trajectory using ${\widetilde{\Upsilon}}= N$ and get: \[PartTrajStaticBH\] ( )\^2 - ( E\^2 - N\^2 (1 + ) ) = 0 . We can then analyse the particle’s trajectories by studying the sign of the second term of Eq. [(\[PartTrajStaticBH\])]{}, as done in Ref. [@schutz1985first] with exactly the same equation. ### Massless particles {#SecTrajBH:staticSub2} Let us now consider the case of a massless particle, like a photon. We can first consider the null (mass-shell) condition $k_\mu k^\mu = 0$ in DLC coordinates $(w_u,w_v,\theta^a)$. This reads, according to Eq. [(\[DLCfunctionsStaticBH\])]{} in the case of a static black hole: - \^2 k\^[w\_u]{} k\^[w\_v]{} + \_[ab]{} k\^a k\^b = 0 , and we can replace ${\widetilde{\Upsilon}}$ by $N$. We notice now from Eq. [(\[DLCphotonMomentum\])]{} that $k^{w_u} = - 2 / N^2$ and $k^{w_v} = 0$, hence we get the relation on the angular part of the photon momentum: \_[ab]{} k\^a k\^b = 0 . This means that photons propagate orthogonally to the surface $\Sigma(\theta,\phi)$. It also means, from the expression of $k^\mu$, that $k^\theta = k^\phi = 0$, i.e. the photon trajectory is trivial in DLC coordinates (a property shared with GLC coordinates), reducing simply to the following equation: k\^[w\_u]{} = - = - , which a priori requires the explicit expression of $r(w_u,w_v)$ in order to be solved. One can, on the other hand, return to $(t,r,\theta,\phi)$ coordinates. For that we use Eq.
[(\[TransfoFinalCoordAppB\])]{}, which is valid for any static black hole, remark that $E = N^2 {\mathrm{d}}t / {\mathrm{d}}\lambda$, and we get: \[PhotonTrajStaticEasy\] k\^[w\_v]{} : & + N\^[-2]{} = 0 = - E\ k\^[w\_u]{} : & - N\^[-2]{} = N\^[-2]{} (E - ) = - = 2 + E . This directly leads to $E = -1$, which we interpret as a consequence of the fact that $k^\mu$ is a 4-vector pointing to the past. This also means that ${\mathrm{d}}r / {\mathrm{d}}\lambda = 1$ and thus $\lambda$ grows as we move away from the observer. For incoming photons we have $\lambda$ growing, $r$ decreasing to zero, and $E > 0$, as physically expected. We can finally remark that this equation of motion is purely radial and does not capture all the possible photon trajectories. This is explained by the fact that $k^\mu$ here defines constant angular coordinates and null trajectories observed by the observer on his/her past light cone. Let us relax this assumption and consider the most general photon momentum in DLC coordinates in order to derive all the possible photon trajectories. We thus have $\tilde k^{\mu} = (\tilde k^{w_u},\tilde k^{w_v},\tilde k^{\theta},\tilde k^{\phi})$ and the condition $\tilde k_{\mu} \tilde k^{\mu} = 0$ is: -N\^2 k\^[w\_u]{} k\^[w\_v]{} + r\^2 (k\^)\^2 + r\^2 \^2(k\^)\^2 = 0 . From the symmetry of the problem we can place ourselves in the equatorial plane, taking $\tilde k^\theta = 0$ and $\theta = \pi / 2$. The equation above hence becomes: k\^[w\_u]{} k\^[w\_v]{} - (k\^)\^2 = 0 . We also have (in analogy with Eq. [(\[DLCParticleMomentum\])]{}): \[eq:kmuStaticBHTraj\] k\^[w\_u]{} = ( E - ) , k\^[w\_v]{} = ( E + ) , where $\lambda$ is the affine parameter describing the photon trajectory and we have used that ${\mathrm{d}}t / {\mathrm{d}}\lambda =N^{-2} E$.
We thus get the well-known photon trajectory in the $(t,r,\theta,\phi)$ coordinates after defining the photon angular momentum $L$ such that $L = \tilde k_{\phi}$ (hence $r^2 \sin^2\theta \, (\tilde k^\phi)^2 = L^2 / (r^2 \sin^2 \theta)$, simplified by $\theta = \pi / 2$), reading: \[PhotonTrajStaticReal\] ( )\^2 + (- E\^2 + ) = 0 . This relation is valid for any static black hole and well known in the literature [@schutz1985first]. We can finally notice that Eq. [(\[PhotonTrajStaticReal\])]{} gives back Eq. [(\[PhotonTrajStaticEasy\])]{} after imposing $L = 0$ (radial trajectory) and noticing the opposite sign between $E$ and ${\mathrm{d}}r / {\mathrm{d}}\lambda$ (incoming trajectories for $E>0$). ### Comment on redshift {#SecTrajBH:CommentZSBH} Let us make an additional comment here concerning the redshift along photon trajectories. In Sec. \[SecDLCBH:RedshiftStatBH\] we derived the expression of the redshift for the photon trajectories defining the angular coordinates of DLC, i.e. the radial trajectories. We now find for general photon trajectories (see Sec. \[SecTrajBH:staticSub2\]) that $u_\mu \tilde k^\mu = \frac{N}{2} (\tilde k^{w_u} + \tilde k^{w_v})$ and we can use Eq. [(\[eq:kmuStaticBHTraj\])]{} to get that: u\_k\^= 1+z\_s = = , as $E$ is a constant of motion fixed at the start of the trajectory and independent of the source and the observer. We have established the validity of Eqs. [(\[eq:RedshitStatic1\])]{} and [(\[eq:RedshitStatic2\])]{} in this more general case, showing that the redshift is also independent of the angular momentum $L$. Comment on ultra-relativistic particles {#SecCommentURP} ======================================= The geodesic equation was recently considered within the framework of GLC coordinates [@Fanizza:2015gdn] (see also Ref. [@Fleury:2016mul]) in order to compute the time-of-flight difference between two ultra-relativistic (UR) particles.
Using DLC coordinates, we can find the mass-shell constraint: \[eq:massshell\] \^2 \_u \_v + 2 U\_a \_v \^a - \_[ab]{} \^a \^b + …= , where $m$ is the mass of the UR particle and $E$ its energy measured by the observer (at the origin of the coordinates). The dot-derivative is here taken with respect to the particle’s proper time $\tilde t$. The above expression assumes a hierarchy among the coordinates derivatives: \_u \~1 ,\^a \~\^[-1]{} ,\_v \~\^[-2]{} , with $\gamma$ the Lorentz factor of the UR particle, “…” denoting terms $\sim {\mathcal O}(\gamma^{-3})$, and both sides of Eq. [(\[eq:massshell\])]{} are of order $\gamma^{-2}$. It is clear from the hierarchy that $w_u$ and $w_v$ do not have exactly an equivalent role in DLC coordinates. We can also understand this fact from App. \[AppSecondDLCDerivation2\] where we find $w_u = - w + 2 \eta(\tau)$ at $0^{\rm th}$ order in perturbations around FLRW while $w_v = w$. Hence $\partial w_u / \partial \tau = 2 / a(\tau)$ at this order while $\partial w_v / \partial \tau = 0$. Using Eq. [(\[eq:massshell\])]{} brings the relation: \[eq:wvdot\] 2 \_v = ( + \^[ab]{} J\_a J\_b ) , where $J_a \equiv \gamma_{ac} \dot{\theta}^c$ and we used that $\dot{w}_u {\widetilde{\Upsilon}}^2 \sim {\mathcal O}(1) \gg {\widetilde{U}}_a \dot{\theta}^a \sim {\mathcal O}(\gamma^{-1})$. Considering from Ref. [@Fanizza:2015gdn] that $\dot{\tau} = {\Upsilon}_{\text{o}}/ {\Upsilon}$ (involving the rescaling of the particle’s proper time $\tilde t$) and that $\dot{w}_u / \dot{\tau} = \partial w_u / \partial \tau = 2 {\Upsilon}/ {\widetilde{\Upsilon}}^2$ (see e.g. Eq. [(\[eq:DLCmetricFunctions\])]{}), we see that we can approximate $\dot{w}_u \sim 2 {\Upsilon}_{\text{o}}/ {\widetilde{\Upsilon}}^2$ in the equation above. This leads to the expression: = ( + \^[ab]{} J\_a J\_b ) . 
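The hierarchy above encodes the familiar kinematic fact that a particle with Lorentz factor $\gamma$ lags light by $\simeq D/(2\gamma^2)$ over a distance $D$, so in the flat, unperturbed limit ($a \equiv 1$) the delay between two UR particles reduces to $\Delta T \simeq (D/2)(\gamma_1^{-2} - \gamma_2^{-2})$. A toy check of this limit only (not the full DLC computation; function names are ours, $c = 1$):

```python
import math

def travel_time(distance, gamma):
    """Exact flight time over `distance` at speed v = sqrt(1 - 1/gamma^2)."""
    return distance / math.sqrt(1.0 - gamma**-2)

def ur_time_delay(distance, gamma1, gamma2):
    """Leading-order delay between two UR particles:
    (D/2) (1/gamma1^2 - 1/gamma2^2)."""
    return 0.5 * distance * (gamma1**-2 - gamma2**-2)
```

For $\gamma \gtrsim 100$ the leading-order formula reproduces the exact flat-space delay to better than one part in a thousand.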
Integrating this equation now gives: (w\_v)\_i - (w\_v)\_= \_[(w\_u)\_s]{}\^[(w\_u)\_]{} ( + \^[ab]{} J\_a J\_b ) w\_u , where i is the particle index; we can neglect the $\gamma^{ab} J_a J_b$ contribution in the integral as we are integrating over the unperturbed geodesic (on which $J_a \sim 0$). Using that the time-of-flight difference between the two UR particles is $\Delta \tau = \tau_1 - \tau_2 = {\Upsilon}_{\text{o}}\left[ (w_v)_1 - (w_v)_2 \right]$, we get: \[eq:DeltaTau1\] &=& ( - ) \_[(w\_u)\_s]{}\^[(w\_u)\_]{} w\_u , with ${\Upsilon}_{\text{o}}\equiv {\Upsilon}((w_u)_{\text{o}}, (w_v)_{\text{o}},\theta^a_{\text{o}})$ (see App. \[AppSecondDLCDerivation2\] for explicit limits at the observer). We can also check in the homogeneous case (see e.g. App. \[AppDLCNearFLRW\]) that the remaining integral simplifies to: \_[(w\_u)\_s]{}\^[(w\_u)\_]{} w\_u = \_[(\_-)\_s]{}\^[(\_-)\_]{} \_- = \_[\_s]{}\^[\_]{} , giving back the homogeneous result: \[eq:DeltaTaHom\] = ( - ) \_[\_s]{}\^[\_]{} . We can conclude this section by noticing that the DLC coordinates have given, through Eq. [(\[eq:DeltaTau1\])]{}, a result equivalent to the GLC one. This expression is interesting but does not bring a real simplification compared to GLC. Nevertheless, it shows that DLC coordinates are also able to describe particles which are not exactly on the light cone, as long as they are ultra-relativistic (hence propagating close to the light cone). Conclusions {#SecConclusion} =========== We have presented a system of coordinates that we derived directly from the geodesic light-cone (GLC) coordinates, replacing the proper time of the observer $\tau$ with a null coordinate $w_u$ while keeping the other three coordinates unchanged.
We nicknamed these coordinates Double Light-Cone (DLC) coordinates as they make use of two null coordinates, share many of the advantages that GLC coordinates possess, and are mathematically equivalent to the well-known double-null coordinates of Brady [*et al.*]{}[@Brady:1995na]. They are thus adapted coordinates that can be employed in cosmology, and for that reason we have attached importance to the description of their gauge fixing. In the spirit of adapted coordinates, and recalling the initial motivation of Temple to describe astrophysical objects, we applied the DLC coordinates to the description of static black holes. We showed their usefulness, but this is not a surprise considering the multiple applications of double-null coordinates in this field. Hence our illustration was more a consistency check for DLC coordinates than a new result. We also showed that they are convenient for describing massive-particle and photon trajectories, and we briefly commented on the time of flight of ultra-relativistic particles. It would be interesting to extend our analysis to rotating (Kerr) black holes and see if the DLC coordinates offer any simplification. In this paper we placed the black hole at the center of the coordinates; it would thus be interesting to see how the description changes when it is placed at some distance on our past light cone. We could also study strong lensing from this black hole [@Virbhadra:1999nm; @Virbhadra:2008ws], as seen from an observer at the center of coordinates, extending adapted coordinates beyond caustics. Finally, in this paper we have considered the restricted case of an observer in geodesic motion in order to stay close to GLC. This forced us to write the peculiar velocity in terms of GLC metric functions, leading to expressions that sometimes mixed DLC and GLC functions.
This is not a restriction of DLC coordinates, and we believe that they are as well adapted to cosmological or astrophysical studies as the GLC coordinates. Nevertheless, it is clear by definition that GLC coordinates are better suited to a geodesic observer. As for DLC, they should have the advantage in situations involving light emission and reception, and hence represent a tool complementary to GLC. As already said, they are equivalent to the double-null coordinates, up to a possible residual gauge fixing, and they thus build a bridge between GLC and double-null coordinates. They are adapted to light propagation and can be used for black hole calculations. The DLC coordinates may also prove useful for other applications, such as black hole perturbations or even gravitational wave emission. Adapted coordinates are useful and we should continue to develop them. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== My research is supported by the Leung Center for Cosmology and Particle Astrophysics (LeCosPA) of the National Taiwan University (NTU). Any error appearing in these pages should only be attributed to my own responsibility. I want to thank Prof. Gabriele Veneziano (CERN, Collège de France) for giving me advice towards the construction of the DLC coordinates. I am also very grateful to Dr. Pierre Fleury (Univ. of Cape Town) for his comments on the draft and to Dr. Dong-Han Yeom (LeCosPA), Dr. Dong-Hoon Kim (Ewha Womans University) and Prof. Pisin Chen (LeCosPA) for our discussions regarding black holes. I am thankful to Dr. Hung-Yi Pu (ASIAA) for references on photon trajectories (see <https://odysseyedu.wordpress.com/> for his beautiful simulations) and to the anonymous referee who gave me the opportunity to improve the paper on points that were not explained well enough.
The initial idea of this work dates back three years, to the end of my PhD, but was only revived during the Second LeCosPA Symposium “Everything About Gravity” in December 2015. The coordinates I had derived at that time were different and not as well defined as DLC. I thank Prof. Costas Bachas (LPTENS) for having discussed the black hole application with me at that time. Direct DLC transformation and perturbed FLRW {#AppSecondDLCDerivation} ============================================ We first present a direct derivation of the DLC inverse metric elements in terms of GLC coordinates and show that this approach is equivalent to Sec. \[SecDLC:Metric\]. We then solve the condition that makes $w_u$ null, perturbatively and using the method of characteristics. General considerations {#AppSecondDLCDerivation1} ---------------------- As mentioned in Sec. \[SecDLC:Metric\], we can establish the link between GLC and DLC coordinates in another way. Indeed, taking the inverse relation of Eq. [(\[eq:TransfoCoordinates\])]{}, namely: \[eq:InvTransfoCoordinates\] g\^\_[[DLC]{}]{}(y) = g\^\_[[GLC]{}]{}(x) , and assuming the following identities: w\_v &=& w = 1   ,   = = 0 ,\ \^a &=& \^a = \^a\_b   ,   = = 0 , we obtain the relations: \[TransformationInverseGLCtoDLC\] & g\_[[DLC]{}]{}\^[w\_u w\_u]{} = - ( )\^2 - - 2 + \^[ab]{} ,\ & g\_[[DLC]{}]{}\^[w\_v w\_v]{} = 0 ,g\_[[DLC]{}]{}\^[w\_u w\_v]{} = - ,\ & g\_[[DLC]{}]{}\^[w\_u a]{} = - + \^[ab]{} ,g\_[[DLC]{}]{}\^[w\_v a]{} = 0 ,g\_[[DLC]{}]{}\^[ab]{} = \^[ab]{}   . Introducing ${\widetilde{\Upsilon}}$ such that $g_{{\rm DLC}}^{w_u w_v} = -2/{\widetilde{\Upsilon}}^2$ followed by ${\widetilde{U}}^a$ such that $g_{{\rm DLC}}^{w_u a} = -2 {\widetilde{U}}^a/{\widetilde{\Upsilon}}^2$, we obtain the expressions of the DLC metric functions: \[eq:DLCmetricFunctions\] \^2 = 2 \^[-1]{} , \_a = \_[ab]{} \^b = U\_a - X\_a , X\_a \^[-1]{} . These two relations can be employed in $g_{{\rm DLC}}^{w_u w_u}$ of Eq.
[(\[TransformationInverseGLCtoDLC\])]{} to find that: \[eq:dwudw\] g\_[[DLC]{}]{}\^[w\_u w\_u]{} = 0 \^[-1]{} = ( \^[ab]{} X\_a X\_b - 1 ) - U\^a X\_a , as required for our coordinates to be double null. This last relation is a second order partial differential equation that gives $w_u$ in terms of GLC coordinates and metric functions once solved (see Sec. \[AppSecondDLCDerivation2\]). We also find that $g_{{\rm DLC}}^{w_u w_u}$ of Eq. [(\[TransformationInverseGLCtoDLC\])]{} is consistent with Eq. [(\[eq:gUwuUwu\])]{} under the condition: \[eq:Appdtaudwu\] = - \^[-1]{} . This relation is indeed verified after using Eq. [(\[eq:tUtUps\])]{} into Eq. [(\[eq:gUwuUwu\])]{} and imposing $g_{{\rm DLC}}^{w_u w_u} = 0$, on the one hand: \[eq:Appdtaudwv\] = , and combining Eqs. [(\[TransformationInverseGLCtoDLC\])]{} and [(\[eq:DLCmetricFunctions\])]{} and imposing $g_{{\rm DLC}}^{w_u w_u} = 0$, on the other hand: \[eq:Appdwudw\] = . These three relations, with Eq. [(\[eq:tUtUps\])]{} to express $\partial_a \tau$, can be used in combination with Eq. [(\[eq:Deltas\])]{} to verify that: = w\_u + w\_v + \^a . We also show in App. \[AppDLCNearFLRW\] that the Eq. [(\[eq:gUwuUwu\])]{} can be solved at first order in perturbations around an FLRW geometry. This section hence proved the consistency between the derivation based on coordinates transformation (from GLC to DLC) and the one based on the metric (presented in Sec. \[SecDLC:Metric\]). We are now going to solve Eq. [(\[TransformationInverseGLCtoDLC\])]{} to prove that $w_u$ is well defined. Solution of $g_{{\rm DLC}}^{w_u w_u} = 0$ {#AppSecondDLCDerivation2} ----------------------------------------- Let us derive the expression of $w_u$ in terms of GLC coordinates and metric functions $(\tau, w, {\underline{\theta}}^a)$. The equation to be satisfied is given by $g_{{\rm DLC}}^{w_u w_u} = 0$ from Eq. 
[(\[TransformationInverseGLCtoDLC\])]{} that we simply write as: \[eq:dtaudwOfwu\] \_w\_u + \_w w\_u = - 2 U\^a (\^[-1]{}) \_a w\_u + \^[ab]{} (\_a w\_u)(\_b w\_u) (\_w\_u)\^[-1]{} , where $\partial_a$ denotes a derivative with respect to ${\underline{\theta}}^a$. This equation is a priori a non-linear partial differential equation, but an expansion of $w_u$ in perturbations around a homogeneous FLRW spacetime allows us to solve it as a linear partial differential equation. Indeed, writing: \[eq:wuexpansion\] w\_u(,w,\^a) = \_[n=0]{}\^w\_u\^[(n)]{}(,w,\^a) , we have the zeroth order $w_u^{(0)}(\tau,w,{\underline{\theta}}^a) = w_u^{(0)}(\tau,w)$ independent of the angles (homogeneous solution). The direct consequence of that is: \[eq:Order0Conditions\] U\^[a(0)]{} = 0 , \_a w\_u\^[(0)]{} = 0 , and the RHS of Eq. [(\[eq:dtaudwOfwu\])]{} is expressed in terms of lower orders of $w_u$ than the LHS. In other words, Eq. [(\[eq:dtaudwOfwu\])]{} can be written at ${\mathcal O}(n \geq 1)$ as: \[eq:dtaudwOfwuOrdern\] \_w\_u\^[(n)]{} + \_w w\_u\^[(n)]{} = Y\^[(n)]{} + Z\^[(n)]{} where $Y^{(n)}$ is a contribution accounting for the difference between $\frac{2}{{\Upsilon}} \partial_w w_u$ and $\frac{2}{a} \partial_w w_u$ on the LHS and $Z^{(n)}$ comes from the RHS of Eq. [(\[eq:dtaudwOfwu\])]{}: Y\^[(n)]{} &=& - 2 \_[k=1]{}\^[n]{} (\^[-1]{})\^[(k)]{} (\_w w\_u)\^[(n-k)]{} n 1 ,\ \[eq:Zn\] Z\^[(n)]{} &=& - 2 \_[N+M+K=n]{}\ & & + \_[N+M+K+L=n]{} n 2    . More precisely, we can derive a solution of Eq. [(\[eq:dtaudwOfwuOrdern\])]{} order by order. At ${\mathcal O}(0)$ (using Eq. [(\[eq:Order0Conditions\])]{}): \_w\_u\^[(0)]{} + \_w w\_u\^[(0)]{} = 0 . This is a linear partial differential equation that can be solved through the method of characteristics.
We get that $w_u^{(0)}$ is constant along the characteristic curve: \[eq:C0\] \^[(0)]{}: w\_= -w + 2 () () \_[0]{}\^ , and its value is given in terms of a general function ${\widetilde{w}_u}^{(0)}$: w\_u\^[(0)]{} = [\_u]{}\^[(0)]{}(w\_) w\_u\^[(0)]{}(,w) = [\_u]{}\^[(0)]{}(-w+2()) . The same reasoning can be applied at ${\mathcal O}(1)$, ${\mathcal O}(2)$ and so on, with for example at first and second orders: Y\^[(1)]{} &=& \_w w\_u\^[(0)]{} ,\ Z\^[(1)]{} &=& 0 ,\ Y\^[(2)]{} &=& 2( - ) \_w w\_u\^[(0)]{} + 2 \_w w\_u\^[(1)]{} ,\ Z\^[(2)]{} &=& - U\^[a(1)]{} \_a w\_u\^[(1)]{} + \^[ab(0)]{} \_a w\_u\^[(1)]{} \_b w\_u\^[(1)]{} , where $\partial_w w_u^{(0)}$, and $\partial_w w_u^{(1)}$ or $\partial_a w_u^{(1)}$, are given by the resolution of the zeroth- and first-order equations, respectively. At ${\mathcal O}(n)$, the solution of Eq. [(\[eq:dtaudwOfwuOrdern\])]{} is found as follows. First we notice from the LHS that the characteristic curve is the same as at zeroth order, ${\mathscr{C}}^{(1)} = {\mathscr{C}}^{(0)}$. We can thus integrate along this curve and find that: \[eq:SolWun\] w\_u\^[(n)]{}(,w,\^a) = \_[0]{}\^ ’ (’,-w\_+2(’),\^a) + [\_u]{}\^[(n)]{}(w\_) , where $w_{\text{o}}$ needs to be replaced by $-w+2\eta(\tau)$ and ${\widetilde{w}_u}^{(n)}$ is an arbitrary function. Summing all orders and defining the functions of $(\tau,w,{\underline{\theta}}^a)$: Y = \_[n=1]{}\^Y\^[(n)]{} , Z = \_[n=1]{}\^Z\^[(n)]{} , [\_u]{}= \_[n=1]{}\^[\_u]{}\^[(n)]{} , we get the general solution: \[eq:SolWu\] w\_u(,w,\^a) = \_[0]{}\^ ’ (’,w-2()+2(’),\^a) + [\_u]{}(-w+2()) , of the equation equivalent to Eq. [(\[eq:dtaudwOfwu\])]{}: \[eq:LikedtaudwOfwu\] \_w\_u + \_w w\_u = Y + Z . We now need to fix the boundary condition of $w_u$ in order to set ${\widetilde{w}_u}$. In GLC we can impose the gauge condition $w |_{{\mathscr{L}}_{\text{o}}} = \eta(\tau)$ (see e.g. Refs. [@Fanizza:2015swa; @NugierThesis]), leading to $w_{\text{o}}|_{{\mathscr{L}}_{\text{o}}} = \eta(\tau)$.
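The structure of the zeroth-order solution — any function of $w_{\text{o}} = -w + 2\eta(\tau)$ solves $\partial_\tau w_u + (2/a)\partial_w w_u = 0$ — is easy to verify by finite differences. A toy sketch with an assumed scale factor $a(\tau) = \tau$ (so $\eta = \ln\tau$), chosen purely for illustration:

```python
import math

def a(tau):
    """Toy scale factor (an assumption for illustration only)."""
    return tau

def eta(tau):
    """Conformal time for a(tau) = tau: eta(tau) = ln(tau)."""
    return math.log(tau)

def w_u0(tau, w):
    """Zeroth-order solution: an arbitrary function of -w + 2 eta(tau)."""
    return math.sin(-w + 2.0 * eta(tau))

def pde_residual(tau, w, h=1e-6):
    """Central-difference evaluation of d_tau w_u + (2/a) d_w w_u."""
    d_tau = (w_u0(tau + h, w) - w_u0(tau - h, w)) / (2.0 * h)
    d_w = (w_u0(tau, w + h) - w_u0(tau, w - h)) / (2.0 * h)
    return d_tau + (2.0 / a(tau)) * d_w
```

The residual vanishes to numerical accuracy at any $(\tau, w)$, independently of the profile chosen for the arbitrary function.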
Imposing this condition and requiring that: w\_u |\_[\_]{} = () we get from Eq. [(\[eq:SolWu\])]{} that: \[eq:SolWu2\] [\_u]{}(x) = x - \_[0]{}\^[\^[-1]{}(x)]{} ’ (’,-x+2(’),\^a) , where we used $x \equiv \eta(\tau)$ for clarity. We now have an explicit form for ${\widetilde{w}_u}$ and the final expression of $w_u$ is given by: \[eq:SolWuExplicit\] w\_u(,w,\^a) = - w + 2 () + \_[\_]{}\^ ’ (’,w-2()+2(’),\^a) , where we have defined $\tau_{\text{o}}\equiv \eta^{-1}(-w+2\eta(\tau))$. We can check that $\tau |_{{\mathscr{L}}_{\text{o}}} = \tau_{\text{o}}$, so this lower bound corresponds to the proper time of the observer on his/her own worldline. Hence the property $w_u |_{{\mathscr{L}}_{\text{o}}} = \eta(\tau)$ is easily checked, and this is also equal to $\eta(\tau_{\text{o}})$. In another gauge we would have a different form for ${\widetilde{w}_u}$ and thus for $w_u$. For example the temporal gauge condition imposes $w |_{{\mathscr{L}}_{\text{o}}} = \tau$ and we could also choose $w_u |_{{\mathscr{L}}_{\text{o}}} = \tau$. Nevertheless, in that case the expression of ${\widetilde{w}_u}(x)$, now with $x \equiv \tau + \eta(\tau)$, involves the function $\tau(x)$, which is not easy to obtain. It is thus better not to use the temporal gauge in that case. Another form of the solution [(\[eq:SolWuExplicit\])]{} could be obtained by integrating over $w$ rather than over $\tau$. Skipping the details but noticing that the characteristic curve ${\mathscr{C}}^{(0)}$ is unchanged, we find: \[eq:SolWuExplicit2\] w\_u(,w,\^a) = - w + 2 () + \_[w\_]{}\^[w]{} w’ a((w\_,w’)) ((w\_,w’),w’,\^a) , where $w_{\text{o}}\equiv - w + 2 \eta(\tau) = \eta(\tau_{\text{o}})$ and this is consistent with the boundary conditions $w |_{{\mathscr{L}}_{\text{o}}} = \eta(\tau)$ and $\tau |_{{\mathscr{L}}_{\text{o}}} = \tau_{\text{o}}$ expressed above. We can check directly that $w_u |_{{\mathscr{L}}_{\text{o}}} = \eta(\tau)$.
We also defined the function $\tau(w_{\text{o}},w') \equiv \eta^{-1}(\frac{w_{\text{o}}+w'}{2})$ for notation convenience. Let us trivially notice that the solutions of Eq. [(\[eq:SolWuExplicit\])]{} or [(\[eq:SolWuExplicit2\])]{} indeed work when plugged back into Eq. [(\[eq:LikedtaudwOfwu\])]{} (and this property is independent from the imposed boundary conditions on ${\mathscr{L}}_{\text{o}}$). We have thus proved in this appendix that $w_u$ can be expressed in terms of GLC coordinates, at least at a perturbative level around FLRW. This, in addition to other relations presented in the paper (e.g. in Sec. \[SecDLC:Metric\]), shows that DLC and GLC coordinates are perfectly consistent with each other. This is a non-trivial result in which we replaced the time coordinate $\tau$ by the null coordinate $w_u$ while keeping the three others identical ($w_v \equiv w$, $\theta^a \equiv {\underline{\theta}}^a$). DLC coordinates and the Newtonian gauge {#AppDLCNearFLRW} ======================================= We show in this section some relations for the DLC coordinates and metric functions near a perturbed FLRW geometry in the Newtonian gauge. This gauge is defined with the following line element: s\_[NG]{}\^2 = a\^2() involving the so-called conformal time $\eta$ and radius $r$ (in addition to the homogeneous angles ${\bar{\theta}}^a = (\theta,\phi)$). The metric functions $\Phi$ and $\Psi$ are the so-called Bardeen potentials that we will later assume equal (and denote by $\psi(\eta,r,{\bar{\theta}}^a)$) at first order in perturbations (with no anisotropic stress, otherwise see Ref. [@Marozzi:2014kua]), and we neglect vectors or tensor modes (cf. e.g. [@P5; @Fanizza:2015swa]). We can establish the transformation of coordinates between $y^\mu = (w_u ,w_v, \theta^a)$ and $x^\mu = (\eta, r, \overline{\theta}^a)$, using Eq. [(\[eq:InvTransfoCoordinates\])]{} with now $g_{\rm NG}^{\alpha\beta}$ replacing $g^{\alpha\beta}_{{\rm GLC}}$. 
With the first order decomposition: w\_u = - r + w\_u\^[(1)]{} ,w\_v = + r + w\_v\^[(1)]{} ,\^a = \^a + \^[a(1)]{} , we find the DLC metric functions at zeroth order to be: \^[(0)]{} = a ,\^[a(0)]{} = 0 ,\^[ab(0)]{} = a\^[-2]{} [diag]{}(r\^[-2]{} , r\^[-2]{} ()\^[-2]{}) . At first order the coordinates transformations and DLC functions are: & [\_[\_+]{}]{}w\_u\^[(1)]{} = [\_[\_-]{}]{}w\_v\^[(1)]{} = ,[\_[\_-]{}]{}\^[a(1)]{} = 0 ,\ & \^[(1)]{} = ,\^[a(1)]{} = ( 2 [\_[\_+]{}]{}\^[a(1)]{} - \^[ab(0)]{} \_b w\_u\^[(1)]{} )  ,\ & \^[ab(1)]{} = 2 a\^[-2]{} , where we have introduced the null-cone-like (but not exactly null) coordinates: \_= r ,\_= [\_[\_+]{}]{}+ [\_[\_-]{}]{},\_r = [\_[\_+]{}]{}- [\_[\_-]{}]{}. We can now study the condition $g^{w_u w_u}_{{\rm DLC}} = 0$ and see if the transformations above respect it. To achieve this, one can study either Eq. [(\[eq:gUwuUwu\])]{} or [(\[TransformationInverseGLCtoDLC\])]{} perturbatively. The second relation was already studied in Sec. \[AppSecondDLCDerivation2\], so we consider the first approach here. Based on Eq. [(\[eq:gUwuUwu\])]{}, we define the perturbative quantities in GLC and DLC coordinates: & = a + \^[(1)]{} , U\^a = U\^[a(1)]{} ,\ & \^[(1)]{} = a() (\_r P - [\_[\_+]{}]{}Q) , U\^[a(1)]{} = \_\^[a(1)]{} - \^[ab(0)]{} \_b \^[(1)]{} ,\ \[eq:GLCOrder1\] & = \^[(0)]{} + \^[(1)]{} \_[\_[in]{}]{}\^ ’ a(’) + a() P(,r,[|]{}\^a) , w = \_+ + Q , where these results were proved in Refs. [@P4; @NugierThesis; @Fanizza:2015swa] and we define the integrals: P(,r,[|]{}\^a) = \_[\_[in]{}]{}\^ ’ (’,r,[|]{}\^a) , Q(\_+,\_-,[|]{}\^a) = \_[\_]{}\^[\_-]{} x ( ) (\_+,x,[|]{}\^a) . We find that Eq. [(\[eq:gUwuUwu\])]{} is trivial at zeroth order (using that $\partial_{w_v} \tau = a / 2$), as expected, and find the conditions for first and second order: & \_[w\_v]{} \^[(1)]{} = ,\ & \_[w\_v]{} \^[(2)]{} = - U\^[a(1)]{} - \^[ab(0)]{} , in which we already made simplifications according to the order in perturbations. 
Let us prove that the first order relation is verified. Indeed, we can write: &=& + + ,\ &=& ( + ) ( 1 + ) + + [O]{}(\^3) ,\ &=& ( 1 + \_r P + 2 ) , where we used that $\partial \tau^{(1)} / \partial \eta_+ = a(\eta) \partial_r P / 2$ and $\eta_+^{(1)} + \eta_-^{(1)} = \eta^{(1)}$. Considering now that $w = \eta_+ + w^{(1)} = w_v$, we get that: = - + [O]{}(\^2) = - \_+ Q + [O]{}(\^2) , as $w^{(1)} = Q$ from Eq. [(\[eq:GLCOrder1\])]{}. This proves that: = ( \_r P - [\_[\_+]{}]{}Q ) , and thus Eq. [(\[eq:gUwuUwu\])]{} appears to be consistent with GLC also at first order in (scalar) perturbations around FLRW. Christoffel symbols in DLC coordinates {#AppChristoffel} ====================================== In this section we present the Christoffel symbols necessary to derive Einstein equations within DLC coordinates (a goal that we do not intend to fulfill here). We use the metric and its inverse presented in Eqs. [(\[DLCinvmetric\])]{} and [(\[DLCmetric\])]{}, plus the definition of the Christoffel symbols: [\^\_]{} = ( g\_[,]{}\^[[DLC]{}]{} + g\_[,]{}\^[[DLC]{}]{} - g\_[,]{}\^[[DLC]{}]{} ) . 
This gives us the following components: & [\^[u]{}\_[uu]{}]{} = , [\^[v]{}\_[vv]{}]{} = + , [\^[u]{}\_[uv]{}]{} = - - ,\ & [\^[v]{}\_[uu]{}]{} = 0 , [\^[u]{}\_[vv]{}]{} = - + , [\^[v]{}\_[uv]{}]{} = 0 ,\ & [\^[u]{}\_[ua]{}]{} = + - , [\^[a]{}\_[uu]{}]{} = 0 , [\^[v]{}\_[va]{}]{} = - ,\ & [\^[a]{}\_[vv]{}]{} = \^a + \^a - \^[ab]{} (\_b)\_[,v]{} - \^[ab]{} (\^2)\_[,b]{} ,\ & [\^[u]{}\_[va]{}]{} = - - ( (\_a)\_[,b]{} - (\_b)\_[,a]{} + (\_[ab]{})\_[,v]{} ) , [\^[v]{}\_[ua]{}]{} = 0 ,\ & [\^[a]{}\_[uv]{}]{} = ( \_[,b]{} - (\_b)\_[,u]{} ) ,\ & [\^[u]{}\_[ab]{}]{} = ( (\_a)\_[,b]{} + (\_b)\_[,a]{} + (\_[ab]{})\_[,v]{} ) - ( \_[ca,b]{} + \_[cb,a]{} - \_[ab,c]{} ) ,\ & [\^[v]{}\_[ab]{}]{} = , [\^[a]{}\_[ub]{}]{} = \^[ac]{} (\_[cb]{})\_[,u]{} ,\ & [\^[a]{}\_[vb]{}]{} = ( \_[,b]{} - (\_b)\_[,u]{} ) + \^[ac]{} ( (\_b)\_[,c]{} - (\_c)\_[,b]{} + (\_[cb]{})\_[,v]{} ) ,\ & [\^[a]{}\_[bc]{}]{} = ( \_[db,c]{} + \_[dc,b]{} - \_[bc,d]{} ) , where, just for notational convenience, we replaced $(w_u,w_v)$ by $(u,v)$ and used the comma notation for partial derivatives. We recall also that ${\widetilde{U}}^2 \equiv {\widetilde{U}}_a {\widetilde{U}}^a$. The four components ${\Gamma^{\mu}_{uu}}$, standing for ${\Gamma^{\mu}_{w_uw_u}}$, confirm our result of Eq. [(\[eq:NGEinDLC\])]{}. Using now the expression of the Ricci tensor: R\_ = [\^\_[,]{}]{} - [\^\_[,]{}]{} + [\^\_]{} [\^\_]{} - [\^\_]{} [\^\_]{} , we find that the component $R_{w_u w_u}$ is given by: R\_[w\_u w\_u]{} = ( - ) \^[ac]{} (\_[ac]{})\_[,w\_u]{} - \^[bc]{} \^[ad]{} (\_[ac]{})\_[,w\_u]{} (\_[db]{})\_[,w\_u]{} . The null energy condition of Eq. [(\[eq:nullenergycondition\])]{} then gives a relation between the metric functions: 2 ( - 1 ) \^[ac]{} (\_[ac]{})\_[,w\_u]{} \^[bc]{} \^[ad]{} (\_[ac]{})\_[,w\_u]{} (\_[db]{})\_[,w\_u]{} .
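Hand-derived lists like the one above are error-prone, but for a diagonal metric they can be cross-checked numerically from the definition $\Gamma^{\mu}_{\alpha\beta} = \frac{1}{2} g^{\mu\nu} (g_{\nu\alpha,\beta} + g_{\nu\beta,\alpha} - g_{\alpha\beta,\nu})$. A small finite-difference sketch (our own helper, tested on the $(t,r)$ block of Schwarzschild rather than the DLC metric itself, $G = c = 1$):

```python
def christoffel_diag(gdiag, x, h=1e-5):
    """Christoffel symbols of a *diagonal* metric via central differences.
    gdiag(x) returns the list of diagonal components g_{mu mu}(x)."""
    n = len(x)
    def dg(i, k):  # partial derivative of g_{ii} with respect to x^k
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        return (gdiag(xp)[i] - gdiag(xm)[i]) / (2.0 * h)
    g0 = gdiag(x)
    gamma = {}
    for m in range(n):
        for al in range(n):
            for be in range(n):
                term = 0.0  # only nu = m survives for a diagonal metric
                if m == al:
                    term += dg(m, be)
                if m == be:
                    term += dg(m, al)
                if al == be:
                    term -= dg(al, m)
                gamma[(m, al, be)] = 0.5 * term / g0[m]
    return gamma

# (t, r) block of Schwarzschild with GM = 1: g = diag(-(1-2/r), (1-2/r)^(-1)).
schw = lambda x: [-(1.0 - 2.0 / x[1]), 1.0 / (1.0 - 2.0 / x[1])]
G = christoffel_diag(schw, [0.0, 4.0])
```

At $r = 4GM$ this reproduces the standard values $\Gamma^r_{tt} = 1/32$, $\Gamma^t_{tr} = 1/8$ and $\Gamma^r_{rr} = -1/8$ to the accuracy of the finite differencing, and $\Gamma^t_{rr}$ vanishes by staticity.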
Transformation of coordinates for static black holes {#AppZforStaticBH} ==================================================== We present here a general proof of the correspondence between the DLC gauge and the static black hole metric. This also gives a rather simple proof of the redshift expression $1+z_s = \frac{N_{\text{o}}}{N_s}$ for any static black hole. Let us recall that the DLC metric is given by Eq. [(\[DLCds2\])]{} while the static black hole metric is given by Eq. [(\[GenSchBHnullcoord\])]{} in terms of ingoing and outgoing null coordinates $(u,v)$. We can still assume, without loss of generality, that $w_u = u$, $w_v = v$, $\theta^a = {\bar{\theta}}^a$, ${\widetilde{U}}^a = 0$ and $\gamma_{ab} = r^2(u,v) \delta_{ab}$ as presented in Eq. [(\[DLCfunctionsStaticBH\])]{}. The comparison between the two metrics is thus reduced to their “radial” part (as opposed to “angular”): \[eq:AppMetrics\] ${\mathrm{d}}s^2_{\text{DLC}} = - {\widetilde{\Upsilon}}^2 \, {\mathrm{d}}w_u \, {\mathrm{d}}w_v \qquad {\mathrm{d}}s^2_{\text{stat.}} = - N^2 \, {\mathrm{d}}t^2 + \frac{{\mathrm{d}}r^2}{N^2} = - N^2 \, {\mathrm{d}}u \, {\mathrm{d}}v \,.$ This clearly identifies ${\widetilde{\Upsilon}}= N$ for static black holes, but does not give the expression of ${\Upsilon}$. For this reason we introduce the following change of coordinates: \[eq:AppTransfoCoord\] = w\_u + w\_v ,r = A w\_u + B w\_v , where the first relation comes from Eq. [(\[AnsatzTau\])]{} with $\partial \tau / \partial \theta^a = 0$ (due to spherical symmetry), between GLC and DLC coordinates, and the second relates the static black hole radial distance $r$ to the DLC coordinates. We further impose that the proper time of GLC coordinates is directly related to the cosmic time $t$ of the static black hole metric by ${\mathrm{d}}\tau = C {\mathrm{d}}t$. Inverting the system of Eq. [(\[eq:AppTransfoCoord\])]{} and plugging the expressions in Eq. [(\[eq:AppMetrics\])]{}, we find that: A = - ,B = ,C = N . Hence we already found, as expected from Sec. 
\[SecDLCBH:RedshiftStatBH\], that the proper time of the observer $\tau$ is related to the time $t$, leading to the redshift expression: ${\mathrm{d}}\tau = N \, {\mathrm{d}}t \quad \Rightarrow \quad 1+z_s = \frac{N_{\text{o}}}{N_s} \,.$ We also established the transformation between $(t,r)$ and $(w_u,w_v)$: t = w\_u + w\_v ,r = - w\_u + w\_v , that we can now combine with the general transformation of Eq. [(\[DiffrAndt\])]{} (assuming again $w_u = u$, $w_v = v$). This gives: r\_[,u]{} = - = - ,r\_[,v]{} = = r\_[,u]{} r\_[,v]{} = - . This already confirms that ${\widetilde{\Upsilon}}= N$ and we can impose that $r_{,u} = - r_{,v}$ to establish that: r\_[,v]{} = = - r\_[,u]{} ,= = N , for static black holes, confirming results of Secs. \[SecDLCBH:StatBH\] and \[SecDLCBH:RedshiftStatBH\] and giving the useful relations: \[TransfoFinalCoordAppB\] w\_u = t - ,w\_v = t + . [^1]: One should be careful though with the fact that both $d_A$’s numerator and denominator in Eq. [(\[dAandMu\])]{} go to zero on the observer worldline (e.g. for $r \rightarrow 0$ above). In the practical case of a perturbed FLRW geometry described by the Newtonian gauge (see App. \[AppDLCNearFLRW\]), we find that $\gamma^{1/4} = a r (\sin\theta)^{1/2}$ and $\gamma_{\text{o}}^{1/4} / \sqrt{(\det \dot{\gamma}_{ab})_{\text{o}}} = (\sin\theta_{\text{o}})^{-1/2}/2$. The observer angle being homogeneous (${\underline{\theta}}\equiv \theta_{\text{o}}$), we get back Eq. [(\[dAandMu\])]{} at zeroth order near the observer. If first order corrections affect the observer, Eq. [(\[AngDist\])]{} is corrected with first order terms [@Fanizza:2013doa; @Yoo:2016vne]. [^2]: Actually, $w_v = w$ is also a convenient choice, avoiding unnecessary complications. One could, for example, take a modified GLC system of coordinates, spanned by $\tau$ and future light cones $w = {\mathrm{cst}}$, and then identify $w_u$ with $w$. We choose to stay as close as possible to GLC in our definition of DLC coordinates. [^3]: Note that Refs. [@P1; @P2; @P3; @P4; @P5] are taking $\lambda = -\tau$ while Ref. 
[@P6] is using $\lambda = \tau$. [^4]: The factor $e^{\bar{\lambda}}$ is used instead of $e^\lambda$, as in Ref. [@Brady:1995na], for the simple reason that $\lambda$ already denotes our affine parameter along null trajectories. Similarly, we replaced the null coordinates $u^A$ of Ref. [@Brady:1995na] by our $w^A$.
--- abstract: 'Given a row-finite $k$-graph $\Lambda$ with no sources we investigate the $K$-theory of the higher rank graph $C^*$-algebra, $C^*(\Lambda)$. When $k=2$ we are able to give explicit formulae to calculate the $K$-groups of $C^*(\Lambda)$. The $K$-groups of $C^*(\Lambda)$ for $k>2$ can be calculated under certain circumstances and we consider the case $k=3$. We prove that for arbitrary $k$, the torsion-free rank of $K_0(C^*(\Lambda))$ and $K_1(C^*(\Lambda))$ are equal when $C^*(\Lambda)$ is unital, and for $k=2$ we determine the position of the class of the unit of $C^*(\Lambda)$ in $K_0(C^*(\Lambda))$.' address: | Institute of Mathematical and Physical Sciences\ Aberystwyth University\ Penglais Campus\ Aberystwyth\ Ceredigion\ SY23 3BZ\ Wales\ UK. author: - 'D. Gwion Evans' title: 'On the $K$-theory of Higher Rank Graph $C^*$-Algebras' --- Introduction ============ In [@S91] Spielberg realised that a crossed product algebra $C(\Omega)\rtimes\Gamma$, where $\Omega$ is the boundary of a certain tree and $\Gamma$ is a free group, is isomorphic to a Cuntz-Krieger algebra [@CK80; @C81]. Noticing that such a tree may be regarded as an affine building of type $\tilde{A}_1$, Robertson and Steger studied the situation when a group $\Gamma$ acts simply transitively on the vertices of an affine building of type $\tilde{A}_2$ with boundary $\Omega$ [@RS96]. They found that the corresponding crossed product algebra $C(\Omega)\rtimes \Gamma$ is generated by two Cuntz-Krieger algebras. This led them to define a $C^*$-algebra ${\mathcal{A}}$ via a finite sequence of finite 0–1 matrices (i.e. matrices with entries in $\{0,1\}$) $M_1,\ldots,M_r$ satisfying certain conditions (H0)-(H3), such that ${\mathcal{A}}$ is generated by $r$ Cuntz-Krieger algebras, one for each $M_1,\ldots,M_r$. Accordingly they named their algebras higher rank Cuntz-Krieger algebras, the rank being $r$. 
Kumjian and Pask [@KP00] noticed that Robertson and Steger had constructed their algebras from a set, $W$, of [*(higher rank) words*]{} in a finite [*alphabet*]{} $A$ - the common index set of the 0–1 matrices - and realised that $W$ could be thought of as a special case of a generalised directed graph - a higher rank graph. Subsequently, Kumjian and Pask associated a $C^*$-algebra, $C^*(\Lambda)$, to the higher rank graph $\Lambda$ and showed that ${\mathcal{A}}\cong C^*(W)$ [@KP00 Corollary 3.5 (ii)]. Moreover, they derived a number of results elucidating the structure of higher rank graph $C^*$-algebras. They showed in [@KP00 Theorem 5.5] that a simple, purely infinite $k$-graph $C^*$-algebra $C^*(\Lambda)$ may be classified by its $K$-theory. This is a consequence of $C^*(\Lambda)$ satisfying the hypotheses of the Kirchberg-Phillips classification theorem ([@K; @P00]). Furthermore, criteria on the underlying $k$-graph $\Lambda$ were found that decided when $C^*(\Lambda)$ was simple and purely infinite (see [@KP00 Proposition 4.8, Proposition 4.9] and [@S06]). Thus a step towards the classification of $k$-graph $C^*$-algebras is the computation of their $K$-groups. In [@RS01 Proposition 4.1] Robertson and Steger proved that the $K$-groups of a rank 2 Cuntz-Krieger algebra are given in terms of the homology of a certain chain complex, whose differentials are defined in terms of $M_1,\dots,M_r$. Their proof relied on the fact that a rank 2 Cuntz-Krieger algebra is stably isomorphic to a crossed product of an AF-algebra by ${{\mathbb{Z}}}^2$. We will generalise their method to provide explicit formulae for the $K$-groups of 2-graph $C^*$-algebras and to gain information on the $K$-groups of $k$-graph $C^*$-algebras for $k>2$. The rest of this paper is organised as follows. We begin in §\[S:Prelim\] by recalling the fundamental definitions relating to higher rank graphs and their $C^*$-algebras that we will need from [@KP00]. 
In §\[S:K-theory\] we use the fact that the $C^*$-algebra of a row-finite $k$-graph $\Lambda$ with no sources is stably isomorphic to a crossed product of an AF algebra, $B$, by ${{\mathbb{Z}}}^k$ ([@KP00 Theorem 5.5]) to apply a theorem of Kasparov [@K88 6.10 Theorem] and deduce that there is a homological spectral sequence ([@W94 Chapter 5]) converging to $K_*(C^*(\Lambda))$ with initial term given by $E^2_{p,q}\cong H_p({{\mathbb{Z}}}^k,K_q(B))$ (see [@Br94] for the definition of the homology of a group $G$ with coefficients in a left $G$-module $M$, denoted by $H_*(G,M)$). We will see that it suffices to compute $H_*({{\mathbb{Z}}}^k,K_0(B))$. It transpires that $H_*({{\mathbb{Z}}}^k,K_0(B))$ is given by the so-called vertex matrices of $\Lambda$. These are matrices with non-negative integer entries that encode the structure of the category $\Lambda$. Next we assemble the results of §\[S:K-theory\] and state them in our main theorem, Theorem \[T:Main\]. We then specialise to the cases $k=2$ and $k=3$. For $k=2$ a complete description of the $K$-groups in terms of the vertex matrices can be given. For $k=3$ we illustrate how Theorem \[T:Main\] can be used to give a description of the $K$-groups of 3-graph $C^*$-algebras under stronger hypotheses. In §\[S:unital\] we consider the $K$-theory of unital $k$-graph $C^*$-algebras. We show that the torsion-free rank of $K_0(C^*(\Lambda))$ is equal to that of $K_1(C^*(\Lambda))$ when $C^*(\Lambda)$ is unital and give formulae for the torsion-free rank and torsion parts of the $K$-groups of 2-graph $C^*$-algebras. We conclude with §\[S:examples\], in which we consider some immediate applications to the classification of $k$-graph $C^*$-algebras by means of the Kirchberg-Phillips classification theorem. We also consider some simple examples of $K$-group calculations using the results derived in the previous sections. 
This paper was written while the author was a European Union Network in Quantum Spaces – Non-Commutative Geometry funded post-doc at the University of Copenhagen. The paper develops a part of the author’s PhD thesis, which was written under the supervision of David E. Evans at Cardiff University. We would like to take this opportunity to thank David for his guidance and support, and Johannes Kellendonk and Ryszard Nest for enlightening discussions on homological algebra. We would also like to express our gratitude to the members of the operator algebras groups in both universities, for maintaining stimulating environments for research. Finally, we thank the referee for their careful reading, comments and suggestions, which helped to clarify the exposition of the paper. Preliminaries {#S:Prelim} ============= By the usual slight abuse of notation we shall let the set of morphisms of a small category $\Lambda$ be denoted by $\Lambda$ and identify an object of $\Lambda$ with its corresponding identity morphism. Also note that a monoid $M$ (and hence a group) can be considered as a category with one object and morphism set equal to $M$, with composition given by multiplication in the monoid. For convenience of notation, we shall denote a monoid and its associated category by the same symbol. The following notation will be used throughout this paper. We let ${{\mathbb{N}}}$ denote the abelian monoid of non-negative integers and we let ${{\mathbb{Z}}}$ be the group of integers. For a positive integer $k$, we let ${{\mathbb{N}}}^k$ be the product monoid viewed as a category. Similarly, we let ${{\mathbb{Z}}}^k$ be the product group viewed, where appropriate, as a category. Let $\{e_i\}_{i=1}^k$ be the canonical generators of ${{\mathbb{N}}}^k$ as a monoid and ${{\mathbb{Z}}}^k$ as a group. Moreover, we choose to endow ${{\mathbb{N}}}^k$ and ${{\mathbb{Z}}}^k$ with the coordinatewise order induced by the usual order on ${{\mathbb{N}}}$ and ${{\mathbb{Z}}}$, i.e. 
for all $m,n\in{{\mathbb{Z}}}^k$, $m\le n \iff n-m\in{{\mathbb{N}}}^k$. We will denote by ${{\mathbb{K}}}(\mathcal{H})$ the $C^*$-algebra of compact operators on a Hilbert space $\mathcal{H}$. When the Hilbert space $\mathcal{H}$ is separable and of infinite dimension we write ${{\mathbb{K}}}$ for ${{\mathbb{K}}}(\mathcal{H})$. The concept of a [*higher rank graph*]{} or $k$-graph ($k=1,2,\ldots$ being the rank) was introduced by A. Kumjian and D. Pask in [@KP00]. We recall their definition of a $k$-graph. A [**$k$-graph**]{} (rank $k$ graph or higher rank graph) $(\Lambda,d)$ consists of a countable small category $\Lambda$ (with range and source maps $r$ and $s$ respectively) together with a functor $d:\Lambda\longrightarrow {{\mathbb{N}}}^k$ satisfying the [**factorisation property:**]{} for every $\lambda\in\Lambda$ and $m,n\in{{\mathbb{N}}}^k$ with $d(\lambda)=m+n$, there are unique elements $\mu,\nu\in\Lambda$ such that $\lambda=\mu\nu$ and $d(\mu)=m,\,d(\nu)=n$. For $n\in{{\mathbb{N}}}^k$ and $v\in\Lambda^0$ we write $\Lambda^n:=d^{-1}(n),\;\Lambda(v):=r^{-1}(v)$ and $\Lambda^n(v):=\{\lambda\in\Lambda^n \,|\, r(\lambda)=v\}$. A $k$-graph $\Lambda$ is [**row-finite**]{} if for each $n\in{{\mathbb{N}}}^k$ and $v\in\Lambda^0$ the set $\Lambda^n(v)$ is finite. We say that $\Lambda$ has [**no sources**]{} if $\Lambda^n(v)\ne\emptyset$ for all $v\in\Lambda^0$ and $n\in{{\mathbb{N}}}^k$. Unless stated otherwise, we will assume that each higher rank graph in this paper is row-finite with no sources. Furthermore, we shall denote such a generic higher rank graph by $(\Lambda,d)$ (or more succinctly $\Lambda$ with the understanding that the degree functor will be denoted by $d$). We refer to [@MacL98] as an appropriate reference on category theory. There is no need for a detailed knowledge of category theory as we will be interested in the combinatorial graph-like nature of higher rank graphs. 
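To make the factorisation property concrete, here is a small Python model (our own illustration, not from [@KP00]) of the $k$-graph $\Delta_k$ discussed in the examples below: morphisms are pairs $(m,n)$ of integer $k$-vectors with $m\le n$ coordinatewise, $d(m,n)=n-m$, and composition $(m,l)(l,n)=(m,n)$; the unique factorisation $\lambda=\mu\nu$ is exhibited explicitly.

```python
# Illustration: the k-graph Delta_k, whose morphisms are pairs (m, n) of
# integer k-vectors with m <= n coordinatewise; the degree is d(m, n) = n - m.

def le(m, n):
    # the coordinatewise order on Z^k
    return all(a <= b for a, b in zip(m, n))

def d(mor):
    # the degree functor d(m, n) = n - m
    m, n = mor
    return tuple(b - a for a, b in zip(m, n))

def compose(f, g):
    """(m, l)(l, n) = (m, n); defined only when s(f) = r(g), i.e. f[1] == g[0]."""
    assert f[1] == g[0], "morphisms are not composable"
    return (f[0], g[1])

def factorise(lam, p):
    """The unique mu, nu with lam = mu nu, d(mu) = p and d(nu) = d(lam) - p."""
    m, n = lam
    mid = tuple(a + b for a, b in zip(m, p))
    assert le(m, mid) and le(mid, n), "need 0 <= p <= d(lam)"
    return (m, mid), (mid, n)

# lam has degree (2, 3); split it with d(mu) = (1, 1), so d(nu) = (1, 2):
lam = ((0, 0), (2, 3))
mu, nu = factorise(lam, (1, 1))
```

Here `compose(mu, nu)` recovers `lam`, and uniqueness is visible in the code: the intermediate object `mid = m + p` is forced by the degrees.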
As the name suggests, a higher rank graph can be thought of as a higher rank analogue of a directed graph. Indeed, every 1-graph is isomorphic (in the natural sense) to the category of finite paths of a directed graph ([@KP00 Example 1.3]). By [@KP00 Remarks 1.2] $\Lambda^0$ is the set of identity morphisms of $\Lambda$. Indeed, it is fruitful to view $\Lambda^0$ as a set of vertices and $\Lambda$ as a set of (coloured) paths with composition in $\Lambda$ being concatenation of paths. This viewpoint is discussed further in [@E03; @RSY03]. In the sequel we will use the following higher rank graph constructions devised by Kumjian and Pask. For further examples of $k$-graphs see, for example, [@KP00; @RSY03; @RSY04; @S06b]. ${}$\ \[exs:k-graph\_constructions\] 1. Let $\Delta_k$ be the category with morphism set equal to $\{(m,n)\in{{\mathbb{Z}}}^k\times{{\mathbb{Z}}}^k \;|\; m\le n\}$, object set equal to $\{(m,m)\;|\; m\in{{\mathbb{Z}}}^k\}$, structure maps defined by $r(m,n)=m,\; s(m,n)=n$ and composition defined by $(m,l)(l,n)=(m,n)$ for all $m,l,n\in{{\mathbb{Z}}}^k$. One may define a degree functor $d:\Delta_k{\longrightarrow}{{\mathbb{N}}}^k$ by $d(m,n)=n-m$ so that $(\Delta_k, d)$ is a $k$-graph. Furthermore, it is straightforward to check that $(\Delta_k,d)$ is row-finite and has no sources. 2. **The product higher rank graph ([[@KP00 Proposition 1.8]]{})**: Let $(\Lambda_1,d_1)$ and $(\Lambda_2,d_2)$ be rank $k_1,k_2$ graphs respectively, then their product higher rank graph $(\Lambda_1\times\Lambda_2,d_1\times d_2)$ is a $(k_1+k_2)$-graph, where $\Lambda_1 \times \Lambda_2$ is the product category and the degree functor $d_1\times d_2:\Lambda_1\times\Lambda_2 {\longrightarrow}{{\mathbb{N}}}^{k_1+k_2}$ is given by $d_1\times d_2(\lambda_1,\lambda_2)=(d_1(\lambda_1),d_2(\lambda_2))\in{{\mathbb{N}}}^{k_1}\times{{\mathbb{N}}}^{k_2}$ for $\lambda_1\in\Lambda_1$ and $\lambda_2\in\Lambda_2$. 3. 
**The skew-product higher rank graph ([[@KP00 Definition 5.1]]{})**: Given a countable group $G$, a $k$-graph $\Lambda$ and a functor $c:\Lambda\longrightarrow G$, the *skew-product* $k$-graph $G \times_c \Lambda$ consists of a category with object set identified with $G \times\Lambda^0$ and morphism set identified with $G \times \Lambda$. The structure maps are given by: $s(g,\lambda) = (gc(\lambda), s(\lambda))$ and $r(g,\lambda) = (g, r(\lambda))$. If $s(\lambda)=r(\mu)$ then $(g,\lambda)$ and $(gc(\lambda),\mu)$ are composable in $G \times_c \Lambda$ and $(g,\lambda)(gc(\lambda),\mu) = (g,\lambda\mu)$. The degree map is given by $d(g,\lambda)=d(\lambda)$. Furthermore, $G$ acts freely on $G\times_c\Lambda$ by $g\cdot(h,\lambda) = (gh,\lambda)$ for all $g,h\in G$ and $\lambda\in\Lambda$ (see [@KP00 Remark 5.6] and its preceding paragraph). To each row-finite $k$-graph with no sources, Kumjian and Pask associated a unique $C^*$-algebra in the following way. ([@KP00 Definitions 1.5]) Let $\Lambda$ be a row-finite $k$-graph with no sources. Then $C^*(\Lambda)$ is defined to be the universal $C^*$-algebra generated by a family $\{s_\lambda \;|\; \lambda\in\Lambda\}$ of partial isometries satisfying: 1. $\{s_v \;|\; v\in\Lambda^0\}$ is a family of mutually orthogonal projections, 2. $s_{\lambda\mu}=s_\lambda s_\mu$ for all $\lambda,\mu\in\Lambda$ such that $s(\lambda)=r(\mu)$, 3. $s_\lambda^*s_\lambda = s_{s(\lambda)}$ for all $\lambda\in\Lambda$, 4. for all $v\in\Lambda^0$ and $n\in{{\mathbb{N}}}^k$ we have $s_v=\sum_{\lambda\in\Lambda^n(v)} s_\lambda s_\lambda^*$. For $\lambda\in\Lambda$, define $p_\lambda:=s_\lambda s_\lambda^*$. A family of partial isometries satisfying (i)–(iv) above is called a **$*$-representation** of $\Lambda$. We consider the following $C^*$-algebras associated with the constructions noted in Examples \[exs:k-graph\_constructions\], which will be useful in the sequel. ${}$\ \[exs:k-graph\_C\*-constructions\] 1. 
Let $\Delta_k$ be the row-finite $k$-graph with no sources defined in Examples \[exs:k-graph\_constructions\].1. Then $C^*(\Delta_k)\cong {{\mathbb{K}}}(\ell^2({{\mathbb{Z}}}^k))$ since $\{e_{m,n}\;|\; m,n\in{{\mathbb{Z}}}^k\}$ is a complete system of matrix units if $e_{m,n}:=s_{(m,q)}s_{(n,q)}^*$ where $q:=\mbox{sup}\{m,n\}$ (cf. [@KP00 Examples 1.7 (ii)]). 2. Let $(\Lambda_i,d_i)$ be a row-finite $k_i$-graph with no sources for $i=1,2$. Then $$C^*(\Lambda_1\times \Lambda_2)\cong C^*(\Lambda_1)\otimes C^*(\Lambda_2)$$ by [@KP00 Corollary 3.5 (iv)].[^1] 3. Let $G$ be a countable group, $\Lambda$ a row-finite $k$-graph with no sources and $c:\Lambda\longrightarrow G$ a functor. Then the action of $G$ on $G\times_c\Lambda$ described in Examples \[exs:k-graph\_constructions\].3 induces an action $\beta:G{\longrightarrow}\operatorname{\mbox{Aut}}(C^*(G\times_c\Lambda))$ such that $\beta_g(s_{(h,\lambda)})=s_{(gh,\lambda)}$. Furthermore $C^*(G\times_c\Lambda)\rtimes_\beta G\cong C^*(\Lambda)\otimes {{\mathbb{K}}}(\ell^2(G))$ [@KP00 Theorem 5.7]. The $K$-groups of $k$-graph $C^*$-algebras {#S:K-theory} ========================================== For the remainder of this paper we shall denote by $B_\Lambda$ (or simply $B$ when there is no ambiguity) the $C^*$-algebra of the skew-product of a row-finite $k$-graph $(\Lambda,d)$, with no sources, by ${{\mathbb{Z}}}^k$ via the degree functor regarded as a functor into ${{\mathbb{Z}}}^k$, i.e. $B:=C^*({{\mathbb{Z}}}^k\times_d\Lambda)$, and by $\beta$ the action of ${{\mathbb{Z}}}^k$ on $B$ as described in Examples \[exs:k-graph\_C\*-constructions\].3. Note that by [@KP00 Corollary 5.3 and Theorem 5.5] and Takesaki-Takai duality [@T75] (or [@KP00 Theorem 5.7], cf. Examples \[exs:k-graph\_C\*-constructions\].3), $C^*(\Lambda)$ is stably isomorphic to the crossed product of an AF-algebra, $B$, by ${{\mathbb{Z}}}^k$, i.e. $B\rtimes_\beta {{\mathbb{Z}}}^k\cong C^*(\Lambda)\otimes {{\mathbb{K}}}$. 
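Before turning to the $K$-theory itself, the following toy Python sketch (our own illustration, built on a hypothetical two-vertex 1-graph) may help to keep the skew-product bookkeeping of Examples \[exs:k-graph\_constructions\].3 straight; here $G = \mathbb{Z}$ and $c = d$ is the path-length functor, as in the definition of $B$ above.

```python
# Toy model of Z x_d Lambda for a 1-graph Lambda: paths are non-empty strings
# of edge names, and a skew-product morphism is a pair (g, path).
EDGES = {"e": ("u", "v"), "f": ("v", "u")}   # edge -> (range, source)

def r_path(p):
    return EDGES[p[0]][0]

def s_path(p):
    return EDGES[p[-1]][1]

def d_path(p):
    return len(p)                    # the degree functor; here c = d

def r_skew(m):
    g, p = m                         # r(g, lam) = (g, r(lam))
    return (g, r_path(p))

def s_skew(m):
    g, p = m                         # s(g, lam) = (g + d(lam), s(lam))
    return (g + d_path(p), s_path(p))

def compose_skew(m1, m2):
    """(g, lam)(g + d(lam), mu) = (g, lam mu), defined when s(m1) = r(m2)."""
    (g, p), (_, q) = m1, m2
    assert s_skew(m1) == r_skew(m2), "not composable"
    return (g, p + q)

m1, m2 = (0, "e"), (1, "f")          # s(m1) = (1, "v") = r(m2)
m = compose_skew(m1, m2)             # the morphism (0, "ef")
```

The free $\mathbb{Z}$-action is simply $g\cdot(h,\lambda) = (g+h,\lambda)$, shifting the first coordinate; it is this shift that induces the action $\beta$ on $B$.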
Therefore $K_0(C^*(\Lambda))\cong K_0(B\rtimes_\beta{{\mathbb{Z}}}^k)$. It will be useful for us in the sequel to have an explicit description of how an isomorphism $K_0(C^*(\Lambda)){\longrightarrow}K_0(B\rtimes_\beta{{\mathbb{Z}}}^k)$ acts on the $K_0$-class of a canonical projection $p_v,\;v\in\Lambda^0$ in $C^*(\Lambda)$. To this end, we prefer to investigate how the isomorphism acts by using an alternative approach to that outlined above. This we do below by using a standard technique in $k$-graph $C^*$-algebra theory, namely by utilising the gauge-invariant uniqueness theorem for $k$-graph $C^*$-algebras [@KP00 Theorem 3.4]. \[thrm:K(C\*(Lambda))-K(Crossed-Product)\] Let $\Lambda$ be a row-finite $k$-graph with no sources. Then there exists a group isomorphism $\Phi_0:K_0(B\rtimes_\beta {{\mathbb{Z}}}^k){\longrightarrow}K_0(C^*(\Lambda))$ such that $\Phi_0([i_B(p_{(0,v)})])=[p_v]$ for all $v\in\Lambda^0$ (where we adopt the notation used in [@R88] for crossed-product $C^*$-algebras). Let $(B\rtimes_\beta{{\mathbb{Z}}}^k,\,i_B,\,i_{{{\mathbb{Z}}}^k})$ be a crossed product for the dynamical system $(B,\,{{\mathbb{Z}}}^k,\,\beta)$ in the sense of [@R88]. One checks that $\{t_{(\lambda,(m,n))} \;|\; (\lambda,(m,n))\in\Lambda\times\Delta \}$ is a \*-representation of $\Lambda\times\Delta$, where for $(\lambda,(m,n))\in\Lambda\times\Delta$ we let $t_{(\lambda,(m,n))}:=i_B(s_{(m,\lambda)})i_{{{\mathbb{Z}}}^k}(m+d(\lambda)-n)$. Moreover $C^*(t_\xi \;|\; \xi\in\Lambda\times\Delta)=B\rtimes_\beta{{\mathbb{Z}}}^k$. Thus by the universal property of $C^*(\Lambda\times\Delta)$, there exists a $*$-homomorphism $\pi:C^*(\Lambda\times\Delta){\longrightarrow}B\rtimes_\beta{{\mathbb{Z}}}^k$ such that $\pi(s_\xi)=t_\xi$ for all $\xi\in\Lambda\times\Delta$. 
Let $\alpha:{{\mathbb{T}}}^k{\longrightarrow}\operatorname{\mbox{Aut}}(B)$ denote the canonical gauge action on $B$ and let $\hat{\beta}:{{\mathbb{T}}}^k{\longrightarrow}\operatorname{\mbox{Aut}}(B\rtimes_\beta{{\mathbb{Z}}}^k)$ denote the dual action of $\beta$. There exists an action $\tilde{\alpha}$ of ${{\mathbb{T}}}^k$ on $B\rtimes_\beta{{\mathbb{Z}}}^k$ such that $i_B\alpha_z=\tilde{\alpha}_zi_B$ for all $z\in{{\mathbb{T}}}^k$. It is clear that setting $\gamma_{(z_1,z_2)}:=\tilde{\alpha}_{z_1z_2}\hat{\beta}_{z_2^{-1}}$ for all $(z_1,z_2)\in{{\mathbb{T}}}^{k}\times{{\mathbb{T}}}^k$ defines an action $\gamma$ of ${{\mathbb{T}}}^{2k}$ on $B\rtimes_\beta{{\mathbb{Z}}}^k$. Moreover, it satisfies $\pi\alpha^\times_z=\gamma_z\pi$ for all $z\in{{\mathbb{T}}}^{2k}$ where $\alpha^\times$ is the canonical gauge action on $C^*(\Lambda\times\Delta)$. Clearly $\pi(p_v)\neq 0$ for all $v\in(\Lambda\times\Delta)^0$, hence by the gauge-invariant uniqueness theorem [@KP00 Theorem 3.4] we see that $\pi:C^*(\Lambda\times\Delta){\longrightarrow}B\rtimes_\beta{{\mathbb{Z}}}^k$ is a $*$-isomorphism. For the zero element $0\in{{\mathbb{Z}}}^k$, we see that $p_{(0,0)}$ is a minimal projection in $C^*(\Delta_k)\cong{{\mathbb{K}}}$ (cf. Examples \[exs:k-graph\_C\*-constructions\].1). Therefore, the homomorphism given by $x\mapsto x\otimes p_{(0,0)}$ induces an isomorphism between $K_0(C^*(\Lambda))$ and $K_0(C^*(\Lambda)\otimes C^*(\Delta))$, which in turn is isomorphic to $K_0(C^*(\Lambda\times\Delta))$ (see Examples \[exs:k-graph\_C\*-constructions\].2) and $K_0(B\rtimes_\beta {{\mathbb{Z}}}^k)$. Let $\Psi:K_0(C^*(\Lambda)){\longrightarrow}K_0(B\rtimes_\beta{{\mathbb{Z}}}^k)$ be the composition of the preceding group isomorphisms. Then it follows easily that $\Psi([p_v])=[i_B(p_{(0,v)})]$. Setting $\Phi_0:=\Psi^{-1}$ completes the proof. 
Therefore we may apply [@K88 6.10 Theorem] to describe the $K$-groups of $C^*(\Lambda)$ by means of a homological spectral sequence with initial term given by $H_p({{\mathbb{Z}}}^k,K_q(B))$, i.e. the homology of the group ${{\mathbb{Z}}}^k$ with coefficients in the left ${{\mathbb{Z}}}^k$-module $K_q(B)$ [@Br94], where the ${{\mathbb{Z}}}^k$-action is given by $e_i\cdot m = K_0(\beta_{e_i})(m)$ for $i=1,\ldots,k$ (cf. the proof of [@RS01 Proposition 4.1]). First, we recall the definition of a homology spectral sequence from [@W94 §5] and the notion of convergence (see also [@MacL63]).[^2] \[D:spectral-sequence\] A *homology spectral sequence* (starting at $E^a$) consists of the following data: 1. A family $\{E^r_{p,q}\}$ of modules defined for all integers $p,q$ and $r\ge a$. 2. Maps $d^r_{pq} : E^r_{p,q} {\longrightarrow}E^r_{p-r,q+r-1}$ that are differentials in the sense that $d^r_{p-r,q+r-1} d^r_{pq}=0$. 3. Isomorphisms $E^{r+1}_{pq}{\longrightarrow}\ker(d^r_{pq})/\operatorname{im}(d^r_{p+r,q-r+1})$. We will denote the above data by $\{(E^r,d^r)\}$. The *total degree* of the term $E^r_{pq}$ is $n=p+q$. The homology spectral sequence is said to be *bounded* if for each $n$ there are only finitely many nonzero terms of total degree $n$ in $\{E^r_{pq}\}$, in which case, for each $p$ and $q$ there is an $r_0$ such that $E^r_{pq}\cong E^{r+1}_{pq}$ for all $r\ge r_0$. We write $E^\infty_{pq}$ for this stable value of $E^r_{pq}$. We say that a bounded spectral sequence *converges to ${\mathcal{K}}_*$* if we are given a family of modules $\{{\mathcal{K}}_n\}$, each having a finite filtration $$0=F_s({\mathcal{K}}_n)\subseteq \cdots \subseteq F_{p-1}({\mathcal{K}}_n) \subseteq F_p({\mathcal{K}}_n) \subseteq F_{p+1}({\mathcal{K}}_n) \subseteq \cdots \subseteq F_t({\mathcal{K}}_n)={\mathcal{K}}_n,$$ and we are given isomorphisms $E^\infty_{pq} {\longrightarrow}F_p({\mathcal{K}}_{p+q})/F_{p-1}({\mathcal{K}}_{p+q})$. 
\[L:E\^infty\] There exists a spectral sequence $\{(E^r,d^r)\}$ converging to $K_*(C^*(\Lambda)):=\{{\mathcal{K}}_n\}_{n\in{{\mathbb{Z}}}}$ where $${\mathcal{K}}_n:=\left\{\begin{array}{ll} K_0(C^*(\Lambda)) & \mbox{if $n$ is even}, \\ K_1(C^*(\Lambda)) & \mbox{if $n$ is odd}.\end{array}\right.$$ Moreover, for $p,q\in{{\mathbb{Z}}}$, $$E^2_{p,q}\cong \left\{\begin{array}{ll} H_p({{\mathbb{Z}}}^k,K_0(B)) & \mbox{if } p\in\{0,1,\ldots,k\} \mbox{ and $q$ is even}, \\ 0 & \mbox{otherwise},\end{array}\right.$$ $E^\infty_{p,q}\cong E^{k+1}_{p,q}$ and $E^{k+1}_{p,q}=0$ if $p\in{{\mathbb{Z}}}\backslash\{0,1,\ldots,k\}$ or $q$ is odd. The first assertion follows from [@K88 6.10 Theorem] applied to $B\rtimes_\beta{{\mathbb{Z}}}^k$, which is $*$-isomorphic to $C^*(\Lambda)\otimes{{\mathbb{K}}}$ by Theorem \[thrm:K(C\*(Lambda))-K(Crossed-Product)\], after noting that $K_*(B \rtimes_\beta {{\mathbb{Z}}}^k)$ coincides with its “$\gamma$-part” since the Baum-Connes Conjecture with coefficients in an arbitrary $C^*$-algebra is true for the amenable group ${{\mathbb{Z}}}^k$ for all $k\ge 1$. By the proof of [@K88 6.10 Theorem], $K_*(B\rtimes_\beta{{\mathbb{Z}}}^k)\cong K_*(D)$ for some $C^*$-algebra $D$ which has a finite filtration by ideals: $0\subset D_0\subset D_1\subset \cdots \subset D_k=D$ since the dimension of the universal covering space of the classifying space of ${{\mathbb{Z}}}^k$ is $k$. The spectral sequence we are considering is the spectral sequence $\{(E^r,d^r)\}$ in homology $K_*$ associated with the finite filtration $0\subset D_0\subset D_1\subset \cdots \subset D_k=D$ of $D$ ([@S81 §6]) which has $E^1_{p,q}=K_{(p+q \mod 2)}(D_p/D_{p-1})$ where $D_n=0$ for $n<0$ and $D_n=D$ for $n\ge k$. It follows easily that $E^r_{p,q}=0$ for $p\in{{\mathbb{Z}}}\backslash\{0,1,\ldots,k\}$, for all $q\in{{\mathbb{Z}}}$ and for all $r\ge 1$ and $E^\infty_{p,q} \cong E^{k+1}_{p,q}$ (see also [@S81 Theorem 2.1]). 
This combined with Kasparov’s calculation in the proof of [@K88 6.10 Theorem], giving $E^2_{p,q}\cong H_p({{\mathbb{Z}}}^k,K_q(B))$, along with the observation that $K_q(B)=0$ for odd $q$, since $B$ is an AF-algebra, proves the second assertion. Now we will compute $H_*({{\mathbb{Z}}}^k,K_0(B))$ in terms of the combinatorial data encoded in $\Lambda$. First, let us examine the structure of $B$, and hence $K_0(B)$, in a little more detail. \[L:structure\_of\_B\] Let $\Lambda$ be a row-finite $k$-graph with no sources. Then $$B = \overline{\bigcup_{n\in{{\mathbb{Z}}}^k} B_n},$$where $$B_n\!=\!{\overline{\operatorname{span}}}\{s_\lambda s_\mu^* | \lambda,\mu\in {{\mathbb{Z}}}^k \times_d \Lambda, s(\lambda)\!=\!s(\mu)\!=\!(n,v)\;\mbox{for some}\;v\in\Lambda^0 \}\cong\bigoplus_{v\in\Lambda^0} \!B_n(v),$$ and $$B_n(v):={\overline{\operatorname{span}}}\{s_\lambda s_\mu^* \;|\; \lambda,\mu\in {{\mathbb{Z}}}^k \times_d \Lambda,\; s(\lambda)=s(\mu)=(n,v) \}\cong {{\mathbb{K}}}(\ell^2(s^{-1}(v)))$$ for all $v\in \Lambda^0$ and $n\in{{\mathbb{Z}}}^k$. Follows immediately from the proofs of [@KP00 Lemma 5.4, Theorem 5.5] and the observation that for all $n\in{{\mathbb{Z}}}^k$ and $v\in\Lambda^0$, $s^{-1}((n,v))\subset {{\mathbb{Z}}}^k\times_d\Lambda$ may be identified with $s^{-1}(v)\subset\Lambda$ via $(n-d(\lambda),\lambda)\mapsto \lambda$ for all $\lambda\in s^{-1}(v)$. \[D:ZLambda\^0\] Let ${{\mathbb{Z}}}\Lambda^0$ be the group of all maps from $\Lambda^0$ into ${{\mathbb{Z}}}$ that have finite support under pointwise addition. For each $u\in\Lambda^0$, we denote by $\delta_u$ the element of ${{\mathbb{Z}}}\Lambda^0$ defined by $\delta_u(v)=\delta_{u,v}$ (the Kronecker delta) for all $v\in\Lambda^0$. Note that ${{\mathbb{Z}}}\Lambda^0$ is a free abelian group with free set of generators $\{\delta_u\;|\; u\in\Lambda^0\}$. Define the vertex matrices of $\Lambda$, $M_i$, by the following. 
For $u,v\in\Lambda^0$ and $i=1,2,\ldots,k$, $M_i(u,v):=|\{\lambda\in\Lambda^{e_i}\,|\,r(\lambda)=u, s(\lambda)=v\}|$. \[R:vertex\_mxs\_commute\] By the factorisation property, the vertex matrices of a $k$-graph pairwise commute [@KP00 §6]. \[C:matrix-endo\]Given a matrix $M$ with integer entries and index set $I$, by slight abuse of notation we shall, on occasion, regard $M$ as the group endomorphism ${{\mathbb{Z}}}\Lambda^0{\longrightarrow}{{\mathbb{Z}}}\Lambda^0$, defined in the natural way as $(Mf)(i)=\sum_{j\in I}M(i,j)f(j)$ for all $i\in I,\;f\in{{\mathbb{Z}}}\Lambda^0$. \[L:K\_0(B)\] For all $n,m\in{{\mathbb{Z}}}^k$ such that $m\le n$, let $A_m:={{\mathbb{Z}}}\Lambda^0$. Moreover, define homomorphisms $\jmath_{nm}:A_m{\longrightarrow}A_n$ by $\jmath_{mm}(f)=f$, $\jmath_{m+e_i, m}:=M_i^t$ for all $f\in A_m,\; u\in\Lambda^0,\;i\in\{1,\ldots,k\}$, and $\jmath_{m+e_i+e_j,m}:=\jmath_{m+e_i+e_j,m+e_i}\jmath_{m+e_i,m}$ for all $j\in\{1,\ldots,k\}$. Then $(A_m;\jmath_{nm})$ is a direct system of groups and $K_0(B)$ and $A:=\lim_{\rightarrow}(A_{m};\jmath_{nm})$ are isomorphic. It follows from Remarks \[R:vertex\_mxs\_commute\] that the connecting homomorphisms are well-defined and that $(A_m;\jmath_{nm})$ is a direct system. From Lemma \[L:structure\_of\_B\] we deduce that $\displaystyle K_0(B) \cong \lim_{\rightarrow} (K_0(B_n); K_0(\iota_{n,m}))$, where, for $m,n\in{{\mathbb{Z}}}^k$ with $m\le n$, $\iota_{n,m}: B_m {\longrightarrow}B_n$ are the inclusion maps [@W-O93 Proposition 6.2.9]. We also deduce that for $n\in{{\mathbb{Z}}}^k$ and $v\in \Lambda^0$, $K_0(B_n(v))$ is isomorphic to ${{\mathbb{Z}}}$ and is generated by the equivalence class comprising all minimal projections in $B_n(v)$, of which $p_\xi$ is a member for any $\xi\in s^{-1}(n,v)$. 
Therefore, $K_0(B_n) \cong \bigoplus_{v\in \Lambda^0}K_0(B_n(v))$ is generated by $\{[p_{(n,v)}]_n \;|\; v\in\Lambda^0\}$, where $[\,\cdot\,]_n$ denotes the equivalence classes of $K_0(B_n)$ for all $n\in {{\mathbb{Z}}}^k$. Thus the map $\psi_n: A_n \longrightarrow K_0(B_n)$ given by $f\mapsto \sum_{u\in\Lambda^0} f(u) [p_{(n,u)}]_n$ is a group isomorphism for all $n\in{{\mathbb{Z}}}^k$. The embedding $\iota_{n,m}:B_m \longrightarrow B_n$ is given by $$\iota_{n,m}(s_{(m-d(\lambda),\lambda)} s_{(m-d(\mu),\mu)}^*) = \sum_{\alpha\in\Lambda^{n-m}(s(\lambda))} s_{(m-d(\lambda),\lambda\alpha)} s_{(m-d(\mu),\mu\alpha)}^*$$ for all $\lambda,\mu\in\Lambda$. Therefore, $$\begin{aligned} K_0(\iota_{n+e_i, n}) \left( [p_{(n,v)}]_n \right) &=& \left[ \iota_{n+e_i, n} \left( p_{(n,v)} \right) \right]_{n+e_i} = \left[ \sum_{\alpha\in\Lambda^{e_i}(v)} p_{(n,\alpha)} \right]_{n+e_i}\\ &=& \sum_{u\in\Lambda^0} M_i(v,u)[p_{(n+e_i,u)}]_{n+e_i} \end{aligned}$$ and $$\begin{aligned} K_0(\iota_{n+e_i, n})\left(\sum_{v\in\Lambda^0} f(v) [p_{(n,v)}]_n\right) &=& \sum_{u\in\Lambda^0} \left( \sum_{v\in\Lambda^0} M_i(v,u)f(v) \right) [p_{(n+e_i,u)}]_{n+e_i}\\ &=& \sum_{u\in\Lambda^0} \jmath_{n+e_i,n}(f)(u)[p_{(n+e_i,u)}]_{n+e_i}. \end{aligned}$$ Thus the following square commutes for all $i\in\{1,2,\ldots,k\}$. $$\begin{CD} K_0(B_n) @>{K_0( \iota_{n+e_i, n} )}>> K_0(B_{n+e_i})\\ @A{\psi_n}AA @AA{\psi_{n+e_i}}A\\ A_n @>{\jmath_{n+e_i,n}}>> A_{n+e_i} \end{CD}$$ The result follows. Henceforth, we shall follow the notation introduced in Lemma \[L:K\_0(B)\] and its proof. Now we begin to examine the action of ${{\mathbb{Z}}}^k$ on $K_0(B)$ in terms of the description of $K_0(B)$ provided by Lemma \[L:K\_0(B)\]. \[L:K\_0(beta)\] Fix $i\in\{1,\ldots,k\}$ and define a homomorphism $\phi_{i,n}:A_n{\longrightarrow}A_n$ by $\phi_{i,n}:=M_i^t$ for all $n\in{{\mathbb{Z}}}^k$. 
Let $\phi_i: A {\longrightarrow}A$ be the homomorphism induced by the system of homomorphisms $\{\phi_{i,n}\;|\; n\in{{\mathbb{Z}}}^k\}$. Then $\psi\phi_i=K_0(\beta_{e_i})\psi$, where $\psi: A {\longrightarrow}K_0(B)$ is the isomorphism constructed in Lemma \[L:K\_0(B)\]. It follows from Remarks \[R:vertex\_mxs\_commute\] that $\phi_{i,n}\jmath_{nm}=\jmath_{nm}\phi_{i,m}$ for all $m,n\in{{\mathbb{Z}}}^k$ so that $\phi_i$ is well-defined for all $i\in\{1,\ldots,k\}$. Now, we let $\tilde{\psi}: A {\longrightarrow}\lim_{\rightarrow} (K_0(B_m); K_0(\iota_{n,m}))$ be the unique isomorphism such that $K_0(\iota_n)\circ \psi_n = \tilde{\psi} \circ \jmath_n$ for all $n\in{{\mathbb{Z}}}^k$ where $\psi_n:A_n {\longrightarrow}K_0(B_n): f \mapsto \sum_{u\in\Lambda^0} f(u)[p_{(n,u)}]_n$ (cf. proof of Lemma \[L:K\_0(B)\]). Then $\psi: A{\longrightarrow}K_0(B)$ is the composition of $\tilde{\psi}$ with the canonical isomorphism of $\lim_\rightarrow (K_0(B_n); K_0(\iota_{n,m}))$ onto $K_0(B)$. We will show that $$\begin{CD} K_0(B_n) @>{K_0(\iota_n)}>> K_0(B) \\ @V{\tilde{\phi}_{i,n}}VV @VV{K_0(\beta_{e_i})}V\\ K_0(B_n) @>{K_0(\iota_n)}>> K_0(B) \end{CD}$$ commutes for all $i=1,2,\ldots,k$ and $n\in{{\mathbb{Z}}}^k$ where $\iota_n:B_n {\longrightarrow}B$ is the inclusion map for all $n\in{{\mathbb{Z}}}^k$ and $\tilde{\phi}_{i,n}=\psi_n\circ\phi_{i,n}\circ \psi_n^{-1}$. For then the Lemma follows from the universal properties of direct limits. Fix $i\in\{1,\ldots,k\}$ and $n\in{{\mathbb{Z}}}^k$. Since $K_0(B_n)$ is generated by $\{[p_{(n,v)}]_n \;|\; v\in\Lambda^0 \}$, it suffices to show that $K_0(\beta_{e_i})\circ K_0(\iota_n)([p_{(n,v)}]_n) = K_0(\iota_n)\circ \tilde{\phi}_{i,n} ([p_{(n,v)}]_n)$ for all $v\in\Lambda^0$. 
To see that this holds, let $v\in\Lambda^0$; then $$K_0(\beta_{e_i})\circ K_0(\iota_n)([p_{(n,v)}]_n) = K_0(\beta_{e_i})([p_{(n,v)}]) = [p_{(n+e_i,v)}].$$ On the other hand, $$K_0(\iota_n)\circ \tilde{\phi}_{i,n} ([p_{(n,v)}]_n) = \sum_{u\in\Lambda^0}M_i(v,u)[p_{(n,u)}] = \sum_{\alpha\in\Lambda^{e_i}(v)}[p_{(n+e_i,\alpha)}] = [p_{(n+e_i,v)}].$$ Having established a description of $K_0(B)$ as a left ${{\mathbb{Z}}}^k$-module in terms of the structure of $\Lambda$, we are almost in a position to describe $H_*({{\mathbb{Z}}}^k,K_0(B))$. First we recall some relevant notions from homological algebra. It will be convenient to use multiplicative notation for the free abelian group ${{\mathbb{Z}}}^k$ on $k$ generators. Thus we set $G:= \langle s_i \;|\; s_i s_j=s_js_i \mbox{ for all } i,j\in\{1,\ldots,k\} \rangle$ and $R:={{\mathbb{Z}}}G$, the group ring of $G$ [@Br94]. An efficient method of computing $H_*({{\mathbb{Z}}}^k,K_0(B))$ is by means of a Koszul resolution $K({\mbox{\boldmath$x$}})$ for an appropriate regular sequence ${\mbox{\boldmath$x$}}$ on $R$ [@W94 Corollary 4.5.5]. By a regular sequence on $R$ we mean a sequence ${\mbox{\boldmath$x$}}=\{x_i\}_{i=1}^n$ of elements of $R$ such that (a) $(x_1,\ldots,x_n)R \ne R$; and (b) for $i=1,\ldots,n,\; x_i\not\in \mathcal{Z}(R/(x_1,\ldots,x_{i-1})R)$. In statements (a) and (b), we regard $R$ as an $R$-module; denote by $(x_1,\ldots,x_j)\;(j=1,\ldots,n)$ the ideal of $R$ generated by $\{x_i\}_{i=1}^j$ and $(x_1,\ldots,x_j)R$ the sub-$R$-module $\{r\cdot r' \;|\; r\in (x_1,\dots,x_j), r' \mbox{ in the $R$-module } R\}$ of $R$; and denote by $\mathcal{Z}(M)$ the set of zero-divisors on an $R$-module $M$, i.e. $\mathcal{Z}(M):=\{ r\in R \;|\; r\cdot m =0$ for some non-zero $m\in M \}$ (see [@Kap74 §3.1] for more details).
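In computational terms, the $G$-action just described is simply the action of the transposed vertex matrices on ${{\mathbb{Z}}}\Lambda^0$. The following Python sketch (purely illustrative, not part of any proof; the $2\times 2$ vertex matrices are hypothetical) shows that the pairwise commutativity of the vertex matrices, Remarks \[R:vertex\_mxs\_commute\], is exactly what makes the generator actions compatible:

```python
# Sketch: the generator s_i of G acts on A = Z.Lambda^0 (hence on K_0(B))
# by the transposed vertex matrix M_i^t, as in Lemma L:K_0(beta).
# M1, M2 below are hypothetical vertex matrices of a 2-graph on 2 vertices.

def mat_mul(A, B):
    """Product of integer matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def act(M, f):
    """Generator action on f in Z.Lambda^0: (M^t f)(u) = sum_v M(v,u) f(v)."""
    n = len(f)
    return [sum(M[v][u] * f[v] for v in range(n)) for u in range(n)]

M1 = [[1, 1],
      [1, 1]]
M2 = [[2, 0],
      [0, 2]]

# vertex matrices of a k-graph pairwise commute (factorisation property)
assert mat_mul(M1, M2) == mat_mul(M2, M1)

# consequently the two generator actions commute, giving a Z^2-module
f = [1, -2]
assert act(M1, act(M2, f)) == act(M2, act(M1, f))
```

The same check, run for every pair of generators, is what makes the connecting maps of the direct system in Lemma \[L:K\_0(B)\] well defined.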
It is straightforward to check that for any finite set of generators $\{t_1,\ldots,t_k\}$ of $G$, the subset ${\mbox{\boldmath$x$}}=\{x_i\}_{i=1}^k$, where $x_i:=1-t_i$ for $i=1,\ldots,k$, is a regular sequence on $R$. Following [@W94 §4.5] we will describe the Koszul complex, $K({\mbox{\boldmath$x$}})$, in terms of the exterior algebra of a free $R$-module [@AB74]. It will be convenient for us to describe the terms of the exterior algebra as follows. For any non-negative integer $l$, let ${\mathcal{E}}^l(R^k)$ denote the $l^{\mbox{\scriptsize th}}$ term of the exterior algebra of the free $R$-module $R^k:=\bigoplus_{i=1}^k R_i$, over $R$, where $R_i=R$ for $i=1,\ldots,k$. Moreover, for any negative integer $l$, let ${\mathcal{E}}^l(R^k)=\{0\}$. For $l\in{{\mathbb{Z}}}$ let $N_l:= \left\{\!\begin{array}{ll} \{ (\mu_1,\ldots,\mu_l)\in \{1,\ldots,k\}^l \;|\; \mu_1<\cdots<\mu_l \} & \!\!\mbox{if } l\in\{1,\ldots,k\},\\ \{\star\} & \!\!\mbox{if } l=0,\\ \emptyset & \!\!\mbox{otherwise.}\end{array}\right.$ For $l\in\{1,\ldots,k\}$ and $\mu=(\mu_1,\ldots,\mu_l)\in N_l, i=1,\ldots,l$ we let $$\mu^i:=\left\{\begin{array}{ll}(\mu_1,\mu_2\ldots,\mu_{i-1},\mu_{i+1},\mu_{i+2}\ldots,\mu_{l})\in N_{l-1} & \mbox{if } l\ne 1,\\ \star & \mbox{if } l=1.\end{array}\right.$$ For $n=1,2,\dots$ and $r\in{{\mathbb{Z}}}$, let $${n \choose r }:= \left\{\begin{array}{ll} \frac{n!}{(n-r)!r!} & \mbox{if } 0 \le r \le n, \\ 0 & \mbox{if } r<0 \mbox{ or } r>n.\end{array}\right.$$ Using the above notation we may describe the $l^{\mbox{\scriptsize th}}$-term ($l\in{{\mathbb{Z}}}$) of the exterior algebra over $R^k$, ${\mathcal{E}}^l(R^k)$, as being generated by the set $N_l$ as a free $R$-module and having rank $k \choose l$. 
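The combinatorics of the index sets $N_l$ and the face maps $\mu\mapsto\mu^i$ can be made concrete. In the Python sketch below (illustrative only), $N_l$ is enumerated as the set of strictly increasing $l$-tuples in $\{1,\ldots,k\}$, confirming the stated rank ${k \choose l}$ of ${\mathcal{E}}^l(R^k)$:

```python
# Sketch of the generating sets N_l and the face maps mu^i.
from itertools import combinations
from math import comb

def N(l, k):
    """N_l: strictly increasing l-tuples in {1,...,k}; N_0 = {star}."""
    if l == 0:
        return [()]
    if l < 0 or l > k:
        return []
    return list(combinations(range(1, k + 1), l))

def face(mu, i):
    """mu^i: drop the i-th entry (1-based) of mu, landing in N_{l-1}."""
    return mu[:i - 1] + mu[i:]

k = 4
# E^l(R^k) is free over R on the set N_l, so its rank is C(k, l)
assert all(len(N(l, k)) == comb(k, l) for l in range(k + 1))
assert face((1, 3, 4), 2) == (1, 4)
```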
Now let $K({\mbox{\boldmath$x$}})$ be the chain complex $$0 \longleftarrow {\mathcal{E}}^0(R^k) \longleftarrow {\mathcal{E}}^1(R^k) \longleftarrow \cdots \longleftarrow {\mathcal{E}}^k(R^k) \longleftarrow 0$$ where the differentials ${\mathcal{E}}^l(R^k){\longrightarrow}{\mathcal{E}}^{l-1}(R^k)$ are given by mapping $$\mu \mapsto \sum_{j=1}^l (-1)^{j+1} x_{\mu_j}\mu^j \qquad \mbox{for all } \mu=(\mu_1,\ldots,\mu_l)\in N_l,$$ if $l\in\{1,\ldots,k\}$ and the zero map otherwise. By [@W94 Corollary 4.5.5] $K({\mbox{\boldmath$x$}})$ is a free resolution of $R/I$ over $R$ where $I$ is the ideal of $R$ generated by ${\mbox{\boldmath$x$}}$. It is well known (see e.g. [@W94 Chapter 6],[@Br94 §I.2]) that $I=\ker \epsilon$ where $\epsilon:R{\longrightarrow}{{\mathbb{Z}}}: g \mapsto 1$ is the augmentation map of the group ring $R={{\mathbb{Z}}}G$. Thus we have a free (and hence projective) resolution of ${{\mathbb{Z}}}$ over ${{\mathbb{Z}}}G$, which we may use to compute $H_*(G, K_0(B))$ (see [@Br94 Chapter III]). \[L:Koszul\] Following the above notation, we have $H_*(G,K_0(B))$ isomorphic to the homology of the chain complex $$\mathcal{B}: 0 \longleftarrow K_0(B) \longleftarrow \cdots \longleftarrow \bigoplus_{N_l} K_0(B) \longleftarrow \cdots \longleftarrow K_0(B) \longleftarrow 0,$$ where the differentials $\tilde{\partial}_l:\bigoplus_{N_l} K_0(B) {\longrightarrow}\bigoplus_{N_{l-1}}K_0(B)\; (l\in\{1,\ldots,k\})$ are defined by $$\bigoplus_{\mu\in N_{l}} m_\mu \mapsto \bigoplus_{\lambda\in N_{l-1}} \sum_{\mu\in N_l} \sum_{i=1}^l (-1)^{i+1} \delta_{\lambda,\mu^i}(m_\mu-K_0(\beta_{e_{\mu_i}})(m_\mu)).$$ (Recall that the $G$-action on $K_0(B)$ is given by $s_i\cdot m=K_0(\beta_{e_i})(m)$ for all $m\in K_0(B),\;i=1,\dots,k$.) By definition $H_*(G,K_0(B))\cong H_*(K({\mbox{\boldmath$x$}})\otimes_G K_0(B))$, where the latter chain complex is obtained by applying the functor $-\otimes_G K_0(B)$ termwise to the chain complex $K({\mbox{\boldmath$x$}})$.
The Lemma follows from the fact that ${\mathcal{E}}^l(R^k)\otimes_G K_0(B)$ is canonically isomorphic to $\bigoplus_{N_l} K_0(B)$ ($l\in\{0,1,\ldots,k\}$) and from setting $t_i:=s_i^{-1}$ ($i=1,\ldots,k$) as our generators of $G$, which yields ${\mbox{\boldmath$x$}}$ as described above. For $m,n\in{{\mathbb{Z}}}^k$ with $m\le n$, let $\mathcal{A}^{(n)}$ be the chain complex $$0 \longleftarrow A_n \longleftarrow \cdots \longleftarrow \bigoplus_{N_l} A_n \longleftarrow \cdots \longleftarrow A_n \longleftarrow 0,$$ with $A_n={{\mathbb{Z}}}\Lambda^0$ and differentials, $\partial^{(n)}_l:\bigoplus_{N_l} A_n {\longrightarrow}\bigoplus_{N_{l-1}} A_n \;(l\in\{1,\ldots,k\})$, defined by $$\bigoplus_{\mu\in N_{l}} m_\mu \mapsto \bigoplus_{\lambda\in N_{l-1}} \sum_{\mu\in N_l} \sum_{i=1}^l (-1)^{i+1} \delta_{\lambda,\mu^i}(m_\mu - \phi_{\mu_i,n}(m_\mu)),$$ where for $i=1,\ldots,k$ and $n\in{{\mathbb{Z}}}^k$, $\phi_{i,n}$ is the homomorphism defined in Lemma \[L:K\_0(beta)\]. Furthermore, let $(\tau_m^n)_p: \mathcal{A}^{(m)}_p {\longrightarrow}\mathcal{A}^{(n)}_p$ be the homomorphism defined by $(\tau_m^n)_p(\bigoplus_{\mu\in N_p} m_\mu) = \bigoplus_{\mu\in N_p} \jmath_{nm}(m_\mu)$ for all $p\in\{0,1,\ldots,k\}$ and the trivial map for $p\in{{\mathbb{Z}}}\backslash\{0,1,\ldots,k\}$ (cf. Lemma \[L:K\_0(B)\]). Following [@S66 Chapter 4, §1], by a chain map $\tau:\mathcal{C}{\longrightarrow}\mathcal{C}'$ we mean a collection $\{\tau_p:\mathcal{C}_p{\longrightarrow}\mathcal{C}'_p\}$ of homomorphisms that commute with the differentials in the sense that commutativity holds in each square: $$\begin{CD} \mathcal{C}_p @>>> \mathcal{C}_{p-1} \\ @V\tau_pVV @VV\tau_{p-1}V\\ \mathcal{C}'_p @>>> \mathcal{C}'_{p-1} \end{CD}$$ Recall that there is a category of chain complexes whose objects are chain complexes and whose morphisms are chain maps. Moreover, the category of chain complexes admits direct limits.
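The differentials of $\mathcal{A}^{(n)}$ (and, via $\psi$, of $\mathcal{B}$) can be prototyped directly. The Python sketch below is illustrative only: it uses hypothetical pairwise-commuting vertex matrices for $k=3$ on two vertices, represents a degree-$l$ chain as a map $N_l\to{{\mathbb{Z}}}\Lambda^0$, and checks that two consecutive differentials compose to zero:

```python
# Sketch of the differential of the complex A^(n): mu is sent to the
# alternating sum over the faces mu^i of (1 - M_{mu_i}^t) applied to
# the value of the chain at mu.
from itertools import combinations

def one_minus_Mt(M, f):
    """(1 - M^t) f for f in Z.Lambda^0."""
    n = len(f)
    return [f[u] - sum(M[v][u] * f[v] for v in range(n)) for u in range(n)]

def boundary(chain, mats, n_vertices):
    """Chains indexed by N_l  ->  chains indexed by N_{l-1}."""
    out = {}
    for mu, f in chain.items():
        for j, gen in enumerate(mu):          # 0-based j: sign (-1)^{(j+1)+1}
            term = one_minus_Mt(mats[gen], f)
            acc = out.setdefault(mu[:j] + mu[j + 1:], [0] * n_vertices)
            for u in range(n_vertices):
                acc[u] += (-1) ** j * term[u]
    return out

# hypothetical vertex matrices of a 3-graph on two vertices (they commute)
M = {1: [[1, 1], [1, 1]], 2: [[2, 0], [0, 2]], 3: [[1, 0], [0, 1]]}

c2 = {mu: [1, -1] for mu in combinations(range(1, 4), 2)}   # a chain over N_2
c1 = boundary(c2, M, 2)
c0 = boundary(c1, M, 2)
# the composite of two consecutive differentials vanishes
assert all(all(x == 0 for x in f) for f in c0.values())
```

That the composite vanishes depends on the matrices commuting, which is exactly the Koszul condition on the regular sequence ${\mbox{\boldmath$x$}}$.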
Following the above notation, for each $m,n\in{{\mathbb{Z}}}^k$ the system of homomorphisms $\{(\tau_m^n)_p\;|\; p\in{{\mathbb{Z}}}\}$ defines a chain map $\tau_m^n:\mathcal{A}^{(m)}{\longrightarrow}\mathcal{A}^{(n)}$ such that $(\mathcal{A}^{(n)}; \tau_m^n)$ is a direct system in the category of chain complexes. Furthermore, $(\mathcal{B};\gamma_n)$ is a direct limit for $(\mathcal{A}^{(n)};\tau_m^n)$, where $\gamma_n:\mathcal{A}^{(n)}{\longrightarrow}\mathcal{B}$ is given by $(\gamma_n)_p(\bigoplus_{\mu\in N_p} m_\mu)=\bigoplus_{\mu\in N_p}\psi\jmath_n(m_\mu)$ for all $p\in\{0,1,\ldots,k\}$ and the trivial map otherwise. That $\tau_m^n : \mathcal{A}^{(m)}{\longrightarrow}\mathcal{A}^{(n)}$ is a chain map for all $m,n\in{{\mathbb{Z}}}^k$ follows immediately from the fact that $\phi_{i,n}\jmath_{nm}=\jmath_{nm}\phi_{i,m}$ for all $i\in\{1,\ldots,k\}$ and $m,n\in{{\mathbb{Z}}}^k$ (cf. proof of Lemma \[L:K\_0(beta)\]). That $(\mathcal{A}^{(n)}; \tau_m^n)$ is a direct system of chain complexes follows immediately from the fact that $(A_m;\jmath_{nm})$ is a direct system of groups (by Lemma \[L:K\_0(B)\]). Note that since $K_0(\beta_{e_i})\psi=\psi\phi_i$ for all $i\in\{1,\dots,k\}$ by Lemma \[L:K\_0(beta)\], a direct calculation shows that $\tilde{\partial}_p(\gamma_n)_p=(\gamma_n)_{p-1}\partial^{(n)}_p$ for all $p\in{{\mathbb{Z}}}$, and thus $\gamma_n:\mathcal{A}^{(n)}{\longrightarrow}\mathcal{B}$ is a chain map for all $n\in{{\mathbb{Z}}}^k$. The fact that $\gamma_m=\gamma_n\tau_m^n$ for all $m,n\in{{\mathbb{Z}}}^k$ follows immediately by construction of the maps. Now suppose that $(\mathcal{A};\tau_n)$ is a direct limit for $(\mathcal{A}^{(n)};\tau_m^n)$. Then by the above and the universal property of direct limits, there exists a morphism $\gamma:\mathcal{A} {\longrightarrow}\mathcal{B}$ such that $\gamma \tau_n = \gamma_n$ for all $n\in{{\mathbb{Z}}}^k$.
In order to show that $\gamma$ is an isomorphism it suffices to show that $\gamma_p:\mathcal{A}_p{\longrightarrow}\mathcal{B}_p$ is an isomorphism for all $p\in{{\mathbb{Z}}}$. This follows immediately from the fact that $\psi:A{\longrightarrow}K_0(B)$ is an isomorphism (Lemma \[L:K\_0(B)\]) and that direct limits commute with (finite) direct sums in the category of abelian groups in the obvious way. We have shown therefore that $(\mathcal{B};\gamma_n)$ is a direct limit for $(\mathcal{A}^{(n)};\tau_m^n)$ in the category of chain complexes. Note that the chain complex $\mathcal{A}^{(n)}$ does not actually depend on $n\in{{\mathbb{Z}}}^k$, thus for ease of notation we let $\mathcal{D}$ denote this common chain complex with differentials $\partial_p:=\partial^{(n)}_p$ for all $p\in{{\mathbb{Z}}}$. \[T:Hom\] Using the above notation, the homology of ${{\mathbb{Z}}}^k$ with coefficients in the left ${{\mathbb{Z}}}^k$-module $K_0(B)$ is given by the homology of the chain complex $\mathcal{D}$, i.e. we have $H_*(G,K_0(B))\cong {H}_*(\mathcal{D})$. The homology functor commutes with direct limits ([@S66 Chapter 4, §1, Theorem 7]), therefore it follows that $H_*(G,K_0(B))\cong \lim_{\rightarrow}( {H}_*(\mathcal{A}^{(n)}), {H}_*(\tau_m^n) )$. Thus, it suffices to prove that ${H}_p(\tau_m^{m+e_j})$ is the identity map for all $p\in{{\mathbb{Z}}},\;m\in{{\mathbb{Z}}}^k,\;j\in\{1,\ldots,k\}$. To see that this is true we show that $\bigoplus_{\mu\in N_p}(1-M_j^t)y_\mu\in \operatorname{im}\partial_{p+1}$ for all $y=\bigoplus_{\mu\in N_p}y_\mu\in \ker\partial_{p},\;p\in{{\mathbb{Z}}},\;j\in\{1,\ldots,k\}$. Indeed, we claim that given $y=\bigoplus_{\mu\in N_p} y_\mu \in \ker \partial_p$ we have $$\bigoplus_{\mu\in N_p} (1-M_j^t)y_\mu = \partial_{p+1}\left(\bigoplus_{\lambda\in N_{p+1}} z_{\lambda}\right)$$ where $z_{\lambda}=\sum_{i=1}^{p+1} (-1)^{i+1} \delta_{\lambda_i,j} y_{\lambda^i}$ for all $\lambda\in N_{p+1}$. Fix $j,p\in\{1,\ldots,k\}$. Let $\kappa(\mu):=\sum_{i=1}^p\delta_{j,\mu_i}i$, i.e.
if $j$ is a component of $\mu$, then $\kappa(\mu)$ denotes the unique $i\in\{1,\ldots,k\}$ such that $\mu_i=j$; otherwise $\kappa(\mu)=0$. Now fix a $\mu'=(\mu_1',\dots,\mu_p')\in N_p$ and let $y=\bigoplus_{\mu\in N_p} y_\mu$ be in $\ker \partial_p$. First, suppose that $\kappa(\mu')>0$ and let $\eta=(\mu')^{\kappa(\mu')}$. Then $$\begin{aligned} 0 &=& \partial_p(y)_\eta = \sum_{\mu\in N_p} \sum_{i=1}^p (-1)^{i+1} \delta_{\eta,\mu^i} (1-M_{\mu_i}^t) y_{\mu} \\ &=& \sum_{i=1}^p (-1)^{i+1} \delta_{\eta,(\mu')^i}(1-M_{\mu_i}^t)y_{\mu'} + \sum_{\mu\in N_p \atop \mu\ne\mu'} \sum_{i=1}^p (-1)^{i+1} \delta_{\eta,\mu^i}(1-M_{\mu_i}^t)y_\mu \\ &=& (-1)^{\kappa(\mu')+1}(1-M_j^t)y_{\mu'} + \sum_{\mu\in N_p \atop \mu\ne\mu'} \sum_{i=1}^p (-1)^{i+1} \delta_{\eta,\mu^i}(1-M_{\mu_i}^t)y_\mu, \end{aligned}$$so that $$\begin{aligned} (1-M_j^t)y_{\mu'} &=& \sum_{\mu\in N_p \atop \mu\ne \mu'} \sum_{i=1}^p(-1)^{i+\kappa(\mu')+1}\delta_{\eta,\mu^i}(1-M_{\mu_i}^t)y_\mu.\end{aligned}$$ Now $$\begin{aligned} \partial_{p+1}\left(\bigoplus_{\lambda\in N_{p+1}} z_{\lambda}\right)_{\mu'} &=& \sum_{\lambda\in N_{p+1}} \sum_{i,r=1}^{p+1} (-1)^{i+r+2} \delta_{\mu',\lambda^i}\delta_{j,\lambda_r}(1-M_{\lambda_i}^t)y_{\lambda^r}\\ &=& \sum_{\lambda\in N_{p+1} \atop \kappa(\lambda)>0} \sum_{i=1}^{p+1} (-1)^{i+\kappa(\lambda)} \delta_{\mu',\lambda^i}(1-M_{\lambda_i}^t)y_{\lambda^{\kappa(\lambda)}} \\ &=& \sum_{\lambda\in N_{p+1} \atop \kappa(\lambda)>0} \left\{\sum_{i=1}^{\kappa(\lambda)-1} (-1)^{i+\kappa(\mu')+1} \delta_{\mu',\lambda^i}(1-M_{\lambda_i}^t)y_{\lambda^{\kappa(\lambda)}}\right.
\\ &+& \left.\sum_{i=\kappa(\lambda)+1}^{p+1} (-1)^{i+\kappa(\mu')} \delta_{\mu',\lambda^i}(1-M_{\lambda_i}^t)y_{\lambda^{\kappa(\lambda)}} \right\}\\\end{aligned}$$ $$\begin{aligned} &=& \sum_{\mu\in N_p \atop \kappa(\mu)=0} \sum_{i=1}^p (-1)^{i+\kappa(\mu')+1} \delta_{\eta,\mu^i}(1-M_{\mu_i}^t)y_\mu\end{aligned}$$ since for every $\lambda\in N_{p+1}$ such that $\kappa(\lambda)>0$ and for every $i\in\{1,\dots,p+1\}\backslash\{\kappa(\lambda)\}$ we have 1. $\delta_{\mu',\lambda^{\kappa(\lambda)}}=0$, 2. $\kappa(\lambda)=\left\{\begin{array}{ll} \kappa(\mu')+1 & \mbox{if } \mu'=\lambda^i \mbox{ with } i < \kappa(\lambda), \\ \kappa(\mu') & \mbox{if } \mu'=\lambda^i \mbox{ with } \kappa(\lambda) < i,\end{array}\right.$ 3. $\mu' = \lambda^i \iff \eta = \left\{\begin{array}{ll} (\lambda^{\kappa(\lambda)})^i & \mbox{if } i<\kappa(\lambda), \\ (\lambda^{\kappa(\lambda)})^{i-1} & \mbox{if } \kappa(\lambda)<i,\end{array}\right.$ and, for $\mu\in N_p$, ($\kappa(\mu)=0$ and $\eta=\mu^i$ for some $i\in\{1,\ldots,k\}$) $\iff$ ($\mu\ne \mu'$ and $\eta=\mu^i$ for some $i\in\{1,\dots,k\}$). Hence, $$\begin{aligned} \partial_{p+1}\left( \bigoplus_{\lambda\in N_{p+1}} z_\lambda \right)_{\mu'} &=& \sum_{\mu\in N_p \atop \mu\ne \mu'} \sum_{i=1}^p (-1)^{i+\kappa(\mu')+1}\delta_{\eta,\mu^i}(1-M_{\mu_i}^t)y_\mu \\ &=& (1-M_j^t) y_{\mu'}.\end{aligned}$$ Now suppose that $\kappa(\mu')=0$. Then $$\begin{aligned} \partial_{p+1}\left( \bigoplus_{\lambda\in N_{p+1}} z_\lambda \right)_{\mu'} &=& \sum_{\lambda\in N_{p+1}} \sum_{i,r=1}^{p+1} (-1)^{i+r+2}\delta_{\mu',\lambda^i}\delta_{j,\lambda_r}(1-M_{\lambda_i}^t)y_{\lambda^r} \\ &=& (-1)^{\kappa(\xi)+\kappa(\xi)+2}(1-M_{\xi_{\kappa(\xi)}}^t)y_{\xi^{\kappa(\xi)}} \\ &=& (1-M_j^t)y_{\mu'},\end{aligned}$$ where $\xi$ is the unique element of $N_{p+1}$ satisfying $\kappa(\xi)>0$ and $\xi^{\kappa(\xi)}=\mu'$. Combining the results of this section we get the following theorem. \[T:Main\] Let $\Lambda$ be a row-finite $k$-graph with no sources.
Then there exists a spectral sequence $\{(E^r,d^r)\}$ converging to $K_*(C^*(\Lambda))$ with $E^\infty_{p,q}\cong E^{k+1}_{p,q}$ and $$E^2_{p,q}\cong \left\{\begin{array}{ll}{H}_p(\mathcal{D}) & \mbox{if } p\in\{0,1,\ldots,k\} \mbox{ and $q$ is even,}\\ 0 & \mbox{otherwise,}\end{array}\right.$$ where $\mathcal{D}$ is the chain complex with $$\mathcal{D}_p:=\left\{\begin{array}{ll} \bigoplus_{\mu\in N_p}{{\mathbb{Z}}}\Lambda^0 & \mbox{if }p\in\{0,1,\ldots,k\},\\ 0 & \mbox{otherwise.}\end{array}\right.$$ and differentials $$\partial_p:\mathcal{D}_p{\longrightarrow}\mathcal{D}_{p-1}: \bigoplus_{\mu\in N_{p}} m_\mu \mapsto \bigoplus_{\lambda\in N_{p-1}} \sum_{\mu\in N_p} \sum_{i=1}^p (-1)^{i+1} \delta_{\lambda,\mu^i}(1-M_{\mu_i}^t)m_\mu$$ for $p\in\{1,\ldots,k\}$. Specialising Theorem \[T:Main\] to the case when $k=2$ gives us explicit formulae to compute the $K$-groups of the $C^*$-algebras of row-finite $2$-graphs with no sources. \[P:k=2\] Let $\Lambda$ be a row-finite 2-graph with no sources and vertex matrices $M_1$ and $M_2$. Then there is an isomorphism $$\Phi:\operatorname{coker}(1-M_1^t,1-M_2^t)\oplus\ker{ M_2^t-1 \choose 1-M_1^t }{\longrightarrow}K_0(C^*(\Lambda))$$ such that $\Phi((\delta_u + \operatorname{im}\partial_1)\oplus 0)=[p_u]$ for all $u\in\Lambda^0$ (cf. Definition \[D:ZLambda\^0\]) and where we regard $(1-M_1^t,1-M_2^t):{{\mathbb{Z}}}\Lambda^0\oplus{{\mathbb{Z}}}\Lambda^0{\longrightarrow}{{\mathbb{Z}}}\Lambda^0$ and $\displaystyle{ M_2^t-1 \choose 1-M_1^t }:{{\mathbb{Z}}}\Lambda^0 {\longrightarrow}{{\mathbb{Z}}}\Lambda^0\oplus{{\mathbb{Z}}}\Lambda^0$ as group homomorphisms defined in the natural way. Moreover, we have $$K_1(C^*(\Lambda)) \cong \ker(1-M_1^t,1-M_2^t)/\operatorname{im}{ M_2^t-1 \choose 1-M_1^t }.$$ The Kasparov spectral sequence converging to $K_*(C^*(\Lambda))$ of Theorem \[T:Main\] has $E^\infty_{p,q}\cong E^3_{p,q}$ for all $p,q\in{{\mathbb{Z}}}$.
However, it follows from $E^2_{p,q}=0$ for odd $q$ that the differential $d^2$ is the zero map and $E^3_{p,q}\cong E^2_{p,q}\cong {H}_p(\mathcal{D})$ for all $p\in\{0,1,\ldots,k\}$ and even $q$, where $\mathcal{D}$ is the chain complex $$0 {\longleftarrow}{{\mathbb{Z}}}\Lambda^0 \stackrel{\partial_1}{{\longleftarrow}} {{\mathbb{Z}}}\Lambda^0\oplus{{\mathbb{Z}}}\Lambda^0 \stackrel{\partial_2}{{\longleftarrow}} {{\mathbb{Z}}}\Lambda^0 {\longleftarrow}0$$ with $\partial_1= (1-M_1^t,1-M_2^t)$ and $\displaystyle \partial_2={ M_2^t-1 \choose 1-M_1^t }$ for a suitable choice of bases. Convergence of the spectral sequence to $K_*(C^*(\Lambda))$ (Definition \[D:spectral-sequence\]) and the above mean that we have the following finite filtration of ${\mathcal{K}}_1=K_1(C^*(\Lambda))$: $$0=F_0({\mathcal{K}}_1)\subseteq F_1({\mathcal{K}}_1)=F_2({\mathcal{K}}_1)={\mathcal{K}}_1,$$ with $F_1({\mathcal{K}}_1)\cong H_1(\mathcal{D})$. Hence, $K_1(C^*(\Lambda))\cong H_1(\mathcal{D})$ as required. Now, we could proceed to obtain an isomorphism of $K_0(C^*(\Lambda))$ by use of the spectral sequence; however, we choose to use the Pimsner-Voiculescu sequence in order to deduce relatively easily the action of the isomorphism on basis elements. By applying the Pimsner-Voiculescu exact sequence to $B\rtimes{{\mathbb{Z}}}$ and $B\rtimes{{\mathbb{Z}}}^2\cong (B\rtimes{{\mathbb{Z}}})\rtimes{{\mathbb{Z}}}$ in succession, we may deduce that $K_0(i_B):K_0(B){\longrightarrow}K_0(B\rtimes{{\mathbb{Z}}}^2)$ factors through an injection $\Phi_1:H_0(\mathcal{B}){\longrightarrow}K_0(B\rtimes{{\mathbb{Z}}}^2)$, where $i_B$ is the canonical injection $i_B:B{\longrightarrow}B\rtimes{{\mathbb{Z}}}^2$ and $\mathcal{B}$ is the chain complex defined in Lemma \[L:Koszul\]. We may also deduce that there is an exact sequence that constitutes the first row of the following commutative diagram: $$\begin{CD} 0 @>>> H_0(\mathcal{B}) @>\Phi_1>> K_0(B\rtimes{{\mathbb{Z}}}^2) @>>> H_2(\mathcal{B}) @>>> 0\\ @.
@V\Phi_2VV @VV\Phi_0V @VVV @. \\ 0 @>>> H_0(\mathcal{D}) @>>> K_0(C^*(\Lambda)) @>>> H_2(\mathcal{D}) @>>> 0. \end{CD}$$ where all downward arrows are isomorphisms. In particular, $\Phi_0:K_0(B\rtimes{{\mathbb{Z}}}^2){\longrightarrow}K_0(C^*(\Lambda))$ is the isomorphism constructed in Theorem \[thrm:K(C\*(Lambda))-K(Crossed-Product)\] and $\Phi_2:H_0(\mathcal{B}){\longrightarrow}H_0(\mathcal{D})$ is one of the isomorphisms in Theorem \[T:Hom\]. Now $H_2(\mathcal{D})\subseteq{{\mathbb{Z}}}\Lambda^0$ is a free abelian group, thus the exact sequences split and we have an isomorphism $\Phi:H_0(\mathcal{D})\oplus H_2(\mathcal{D}) {\longrightarrow}K_0(C^*(\Lambda))$ such that $\Phi(g\oplus 0)= \Phi_0\Phi_1\Phi_2^{-1}(g)$ for all $g\in H_0(\mathcal{D})$. It is straightforward to check that $\Phi((\delta_u + \operatorname{im}\partial_1)\oplus 0)=[p_u]$ for all $u\in\Lambda^0$. Thus the Proposition is proved. Evidently, complications arise when $k>2$; however, it is worth noting that under some extra assumptions on the vertex matrices it is possible to determine a fair amount about the $K$-groups of higher rank graph $C^*$-algebras. For example, the case $k=3$ is considered below. \[P:k=3\] Let $\Lambda$ be a row-finite 3-graph with no sources.
Consider the following group homomorphisms defined by block matrices: $$\begin{aligned} \partial_1 &=& (1-M_1^t \;\; 1-M_2^t \;\; 1-M_3^t):\bigoplus_{i=1}^3{{\mathbb{Z}}}\Lambda^0 {\longrightarrow}{{\mathbb{Z}}}\Lambda^0,\\ \partial_2 &=& \left(\!\!\!\begin{array}{ccc}M_2^t-1 & M_3^t - 1 & 0 \\ 1-M_1^t & 0 & M_3^t -1 \\ 0 & 1-M_1^t & 1- M_2^t\end{array}\!\!\!\right):\bigoplus_{i=1}^3{{\mathbb{Z}}}\Lambda^0 {\longrightarrow}\bigoplus_{i=1}^3{{\mathbb{Z}}}\Lambda^0, \\ \partial_3 &=& \left(\!\!\!\begin{array}{c} 1-M_3^t \\ M_2^t -1 \\ 1-M_1^t \end{array}\!\!\!\right) : {{\mathbb{Z}}}\Lambda^0 {\longrightarrow}\bigoplus_{i=1}^3 {{\mathbb{Z}}}\Lambda^0.\end{aligned}$$ There exists a short exact sequence: $$0 {\longrightarrow}\operatorname{coker}\partial_1/G_0 {\longrightarrow}K_0(C^*(\Lambda)) {\longrightarrow}\ker\partial_2/\operatorname{im}\partial_3{\longrightarrow}0,$$ and $$K_1(C^*(\Lambda))\cong \ker\partial_1/\operatorname{im}\partial_2 \oplus G_1,$$ where $G_0$ is a subgroup of $\operatorname{coker}\partial_1$ and $G_1$ is a subgroup of $\ker\partial_3$. By Theorem \[T:Main\], there exist short exact sequences $$\begin{aligned} 0 {\longrightarrow}E^4_{0,0} {\longrightarrow}K_0(C^*(\Lambda)) {\longrightarrow}E^4_{2,-2}{\longrightarrow}0,\\ 0 {\longrightarrow}E^4_{1,0} {\longrightarrow}K_1(C^*(\Lambda)) {\longrightarrow}E^4_{3,-2}{\longrightarrow}0. \end{aligned}$$ However, since $E^4_{p,q}=0$ if $p\in{{\mathbb{Z}}}\backslash\{0,1,2,3\}$ the only non-zero components of the differential $d^3$ are $d^3_{3,q}:E^3_{3,q}{\longrightarrow}E^3_{0,q+2}$, where $q\in 2{{\mathbb{Z}}}$. Moreover, as in the proof of Proposition \[P:k=2\], the differential $d^2$ is the zero map. Thus we have $$\begin{array}{ll} E^4_{1,0}\cong E^3_{1,0}\cong E^2_{1,0}\cong {H}_1(\mathcal{D}), &E^3_{0,0}\cong E^2_{0,0}\cong H_0(\mathcal{D}), \\ E^4_{2,-2}\cong E^3_{2,-2} \cong E^2_{2,-2}\cong{H}_2(\mathcal{D}),& E^3_{3,-2}\cong E^2_{3,-2}\cong H_3(\mathcal{D}). 
\end{array}$$ Also note that $E^4_{3,-2}$ is isomorphic to $\ker d^3_{3,-2}\subseteq E^3_{3,-2}$, which is a subgroup of the free abelian group $H_3(\mathcal{D})\cong \ker \partial_3$. Thus $E^4_{3,-2}$ is itself a free abelian group, from which we deduce that the exact sequence for $K_1(C^*(\Lambda))$ splits. Hence the result follows by setting $G_0$ to be the image of $\operatorname{im}d^3_{3,-2}\subseteq E^3_{0,0}$ under the isomorphism $E^3_{0,0}{\longrightarrow}E^2_{0,0}{\longrightarrow}H_0(\mathcal{D})$, and $G_1$ to be the image of $\ker d^3_{3,-2}\subseteq E^3_{3,-2}$ under the isomorphism $E^3_{3,-2}{\longrightarrow}E^2_{3,-2}{\longrightarrow}H_3(\mathcal{D})$. Now we consider two cases for which we can describe the $K$-groups of the $C^*$-algebra of a row-finite $3$-graph with no sources in terms of its vertex matrices by the immediate application of Proposition \[P:k=3\]. \[C:k=3\] In addition to the hypothesis of Proposition \[P:k=3\]: 1. if $\partial_1$ is surjective then $$\begin{aligned} K_0(C^*(\Lambda)) &\cong& \ker \partial_2/ \operatorname{im}\partial_3, \\ K_1(C^*(\Lambda)) &\cong& \ker \partial_1/\operatorname{im}\partial_2 \oplus \ker \partial_3;\end{aligned}$$ 2. if $\bigcap_{i=1}^3 \ker (1-M_i^t)=0$ then $$\begin{aligned} K_1(C^*(\Lambda)) &\cong& \ker \partial_1/\operatorname{im}\partial_2\end{aligned}$$ and there exists a short exact sequence $$0 {\longrightarrow}\operatorname{coker}\partial_1 {\longrightarrow}K_0(C^*(\Lambda)) {\longrightarrow}\ker\partial_2/\operatorname{im}\partial_3{\longrightarrow}0.$$ To prove (1), note that $\operatorname{coker}\partial_1=0$, and thus the exact sequence for $K_0(C^*(\Lambda))$ collapses to give the result for $K_0(C^*(\Lambda))$. Also note that $0=\operatorname{coker}\partial_1=H_0(\mathcal{D})\cong E^3_{0,0}$. Therefore $d^3_{3,-2}:E^3_{3,-2}{\longrightarrow}E^3_{0,0}$ is the zero map and $\ker d^3_{3,-2}=E^3_{3,-2}\cong E^2_{3,-2}\cong H_3(\mathcal{D})=\ker \partial_3$.
Therefore, $G_1$ in Proposition \[P:k=3\] is $\ker\partial_3$ and (1) is proved. To prove (2), if $\bigcap_{i=1}^3 \ker(1-M_i^t)=0$ then $\ker \partial_3=0$, which implies that $G_1$ in Proposition \[P:k=3\] is the trivial group. It also follows that $E^3_{3,-2}=0$ so that $\operatorname{im}d^3_{3,-2}=0$ and $G_0$ in Proposition \[P:k=3\] is the trivial group. Whence (2) follows immediately from Proposition \[P:k=3\]. 1. One may recover [@Pa97 Theorem 3.1] from Theorem \[T:Main\] by setting $k$ equal to 1. 2. By [@KP00 Corollary 3.5 (ii)] a rank $k$ Cuntz-Krieger algebra ([@RS99; @RS01]) is isomorphic to a $k$-graph $C^*$-algebra. Thus, Proposition \[P:k=2\] generalises [@RS01 Proposition 4.1], the proof of which inspired the methods used throughout this paper. 3. By showing that the $C^*$-algebra of a row-finite 2-graph, $\Lambda$, with no sources and finite vertex set, satisfying some further conditions, is isomorphic to a rank 2 Cuntz-Krieger algebra, Allen, Pask and Sims used Robertson and Steger's [@RS01 Proposition 4.1] result to calculate the $K$-groups of $C^*(\Lambda)$ [@APS04 Theorem 4.1]. Moreover, in [@APS04 Remark 4.7. (1)] they note that their formulae for the $K$-groups hold for more general 2-graph $C^*$-algebras, namely the $C^*$-algebras of row-finite 2-graphs, $\Lambda$, with no sinks (i.e. $s^{-1}(v)\cap\Lambda^n\ne\emptyset$ for all $n\in{{\mathbb{N}}}^k,\;v\in\Lambda^0$) nor sources and finite vertex set. 4. The notion of associating a $C^*$-algebra, $C^*(\Lambda)$, to a $k$-graph $\Lambda$ was generalised by Raeburn, Sims and Yeend [@RSY03] to include the case where $\Lambda$ is finitely-aligned; a property identified by them to enable an appropriate $C^*$-algebra to be constructed. The family of finitely-aligned $k$-graphs and their associated $C^*$-algebras admit $k$-graphs with sources and those that are not row-finite.
In [@F06], Farthing devised a method of constructing, from an arbitrary finitely-aligned $k$-graph $\Lambda$ with sources, a row-finite $k$-graph with no sources, $\bar{\Lambda}$, which contains $\Lambda$ as a subgraph. If, in addition, $\Lambda$ is row-finite then Farthing showed that $C^*(\bar{\Lambda})$ is strongly Morita equivalent to $C^*(\Lambda)$ and thus has $K$-groups isomorphic to those of $C^*(\Lambda)$. Therefore, in principle, the results in this paper could be extended to the case where $\Lambda$ is row-finite but with sources. The $K$-groups of unital $k$-graph $C^*$-algebras {#S:unital} ================================================= Recall that if $\Lambda$ is a row-finite higher rank graph with no sources then $\Lambda^0$ finite is equivalent to $C^*(\Lambda)$ being unital ([@KP00 Remarks 1.6 (v)]). Thus in this section we specialise to the case where the vertex set of our higher rank graph, hence each vertex matrix, is finite. We will continue to denote the Kasparov spectral sequence converging to $K_*(C^*(\Lambda))$ of the previous section by $\{(E^r,d^r)\}$ and we shall denote the torsion-free rank of an abelian group $G$ by $r_0(G)$ (see e.g. [@Fu70]). \[P:tor-free\] If $\Lambda$ is a row-finite higher rank graph with no sources and $\Lambda^0$ finite then $K_0(C^*(\Lambda))$ and $K_1(C^*(\Lambda))$ have equal torsion-free rank. Let the rank of the given higher rank graph $\Lambda$ be $k$ and let $|\Lambda^0|=n$.
Since, $E^\infty_{p,q}\cong E^{k+1}_{p,q}$ for all $p,q\in{{\mathbb{Z}}}$ and $E^{k+1}_{p,q}=0$ if $p\in{{\mathbb{Z}}}\backslash\{0,1,\ldots,k\}$ or $q$ odd by Lemma \[L:E\^infty\], it follows from the definition of convergence of $\{(E^r,d^r)\}$ (Definition \[D:spectral-sequence\]) that there exist finite filtrations, $$0=F_{-1}({\mathcal{K}}_0) \subseteq E^{k+1}_{0,0} \cong F_0({\mathcal{K}}_0)\subseteq F_1({\mathcal{K}}_0) \subseteq \cdots \subseteq F_{k-1}({\mathcal{K}}_0)\subseteq F_k({\mathcal{K}}_0)={\mathcal{K}}_0,$$ and $$0=F_0({\mathcal{K}}_1) \subseteq E^{k+1}_{1,0} \cong F_1({\mathcal{K}}_1)\subseteq F_2({\mathcal{K}}_1) \subseteq \cdots \subseteq F_{k-1}({\mathcal{K}}_1) \subseteq F_k({\mathcal{K}}_1)={\mathcal{K}}_1$$ of ${\mathcal{K}}_0=K_0(C^*(\Lambda))$ and ${\mathcal{K}}_1=K_1(C^*(\Lambda))$ respectively, such that $$E^{k+1}_{p,q}\cong F_p({\mathcal{K}}_{p+q})/F_{p-1}({\mathcal{K}}_{p+q}).$$ Thus, $$\begin{aligned} r_0(K_0(C^*(\Lambda))) &=&r_0(F_k({\mathcal{K}}_0)) = r_0( F_{k-1}({\mathcal{K}}_0)) + r_0( E^{k+1}_{k,-k}) = \cdots \\ &=& r_0(F_0({\mathcal{K}}_0)) + \sum_{s \ge 1} r_0(E^{k+1}_{s,-s}) = \sum_{s\in{{\mathbb{Z}}}} r_0(E^{k+1}_{s,-s}),\end{aligned}$$ and $$\begin{aligned} r_0(K_1(C^*(\Lambda))) &=& r_0(F_k({\mathcal{K}}_1)) = r_0( F_{k-1}({\mathcal{K}}_1)) + r_0( E^{k+1}_{k,-k+1}) = \cdots \\ &=& r_0(F_1({\mathcal{K}}_1)) + \sum_{s \ge 2} r_0(E^{k+1}_{s,-s+1}) = \sum_{s\in{{\mathbb{Z}}}} r_0(E^{k+1}_{s,-s+1}).\end{aligned}$$ Now we claim that $$\sum_{s\in{{\mathbb{Z}}}} r_0(E^{k+1}_{s,-s}) - r_0(E^{k+1}_{s,-s+1}) = \sum_{s\in{{\mathbb{Z}}}} r_0(E^2_{s,-s}) -r_0(E^2_{s,-s+1}).$$ To see that this holds it is sufficient to prove that for all $r\ge 2$ we have $$\sum_{s\in{{\mathbb{Z}}}} r_0(E^{r+1}_{s,-s}) - r_0(E^{r+1}_{s,-s+1}) = \sum_{s\in{{\mathbb{Z}}}} r_0(E^r_{s,-s}) - r_0(E^r_{s,-s+1}).$$ Recall that for all $r\ge 1,\;p,q\in{{\mathbb{Z}}}, \; E^{r+1}_{p,q}\cong Z(E^r)_{p,q}/B(E^r)_{p,q}$ where $Z(E^r)_{p,q}=\ker d^r_{p,q}$ and 
$B(E^r)_{p,q}=\operatorname{im}d^r_{p+r,q-r+1}$. Thus $$\begin{aligned} r_0(E^{r+1}_{p,q}) &=& r_0(Z(E^r)_{p,q}) - r_0(B(E^r)_{p,q})\\ &=& r_0(Z(E^r)_{p,q}) - r_0(E^r_{p+r,q-r+1}) + r_0(Z(E^r)_{p+r,q-r+1})\end{aligned}$$ for all $r\ge 1,\;p,q\in{{\mathbb{Z}}}$. Moreover, it follows from the definition of the Kasparov spectral sequence that given any $r\ge 1$ and $p,q,q'\in{{\mathbb{Z}}}$ with $q=q' \mod 2$ there exist isomorphisms $\rho:E^r_{p,q}{\longrightarrow}E^r_{p,q'},\; \sigma:E^r_{p-r,q+r-1}{\longrightarrow}E^r_{p-r,q'+r-1}$ such that $d^r_{p,q'}\circ\rho=\sigma\circ d^r_{p,q}$. Therefore, $$\begin{aligned} \sum_{s\in{{\mathbb{Z}}}}r_0(E^{r+1}_{s,-s}) - r_0(E^{r+1}_{s,-s+1}) &=& \sum_{s\in{{\mathbb{Z}}}}\left\{ r_0(Z(E^r)_{s,-s}) - r_0(Z(E^r)_{s+r,-s-r+2})\right. \\ &-& r_0(Z(E^r)_{s,-s+1}) + r_0(Z(E^r)_{s+r,-s-r+1})\\ &+&\left. r_0(E^r_{s+r,-s-r+2}) - r_0(E^r_{s+r,-s-r+1})\right\}\\ &=& \sum_{s\in{{\mathbb{Z}}}} r_0(E^r_{s,-s}) - r_0(E^r_{s,-s+1}) \end{aligned}$$ for all $r\ge 1$. Combining the above gives $$r_0(K_0(C^*(\Lambda))) - r_0(K_1(C^*(\Lambda))) = \sum_{s\in{{\mathbb{Z}}}} r_0(E^2_{s,-s}) - r_0(E^2_{s,-s+1}).$$ Now, recall that for all $p\in{{\mathbb{Z}}}$ and $q\in 2{{\mathbb{Z}}}$, $E^2_{p,q}\cong H_p({{\mathbb{Z}}}^k,K_0(B))\cong \ker \partial_p/\operatorname{im}\partial_{p+1}$ by Theorem \[T:Hom\]. 
Therefore, $$\begin{aligned} && r_0(K_0(C^*(\Lambda))) - r_0(K_1(C^*(\Lambda))) = \sum_{s\in{{\mathbb{Z}}}} r_0(E^2_{2s,-2s}) - r_0(E^2_{2s+1,-2s}) \\ &=& \sum_{s\in{{\mathbb{Z}}}} r_0(\ker \partial_{2s}) - r_0(\operatorname{im}\partial_{2s+1}) - r_0(\ker\partial_{2s+1}) + r_0(\operatorname{im}\partial_{2s+2}) \\ &=& \sum_{s\in{{\mathbb{Z}}}} r_0(\ker \partial_{2s}) - r_0\left(\left(\bigoplus_{N_{2s+1}}{{\mathbb{Z}}}\Lambda^0\right)/\ker\partial_{2s+1}\right) - r_0(\ker\partial_{2s+1})\\ &+& r_0\left(\left(\bigoplus_{N_{2s+2}}{{\mathbb{Z}}}\Lambda^0\right)/\ker\partial_{2s+2}\right) \\ &=& \sum_{s\in{{\mathbb{Z}}}} r_0(\ker \partial_{2s}) - {k \choose 2s+1}n + r_0(\ker \partial_{2s+1}) - r_0(\ker \partial_{2s+1}) \\ &+&{k \choose 2s+2}n - r_0(\ker \partial_{2s+2}) \\ &=& \sum_{s\in{{\mathbb{Z}}}} \left\{{k \choose 2s} - {k \choose 2s-1} \right\}n \\ &=& \sum_{s\in{{\mathbb{Z}}}} \left\{ {k-1 \choose 2s} + {k-1 \choose 2s-1} - {k-1 \choose 2s-1} - {k-1 \choose 2s-2} \right\} n = 0.\end{aligned}$$ If $\Lambda$ is a row-finite higher rank graph with no sources and $\Lambda^0$ is finite then there exists a non-negative integer $r$ such that for $i=0,1$, $$K_i(C^*(\Lambda))\cong {{\mathbb{Z}}}^r\oplus T_i$$ for some finite group $T_i$, where ${{\mathbb{Z}}}^0:=\{0\}$. It is well-known that if $B$ is a finitely generated subgroup of an abelian group $A$ such that $A/B$ is also finitely generated then $A$ must be finitely generated too [@Fu70]. Now, for all $p,q\in{{\mathbb{Z}}}$, $E^{k+1}_{p,q}$ is isomorphic to a sub-quotient of the finitely generated abelian group $E^2_{p,q}\cong H_p({{\mathbb{Z}}}^k,K_q(B))$, therefore $E^{k+1}_{p,-p}$ is also finitely generated. Moreover, $E^{k+1}_{0,i}\cong F_0(K_i(C^*(\Lambda)))$ and for $p\in\{1,2,\ldots,k\},\; E^{k+1}_{p,-p+i}\cong F_p(K_i(C^*(\Lambda)))/F_{p-1}(K_i(C^*(\Lambda)))$, which implies that $K_i(C^*(\Lambda))=F_k(K_i(C^*(\Lambda)))$ is finitely generated. 
The result follows from Proposition \[P:tor-free\] by noting that every finitely generated abelian group $A$ is isomorphic to the direct sum of a finite group with ${{\mathbb{Z}}}^r$, where $r=r_0(A)$ (see e.g. [@Fu70 Theorem 15.5]). Note that it is well-known that when $k=1$ we always have $T_1=0$ in the above, i.e. $K_1(C^*(\Lambda))$ is torsion-free. However, for $k>1$, $K_1(C^*(\Lambda))$ may contain torsion elements. Formulae for the torsion-free rank and torsion parts of the $K$-groups of unital $C^*$-algebras of row-finite $2$-graphs with no sources can be given in terms of the vertex matrices (cf. [@RS01 Proposition 4.13]). This we do in Proposition \[P:Formulae\] below. \[P:Formulae\] Let $\Lambda$ be a row-finite 2-graph with no sources and finite vertex set. Then $$\begin{aligned} r_0(K_0(C^*(\Lambda))) &=& r_0(K_1(C^*(\Lambda)))\\ &=& r_0(\operatorname{coker}(1-M_1^t, 1-M_2^t)) + r_0(\operatorname{coker}(1-M_1, 1-M_2)),\\ \operatorname{tor}(K_0(C^*(\Lambda))) &\cong& \operatorname{tor}(\operatorname{coker}(1-M_1^t,1-M_2^t)),\\ \operatorname{tor}(K_1(C^*(\Lambda))) &\cong& \operatorname{tor}(\operatorname{coker}(1-M_1, 1-M_2)).\end{aligned}$$ We have already seen in Proposition \[P:tor-free\] that the torsion-free ranks of the $K_0$-group and $K_1$-group of a $k$-graph are equal, so it is sufficient to calculate the torsion-free rank of $K_0(C^*(\Lambda))$. Let $n:=|\Lambda^0|$. By Proposition \[P:k=2\] we have $$\begin{aligned} r_0(K_0(C^*(\Lambda))) &=& r_0(\operatorname{coker}(1-M_1^t, 1-M_2^t)) + r_0\left(\ker {1-M_1^t \choose 1-M_2^t}\right) \\ &=& r_0(\operatorname{coker}( 1-M_1^t, 1-M_2^t)) + n - r_0(\operatorname{im}(1-M_1, 1-M_2)) \\ &=& r_0(\operatorname{coker}(1-M_1^t, 1-M_2^t)) + r_0(\operatorname{coker}(1-M_1, 1-M_2)).\end{aligned}$$ Furthermore, the assertion about the torsion part of $K_0(C^*(\Lambda))$ is obvious.
The torsion part of $K_1(C^*(\Lambda))$ is given by $$\operatorname{tor}(K_1(C^*(\Lambda))) \cong \operatorname{tor}\left(\ker (1-M_1^t, 1-M_2^t)/\operatorname{im}{ M_2^t - 1 \choose 1-M_1^t}\right),$$ which is clearly isomorphic to $\operatorname{tor}(\operatorname{coker}{M_2^t-1 \choose 1-M_1^t})$. However, by reduction to Smith normal forms, $\operatorname{coker}{M_2^t-1 \choose 1-M_1^t}$ is isomorphic to $\operatorname{coker}(1-M_1, 1-M_2)$. We note that, in the case where $\Lambda$ is a row-finite $3$-graph with no sources and finite vertex set, and with $\partial_1,\partial_2$ defined as in Proposition \[P:k=3\], it is straightforward to show that if $\partial_1$ is surjective then $$K_0(C^*(\Lambda)) \cong K_1(C^*(\Lambda))\cong {{\mathbb{Z}}}^m,$$ where $m:=r_0(\ker\partial_2) - |\Lambda^0| = r_0(\operatorname{coker}\partial_2) - |\Lambda^0|$ (with ${{\mathbb{Z}}}^0:=0$). Applications and Examples {#S:examples} ========================= We begin this section with two corollaries to the results in the preceding section, which facilitate the classification of the $C^*$-algebras of row-finite $2$-graphs with no sources. We then end the paper with some simple illustrative examples. \[C:unit\] Let $\Lambda$ be a row-finite $2$-graph with no sources, finite vertex set and vertex matrices $M_1$ and $M_2$. Then there exists an isomorphism $$\Phi:\operatorname{coker}(1-M_1^t,1-M_2^t)\oplus\ker{ M_2^t-1 \choose 1-M_1^t }{\longrightarrow}K_0(C^*(\Lambda))$$ such that $\Phi(e+\operatorname{im}\partial_0)=[1]$, where $e(v)=1$ for all $v\in\Lambda^0$. Follows immediately from Proposition \[P:k=2\] and the fact that $\sum_{u\in\Lambda^0} p_u =1$. \[R:Kirchberg-Phillips\] We note that the $C^*$-algebra of a row-finite $k$-graph $\Lambda$, with no sources, is separable, nuclear and satisfies the UCT [@RSc87]. 
If in addition the $C^*$-algebra is simple and purely infinite we say that it is a Kirchberg algebra, and note that by the Kirchberg-Phillips classification theorem ([@K; @P00]) it is classifiable by its $K$-theory (see [@KP00 Theorem 5.5]). We also note that conditions on the underlying $k$-graph have been identified, which determine whether the $C^*$-algebra is simple ([@KP00 Proposition 4.8], [@RSi07 Theorem 3.2]) and purely infinite ([@S06 Proposition 8.8]). \[C:2-graphs-with-same-vertex-mxs\] Let $\Lambda$ and $\Delta$ be two row-finite $2$-graphs with no sources. Furthermore, suppose that $C^*(\Lambda)$ and $C^*(\Delta)$ are both simple and purely infinite, and that $\Lambda$ and $\Delta$ share the same vertex matrices. Then $C^*(\Lambda)\cong C^*(\Delta)$. Let $\Lambda$ and $\Delta$ be two $2$-graphs satisfying the hypotheses. Then, by Proposition \[P:k=2\] their $K$-groups are isomorphic. Suppose that the vertex set of $\Lambda$ (and hence that of $\Delta$) is infinite. Then $C^*(\Lambda)$ and $C^*(\Delta)$ are both non-unital, and thus stable, Kirchberg algebras with isomorphic $K$-groups. Thus by the Kirchberg-Phillips classification theorem $C^*(\Lambda)\cong C^*(\Delta)$. In the case where the vertex set of $\Lambda$ (and hence that of $\Delta$) is finite, $C^*(\Lambda)$ and $C^*(\Delta)$ are both unital Kirchberg algebras with isomorphic $K$-groups. Furthermore, by Corollary \[C:unit\] we see that the isomorphism of $K$-groups maps the $K_0$-class of the unit of one of the $C^*$-algebras onto that of the other. Therefore, by the Kirchberg-Phillips classification theorem we conclude that $C^*(\Lambda)\cong C^*(\Delta)$ and the result is proved. ${}$\ 1. Let $\Lambda$ be a row-finite $2$-graph with no sources. Suppose that the vertex matrices of $\Lambda$ are both equal to $M$, say.
By Proposition \[P:k=2\] and [@Pa97 Theorem 3.1] we have: $$\begin{aligned} K_i(C^*(\Lambda))&\cong& {{\mathbb{Z}}}\Lambda^0/\operatorname{im}(1-M^t) \oplus \ker (1-M^t)\\ &\cong& K_0(C^*(E))\oplus K_1(C^*(E)),\end{aligned}$$ for $i=0,1$, where $E$ is the $1$-graph with vertex matrix $M$. The isomorphism for $K_0(C^*(\Lambda))$ is immediately obvious and that for $K_1(C^*(\Lambda))$ is given by $${x \choose y} + \operatorname{im}{M^t-1 \choose 1-M^t} \mapsto (x + \operatorname{im}(1-M^t)) \oplus (x+y).$$ 2. Fix non-zero $n_1,n_2\in{{\mathbb{N}}}$ and let $\Lambda$ be a $2$-graph with one vertex and vertex matrices $M_1=(n_1)$ and $M_2=(n_2)$ respectively. By Proposition \[P:k=2\], we have $K_0(C^*(\Lambda))\cong K_1(C^*(\Lambda)) \cong {{\mathbb{Z}}}/g{{\mathbb{Z}}}$, where $g$ is the greatest common divisor of $n_1-1$ and $n_2-1$. Note that we recover the $K$-groups of tensor products of Cuntz algebras [@C77] by letting $\Lambda$ be the product $2$-graph of two $1$-graphs each with one vertex and a finite number of edges. We also note that tensor products of two Cuntz algebras are not the only examples of $C^*$-algebras of such $2$-graphs $\Lambda$ (cf. [@KP00 §6]). However, by Corollary \[C:2-graphs-with-same-vertex-mxs\], they are, up to $*$-isomorphism, the only examples of Kirchberg algebras arising from row-finite $2$-graphs with one vertex. 3. For each positive integer $n$, let ${\mbox{\boldmath$O$}}_n$ be the 1-graph with 1 vertex, $\star$, and $n$ edges (i.e. morphisms of degree 1), $\alpha_1,\alpha_2,\ldots,\alpha_n$. Let $c:{\mbox{\boldmath$O$}}_3 \times {\mbox{\boldmath$O$}}_n {\longrightarrow}{{\mathbb{Z}}}$ be the unique functor that satisfies $c(\alpha_i,\star)=\delta_{i,1}$ $(i=1,2,3)$ and $c(\star,\alpha_i)=1$ $(i=1,\ldots,n)$. Define $\Lambda$ to be the $2$-graph ${{\mathbb{Z}}}\times_c ({\mbox{\boldmath$O$}}_3\times {\mbox{\boldmath$O$}}_n)$. Let $T_i:=1-M_i^t$, where $M_1$ and $M_2$ are the vertex matrices of $\Lambda$.
Then $$\begin{aligned} \label{E:T1} T_1 \delta_u &=& -\delta_u-\delta_{u+1} \mbox{ and}\\ \label{E:T2} T_2 \delta_u &=& \delta_u - n\delta_{u+1},\end{aligned}$$ where we identify $\Lambda^0$ with ${{\mathbb{Z}}}$. Clearly, $\ker{-T_2\choose T_1}=0$. Now consider $\operatorname{coker}(T_1,T_2)$ and for each $g\in{{\mathbb{Z}}}\Lambda^0$ let $[g]$ be the image of $g$ under the natural homomorphism ${{\mathbb{Z}}}\Lambda^0{\longrightarrow}\operatorname{coker}(T_1,T_2)$. By (\[E:T1\]) and (\[E:T2\]) we have $(n+1) [\delta_u]=0$. Therefore, $\operatorname{coker}(T_1,T_2)$ is a cyclic group, generated by $[\delta_0]$ say, whose order divides $n+1$. We claim that $\rho[\delta_0]\ne 0$ for each $\rho=1,\ldots,n$. Suppose the contrary, then we have $$\rho\delta_0 = T_1x + T_2y$$ for some $x,y\in{{\mathbb{Z}}}\Lambda^0$ and $\rho\in\{1,\ldots,n\}$. Thus, for each $u\in\Lambda^0$ we have $$\begin{aligned} \rho\delta_0(u)&=& -x(u)-x(u-1)+y(u)-ny(u-1).\\\end{aligned}$$ Since $x,y\in{{\mathbb{Z}}}\Lambda^0$, there exists $N$ such that $x(u)=y(u)=0$ if $|u|>N$, which we assume, without loss of generality, to be greater than zero. It follows that $$\begin{aligned} y(-N)&=&x(-N),\\ y(u)&=& x(u)+(n+1)\sum_{j=0}^{N-1+u}n^{N-1+u-j}x(-N+j),\mbox{ if }-N+1\le u\le -1,\\ y(u) &=& x(u) + (n+1)\sum_{j=0}^{N-1+u}n^{N-1+u-j}x(-N+j) + \rho n^{u},\mbox{ if } u\ge 0.\end{aligned}$$ Setting $u=N+1$, we arrive at the contradiction $(n+1)|\rho n^{N+1}$. Therefore, by Proposition \[P:k=2\] $K_0(C^*(\Lambda))\cong {{\mathbb{Z}}}/(n+1){{\mathbb{Z}}}$. Now we turn our attention to $\ker(T_1,T_2)/\operatorname{im}{-T_2 \choose T_1}$. 
Suppose that $x\oplus y \in \ker(T_1,T_2)$, then there exists $N$ such that $x(u)=y(u)=0$ if $|u|>N$, $$\begin{aligned} \label{E:y-in-terms-of-x-1} y(-N) &=& x(-N)\mbox{ and}\\\label{E:y-in-terms-of-x-2} y(u)&=& x(u) + (n+1)\sum_{j=-N}^{u-1}n^{u-1-j}x(j)\mbox{ for all } u\ge -N+1.\end{aligned}$$ Let $P:{{\mathbb{Z}}}\Lambda^0\oplus{{\mathbb{Z}}}\Lambda^0{\longrightarrow}{{\mathbb{Z}}}\Lambda^0$ be the projection onto the second component, i.e. $P(x\oplus y)=y$. From (\[E:y-in-terms-of-x-1\]) and (\[E:y-in-terms-of-x-2\]) we see that $P$ is injective on $\ker(T_1,T_2)$ and thus induces an isomorphism $\ker(T_1,T_2)/\operatorname{im}{-T_2\choose T_1} \cong P(\ker(T_1,T_2))/\operatorname{im}T_1$. Moreover, $P(\ker(T_1,T_2))=\{y\in{{\mathbb{Z}}}\Lambda^0 \;|\; \sum_{j\in{{\mathbb{Z}}}} (-1)^jy(j)=0\}$. Now given $y\in P(\ker(T_1,T_2))$, define $z:\Lambda^0{\longrightarrow}{{\mathbb{Z}}}$ by $$\begin{aligned} z(u)&=& 0 \mbox{ if } u<-N,\\ z(u) &=& \sum_{j=-N}^u (-1)^{u-j+1}y(j) \mbox{ if } u\ge -N.\end{aligned}$$ Then it is straightforward to show that $z\in{{\mathbb{Z}}}\Lambda^0$ and $T_1 z=y$, and thus $\ker(T_1,T_2)/\operatorname{im}{-T_2\choose T_1}$ is the trivial group. Therefore, by Proposition \[P:k=2\] $K_1(C^*(\Lambda))=0$. Note that $\Lambda$ satisfies the hypotheses of [@KP00 Proposition 4.8] and [@S06 Proposition 8.8] and thus $C^*(\Lambda)$ is a (stable) Kirchberg algebra. It now follows from the Kirchberg-Phillips classification theorem that $C^*(\Lambda)$ is $*$-isomorphic to the stabilized Cuntz algebra $\mathcal{O}_{n+2}\otimes{{\mathbb{K}}}$. 4.
Let $c:{\mbox{\boldmath$O$}}_3\times{\mbox{\boldmath$O_3$}}\times{\mbox{\boldmath$O_3$}}{\longrightarrow}{{\mathbb{Z}}}_2$ be the unique functor that satisfies[^3] $$\begin{array}{rcl@{,\hspace{1cm}}rcl@{,\hspace{1cm}}rcl} c(\alpha_1,\star,\star)&=&0 & c(\star,\alpha_1,\star)&=&0 & c(\star,\star,\alpha_1)&=&1,\\ c(\alpha_2,\star,\star)&=&0 & c(\star,\alpha_2,\star)&=&0 & c(\star,\star,\alpha_2)&=&1,\\ c(\alpha_3,\star,\star)&=&1 & c(\star,\alpha_3,\star)&=&1 & c(\star,\star,\alpha_3)&=&1. \end{array}$$ Then the vertex matrices of $\Lambda:={{\mathbb{Z}}}_2\times_c ({\mbox{\boldmath$O$}}_3\times{\mbox{\boldmath$O_3$}}\times{\mbox{\boldmath$O_3$}})$ are $$M_1=M_2=\left(\begin{array}{cc} 1 & 2 \\ 2 & 1 \end{array}\right),\qquad M_3=\left(\begin{array}{cc} 0 & 3 \\ 3 & 0 \end{array}\right).$$ Following the notation in Proposition \[P:k=3\], for $i=1,2,3$, we have $$\begin{array}{rcl@{\hspace{5mm}}rcl} \partial_1 &=& \left(\begin{array}{rrrrrr} -1& -1& -1& -1& 1& -3 \\ -1& -1& -1& -1& -3& 1 \end{array}\right),& \partial_3 &=& \left(\begin{array}{rr} 1& -3 \\ -3& 1 \\ 1& 1 \\ 1& 1 \\ -1& -1 \\ -1& -1 \end{array}\right).\\ \partial_2 &=& \left(\begin{array}{rrrrrr} 1& 1& -1& 3& 0& 0 \\ 1& 1& 3& -1& 0& 0 \\ -1& -1& 0& 0& -1& 3 \\ -1& -1& 0& 0& 3& -1 \\ 0& 0& -1& -1& -1& -1 \\ 0& 0& -1& -1& -1& -1 \end{array}\right), \end{array}$$ To compute the $K$-groups of $C^*(\Lambda)$ we reduce the relevant matrices to their Smith normal forms (for a (not necessarily square) matrix $M$ we shall denote its Smith normal form by $S(M)$). In particular, $$U_1\partial_1V_1 = S(\partial_1)=S(\partial_3)^t=\left(\begin{array}{cccccc} 1& 0& 0& 0& 0& 0 \\ 0& 4& 0& 0& 0& 0 \end{array}\right)$$ for some invertible matrices $U_1,\;V_1$. 
Thus, $\operatorname{coker}\partial_1 \cong {{\mathbb{Z}}}/4{{\mathbb{Z}}}$, $\ker\partial_3=0$ and Corollary \[C:k=3\] can be applied to deduce that there exists a short exact sequence $$0 {\longrightarrow}{{\mathbb{Z}}}/4{{\mathbb{Z}}}{\longrightarrow}K_0(C^*(\Lambda)) {\longrightarrow}\ker\partial_2/\operatorname{im}\partial_3 {\longrightarrow}0$$ and $K_1(C^*(\Lambda))\cong \ker \partial_1/\operatorname{im}\partial_2$. Now $$U_2\partial_2 V_2 = S(\partial_2)= \left(\begin{array}{cccccc} 1& 0& 0& 0& 0& 0 \\ 0& 1& 0& 0& 0& 0 \\ 0& 0& 4& 0& 0& 0 \\ 0& 0& 0& 4& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \end{array}\right),$$ for some invertible matrices $U_2,\;V_2$, and we see that there exists an isomorphism (induced by $V_2^{-1}$ or by $U_1$) of $\ker \partial_2/\operatorname{im}\partial_3$ onto ${{\mathbb{Z}}}/4{{\mathbb{Z}}}$. Furthermore, we can now see that $\ker\partial_1/\operatorname{im}\partial_2$ is isomorphic to ${{\mathbb{Z}}}/4{{\mathbb{Z}}}\oplus{{\mathbb{Z}}}/4{{\mathbb{Z}}}$. Hence, $K_0(C^*(\Lambda))$ is a group of order 16 and $K_1(C^*(\Lambda))\cong {{\mathbb{Z}}}/4{{\mathbb{Z}}}\oplus{{\mathbb{Z}}}/4{{\mathbb{Z}}}$.[^4] We note that it is, perhaps, slightly surprising that $C^*(\Lambda)$ shares at least one of its $K$-groups with that of $\mathcal{O}_5\otimes\mathcal{O}_5\otimes\mathcal{O}_5$, given that this relationship is not obvious at the level of $C^*$-algebras. 5. Let $c:{\mbox{\boldmath$O$}}_2\times{\mbox{\boldmath$O_3$}}\times{\mbox{\boldmath$O_3$}}{\longrightarrow}{{\mathbb{Z}}}_2$ be the unique functor that satisfies $$\begin{array}{rcl@{\hspace{1cm}}rcl@{,\hspace{1cm}}rcl} c(\alpha_1,\star,\star)&=&0, & c(\star,\alpha_1,\star)&=&0 & c(\star,\star,\alpha_1)&=&1,\\ c(\alpha_2,\star,\star)&=&1, & c(\star,\alpha_2,\star)&=&1 & c(\star,\star,\alpha_2)&=&1,\\ &&& c(\star,\alpha_3,\star)&=&1 & c(\star,\star,\alpha_3)&=&1.
\end{array}$$ Then the vertex matrices of $\Lambda:={{\mathbb{Z}}}_2\times_c ({\mbox{\boldmath$O$}}_2\times{\mbox{\boldmath$O_3$}}\times{\mbox{\boldmath$O_3$}})$ are $$M_1=\left(\begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array}\right),\qquad M_2=\left(\begin{array}{cc} 1 & 2 \\ 2 & 1 \end{array}\right),\qquad M_3=\left(\begin{array}{cc} 0 & 3 \\ 3 & 0 \end{array}\right).$$ Now, for $i=1,2,3$, we have $$\begin{array}{lcr@{\hspace{5mm}}lcr} \partial_1 &=& \left(\begin{array}{rrrrrr} 0& -1& 0& -2& 1& -3 \\ -1& 0& -2& 0& -3& 1 \end{array}\right),&\partial_3 &=& \left(\begin{array}{rr} 1& -3 \\ -3& 1 \\ 0& 2 \\ 2& 0 \\ 0& -1 \\ -1& 0 \end{array}\right).\\ \partial_2 &=& \left(\begin{array}{rrrrrr} 0& 2& -1& 3& 0& 0 \\ 2& 0& 3& -1& 0& 0 \\ 0& -1& 0& 0& -1& 3 \\ -1& 0& 0& 0& 3& -1 \\ 0& 0& 0& -1& 0& -2 \\ 0& 0& -1& 0& -2& 0 \end{array}\right), \end{array}$$ As in the previous example we compute the Smith normal form of $\partial_1$ (and hence that of $\partial_3$) first, to determine whether Corollary \[C:k=3\] is applicable. We find that $$S(\partial_1)=S(\partial_3)^t=\left(\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{array}\right),$$ thus $\operatorname{coker}\partial_1\cong \ker \partial_3 \cong 0$ and we may apply Corollary \[C:k=3\] to deduce that $$K_0(C^*(\Lambda))\cong\ker \partial_2/\operatorname{im}\partial_3\quad\mbox{and}\quad K_1(C^*(\Lambda))\cong\ker\partial_1/\operatorname{im}\partial_2.$$ Now, $$S(\partial_2)=\left(\begin{array}{cccccc} 1& 0& 0& 0& 0& 0 \\ 0& 1& 0& 0& 0& 0 \\ 0& 0& 1& 0& 0& 0 \\ 0& 0& 0& 1& 0& 0 \\ 0& 0& 0& 0& 0& 0 \\ 0& 0& 0& 0& 0& 0 \end{array}\right).$$ It follows that both $\ker\partial_2/\operatorname{im}\partial_3$ and $\ker\partial_1/\operatorname{im}\partial_2$ are trivial. Thus, the $K$-groups of $C^*(\Lambda)$ are isomorphic to those of the Cuntz algebra $\mathcal{O}_2$.
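As a sanity check (not part of the proof), the Smith-normal-form computations in this example can be reproduced with a computer algebra system; the following sketch uses SymPy, with the boundary matrices $\partial_1,\partial_2,\partial_3$ copied from the arrays above.

```python
# Verify the chain-complex identities and the Smith normal form of d2
# for Example 5, using SymPy.  d1, d2, d3 are the boundary maps above.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

d1 = Matrix([[ 0, -1,  0, -2,  1, -3],
             [-1,  0, -2,  0, -3,  1]])
d2 = Matrix([[ 0,  2, -1,  3,  0,  0],
             [ 2,  0,  3, -1,  0,  0],
             [ 0, -1,  0,  0, -1,  3],
             [-1,  0,  0,  0,  3, -1],
             [ 0,  0,  0, -1,  0, -2],
             [ 0,  0, -1,  0, -2,  0]])
d3 = Matrix([[1, -3], [-3, 1], [0, 2], [2, 0], [0, -1], [-1, 0]])

# Consecutive boundary maps compose to zero (chain-complex property).
assert all(v == 0 for v in d1 * d2)
assert all(v == 0 for v in d2 * d3)

# d1 has full rank 2, so S(d1) = (1 0 ... ; 0 1 ...) as claimed.
assert d1.rank() == 2

# Invariant factors of d2: the diagonal of its Smith normal form.
S2 = smith_normal_form(d2, domain=ZZ)
diag = [abs(S2[i, i]) for i in range(6)]
print(diag)  # [1, 1, 1, 1, 0, 0]: ker d2/im d3 and ker d1/im d2 are trivial
```

Since all non-zero invariant factors equal 1, both quotients in the displayed formula for $K_0$ and $K_1$ are torsion-free and of rank zero, recovering the conclusion above.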
Furthermore, it is clear that $\Lambda$ satisfies the hypotheses of [@KP00 Proposition 4.8] and [@S06 Proposition 8.8], and therefore $C^*(\Lambda)$ is a unital Kirchberg algebra (cf. Remarks \[R:Kirchberg-Phillips\]). Applying the Kirchberg-Phillips classification theorem, we conclude that $C^*(\Lambda)\cong \mathcal{O}_2$. [10]{} , [*Proc. Amer. Math. Soc.*]{} [**134**]{} (2006), no. 2, 455–464, MR2176014 (2006e:46078), Zbl pre02227186. , [*Harper & Row Publishers, New York, 1974, Harper’s Series in Modern Mathematics*]{}. MR0366959 (51 \#3205), Zbl 0325.13001. , [*Graduate Texts in Mathematics, vol. 87, Springer-Verlag, New York*]{}, 1994, MR1324339 (96a:20072), Zbl 0584.20036. , [*Comm. Math. Phys*]{}. [**57**]{} (1977), no. 2, 173–185, MR0467330 (57 \#7189), Zbl 0399.46045. [to3em]{}. [A class of [$C\sp{\ast} $]{}-algebras and topological [M]{}arkov chains. [II]{}. [R]{}educible chains and the [E]{}xt-functor for [$C\sp{\ast} $]{}-algebras]{}, [*Invent. Math*]{}. [**63**]{} (1981), no. 1, 25–40, MR0608527 (82f:46073b), Zbl 0461.46047. , [*Invent. Math*]{}. [**56**]{} (1980), no. 3, 251–268, MR0561974 (82f:46073a), Zbl 0434.46045. , PhD Thesis, [*Univ. Wales*]{}, 2002. , (2006), arXiv:math/0603037v2. , [*Pure and Applied Mathematics, Vol. 36, Academic Press, New York*]{}, 1970, MR0255673 (41 \#333), Zbl 0209.05503. , revised ed., [*The University of Chicago Press, Chicago, Ill.-London*]{}, 1974, MR0345945 (49 \#10674), Zbl 0296.13001. , [*Invent. Math.*]{} [**91**]{} (1988), no. 1, 147–201, MR0918241 (88j:58123), Zbl 0647.46053. , [*Fields Inst. Commun, Amer. Math. Soc. Providence, to appear.*]{} , [*New York J. Math.*]{} [**6**]{} (2000), 1–20 (electronic), MR1745529 (2001b:46102), Zbl 0946.46044. , [*Die Grundlehren der mathematischen Wissenschaften, Bd. 114 Academic Press, Inc., Publishers, New York; Springer-Verlag, Berlin-Göttingen-Heidelberg*]{} 1963 x+422 pp. MR0156879 (28 \#122), Zbl 0133.26502. [to3em]{}.
[Categories for the Working Mathematician]{}, second ed., [*Graduate Texts in Mathematics, vol. 5, Springer-Verlag, New York*]{}, 1998, MR1712872 (2001j:18001), Zbl 0906.18001. , [*Operator algebras and quantum field theory (Rome, 1996), Internat. Press, Cambridge, MA*]{}, 1997, pp. 85–92, MR1491109 (98m:46072), Zbl 0921.46050. , [*Publ. Res. Inst. Math. Sci.*]{} [**32**]{} (1996), no. 3, 415–443, MR1409796 (97m:46111), Zbl 0862.46043. , [*Doc. Math.*]{} [**5**]{} (2000), 49–114 (electronic), MR1745197 (2001d:46086b), Zbl 0943.46037. , [*Proc. Edinburgh Math. Soc. (2)*]{} [**31**]{} (1988), no. 2, 321–330, MR0989764 (90d:46093), Zbl 0674.46038. , [*Proc. Edinb. Math. Soc. (2)*]{} [**46**]{} (2003), no. 1, 99–115, MR1961175 (2004f:46068), Zbl pre01925883. [to3em]{}. [The [$C\sp *$]{}-algebras of finitely aligned higher-rank graphs.]{} [*J. Funct. Anal.*]{} [**213**]{} (2004), no. 1, 206–240, MR2069786 (2005e:46103), Zbl 1063.46041. , [*Bull. Lond. Math. Soc.*]{} [**39**]{} (2007), no. 2, 337–344, MR2323468. , [*Proc. London Math. Soc. (3)*]{} [**72**]{} (1996), no. 3, 613–637, MR1376771 (98b:46088), Zbl 0869.46035. [to3em]{}, [Affine buildings, tiling systems and higher rank [C]{}untz-[K]{}rieger algebras]{}, [*J. Reine Angew. Math.*]{} [**513**]{} (1999), 115–144, MR1713322 (2000j:46109), Zbl pre01333057. [to3em]{}, [Asymptotic [$K$]{}-theory for groups acting on [$\tilde{A}\sb 2$]{} buildings]{}, [*Canad. J. Math.*]{} [**53**]{} (2001), no. 4, 809–833, MR1848508 (2002f:46141), Zbl 0993.46039. , [*Duke Math. J.*]{} [**55**]{} (1987), 431–474, MR 88i:46091, Zbl 644.46051. , [*Pacific J. Math.*]{} [**96**]{} (1981), no. 1, 193–211, MR0634772 (84g:46105a), Zbl 0426.46057. , [*Canad. J. Math.*]{} [**58**]{} (2006), no. 6, 1268–1290, MR2270926 (2007j:46095). [to3em]{}. [Relative Cuntz-Krieger algebras of finitely aligned higher-rank graphs.]{} [*Indiana Univ. Math. J.*]{} [**55**]{} (2006), no. 2, 849–868. 
, [*McGraw-Hill Book Co., New York*]{}, 1966, MR0210112 (35 \#1007), Zbl 0145.43303. , [*Internat. J. Math.*]{} [**2**]{} (1991), no. 4, 457–476, MR1113572 (92j:46120), Zbl 0769.46044. , [ *J. Funct. Anal.*]{} [**19**]{} (1975), 25–39. MR0365160 (51 \#1413), Zbl 0295.46088. , [*Oxford Science Publications, The Clarendon Press Oxford University Press, New York*]{}, 1993, MR1222415 (95c:46116), Zbl 0780.46038. , [*Cambridge Studies in Advanced Mathematics, vol. 38, Cambridge University Press, Cambridge*]{}, 1994, MR1269324 (95f:18001), Zbl 0797.18001. [^1]: The $C^*$-algebra of a row-finite higher rank graph with no sources is nuclear [@KP00 Theorem 5.5]. [^2]: The reader will notice that the definition is presented in a less general form than in [@W94 §5], but is adequate for our purposes. [^3]: We extend the definition of the product of two higher rank graphs (Examples \[exs:k-graph\_constructions\].2) to the product higher rank graph of three higher rank graphs in the natural way. Note that if $\Lambda_i$ is a $k_i$-graph for $i=1,2,3$ then both $(\Lambda_1 \times \Lambda_2)\times\Lambda_3$ and $\Lambda_1 \times (\Lambda_2 \times \Lambda_3)$ are clearly pairwise isomorphic as $(k_1+k_2+k_3)$-graphs to $\Lambda_1\times\Lambda_2\times\Lambda_3$ . [^4]: It is well-known that there are, up to isomorphism, 5 abelian groups of order 16.
--- abstract: | MAGIC (Major Atmospheric Gamma$-$ray Imaging Cherenkov Telescope) is a system of two 17-meter Cherenkov telescopes, sensitive to very high energy (VHE; $> 10^{11}$ eV) gamma radiation above an energy threshold of 50 GeV. The first telescope was built in 2004 and operated for five years in stand-alone mode. A second MAGIC telescope (MAGIC$-$II), at a distance of 85 meters from the first one, started taking data in July 2009. Together they form the MAGIC stereoscopic system. Stereoscopic observations have improved the MAGIC sensitivity and its performance in terms of spectral and angular resolution, especially at low energies. We report on the status of the telescope system and highlight selected recent results from observations of galactic and extragalactic gamma$-$ray sources. The variety of sources discussed includes pulsars, galactic binary systems, clusters of galaxies, radio galaxies, quasars, BL Lacertae objects and more. address: | ITPA, Universität Würzburg, Campus Hubland Nord,\ Emil-Fischer-Str. 31 D-97074 Würzburg, Germany\ $^*$E-mail: omar.tibolla@gmail.com ;\ Omar.Tibolla@astro.uni-wuerzburg.de author: - 'O. Tibolla$^*$ on behalf of the MAGIC collaboration' title: Recent Results from the MAGIC Telescopes --- MAGIC ===== The Major Atmospheric Gamma$-$ray Imaging Cherenkov Telescope (MAGIC) is a system of two 17$-$meter Atmospheric Cherenkov Telescopes (shown in Fig. \[m\]) located at the *Observatorio del Roque de los Muchachos* on the island of *La Palma*, 2200 meters above sea level. MAGIC-I has been in operation since 2004 and the stereoscopic system since 2009. MAGIC has an enhanced duty cycle of up to $\sim 17$%, as it is able to operate in the presence of moderate moonlight and twilight. The performance of the MAGIC stereoscopic system is reported in [@magic].
The low energy threshold of 50 GeV (or 25 GeV with a special trigger setup [@sumtrigger]) allows observations of the distant universe and overlaps with the energy range of the Fermi satellite; the angular resolution is $\sim 0.1^{\circ}$ at 100 GeV, down to $\sim 0.05^{\circ}$ above 1 TeV; the energy resolution is 20% at 100 GeV and goes down to 15% at 1 TeV. Another very important feature of the MAGIC telescopes is their light structure (ultralight carbon fiber frame), which allows fast repositioning (less than 20 seconds for a 180$^\circ$ repositioning) for fast follow-up observations of gamma-ray bursts (GRBs). In order to achieve easier maintenance and better sensitivity and performance (in particular for extended sources), on June 15$^{th}$ the MAGIC telescopes were shut down to perform a major upgrade of the hardware: - [Both telescopes will be equipped with a new 2 GSamples/s readout based on the DRS4 chip (linear, low dead time, low noise);]{} - [The camera of MAGIC-I will be upgraded to a clone of the MAGIC-II camera, i.e.
from 577 to 1039 pixels, to match the camera geometry and the trigger area of MAGIC-II (currently this is planned for 2012);]{} - [Both telescopes will be equipped with a *sumtrigger* (threshold $\sim$25 GeV [@sumtrigger]) covering the total conventional trigger area (planned for 2012 as well).]{} The MAGIC scientific program covers different aspects of high energy astrophysics: - [Galactic Objects: Supernova Remnants (SNRs), Pulsars and Pulsar Wind Nebulae (PWNe).]{} - [Extragalactic Objects: Active Galactic Nuclei (AGNs), starburst galaxies, clusters of galaxies and Gamma-Ray Bursts (GRBs).]{} - [Fundamental physics, such as the origin of Cosmic Rays (CRs; which can of course be studied indirectly, by means of studying SNRs for instance, but also directly, considering the showers initiated by primary CRs), Dark Matter (DM) searches and possible tests of Lorentz invariance violations.]{} Galactic observations --------------------- Recent MAGIC results on Galactic science are highlighted here in four sections: - [Crab Pulsar Wind Nebula.]{} - [Crab pulsar.]{} - [Extended sources (PWNe and SNRs).]{} - [X-ray binaries (XRBs) and Galactic microquasars.]{} ### Crab nebula The Crab nebula is the prototype of young PWNe and has been considered the “standard candle” of Imaging Atmospheric Cherenkov Telescopes (IACTs) so far; however, in the past year questions have been raised about its flux constancy: it was seen flaring in GeV gamma-rays (with both *AGILE* and *Fermi LAT* [@111]), year-scale variability has been observed in X-rays, and ARGO-YBJ [@222] reported an increase in the TeV gamma-ray flux. MAGIC observed the Crab during the September 2010 flare, finding no indication of variability above 300 GeV [@flare], and during the April 2011 flare (shown in Fig. \[fig\_crab\] and reported in [@crab_icrc]), where no variability was observed in the energy range 700 GeV - 10 TeV.
However, given the daily binning of the MAGIC data, any shorter-term variability cannot be excluded so far. ### Crab pulsar Most models for gamma-ray emission from pulsars (such as polar cap, outer or slot gap) predict exponential or super-exponential cut-offs in the pulsar spectral energy distribution at a few GeV, and this is indeed what *Fermi LAT* has observed in the 100 MeV - 10 GeV energy range (e.g. [@psr]) for many pulsars. Thanks to its low energy threshold MAGIC has the capability to test this trend; the Crab pulsar has been observed for 59 hours (between October 2007 and February 2009) with MAGIC-I, allowing the extraction of detailed phase$-$resolved spectra between 25 GeV and 100 GeV: the spectra show a power$-$law behavior and the cut-off extrapolation is ruled out at more than 5 standard deviations [@taka]. After fall 2009, the Crab pulsar was observed for 73 hours in stereoscopic mode; the phase$-$resolved spectra extracted from those observations agree with the ones obtained with MAGIC-I, their simple power-law behavior is confirmed and they extend well beyond a cut-off at few-GeV energies [@crab2]. Moreover, MAGIC data are in good agreement with *Fermi LAT* and VERITAS [@ver] observations; hence the “standard” models mentioned above cannot explain this observed behaviour. Is the Crab pulsar atypical, or do other pulsars also have such a VHE power-law tail? ### Extended sources Thanks to the improved stereoscopic system, MAGIC also performs better in studying extended sources. - [HESS J1857+026, a VHE unidentified gamma-ray source discovered by H.E.S.S. in 2008 and later suggested to be a PWN powered by the energetic pulsar PSR J1856+0245, was detected by MAGIC in 2010, allowing us to investigate its energy-dependent morphology [@1857]. Its spectrum fits well with a power law, consistent with the H.E.S.S.
one and its extrapolation; in order to agree with the LAT data a spectral turnover is needed at 10-100 GeV: this could be naturally explained by an Inverse Compton turnover, which would confirm the leptonic nature suggested for this source.]{} - [W51 was detected at GeV energies by *Fermi LAT* [@w51f] and at TeV energies by H.E.S.S. [@w51h]; its gamma-ray emission is thought to have its origin in the SNR/MC interaction. MAGIC clearly detected W51 in 2010 (more than 8 standard deviations in 31 hours of observation), confirming its extent ($\sim 0.16^{\circ}$) and the fact that the VHE emission spatially coincides with the shocked MC [@w51]. Its spectral shape would confirm the hadronic origin of the gamma-ray signal as well.]{} ### XRBs and Galactic microquasars Two nice examples of binaries have been observed by MAGIC in the last years: - [The high mass X-ray binary system (HXRB) LSI+61 303 consists of a compact object of unknown nature (either a neutron star or a black hole) orbiting a Be star of 13 solar masses, with a period of $\sim$27 days, and it is located at a distance of 2 kpc. It was discovered in VHE gamma-rays by MAGIC in 2005 and has been regularly monitored since then. In 2008, LSI+61 303 faded; however, in 2009 we managed to detect it during this low VHE state, and more recently, between Autumn 2010 and Spring 2011, the luminosity of this HXRB was back at the level at which it was first detected [@61303]; possible correlations with the superorbital periodicity ($\sim$4.6 years) observed in radio are currently under investigation.]{} - [HESS J0632+057 was discovered by H.E.S.S. in 2007; it was the first point-like unidentified source seen at VHE energies and the first binary discovery triggered by VHE observations. It was detected with MAGIC in 2011 [@mon] in coincidence with a high X-ray activity period. Currently this source is monitored with MAGIC, H.E.S.S.
and VERITAS, and hence is a very nice example of synergy among different IACTs.]{} However, there is another class of XRBs, i.e. objects that show an accretion disk around the compact object and jet-like structures orthogonal to it, the so-called Galactic microquasars, monitored by MAGIC but not detected so far: e.g. Cygnus X-1 has been observed for more than 100 hours leading to no detection, and, more recently, upper limits on GRS1915+105 and Scorpius X-1 have also been released by our collaboration [@X1] [@1915]; the current upper limits constrain the gamma-ray luminosity to be a very small fraction of the kinetic luminosity of the jets. Extragalactic observations -------------------------- The importance of an improved stereoscopic system is also reflected in extragalactic science; in fact, in the last 12 months seven extragalactic objects have been discovered at VHE energies thanks to MAGIC: 4 BL Lac objects (1ES1741+196, 1ES 1215+303, MAGIC J2001+435 and B3 2247+381), 2 radio galaxies (NGC 1275 and IC 310, visible in Fig. \[lllll\]) and one Flat Spectrum Radio Quasar (PKS 1221+21). Another successful strategy to discover new extragalactic objects in VHE gamma-rays is represented by the optical trigger, i.e. regularly monitoring candidate sources with the optical KVA telescope in La Palma (close to the MAGIC site) and observing the candidates with MAGIC during their high optical states. This led us to discover several BL Lac objects, such as Mrk180, 1ES1011+496, S5 0716+714, B3 2247+381 and 1ES1215+303. Another crucial improvement is represented by the complementarity with gamma-ray satellites, which allows us to generate much more detailed Spectral Energy Distributions of the sources, by covering in detail the high-energy component over 5 decades and, by monitoring these sources also at radio, optical and X-ray wavelengths, covering simultaneously more than 17 decades in energy (e.g. [@421]).
However, nowadays extragalactic VHE science is no longer restricted to BL Lac objects: MAGIC has also successfully detected radio galaxies (such as M87, IC 310 and NGC 1275, e.g. [@rg]) and FSRQs [@279] [@1222].

### Quasars, the most distant objects

After the surprising detection by MAGIC, 3C 279 (the most distant object ever detected at VHE; z=0.536) has been re-observed by MAGIC [@279], confirming that its emission is harder than expected and showing that the Universe is more transparent to gamma-rays than previously predicted. This low upper limit on the Extragalactic Background Light (EBL) has been confirmed with the discovery at VHE of another FSRQ: PKS 1222+21 [@1222]. Observations of PKS 1222+21 and 3C 279 show the same features:

- [Emission up to hundreds of GeV.]{}

- [Fast variability (e.g. in PKS 1222+21 we saw 9-minute doubling times).]{}

- [No signs of intrinsic cut-off.]{}

Reconciling such hard spectra with such fast variability is still a challenge for the theoretical models of photon emission in this type of source. In fact, in the standard picture [@der] [@ghi] [@sik], if gamma-rays are produced outside the Broad Line Region (BLR) by Inverse Compton scattering of dusty torus photons, we can explain the smaller-than-expected absorption, but it is hard to explain the fast variability; in contrast, if gamma-rays are produced inside the BLR by Inverse Compton scattering of BLR photons, we can explain the variability, but we expect strong absorption and Klein-Nishina suppression (i.e. a cut-off at energies lower than 100 GeV). More complex models are currently under evaluation, such as strong recollimations of the jet, the presence of blobs or minijets inside the jet, or the so-called two-zone model (i.e. a large emission zone inside the BLR plus a small blob outside).
### GRBs

As mentioned in section \[magic\], MAGIC was especially designed to search for the prompt emission of GRBs, thanks to its fast repositioning (less than 20 seconds for 180$^{\circ}$) performed automatically after an alert. We are observing on average $\sim$1 GRB/month and so far none has been detected at VHE energies. Recently, following an X-ray detection, GRB110328 was observed. After a multiwavelength follow-up, it turned out to be hardly classifiable as a GRB due to its long-lasting activity. The nature of this source is still uncertain [@grb].

Fundamental physics
-------------------

### Cosmic Rays

The first MAGIC results on the CR electron spectrum, visible in Fig. \[ele\], are based on 14 hours of data taken in 2009-2010: the e$^{\pm}$ spectrum has been measured in the energy range between 100 GeV and 3 TeV. The energy distribution agrees well with the previous measurements of H.E.S.S. and Fermi LAT (and the peak detected by ATIC can be neither excluded nor confirmed) [@e]. A related initiative (following the experience of the ARTEMIS experiment) consists in probing the e$^{+}/$e$^{-}$ ratio at 300-700 GeV by measuring the shadowing of the CR flux by the Moon. This measurable effect (for 50 hours of observations) has been estimated to be $\sim$4.4% of the Crab flux in the range 300-700 GeV; given its small observability window (the most favorable observation periods are the spring equinox and the autumn equinox for e$^{+}$ and e$^{-}$, respectively), this measurement could be possible with MAGIC by integrating data over a few years (for details see [@e2]).

### Dark Matter: indirect searches

Supersymmetric (SUSY) extensions of the Standard Model foresee the existence of stable, weakly interacting particles (e.g. the lightest neutralino) which could account for part of the dark matter in the Universe. The annihilation of neutralinos may give rise to gamma-rays in the energy range accessible to MAGIC.
Such signals have been sought with MAGIC in several targets:

- [Galaxy clusters. MAGIC searches concentrated on the Perseus cluster, which is really challenging for several reasons: (1) the presence of NGC 1275 and IC 310, (2) the expected flux is much smaller than the one coming from CRs, and (3) it could possibly have a very extended DM profile.]{}

- [Unidentified Fermi objects. 1FGL J0338.8+1313 and 1FGL J2347.3+0710 show a hard spectrum in *Fermi LAT* data and could be DM micro-spikes in the Galactic halo; they have been observed for relatively short exposures ($\sim$10 hours), leading to no detection so far [@dm].]{}

- [Dwarf spheroidal galaxies. Several of them were observed in the past (such as Draco and Willman-1) and recently Segue-1, i.e. the most DM-dominated object known so far ($M/L > 1000$), has been observed for 29 hours, showing no significant excess [@segue].]{}

DM indirect searches have led to no detections so far and the derived upper limits are still above theoretical expectations.

Acknowledgments
===============

The MAGIC Collaboration would like to thank the Instituto de Astrofisica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The support of the German BMBF and MPG, the Italian INFN, the Swiss National Fund SNF, and the Spanish MICINN is gratefully acknowledged. This work was also supported by the Marie Curie program, by the CPAN CSD2007-00042 and MultiDark CSD2009-00064 projects of the Spanish Consolider-Ingenio 2010 programme, by grant DO02-353 of the Bulgarian NSF, by the YIP of the Helmholtz Gemeinschaft, by the DFG Cluster of Excellence “Origin and Structure of the Universe”, by the DFG Collaborative Research Centers SFB823/C4 and SFB876/C3, and by the Polish MNiSzW grant 745/N-HESS-MAGIC/2010/0.

[10]{} The MAGIC collaboration, arXiv:1108.1477 (2011). Rissi et al., IEEE, 56, 3840 (2009). Baldini et al. (Fermi LAT collaboration), these proceedings.
Marsella et al. (ARGO YBJ collaboration), these proceedings. Mariotti et al. (MAGIC collaboration), ATel. 2967 (2011). Zanin et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). Abdo et al. (Fermi LAT collaboration), ApJS, 187, 460 (2010). The MAGIC collaboration, arXiv:1108.5391 (2011). The MAGIC collaboration, arXiv:1109.6124 (2011). Aliu et al. (VERITAS collaboration), arXiv:1108.3797 (2011). Klepser et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). Abdo et al. (Fermi LAT collaboration), ApJ, 706, L1 (2009). Fiasson et al. (H.E.S.S. collaboration), 31$^{st}$ ICRC (2009). Carmona et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). Jogler et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). Mariotti et al. (MAGIC collaboration), ATel. 3161 (2011). The MAGIC collaboration, ApJ, 735, L5 (2011). Rico et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). The MAGIC collaboration, arXiv:1106.1589 (2011). Hildebrand et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). The MAGIC collaboration, A&A, 530, id.A4 (2011). The MAGIC collaboration, ApJ, 730, L8 (2011). Dermer et al., ApJ, 692, 32 (2009). Ghisellini and Tavecchio, MNRAS, 397, 985 (2009). Sikora et al., ApJ, 704, 38 (2009). Berger et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). Borla Tridon et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). Colin et al. (MAGIC collaboration), arXiv:0907.1026 (2009). Nieto et al. (MAGIC collaboration), 32$^{nd}$ ICRC (2011). The MAGIC collaboration, JCAP, 06, 035 (2011).
--- author: - | Mario S. Holubar\ Department of Artificial Intelligence\ Bernoulli Institute\ University of Groningen\ `mario.holubar@gmail.com`\ Marco A. Wiering\ Department of Artificial Intelligence\ Bernoulli Institute\ University of Groningen\ `m.a.wiering@rug.nl`\ bibliography: - 'literature.bib' title: 'Continuous-action Reinforcement Learning for Playing Racing Games: Comparing SPG to PPO' ---
--- abstract: 'Recent high-precision results for the critical exponent of the localization length at the integer quantum Hall (IQH) transition differ considerably between experimental ($\nu_\text{exp} \approx 2.38$) and numerical ($\nu_\text{CC} \approx 2.6$) values obtained in simulations of the Chalker-Coddington (CC) network model. We revisit the arguments leading to the CC model and consider a more general network with geometric (structural) disorder. Numerical simulations of this new model lead to the value $\nu \approx 2.37$ in very close agreement with experiments. We argue that in a continuum limit the geometrically disordered model maps to the free Dirac fermion coupled to various random potentials (similar to the CC model) but also to quenched two-dimensional quantum gravity. This explains the possible reason for the considerable difference between critical exponents for the CC model and the geometrically disordered model and may shed more light on the analytical theory of the IQH transition. We extend our results to network models in other symmetry classes.' author: - 'I. A. Gruzberg' - 'A. Klümper' - 'W. Nuding' - 'A. Sedrakyan' date: 'April 11, 2016' title: 'Geometrically disordered network models, quenched quantum gravity, and critical behavior at quantum Hall plateau transitions' --- The integer quantum Hall (IQH) transition [@Huckestein-Scaling-1995] is the most prominent example of an Anderson transition, a continuous quantum phase transition driven by disorder and accompanied by universal critical phenomena [@Evers-Anderson-2008]. Numerous experiments [@Wei-Experiments-1988; @Koch-Experiments-1991; @Koch-Size-dependent-1991; @Koch-Experimental-1992; @Engel-Microwave-1993; @Wei-Current-1994] demonstrated scaling near the IQH transition characterized by the localization length exponent $\nu$. The most recent and accurate experimental value is $\nu_\text{exp} = 2.38 \pm 0.02$ [@Li-Scaling-2005; @Li-Scaling-2009]. 
A similar value of $\nu$ was observed at the IQH transition in graphene [@Giesbers-Scaling-2009], confirming universality at the IQH transition. The IQH effect is usually modeled by neglecting electron-electron interactions, that is, within the paradigm of Anderson localization [@Anderson-Absence-1958; @Abrahams-Scaling-1979]. Existence of delocalized states in disorder-broadened Landau levels, which is necessary to explain the IQH transition, is consistent with the description of the transition by a nonlinear sigma model with a topological term [@Levine-Electron-1983; @Weidenmuller-Single-1987], and its two-parameter flow diagram [@Kmelnitskii-Quantization-1983; @Pruisken-Dilute-1985]. The critical point of the sigma model should possess conformal invariance and be described by a conformal field theory (CFT) with the central charge $c=0$ [@Gurarie-Conformal-2004], due to the use of replicas or supersymmetry (SUSY) to treat disorder averages. However, this fixed point is in the strong coupling regime, and notable attempts at identifying the CFT [@Zirnbauer-Conformal-1999; @Bhaseen-Towards-2000; @Tsvelik-Wave-2001; @Tsvelik-Evidence-2007] are inconclusive so far. The IQH transition is related to the problem of disordered Dirac fermions [@Ludwig-Integer-1994]. The generic model with random mass, scalar, and gauge potentials is believed to have a fixed point in the universality class of the IQH transition, but this fixed point is not perturbatively accessible. A simplified model where only a random gauge potential is kept, is analytically solvable, and the exact spectrum of multifractal (MF) exponents describing the scaling of the moments of critical wave functions is known [@Ludwig-Integer-1994; @Chamon-Instability-1996; @Mudry-Two-dimensional-1996; @Chamon-Localization-1996; @Kogan-Liouville-1996; @Castillo-Exact-1997]. More recently, alternative approaches to the IQH transition were advanced. 
One is based on a mapping to a classical model and conformal restriction [@Bettelheim-Quantum-2012], and another uses symmetry properties of the sigma model [@Gruzberg-Symmetries-2011; @Gruzberg-Classification-2013; @Bondesan-Pure-2014] to derive exact symmetry properties of the MF spectra at the IQH transition. In spite of these successes, no theoretical predictions for the exponent $\nu$ exist. Much intuition about the IQH transition, as well as the most accurate numerical estimates for critical exponents, come from the Chalker-Coddington (CC) network model [@Chalker-Percolation-1988; @Kramer-Random-2005]. The model is based on the semiclassical picture of electrons drifting along the equipotential lines of a smooth disorder potential. Tunneling across saddle points of the potential leads to hybridization of the localized states and a possible delocalization. In the CC model this picture is drastically simplified, and all scattering nodes are placed at the vertices of a square lattice. The CC model in various limits can be mapped both to the nonlinear sigma model [@Read-1991; @Zirnbauer-Towards-1994] and to the random Dirac fermions [@Ho-Models-1996]. The regular geometry of the CC model allows for an easy application of numerical transfer matrix (TM) techniques [@MacKinnon-The-scaling-1983]. The most recent and accurate implementations of this method [@Slevin-Critical-2009; @Obuse-Conformal-2010; @Amado-Numerical-2011; @Obuse-Finite-2012; @Slevin-Finite-2012; @Nuding-Localization-2015], as well as other methods [@Dahlhaus-Quantum-2011; @Fulga-Topological-2011], give the value $\nu_\text{CC}$ in the range 2.56–2.62, which is definitely different from the experimental value. One possible source for the discrepancy is electron-electron interactions, whose effect on the scaling near the IQH transition has been studied in Refs. [@Lee-Effects-1996; @Wang-Short-range-2000; @Burmistrov-Wave-2011]. 
It was shown there that short-range interactions are irrelevant at the IQH critical point, and should not modify the value of $\nu$. This leaves the option that the Coulomb interaction may play a dominant role in experimental systems, but this issue is not fully understood, and remains unresolved. Here we propose another possible explanation for why the value of $\nu_\text{CC}$ differs from $\nu_\text{exp}$, namely that the CC model does not capture all types of disorder that are relevant at the IQH transition. Indeed, saddle points that connect the “puddles” of filled electron states do not form a regular lattice, and around each “puddle” there may be any number of them. Taking this into account leads us to consider structurally disordered, or [*random networks*]{} (RNs), which better represent the physics in a smooth disorder potential and strong magnetic field.

![Left: a random graph. Right: the corresponding random Manhattan lattice.[]{data-label="fig:random-graph-and-medial"}](random-network-4.pdf){height="3.7cm"}

Let us list the main results of this paper. (1) We argue that an ensemble of RNs can be mapped in a continuum limit to the problem of free Dirac fermions coupled to random potentials (similar to the CC model) and also to two-dimensional quantum gravity (2DQG). Coupling to 2DQG modifies critical exponents of statistical mechanics models [@Knizhnik-Fractal-1988; @David-Conformal-1988; @Distler-Conformal-1988; @Kazakov-Exactly-1988; @Kazakov-Recent-1988; @Kazakov-Percolation-1989; @Duplantier-Geometrical-1990]. We suggest that a similar modification happens for RNs. (2) We demonstrate that RNs can be effectively constructed starting with the CC network and appropriately modifying it. The modified RNs can be numerically simulated, and for certain values of parameters specifying the geometric disorder, we obtain the localization length exponent $\nu = 2.374 \pm 0.018$, in excellent agreement with experiments. 
(3) We extend these ideas to quantum Hall transitions in symmetry classes C and D in the classification of Refs. [@Zirnbauer-Riemannian-1996; @Altland-Nonstandard-1997]. Properties of these transitions map to classical statistical mechanics models which were studied on random lattices, and for which the shift in critical exponents is given by the KPZ relation [@Knizhnik-Fractal-1988; @David-Conformal-1988; @Distler-Conformal-1988] from the theory of 2DQG. This fact allows us to predict various exact critical exponents for these transitions. [*Random networks.*]{} The network models we consider are built on planar directed graphs where every vertex has two incoming and two outgoing edges. The in- and out-edges, also called links of the network, alternate as one goes around a vertex (a node). Such graphs divide the plane into two sets of polygonal faces with opposite orientations of their edges, see Fig. \[fig:random-graph-and-medial\], left. We will only consider connected graphs, which are exactly the Feynman graphs of zero-dimensional (complex) matrix $\phi^4$ theory in the planar (large $N$) limit [@'tHooft-Planar-1973; @Brezin-Planar-1977]. A state of the network model on a given random graph is represented by a complex vector $Z \in \mathbb{C}^N$, where $N$ is the number of edges of the graph, and each component $z_e$ corresponds to the complex flux on the edge $e$. The model includes random scattering matrices connecting incoming $z_1, z_{1'}$ and outgoing $z_2, z_{2'}$ fluxes (see Fig. \[fig:S-and-R\], left): $$\begin{aligned} \Big(\!\begin{array}{c} z_2 \\ z_{2'} \end{array} \!\!\Big) = {\cal S} \Big(\!\begin{array}{c} z_1 \\ z_{1'} \end{array}\!\! \Big) = \Big(\!\begin{array}{cc} t e^{i\gamma} & r e^{i\gamma'} \\ r e^{i\gamma} & -t e^{i\gamma'}\end{array} \!\!\Big) \Big(\!\begin{array}{c} z_1 \\ z_{1'} \end{array} \!\!\Big),\end{aligned}$$ placed at the vertices. 
The scattering amplitudes satisfy $t^2 + r^2 =1$, and the scattering phases $\gamma$, $\gamma'$ are random. Evolution of the states of the network in discrete time steps is specified by an $N\times N$ unitary matrix $U$ composed of all node scattering matrices [@Klesse-Universal-1995]. In this description the basic object is the resolvent $(1 - e^{-\eta} U)^{-1}$. Its matrix element (a Green function) can be written as a superintegral $$\begin{aligned} \label{GF} G(e_1, e_2; \eta) &= \!\!\! \operatorname*{\ThisStyle{\hstretch{1.3}{\rotatebox{18} {$\SavedStyle\!\int\!$}}}}\!\!\! {\cal D} \Psi \, \psi_{e_1} {\bar \psi}_{e_2} e^{-\sum_{e,e'} {\bar \Psi}_{e} (1 - e^{-\eta}U)_{ee'} \Psi_{e'}}\end{aligned}$$ where $e_1$, etc., label edges of the graph, and ${\bar \Psi}_{e} = ({\bar\phi}_{e}, {\bar\psi}_{e})$ is a supervector assigned to the edge $e$, see Refs. [@Janssen-Point-contact-1999; @Cardy-Network-2005] for details. The real part of the parameter $\eta$ plays the role of the imaginary part of the energy (level broadening) in the Hamiltonian description. For our purposes it is sufficient to take $\eta=0$ in what follows. Formulation of a random network as a lattice model appeared in Ref. [@Sedrakyan-3DIM-1987] in connection with the so called sign factor problem in the string representation of the 3D Ising model. This approach was further developed in Refs. [@Sedrakyan-Edge-1999; @Sedrakyan-Integrable-2002; @Sedrakyan-Action-2003; @Khachatryan-Grassmann-2009; @Khachatryan-Network-2010]. Following these references, we connect the midpoint of each edge $e$ “forward” to two other midpoints by two vectors $\xi_e$. Then a scattering node is replaced by a rectangle (see Fig. \[fig:S-and-R\], right), and we get an alternative representation of the RN as a random Manhattan lattice (ML), see the right part of Fig. \[fig:random-graph-and-medial\]. 
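As a quick numerical sanity check (a Python sketch, not part of the paper; the name `node_S` is ours), the node scattering matrix ${\cal S}$ above is unitary for any $t^2 + r^2 = 1$ and arbitrary phases $\gamma$, $\gamma'$, which is what makes the evolution matrix $U$ composed of such nodes unitary:

```python
import numpy as np

def node_S(t, gamma, gamma_p):
    """2x2 node scattering matrix with amplitudes t, r = sqrt(1 - t^2)
    and scattering phases gamma, gamma_p (notation as in the text)."""
    r = np.sqrt(1.0 - t**2)
    return np.array([[t * np.exp(1j * gamma),  r * np.exp(1j * gamma_p)],
                     [r * np.exp(1j * gamma), -t * np.exp(1j * gamma_p)]])

rng = np.random.default_rng(0)
for _ in range(100):
    # random amplitude t in [0, 1) and random phases in [0, 2*pi)
    S = node_S(rng.random(), *rng.uniform(0.0, 2.0 * np.pi, size=2))
    assert np.allclose(S.conj().T @ S, np.eye(2))   # S† S = 1 (unitarity)
```

At the critical point $t = r = 1/\sqrt{2}$ each node acts as a balanced beam splitter, up to the random phases.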
The action for the RN, written as $$\begin{aligned} \label{action} S = \sum_e {\bar \Psi}_{e} \Psi_e - \sum_{e,\xi_e} t_{e,\xi_e} e^{i\gamma_e} {\bar \Psi}_{e+\xi_e} \Psi_e\end{aligned}$$ represents hopping of fermions and bosons on the random ML, and the hopping amplitudes take values $r$ and $\pm t$ depending on the vector $\xi_e$. The SUSY method of Refs. [@Janssen-Point-contact-1999; @Cardy-Network-2005] is designed to describe only single-particle problems, while the approach of Refs. [@Sedrakyan-Edge-1999; @Sedrakyan-Integrable-2002; @Sedrakyan-Action-2003; @Khachatryan-Grassmann-2009; @Khachatryan-Network-2010] allows one to consider interacting particles. To this end one uses second quantization, and the scattering matrices at the nodes are “promoted” to R-matrices acting in the tensor product of Fock spaces attached to edges of the network (see Fig. \[fig:S-and-R\], right). On a random ML the R-matrices are represented by the quadrangular faces surrounding the scattering nodes, see Fig. \[fig:random-graph-and-medial\]. The trace of the product of the R-matrices over all nodes of the network gives the partition function. In the general interacting case the SUSY method does not apply, and one has to use replicas to treat disorder. In this paper we do not include interactions and continue to use SUSY. Then writing the trace of the product of the R-matrices in the basis of (super-)coherent states for each of the (super-)Fock spaces on the edges, we obtain the same action (\[action\]). [*Continuum limits.*]{} For the regular CC model the ML is a square lattice with vertices labeled by the Cartesian coordinates $x^\mu$ ($\mu = 1,2$). The vectors $\xi_e$ are $\pm \epsilon {\hat x}_\mu$, where ${\hat x}_\mu$ are unit vectors, and $\epsilon$ is the lattice spacing. 
Near the critical point of the CC model ($t_c = r_c = 1/\sqrt{2}$) the variations of the phases $\gamma_e$ and the fields $\Psi_e$ are slow, and we can pass to a continuum limit by expanding $\Psi_{x + \epsilon{\hat x}_\mu} \approx (1 + \epsilon \partial_\mu)\Psi_x$ and rescaling the fields $\Psi(x)$ in the continuum. In the limit we obtain, as in Ref. [@Ho-Models-1996], the action of the Dirac fermions (and their bosonic partners) $$\begin{aligned} \label{S2} S = \!\! \operatorname*{\ThisStyle{\hstretch{1.3}{\rotatebox{18} {$\SavedStyle\!\int\!$}}}}\!\!\! d^2 x \, \bar\Psi\big[\sigma^\mu \big( i \overset{\text{\tiny$\leftrightarrow$}}{\partial}_\mu + A_\mu \big) + m \sigma^3 + V \big]\Psi,\end{aligned}$$ where $\overset{\text{\tiny$\leftrightarrow$}}{\partial}_\mu = (\overset{\text{\tiny$\rightarrow$}}{\partial}_\mu - \overset{\text{\tiny$\leftarrow$}}{\partial}_\mu)/2$, the mass $m \propto r-r_c$, and the (random) gauge $A_\mu(x)$ and scalar $V(x)$ potentials arise as certain combinations of the random phases $e^{i\gamma_e}$. Let us now consider the random ML shown in Fig. \[fig:flat-random-network\]. This lattice is not very different from the regular square lattice: its faces are still quadrangles, and we can introduce (curvilinear) coordinates $\xi^a$ ($a=1,2$) following the vectors $\xi_e$ in a natural way. It is clear that the physics cannot depend on the choice of coordinates, so we can use either $\xi^a$ or $x^\mu$ coordinates. We can use the formalism of frames (vielbeins) of differential geometry [@Nakahara-Geometry-2003] to relate coordinate and orthonormal bases of vectors ${\hat x}_\mu = e_\mu^a \partial/\partial \xi^a$ and forms $dx^\mu = e_a^\mu d\xi^a$, as well as the volume elements $d^2 x = e \, d^2 \xi$, where $e= \det e_a^\mu$. The action (\[S2\]) written in arbitrary coordinates and invariant under coordinate changes becomes $$\begin{aligned} \label{S3} S = \!\! 
\operatorname*{\ThisStyle{\hstretch{1.3}{\rotatebox{18} {$\SavedStyle\!\int\!$}}}}\!\!\! d^2 \xi \, e \, \bar\Psi \big[\sigma^\mu e_\mu^a \big(i \overset{\text{\tiny$\leftrightarrow$}}{\partial}_a + A_a \big) + m \sigma^3 + V \big]\Psi.\end{aligned}$$ The action (\[S3\]) is that of 2D fermions interacting with random gauge and scalar potentials as well as random geometry (gravity). In the case of weakly deformed lattices, Eqs. (\[S2\]) and (\[S3\]) are equivalent, they both describe the system on a flat surface. We propose that random frames can account for more complicated situations that correspond to curved surfaces represented by random graphs. In this case we define frames locally, on a given coordinate chart, and then connect them on overlapping charts by transition functions. The result is still given by Eq. (\[S3\]), but now we are supposed to average over “arbitrary” frame configurations. The above arguments leave open the question of the functional measure on random surfaces. We believe that the requirements of diffeomorphism and conformal invariance determine the appropriate measure uniquely, the same way it is fixed in string theory [@Polyakov-Quantum-1981]. The need to average observables over random geometry means that our system is coupled to [*quenched*]{} quantum gravity. However, in the SUSY formalism the partition function of a disordered system is always unity (implying $c=0$ for the CFT of the critical point), and there is no difference between quenched and annealed gravity. It is known that 2DQG modifies critical exponents of a CFT placed on a fluctuating surface in the way given by the KPZ relation [@Knizhnik-Fractal-1988; @David-Conformal-1988; @Distler-Conformal-1988]. 
The relation has been verified by solutions of critical models of statistical mechanics (related to the so-called minimal CFTs [@Belavin-Infinite-1984]) defined on random graphs [@Kazakov-Exactly-1988; @Kazakov-Recent-1988; @Kazakov-Percolation-1989; @Duplantier-Geometrical-1990]. When $c=0$, as for Anderson transitions and critical percolation, the relation is $$\begin{aligned} \label{KPZ} \Delta = (\sqrt{1 + 24 \Delta_0} - 1)/4,\end{aligned}$$ where $\Delta_0$ ($\Delta$) are chiral dimensions of operators on a flat (fluctuating) surface. Whether this relation can explain the difference between $\nu_\text{CC}$ and $\nu_\text{exp}$ remains to be seen. However, Eq. (\[KPZ\]) should be applicable to properly defined MF exponents of critical wave functions at the IQH transition, as well as at other 2D Anderson transitions. [*Construction and simulation of RNs.*]{} To simulate RNs numerically, we adopt the following construction. Starting with the regular CC network, at each node we set $t=0$ with probability $p_0$, $t=1$ with probability $p_1$, and leave the node unchanged with probability $p_c = 1 - p_0 - p_1$. The modified nodes with $t=0$ ($t=1$) are “open” in the horizontal (vertical) direction, and opening a node changes the four adjacent square faces into two triangles and one hexagon, see Fig. \[fig:open-nodes\]. Repeated opening of nodes can produce tilings of the plane by polygons with arbitrary numbers of edges. At the same time, our construction still allows us to use the transfer matrix (TM) of the CC model, but with modified $t$ and $r$ amplitudes. To maintain statistical isotropy of the model, we choose $p_0 = p_1$. In this case we expect that the critical point is still given by the value $t_c^2=1/2$ for the unchanged nodes. Moreover, in this paper we fix $p_0 = p_1 = p_c = 1/3$. 
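The node-opening rule can be sketched in a few lines (an illustrative sketch, not the authors' code; `sample_node_t` and its defaults are our naming, and the small regularization of $t=0$ and $t=1$ anticipates the $\varepsilon$ introduced below for the transfer matrices):

```python
import numpy as np

def sample_node_t(n_nodes, p0=1/3, p1=1/3, t_c=2**-0.5, eps=1e-6, seed=0):
    """Assign a transmission amplitude t to each CC node:
    with prob p0 the node is opened horizontally (t -> 0, regularized to eps),
    with prob p1 opened vertically (r -> 0, i.e. t -> sqrt(1 - eps^2)),
    and with prob p_c = 1 - p0 - p1 it keeps the critical value t_c = 1/sqrt(2)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_nodes)
    t = np.full(n_nodes, t_c)
    t[u < p0] = eps
    t[(u >= p0) & (u < p0 + p1)] = np.sqrt(1.0 - eps**2)
    return t

t = sample_node_t(300_000)          # p0 = p1 = p_c = 1/3, as in the text
frac_open = np.mean(t != 2**-0.5)   # fraction of opened (modified) nodes
assert abs(frac_open - 2/3) < 0.01
```

With $p_0 = p_1 = p_c = 1/3$ roughly two thirds of the nodes are opened, as the final assertion checks.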
We simulate the modified networks on strips of different width $M$ (the number of nodes per column) varying from $20$ to $200$, the length $L=5\cdot 10^6$, and a range of the parameter $x$ which encodes deviations of $t$ from $t_c$ [^1]. We use the LU decomposition of TMs [@numerical_recipes]. Since $t$ and $r$ appear in the denominators of the matrix elements of TMs, making them zero is a singular procedure, related to the disappearance of two horizontal channels upon opening a node in the vertical direction. To overcome this difficulty, for every open node we take either $t$ or $r$ to be equal to $\varepsilon \ll 1$. We then look at how the resulting Lyapunov exponents depend on $\varepsilon$. We found that the results saturate at $\varepsilon = 10^{-5}$, and there are no changes when reducing $\varepsilon$ to $10^{-7}$. For even smaller $\varepsilon$ the results start changing again. This is to be expected because the large differences of values in the entries of TMs cause numerical instabilities for the LU decomposition. We have chosen $\varepsilon=10^{-6}$ for our calculations. The smallest Lyapunov exponent $\gamma$ is expected to have the following finite-size scaling behavior: $$\label{ren_equ} \gamma M=\Gamma[M^{1/\nu}u_0(x), M^y u_1(x)].$$ Here $u_0(x)$ is the relevant field and $u_1(x)$ the leading irrelevant field. The relevant field vanishes at the critical point, and $y<0$. The fitting procedure of our numerical results, as well as the error analysis, are presented in the Supplementary material. The results of the analysis are $$\begin{aligned} \nu &= 2.374 \pm 0.018, & y = -0.35 \pm 0.05.\end{aligned}$$ This value of $\nu$ is surprisingly close to $\nu_\text{exp}$, which suggests that the structural disorder is, indeed, a relevant perturbation that modifies the critical behavior. [*Other symmetry classes.*]{} Network models can be constructed for all 10 symmetry classes of disordered systems identified in Refs. 
[@Zirnbauer-Riemannian-1996; @Altland-Nonstandard-1997]. Superconductors with broken time-reversal invariance in 2D can exhibit QH transitions where the spin (class C) [@Kagalovsky-Quantum-1999; @Senthil-Spin-1999] and thermal (class D) [@Senthil-Quasiparticle-2000] conductivities jump in quantized units. The ideas developed above apply to network models for these transitions. In addition, both the spin QH (SQH) and thermal QH (TQH) transitions are simpler than the IQH one, since many of their properties can be determined from mappings to classical models. The regular network in class C was mapped to classical bond percolation on a square lattice [@Gruzberg-Exact-1999; @Beamond-Quantum-2002; @Mirlin-Wavefunction-2003]. Many exact results are known for classical percolation. Thus, the mapping has led to a host of exact critical properties at the SQH transition [@Gruzberg-Exact-1999; @Mirlin-Wavefunction-2003; @Cardy-Linking-2000; @Subramaniam-Surface-2006; @Subramaniam-Boundary-2008; @Bondesan-Exact-2012; @Bhardwaj-Relevant-2015]. The mapping was extended to network models in class C on arbitrary graphs [@Cardy-Network-2005]. The graphs relevant for our study are shown in Fig. \[fig:random-graph-and-dual\]. For a given RN we draw the dual bipartite graph with dots on the shaded faces and crosses on the empty faces of the original RN. The dual graph forms a random quadrangulation of the plane. We now dissect all quadrangles by diagonals connecting the dots, and remove the crosses and all edges connected to them. This results in a lattice (Fig. \[fig:random-graph-and-dual\], right) on which classical bond percolation should be considered. Critical bond percolation on random quadrangulations (or their duals) was considered in Ref. [@Kazakov-Percolation-1989], and it was shown that the KPZ relation (\[KPZ\]) is valid in this case. We believe that the SQH transition on RNs lies in the same universality class, and that Eq. (\[KPZ\]) can be applied to all critical exponents obtained in Refs. 
[@Gruzberg-Exact-1999; @Mirlin-Wavefunction-2003; @Subramaniam-Surface-2006; @Subramaniam-Boundary-2008; @Bondesan-Exact-2012; @Bhardwaj-Relevant-2015]. This includes, in particular, the dimension of the “two-leg” operator that determines the localization length exponent $\nu$ as well as a few MF exponents. The TQH transition in class D can also be described and simulated by a network model [@Cho-Criticality-1997; @Chalker-Thermal-2002; @Merz-Two-dimensional-2002; @Mildenberger-Density-2007]. Its effective field theory (without geometric disorder) is given by the Majorana fermions with random mass, the same theory that describes the critical Ising model with a weak bond disorder [@Senthil-Quasiparticle-2000; @Bocquet-Disordered-2000]. The random mass is a marginally irrelevant perturbation, and critical exponents at the transition are given by their Ising model values. When the model is coupled to 2DQG, we still should consider the quenched situation, and the critical exponents should be modified according to Eq. (\[KPZ\]), see [@Janke-Two-dimensional-2006] and references therein. [*Discussion and outlook.*]{} The geometric disorder that we simulate by a modified CC model can be viewed as randomness in the heights $V$ of the saddle points in the disorder potential. Indeed, it is known that (at zero energy) $t^2 = (1 + e^{-V})^{-1}$ [@Fertig-Transmission-1987]. Our choice of $t$ is described by the tri-modal distribution $P(V) = p_0 \delta(V - 2 \ln \varepsilon) + p_c \delta(V - V_0) + p_0 \delta(V + 2 \ln \varepsilon).$ Previous studies of random $V$ [@Lee-Quantum-1993; @Evers-Semiclassical-1998] focused instead on the uniform distribution in the interval $V \in [-W, W]$ or the bimodal distribution $P(V) = [\delta(V-W) + \delta(V+W)]/2$. No choice of $W$ gives our type of randomness when $p_c > 0$. However, for $p_c = 0$ our distribution becomes bimodal, and describes classical percolation with $\nu = 4/3$. The other extreme, $p_c = 1$, gives the regular CC model. 
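The saddle-point relation just quoted can be inverted as $V = 2\ln(t/r)$, which reproduces the three peaks of the tri-modal distribution; a short numerical check (an illustrative sketch, with `V_of_t` our name):

```python
import numpy as np

def V_of_t(t):
    """Saddle-point height V from the transmission amplitude t,
    obtained by inverting t^2 = (1 + exp(-V))^(-1) with r^2 = 1 - t^2,
    i.e. V = 2*ln(t/r)."""
    r = np.sqrt(1.0 - t**2)
    return 2.0 * np.log(t / r)

eps = 1e-6
# The three node types behind the tri-modal distribution P(V):
assert np.isclose(V_of_t(np.sqrt(0.5)), 0.0)         # unchanged node, t = t_c (V_0 = 0)
assert np.isclose(V_of_t(eps), 2.0 * np.log(eps))    # horizontally open node, t = eps
assert np.isclose(V_of_t(np.sqrt(1.0 - eps**2)),     # vertically open node, r = eps
                  -2.0 * np.log(eps), rtol=1e-3)
```

So the regularization $\varepsilon = 10^{-6}$ corresponds to saddle heights $V = \pm 2\ln\varepsilon \approx \pm 27.6$, i.e. effectively fully opened or closed channels.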
Since we only simulated the point $p_c = 1/3$, we cannot distinguish the following three possibilities: 1) a novel fixed point at a finite $p_c$, 2) a crossover from percolation to CC criticality, 3) a line of fixed points. We plan to study other values of $p_c$ to determine which scenario is actually realized. We also plan to simulate RNs in classes C and D, and try to solve the classical percolation problem on relevant graphs using matrix model techniques. We will, furthermore, consider the problem of Dirac fermions in an Abelian random gauge potential coupled to 2DQG, and determine the MF spectrum of the wave functions in order to test the applicability of the KPZ relation (\[KPZ\]). [*In summary*]{}, we have considered the possibility that a certain type of geometric (structural) disorder, previously missed in the study of the IQH transition, may change the universality class. Our numerical simulations support this idea. We have also proposed that the proper framework for a field-theoretic description of this type of disorder is provided by 2DQG coupled to matter fields. These ideas can be applied to other 2D Anderson transitions. A. S. thanks the Theoretical Physics group at Wuppertal University for hospitality. A. S. and A. K. acknowledge support by DFG grant KL 645/7-1. A. S. was partially supported by ARC grant 15T-1C058. I. G. was partially supported by the NSF Grant No. DMR-1508255. We are grateful to R. A. Roemer and A. W. W. Ludwig for helpful discussions. Extensive calculations have been performed on Rzcluster (Aachen), PC^2^ (Paderborn), and particularly on JUROPA (Jülich). The authors gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JUROPA at Jülich Supercomputing Centre (JSC). 
Supplemental material\ Network model for plateau transitions in the quantum Hall effect {#supplemental-material-network-model-for-plateau-transitions-in-the-quantum-hall-effect .unnumbered} ================================================================= We calculate numerically the localization length index $\nu$ in the Chalker-Coddington (CC) network suitably modified to represent a random network. We use one relevant field and one irrelevant field in the fitting procedure. The results lead to the value $\nu \approx 2.37$ for the modified model, in very close agreement with experiments. Model description {#model_desc} ----------------- For the calculation of critical indices we used the transfer-matrix method developed in [@mackinnon1981scaling; @mackinnon1983scaling]. 
To calculate the smallest Lyapunov exponent of the CC-model it is necessary to calculate a product $T_L=\prod_{j=1}^L M_1 U_{1j}M_2 U_{2j}$ of layers of transfer matrices $M_1 U_{1j}M_2 U_{2j}$ corresponding to two columns $M_1$ and $M_2$ of vertical sequences of 2x2 scattering nodes, $$\label{M1} M_1= \begin{tikzpicture}[baseline=(current bounding box.center), ultra thick, loosely dotted] \matrix(M1)[matrix of math nodes, nodes in empty cells, right delimiter={)}, left delimiter={(}] { B^1 & 0 & & 0 \\ 0 & B^1 & & \\ & & & 0 \\ 0 & & 0 & B^1 \\ }; \draw (M1-2-2)--(M1-4-4); \draw (M1-1-2)--(M1-1-4); \draw (M1-4-1)--(M1-4-3); \draw (M1-2-1)--(M1-4-1); \draw (M1-1-4)--(M1-3-4); \draw (M1-1-2)--(M1-3-4); \draw (M1-2-1)--(M1-4-3); \end{tikzpicture}$$ and $$\label{M2} M_2=\begin{tikzpicture}[baseline=(current bounding box.center), ultra thick, loosely dotted] \matrix (M2) [matrix of math nodes,nodes in empty cells,right delimiter={)},left delimiter={(}] { B^2_{22} & 0 & & 0 & B^2_{21} \\ 0 & B^2 & & & 0 \\ & & & & \\ 0 & & & B^2 & 0 \\ B^2_{12} & 0 & & 0 & B^2_{11} \\ }; \draw (M2-1-2)--(M2-1-4); \draw (M2-1-2)--(M2-4-5); \draw (M2-2-1)--(M2-4-1); \draw (M2-2-1)--(M2-5-4); \draw (M2-2-2)--(M2-4-4); \draw (M2-2-5)--(M2-4-5); \draw (M2-5-2)--(M2-5-4); \end{tikzpicture}$$ with $$B^1=\begin{pmatrix} 1/t & r/t \\ r/t & 1/t \end{pmatrix} \qquad \text{and} \qquad B^2=\begin{pmatrix} 1/r & t/r \\ t/r & 1/r \end{pmatrix}$$ The $U$-matrices have a simple diagonal form with independent phase factors $U_{nm}=\exp{(i\alpha_n)}\,\delta_{nm}$ for $U=U_{1j}$ and $U_{2j}$. Here $t$ and $r$ are the transmission and reflection amplitudes at each node of the regular lattice which are parameterized by $$\label{rt} t={\frac{1}{\sqrt{1+e^{2x}}}} \qquad \text{and} \qquad r={\frac{1}{\sqrt{1+e^{-2x}}}}.$$ The parameter $x$ corresponds to the Fermi energy measured from the Landau band center scaled by the Landau band width (with the critical point at $x=0$). 
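As a quick check of this parameterization, Eq. (\[rt\]) can be evaluated directly. The following minimal sketch (NumPy; the helper name is ours, not from the simulation code) verifies current conservation $t^2+r^2=1$ at each node and the exchange of $t$ and $r$ under $x \to -x$, which is used later in the symmetry argument:

```python
import numpy as np

def node_amplitudes(x):
    """Transmission and reflection amplitudes at a saddle point, Eq. (rt)."""
    t = 1.0 / np.sqrt(1.0 + np.exp(2.0 * x))
    r = 1.0 / np.sqrt(1.0 + np.exp(-2.0 * x))
    return t, r

# at the critical point x = 0 the node is symmetric: t = r = 1/sqrt(2)
t0, r0 = node_amplitudes(0.0)
assert np.isclose(t0, r0)

for x in np.linspace(-1.0, 1.0, 11):
    t, r = node_amplitudes(x)
    assert np.isclose(t**2 + r**2, 1.0)          # unitarity of each node
    # the sign change x -> -x exchanges t and r
    assert np.allclose(node_amplitudes(-x), (r, t))
```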
The phases $\alpha_{n}$ are random variables uniformly distributed in the range $[0,2\pi)$, reflecting that the phase of an electron approaching a saddle point of the random potential is arbitrary. To simulate random networks (RNs) numerically, we remove scattering nodes by opening them in the horizontal or vertical direction with probabilities $p_0$ and $p_1$, adopting the following construction. Starting with the regular CC network, at each node we set $t=\varepsilon\ll 1$ with probability $p_0$, $t=\sqrt{1-\varepsilon^2}$ with probability $p_1$, and leave the node unchanged with probability $p_c = 1 - p_0 - p_1$. Here the small number $\varepsilon$ is chosen as $\varepsilon=10^{-6}$: we found that the results already saturate at $\varepsilon = 10^{-5}$, and there are no changes when reducing $\varepsilon$ to $10^{-7}$. For even smaller $\varepsilon$ the results start changing again due to precision issues of the numerics. Furthermore, in this report we use $p_0=p_1=p_c=1/3$. The fitting procedure --------------------- For the scaling behavior of the Lyapunov exponent $\gamma$ near the critical point we expect the finite-size dependence $$\label{eq:ren_equ} \gamma\cdot M=\Gamma(M^{1/\nu}u_0,M^y\,u_1)\,.$$ Here we have taken into account the relevant field with exponent $\nu$ and the leading irrelevant field with exponent $y$. $M$ is the number of $2 \times 2$ blocks in the transfer matrices ($=$ half the number of horizontal channels of the lattice), $u_0=u_0(x)$ is the relevant field and $u_1=u_1(x)$ the leading irrelevant field. It is known that the relevant field vanishes at the critical point, and that $y<0$. On the left-hand side of Eq. (\[eq:ren\_equ\]) we use the numerical results for the eigenvalues of $T_L$, where we are particularly interested in the eigenvalue closest to 1. The Lyapunov exponent $\gamma$ is the smallest positive eigenvalue of $$\label{LE} \lim_{L\rightarrow\infty}\frac{1}{2L}\,\ln\left(T_L^{\dagger}\,T_L\right),$$ which we calculate for various combinations of the parameter $x$ and the lattice width $M$. 
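The tri-modal node randomization described above can be sketched as follows. This is an illustrative NumPy snippet with hypothetical helper names, using the quoted values $\varepsilon=10^{-6}$ and $p_0=p_1=p_c=1/3$; it is not the actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-6                        # epsilon from the text
p0 = p1 = 1.0 / 3.0               # opening probabilities; p_c = 1 - p0 - p1

def draw_node_t(x, size):
    """Draw transmission amplitudes for the geometrically disordered network."""
    t_reg = 1.0 / np.sqrt(1.0 + np.exp(2.0 * x))        # regular CC value, Eq. (rt)
    u = rng.random(size)
    return np.where(u < p0, eps,                        # node opened one way, t ~ 0
           np.where(u < p0 + p1, np.sqrt(1.0 - eps**2), # opened the other way, t ~ 1
                    t_reg))                             # node kept regular (prob. p_c)

t = draw_node_t(0.0, 100_000)
frac_regular = np.mean(np.isclose(t, 1.0 / np.sqrt(2.0)))
assert abs(frac_regular - 1.0 / 3.0) < 0.01             # p_c = 1/3 of nodes unchanged
```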
The right-hand side of Eq. (\[eq:ren\_equ\]) is expanded in a series in $x$ and powers of $M$, and the expansion coefficients are obtained from a fit. Some coefficients in this expansion vanish due to a symmetry argument [@SlevinOhtsuki2009]. If $x$ is replaced by $-x$ we see from Eq. (\[rt\]) that $t$ turns into $r$ and vice versa. Due to the periodic boundary conditions the lattice is unchanged. Therefore the left-hand side of Eq. (\[eq:ren\_equ\]) is invariant under the sign change of $x$. Hence the right-hand side must be even in $x$. That renders $u_0(x)$ and $u_1(x)$ either even or odd in $x$. For the Chalker-Coddington network the critical point is at $x=0$. This lets us choose $u_0(x)$ odd and $u_1(x)$ even. The fit should use as few coefficients as possible while reproducing the data as closely as possible. The scaling function $\Gamma$ on the right-hand side of Eq. (\[eq:ren\_equ\]) is expanded in the fields $u_0$ and $u_1$, yielding $$\label{expansin_in_fields} \begin{split} \Gamma(&u_0(x)M^{1/\nu},u_1(x)M^y)= \Gamma_{00}+ \Gamma_{01} u_1M^y +\Gamma_{20}u_0^2M^{2/\nu}\\ & + \Gamma_{02}u_1^2M^{2y} +\Gamma_{21}u_0^2u_1M^{2/\nu}M^y +\Gamma_{03}u_1^3 M^{3y} \\ & +\Gamma_{40}u_0^4M^{4/\nu}+\Gamma_{22}u_0^2 M^{2/\nu}u_1^2 M^{2y} + \Gamma_{04}u_1^4M^{4y}+\dots \end{split}$$ We further expand $u_0$ and $u_1$ in powers of $x$ as was done, for example, in Refs. [@SlevinOhtsuki2009; @AmadoMalyshevSedrakyanEtAl2011]: $$\label{fields_expanded} u_0(x)=x+\sum_{k=1}^\infty a_{2k+1}x^{2k+1} \quad \text{and} \quad u_1(x)=1+\sum_{k=1}^\infty b_{2k}x^{2k} .$$ In Eq. (\[expansin\_in\_fields\]) we retained only terms that are even in $x$. Because of the ambiguity in the overall scaling of the fields, the leading coefficients in Eq. (\[fields\_expanded\]) can be chosen to be 1. Weights and Errors {#weight_error} ------------------ The left-hand side of Eq. (\[eq:ren\_equ\]) is determined by the results of numerical simulations of the random network model. Following Ref. 
[@AmadoMalyshevSedrakyanEtAl2011] we have produced large ensembles of the Lyapunov exponent $\gamma$ by simulating many disorder realizations for many combinations of $x$ and $M$. We calculated $624$ disorder realizations for each combination of $M=20, 40, 60, 80, 100, 120, 140, 160, 180, 200$ and $x=0.08/12\cdot [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]$ for fixed $L=5\,000\,000$. Our goal is to check whether the central limit theorem (CLT) [@tutubalin1965limit] still holds in the presence of the network randomness. Fig. \[fig1\] shows that the distribution of the Lyapunov exponent for $M=60$ and $x=0.02$ is nicely described by a Gaussian, which demonstrates the validity of the CLT. In the fitting procedure, the weight of each such $\gamma$ is given by the reciprocal of the variance of the corresponding ensemble. On the right-hand side of Eq. (\[eq:ren\_equ\]) the fitting formula depending on $x$ and $M$ is used. The coefficients of the expansion and the critical exponents are the fit parameters. The fits are performed in several steps. First, a weighted nonlinear least-squares fit based on a trust-region algorithm with specified regions for each parameter is applied. The resulting parameters are used in a further weighted nonlinear least-squares fit based on a Levenberg-Marquardt algorithm. Here no limits are imposed on the fit parameters. The last step is repeated until the resulting parameters stop changing. Evaluation of fits ------------------ The next step is the evaluation of the fit results. We present several methods to do this. Very common is the *$\chi^2$ test*: $$\chi^2=\sum_i \frac{(y_i-f_i)^2}{\sigma_i^2}$$ where $f_i$ is the value predicted by the fit, $y_i$ the measured value, and $\sigma_i$ the standard deviation of data point $i$. As our fit contains large ensembles of data points for the same $(x,M)$ coordinates, $\chi^2=0$ is not possible; in fact, it will be large due to the huge number of data points. 
The way to deal with this behavior is to consider the ratio $\chi^2$/*degrees of freedom*. The expectation value for this ratio is 1 for an ideal fit. The number of *degrees of freedom* is the number of data points in the fit minus the number of fit parameters. Deviations from 1 are evaluated by use of the cumulative probability $P(\tilde\chi^2 < \chi^2)$, which is the probability of observing, purely for statistical reasons, a sample statistic with a smaller $\chi^2$ value than in our fit. A small value of $P$, i.e. a large value of the complement $Q:=1-P$, is taken as indicative of a good fit. However, values of $P$ lower than $1/2$ indicate problems in the estimation of the error bars of the individual data points. Another criterion is based on the width of the *confidence intervals*. This quantifies the quality of the prediction for a single parameter. We use 95% confidence intervals, which means that for repeated independent generation of the same amount of data and application of the same kind of data analysis, the resulting confidence intervals contain the true parameter values in 95% of the cases. A most sensitive criterion is the *Akaike information criterion* (AIC) [@Akaike1974]. The AIC is founded on information theory; Akaike found a formal relationship between Kullback-Leibler information and likelihood theory. This finding makes it possible to combine estimation (i.e., maximum likelihood or least squares) and model selection under a unified optimization framework. Unlike in the case of hypothesis testing, the AIC does not assume that the correct model is among the tested models. Rather, the AIC offers a relative estimate of the information lost when a given model is used to represent the process that generates the data. This way, given a collection of models based on the same data, the AIC ranks those models, and a comparison to the best model can be calculated easily. If a different data set has been used, the models cannot be ranked or compared. 
For the calculations presented in this article we have used the AICc, which is a small-sample version of the AIC or, more precisely, a second-order bias correction. The AICc is also valid if $k$ is not small compared to $n$, where $n$ denotes the sample size and $k$ the number of parameters, and is given by $$\text{AICc}=\text{AIC}+\frac{2k(k+1)}{n-k-1}.$$ This formula holds exactly if the model is univariate, linear, and has normally distributed residuals, but may in other cases still be used unless a more precise correction is known. Further details on the AIC and the AICc can be found in [@BurnhamAnderson2002]. The AIC can be expressed in terms of $\chi^2$: $$\text{AIC}=2k+\chi^2-2C$$ Here $2C$ is a constant (dependent on the set of data points) that can be omitted, because for comparisons we only need differences of AICc’s. For comparing models, the AIC (and the AICc) are used in the following way. Suppose we have $l$ models with AIC$_1$, …, AIC$_l$. The model with the smallest AICc, let us call it AIC$_{min}$, is the favored one. The relative probability of model $j$ compared to the model with AIC$_{min}$ is $$\begin{aligned} \exp \frac{\text{AIC}_{min}-\text{AIC}_j}{2}.\end{aligned}$$ Note that the exponential expression is smaller than one. The last criterion we present is the sum of *residuals*. It is given by $\mathit{res}=\sum_j \mathit{res}_j, \; \mathit{res}_j=y_j-f_j$. The sum of residuals should be small compared to the number of degrees of freedom. A plot of the residuals should look like noise around zero. If the residuals significantly deviate from zero, we expect that the fit function is not correct. Results ------- In Fig. \[fig1\] we present an example of the distribution of Lyapunov exponents for fixed width $M$, parameter $x$ and chain length $L$. This distribution defines the data point and its accuracy for the combination $(x,M)$. The reciprocal of the variance is used as the weight the data point carries in the fitting procedure. 
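The AICc bookkeeping described above amounts to a few lines of code. This sketch (helper names are ours) drops the constant $2C$, since only differences of AICc's enter the comparison; plugging in the AICc difference of about $16.5$ between the best fit and the two-irrelevant-field fits reported in the Results section reproduces the quoted relative likelihood of about $0.0003$:

```python
import math

def aicc(chi2, k, n):
    """AICc from chi^2 (additive constant 2C dropped), k parameters, n data points."""
    aic = 2 * k + chi2
    return aic + 2 * k * (k + 1) / (n - k - 1)

def relative_likelihood(aicc_min, aicc_j):
    """Relative probability of model j versus the model with the smallest AICc."""
    return math.exp((aicc_min - aicc_j) / 2.0)

# the correction term vanishes for n >> k, recovering the plain AIC
assert abs(aicc(100.0, 8, 10**6) - (2 * 8 + 100.0)) < 1e-3

# AICc values quoted in the Results section: difference ~ 16.5, weight ~ 0.0003
assert abs(relative_likelihood(-556356.5, -556340.0) - 3e-4) < 1e-4
```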
In Fig.\[fig2\] we present the product $M \gamma$ (the left-hand side of Eq. (\[eq:ren\_equ\])) versus $x$ for various values of the width $M$. The corresponding fitting parameters are presented in the table below. Our best fitting results have been obtained by expanding $\Gamma$ up to second order in $u_0$ and $u_1$ , and expanding $u_0$ ($u_1$) up to the third (second) order in $x$. We found the following coefficients and goodness of fit parameters:\ Coefficients (confidence bounds 95%): $$\begin{aligned} \hline \\[-2.5ex] \Gamma_{00} =\; & \quad 0.864 & (0.856 &, 0.871) \\ \Gamma_{01} =\; & \quad 0.0898 & (-0.071 &, 0.250) \\ \Gamma_{02} =\; & \quad 0.976 & (0.907 &, 1.046) \\ \Gamma_{20} =\; & \quad 0.312 & (0.302 &, 0.321) \\ a_3 =\; & \quad 0.293 & (-0.221 &, 0.807) \\ b_2 =\; & -0.255 & (-0.460 &,-0.049) \\ \nu =\; & \quad 2.374 & (2.356 &, 2.391) \\ y =\; & -0.356 & (-0.407 &,-0.306) \\ \hline\end{aligned}$$ Goodness of fit parameters: $$\begin{aligned} \hline \\[-2.5ex] & \chi^2\text{\,: } && 81192.5 \\ & \text{degrees of freedom (dof) \,: } && 81112 \\ & \chi^2/\text{dof\,: } && 1.001 \\ & P\text{\,: } && 0.554 \\ & \text{AICc\,: } && -556356.5 \\ & \text{sum of residuals\,: }& & 181.86 \\ \hline\end{aligned}$$ The degrees of freedom have been calculated from the number of data points $624\cdot 13 \cdot 10$ minus 8, the number of fit parameters. We see $\chi^2/$dof is close to 1 and the cumulative probability $P=0.554$ is close to $1/2$, marking a good fit result. The sum of residuals is small compared to the number of degrees of freedom. As can be seen in Fig.\[fig:residuals\], the residuals are distributed around zero as judged by the eye. All this indicates that the fit is reliable and the data agree with the model equation. Fits with two irrelevant fields are clearly discouraged by the Akaike criterion. Those models produce a (relative) Akaike coefficient of at least $\text{AICc}=-556340$. Therefore their relative likelihood is about 0.0003. 
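For illustration, the truncated expansion with the best-fit coefficients above can be evaluated directly. This is a sketch (plain Python; the function name is ours) showing how $\gamma\cdot M$ approaches $\Gamma_{00}$ at criticality for large $M$ and inherits the $x\to-x$ symmetry:

```python
# best-fit coefficients from the table above
nu, y = 2.374, -0.356
G00, G01, G02, G20 = 0.864, 0.0898, 0.976, 0.312
a3, b2 = 0.293, -0.255

def gamma_M(x, M):
    """Truncated scaling function Gamma(u0 M^{1/nu}, u1 M^y) of the best fit."""
    u0 = x + a3 * x**3            # relevant field, odd in x
    u1 = 1.0 + b2 * x**2          # leading irrelevant field, even in x
    U0 = u0 * M ** (1.0 / nu)
    U1 = u1 * M ** y
    return G00 + G01 * U1 + G20 * U0**2 + G02 * U1**2

# at criticality (x = 0) the irrelevant corrections decay as M^y towards Gamma_00
assert abs(gamma_M(0.0, 10**6) - G00) < 0.02
# the fit function is even in x, as required by the symmetry argument
assert abs(gamma_M(0.02, 100) - gamma_M(-0.02, 100)) < 1e-12
```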
A. MacKinnon and B. Kramer, Phys. Rev. Lett. 47, 1546 (1981). A. MacKinnon and B. Kramer, Z. Phys. B 53, 1 (1983). K. Slevin and T. Ohtsuki, Phys. Rev. B 80, 041304 (2009). M. Amado, A. V. Malyshev, A. Sedrakyan, and F. Domínguez-Adame, Phys. Rev. Lett. 107, 066402 (2011). V. N. Tutubalin, Theory Probab. Appl. 10, 15 (1965). H. Akaike, IEEE Trans. Autom. Control 19, 716 (1974). K. P. Burnham and D. R. Anderson, *Model Selection and Multimodel Inference* (Springer, New York, 2002). [^1]: See Supplementary material.
--- abstract: 'The delayed Duffing equation $\ddot{x}(t)+x(t-T)+x^3(t)=0$ is shown to possess an infinite and unbounded sequence of rapidly oscillating, asymptotically stable periodic solutions, for fixed delays such that $T^2<\tfrac{3}{2}\pi^2$. In contrast to several previous works which involved approximate solutions, the treatment here is exact.' author: - Si Mohamed Sah - Bernold Fiedler - 'B. Shayak' - 'Richard H. Rand' date: version of title: | Unbounded sequences of stable limit cycles\ in the delayed Duffing equation: an exact analysis --- Introduction {#intr} ============ This work concerns a differential-delay equation (DDE) known as the delayed Duffing equation $$\ddot{x}(t)+ x(t-T) + x(t)^3 = 0\,, \label{dde}$$ where $T>0$ is the time delay. The existence of an infinite number of *stable limit cycles*, i.e. of asymptotically stable periodic solutions, in this DDE was first suggested in a paper by Wahi and Chatterjee [@Wahi]. Formally, and to leading order, they applied the method of averaging and obtained a slow flow that predicted infinitely many stable limit cycles. In their DDE, the time delay was fixed at $T=1$. Mitra et al. [@Mitra] studied the same DDE with an added linear stiffness. By assuming an approximate solution in harmonic form $x(t)=A\sin(\omega t)$, they claimed that the system exhibits an infinite number of stable limit cycles for any value of the time delay $T$. In a paper by Davidow et al. [@Davidow], the same claim was supported by a) harmonic balance, b) Melnikov’s integral with Jacobi elliptic functions, and c) the introduction of damping. Strictly speaking, however, all these works on the delayed Duffing equation (\[dde\]) were restricted to small amplitudes of the limit cycles. In our work, we present an exact treatment of (\[dde\]), in the limit of unboundedly large amplitudes. In particular, the previously studied infinite sequences of “stable limit cycles” lose stability, eventually, for delays $T$ such that $T^2>\tfrac{3}{2}\pi^2$. 
Section \[nume\] gives a brief account of the numerical integration method used for our simulations. In section \[lift\] we study exact periodic solutions $x_n(t)$ of a slightly generalized Duffing ordinary differential equation (ODE), with vanishing time delay $T=0$; see (\[ode\]). We show how the non-delayed ODE solutions $x_n(t)$ of minimal (or fundamental) periods $p_n$ lift to exact solutions of the original delayed Duffing DDE with positive delay $T>0$, provided their minimal periods $$\label{p_n} p_n=2T/n$$ are integer fractions of the double delay $2T$. In particular we show how the more and more rapidly oscillating periodic solutions $x_n(t)$ develop unbounded amplitudes $A_n \nearrow \infty$, for $n \rightarrow \infty$. In section \[ampl\] we indicate how to determine the amplitudes $A_n$ of the lifted solutions $x_n$, numerically and by series expansions for $n \rightarrow \infty$. Section \[stab\] recalls our stability results from [@Fieetal]. These mathematical results basically assert local asymptotic stability of the solutions $x_n(t)$, for any fixed positive delay $T$ such that $T^2<\tfrac{3}{2}\pi^2$ and for sufficiently large odd $n=1,3,5,\dots$. They also show instability for sufficiently large even $n=2,4,6,\dots$. For full mathematical details, including added linear stiffness, we refer to [@Fieetal]. We conclude with numerical illustrations of our results in section \[disc\], and a short summary in section \[conc\]. **Acknowledgment.** Like the more mathematically inclined account in [@Fieetal], the present work originated at the *International Conference on Structural Nonlinear Dynamics and Diagnosis 2018, in memoriam Ali Nayfeh*, at Tangier, Morocco. We are deeply indebted to Mohamed Belhaq, Abderrahim Azouani, to all organizers, and to all helpers of this outstanding conference series. They keep providing a unique platform of inspiration and scientific exchange at the highest level, over so many years, to the benefit of all participants. 
This work was partially supported by DFG/Germany through SFB 910 project A4. Authors RHR, BS and SMS gratefully acknowledge support by the National Science Foundation under grant number CMMI-1634664. Numerical integration {#nume} ===================== For zero delay, $T=0$, the delayed Duffing DDE (\[dde\]) reduces to a non-delayed ordinary differential equation (ODE) known as the classical Duffing equation. The equation is conservative and hence exhibits a continuum of periodic orbits, rather than any asymptotically stable limit cycles. In contrast, even for arbitrarily small fixed positive delays $T>0$, approximate analysis and numerical simulations suggest that an infinite number of stable limit cycles may coexist, with their amplitudes going to infinity [@Davidow]. Figure \[fig:01\] shows the time histories (a) and phase plane (b) of the first three stable limit cycles obtained by numerical integration of the delayed Duffing DDE (\[dde\]), for $T=0.3$ and with different initial conditions. The numerical integrations in the present work were performed using the Python library `pydelay` for DDEs. The integrator is based on the Bogacki-Shampine method [@Bogacki]. The maximal step size used to produce the plots in the present work was fixed at $\Delta t = 10^{-4}$. See section \[disc\] for further numerical examples. ![(a) Time histories of some periodic solutions $x_n(t)$ for the delayed Duffing DDE (\[dde\]) with fixed delay $T=0.3$. (b) Nested phase plane plots $(x_n(t), \dot{x}_n(t))$ of the periodic orbits $x_n$ with minimal period $2T/n$, $n=1, 3, 5$. 
Black dot corresponds to equilibrium point.[]{data-label="fig:01"}](Figure_01.pdf){width="90.00000%"} Lifting periodic solutions from the non-delayed to the delayed Duffing equation {#lift} =============================================================================== In this section we show the existence of infinitely many rapidly oscillating periodic solutions of specific periods $p$ in the delayed Duffing DDE (\[dde\]). Our approach is based on a lift of certain periodic solutions of the ordinary non-delayed Duffing ODE below, with minimal (or, fundamental) period $p$, to periodic solutions of the delayed Duffing DDE (\[dde\]) with time delay $T$. We will show this remarkable fact for minimal periods $p$ which are integer fractions of the doubled delay $2T=np$; see claim . We first recall some elementary facts on the non-delayed Duffing ODE, in subsection \[DuffingODE\]. We separately address the cases of even and odd fractions $n$ in subsections \[neven\] and \[nodd\], respectively. ![Three dimensional plots of Hamiltonian level sets (\[ham\_1\]) in (a,b), and projections into the ($x, \dot{x}$) plane in (c,d), for the general non-delayed Duffing ODE (\[ode\]). (a,c) $n$ even: the single-well Duffing ODE (\[ode+\]). (b,d) $n$ odd: the double-well Duffing ODE (\[ode-\]). The Hamiltonian $H$ of the double-well Duffing equation (\[ode-\]) in (b,d) can be strictly negative (green), zero (blue), or strictly positive (red) as assumed in . Black dots correspond to equilibrium points. []{data-label="fig:02"}](Figure_02.pdf){width="110.00000%"} General Duffing equation {#DuffingODE} ------------------------ We consider the following two general forms of the classical Duffing ODE [@Kovacic]: $$\ddot{x}(t) + (-1)^n x(t) + x(t)^3 = 0, \ \,\,\,\,\, n = 1, 2, 3, \dots , \label{ode}$$ The time-independent Hamiltonian energy of (\[ode\]) takes the form $$H(t) =\tfrac{1}{2}\,{\dot{x}^2}+ \tfrac{1}{2}\,(-1)^n\,x^2 + \tfrac{1}{4}x^4\,. \label{ham_1}$$ See Figure \[fig:02\]. 
For even $n$ the Hamiltonian is always positive; see Figure \[fig:02\]a,c. For odd $n$, however, see Figure \[fig:02\]b,d: the Hamiltonian is either strictly negative (green), identically zero (blue) or strictly positive (red), depending on the ODE initial conditions. Note how single trajectories in the $(x,\dot{x})$-plane are point symmetric to the origin, if and only if the positive energy condition $$\label{H>0} H>0$$ is satisfied. We assume this restriction to hold throughout our further analysis. For $H>0$, we may time-shift solutions $(x_n(t),\dot{x}_n(t))$ of (\[ode\]) such that the initial conditions $$\label{odeic} 0<x_n(0) =: A_n\,, \qquad \dot{x}_n(0)=0,$$ are satisfied. In particular, $A_n = \max |x_n(t)|$ is the amplitude of the solution $x_n$. For odd $n$, note how our positivity condition requires an amplitude $A_{n}>\sqrt{2}$ in (\[odeic\]); see the red curve in Figure \[fig:02\]b,d, outside the blue figure-8 shaped separatrix loops. The periodic closed curves fill the part of the phase space $(x,\dot{x})$ where $H>0$. Each periodic orbit corresponds to specific initial conditions and possesses a specific minimal period. The exact periodic solutions $(x_n(t),\dot{x}_n(t))$ of the Duffing ODE (\[ode\]) are easily determined. Indeed the energy $H\equiv E$ is identically constant. Solving for $\dot{x}$ and classical separation of variables therefore lead to the elliptic integrals $$\label{ell} t = \int^{x_n(t)}_{x_n(0)}\frac{dx}{\dot{x}(t)} = \pm \int^{A_n}_{x_n(t)} \frac{dx}{\sqrt{2\,E - (-1)^n\,x^2 - x^4/2 }}\,.$$ Here we have substituted the initial condition (\[odeic\]) for $x_n(0)$. The minimal (fundamental) period $p_{n}$ can be determined as the special case $t=p_n/4$, where symmetry implies $x_n(t)=0$: $$\tfrac{1}{4}\,p_n = \int^{A_n}_{0} \frac{dx}{\sqrt{\left( 2\,E - (-1)^n\,x^2 - x^4/2 \right)}}\,. 
\label{per_n_q}$$ Evaluating the invariant Hamiltonian $H_{n}\equiv E$ at the initial condition (\[odeic\]) provides the energy $$H_n = E = \tfrac{1}{2}\,(-1)^n\,A_n^2 + \tfrac{1}{4}A_n^4 \label{ham_2}$$ and the explicit elliptic integral $$\tfrac{1}{4}\,p_n = \int^{A_n}_{0} \frac{dx}{\sqrt{\left(A_n^2-x^2\right)\,\left((-1)^n + A_n^2/2 + x^2/2\right)}}\,. \label{per_n_q_2}$$ The elliptic integral allows us to express the exact periodic solution of the general Duffing ODE (\[ode\]) in terms of a Jacobi elliptic function as $$x_{n}(t) = A_{n}\, \mathrm{cn}(\omega_{n}\,t, m_{n}). \label{sol}$$ Here $\mathrm{cn}$ denotes the Jacobi elliptic cosine function. The arguments $A_{n}$, $\omega_{n}$ and $0 <m_{n}< 1$ are the amplitude, the angular frequency, and the elliptic modulus, respectively. The frequency $\omega_{n}$ and the modulus $m_{n}$ in the solution (\[sol\]) are related to the amplitude $A_{n}$ such that $$m_n = \frac{A_n^2}{2(A_n^2+(-1)^n)}~~~~~~~~~\textrm{and} ~~~~~~~~~~ \omega_n = \sqrt{A_n^2+(-1)^n}\,. \label{init_4}$$ The minimal period can be expressed in terms of the complete elliptic integral of the first kind $K\equiv K(m_{n})$ as $$p_{n} = 4\,K/\omega_{n}. \label{p_1}$$ See [@Rand]. Figure \[fig:03\] indicates the relation between amplitude and frequency for the general Duffing ODE (\[ode\]). The two black curves are obtained from the second equation of (\[init\_4\]), and they correspond to the relation between amplitude and frequency of the periodic solutions (\[sol\]) in the non-delayed Duffing ODE (\[ode\]), for $n$ odd (upper curve) and $n$ even (lower curve). Each point represents a periodic orbit of the general Duffing ODE (\[ode\]). In the phase plane, each of the black curves therefore indicates a foliation by periodic solutions. 
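The exact solution (\[sol\]) with (\[init\_4\]) is easy to verify numerically. The sketch below (SciPy; note that `scipy.special.ellipj` takes the parameter $m$, matching the convention in (\[init\_4\])) checks the residual of the ODE (\[ode\]) by a central finite difference:

```python
import numpy as np
from scipy.special import ellipj

def duffing_residual(A, n, t):
    """Residual of x'' + (-1)^n x + x^3 = 0 for x = A cn(w t, m), Eqs. (sol), (init_4)."""
    sgn = (-1) ** n
    w = np.sqrt(A**2 + sgn)
    m = A**2 / (2.0 * (A**2 + sgn))           # scipy's ellipj uses the parameter m
    x = lambda s: A * ellipj(w * s, m)[1]     # index 1 selects cn(u, m)
    h = 1e-5                                  # central finite difference for x''
    xdd = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
    return xdd + sgn * x(t) + x(t) ** 3

t = np.linspace(0.0, 2.0, 9)
assert np.max(np.abs(duffing_residual(3.0, 2, t))) < 1e-3   # even n: single well
assert np.max(np.abs(duffing_residual(3.0, 3, t))) < 1e-3   # odd n: A > sqrt(2)
```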
For the delayed Duffing equation, the same periodic solutions $x_{n}$ of minimal period $p_{n}=2T/n$ on the upper curve ($n$ odd) will turn out to be locally asymptotically stable, for $T^2<\tfrac{3}{2}\pi^2$ and large $n$, while those with large $n$ of even parity (lower curve) always turn out to be linearly unstable; see Theorems \[thmodd\], \[thmeven\] below. ![ The relation between amplitude $A$ and frequency $\omega$ of the periodic solutions in the non-delayed Duffing ODE (\[ode\]) obtained from the second equation of (\[init\_4\]). Upper curve for $n$ odd and lower curve for $n$ even. Only the marked points on these two curves correspond to periodic solutions $x_n(t)$ of the delayed Duffing DDE (\[dde\]). The time delay for this plot is $T=3$.[]{data-label="fig:03"}](Figure_03.pdf){width=".9\textwidth"} Our lift construction from solutions of the non-delayed Duffing ODE (\[ode\]) to the delayed Duffing DDE (\[dde\]) is based on two interpretations of the mathematical expression $x(t-T)$. On the one hand, $x(t-T)$ represents a delay, as in (\[dde\]). The same expression, on the other hand, represents a periodic solution when equated to $\pm x(t)$ by $$x_{n}(t-T) = (-1)^n x_{n}(t)\,. \label{perd}$$ Here $2T$ represents any (not necessarily minimal) period of the periodic solution $x(t)$. Indeed, any positive-energy solution of the Duffing ODE (\[ode\]) is periodic and will automatically satisfy the periodicity condition (\[perd\]), for some $T > 0$. Upon substitution of the periodicity condition (\[perd\]), however, the non-delayed Duffing ODE (\[ode\]) produces the delayed Duffing DDE (\[dde\]), where now the (half) period $T$ represents the delay. Thus any periodic solution of the Duffing ODE (\[ode\]) satisfying the periodicity condition lifts to a periodic solution of the DDE (\[dde\]), for that choice of the delay $T$.
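This lift is easy to check numerically. The following sketch is our own illustration (the RK4 scheme and all numerical choices are ours, not the authors'): it integrates the double-well case $\ddot{x} = x - x^3$ from $(A,0)$ with $A = 2 > \sqrt{2}$, locates the first zero crossing at the quarter period $p/4$, and then verifies the periodicity condition (\[perd\]) for odd parity: after half the minimal period, the solution has reached the antipode $(-A, 0)$, so it solves the DDE with delay $T = p/2$.

```python
def rhs(x, v):
    # double-well Duffing ODE (odd n): x'' = x - x^3
    return v, x - x**3

def rk4_step(x, v, dt):
    # one classical Runge-Kutta step for the first-order system (x, v)
    k1x, k1v = rhs(x, v)
    k2x, k2v = rhs(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = rhs(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = rhs(x + dt*k3x, v + dt*k3v)
    return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6.0,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6.0)

A, dt = 2.0, 1e-4          # amplitude A > sqrt(2), i.e. positive energy H > 0
x, v, t = A, 0.0, 0.0
while x > 0.0:             # by symmetry, the first zero crossing is at t = p/4
    x_prev = x
    x, v = rk4_step(x, v, dt)
    t += dt
# linear interpolation of the crossing time between t - dt and t
t_quarter = (t - dt) + dt * x_prev / (x_prev - x)
half_p = 2.0 * t_quarter   # half the minimal period, p/2

x, v = A, 0.0              # re-integrate from (A, 0) over one half period
N = 20000
for _ in range(N):
    x, v = rk4_step(x, v, half_p / N)
# periodicity condition (perd), odd n, delay T = p/2: x(t - T) = -x(t),
# so the trajectory now sits at the antipode (-A, 0)
```

For $A=2$ the half period comes out as $p/2 \approx 2.34299$, and the final state agrees with $(-A, 0)$ to high accuracy.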
The marked points (red) on the two black curves in Figure \[fig:03\], for example, correspond to periodic solutions of the delayed Duffing DDE (\[dde\]), with delay $T=3$. Actually, the non-delayed Duffing ODE (\[ode\]) possesses an uncountable continuum of periodic orbits, foliating the phase plane. The number of periodic orbits $x_n$ which satisfy the periodicity condition, however, is (at most) countable. In particular, our lift construction from the non-delayed Duffing ODE (\[ode\]) to the delayed Duffing DDE (\[dde\]) restricts the allowable points on the curves in Figure \[fig:03\] to a countable set and therefore produces only a countable set of periodic solutions for the delayed Duffing DDE. We do not claim that our lift construction covers all possible periodic solutions of the DDE; in section \[stab\] we will see indications of additional periodic solutions which cannot be obtained by our lift. In the following, we further detail the lift construction, which is based on the known exact periodic solutions (\[sol\]) of the general Duffing ODE (\[ode\]). We consider the two cases, $n$ even and $n$ odd, separately. ![Solutions of the single-well Duffing ODE (\[ode+\]), alias even $n$ in the delayed Duffing DDE (\[dde\]). Dashed red: solutions $x_{n}(t)$ of (\[ode+\]). Dotted blue: shifted delayed solutions $x_{n}(t-T)$, $T=2$. []{data-label="fig:04"}](Figure_04.pdf){width=".95\textwidth"} Even $n$ {#neven} -------- For even $n$, the general Duffing ODE (\[ode\]) reduces to the single-well case $$\ddot{x}(t) + x(t) + x(t)^3 = 0. \label{ode+}$$ By (\[sol\]), the exact periodic solutions are expressed as $$x_{n}(t) = A_{n}\, \mathrm{cn}(\omega_{n}\,t, m_{n}),$$ where now (\[init\_4\]) becomes $$m_{n} = \frac{A_{n}^2}{2(A_{n}^2+1)}\qquad\textrm{and}\qquad \omega_n = \sqrt{A_{n}^2+1}.
\label{init_1}$$ Minimal periods $p$ decrease monotonically from $p=2 \pi$, at amplitude $A = 0$, to $p = 0$, for unbounded amplitudes $A \nearrow \infty$; see Figure \[fig:03\]. To perform the lift from the Duffing ODE to the Duffing DDE, we fix a time delay $T$ (black dot $T=2$ in Figure \[fig:04\]), a priori, such that $T < 2 \pi$. Then we can always find a solution (\[sol\]) to (\[ode+\]) with minimal period $p_{2} = T$; see solution $x_{2}(t)$ in Figure \[fig:04\]. If we shift the curve of $x_{2}(t)$ to the right by $T$, we obtain a new curve $x_{2}(t-T)$ that coincides with the original, i.e., $x_{2}(t) = x_{2}(t-T)$; see Figure \[fig:04\]b. We can also find another solution $x_{4}(t)$, of larger amplitude $A_4>A_2$, whose minimal period is $p_{4} =T/2$. Shifting by $T$, we obtain a new curve $x_{4}(t-T)$ that again coincides with the original, $x_{4}(t) = x_{4}(t-T)$; see Figure \[fig:04\]b again. In the same manner, we can find infinitely many periodic solutions $x_n(t)$ with minimal periods $p_{n} = 2T/n$, for $n=2, 4, 6, \dots$. After time shift by their shared (non-minimal) period $T$ we obtain $$x_{n}(t-T) = x_{n}(t),\qquad \textrm{for all even }n. \label{i_T}$$ Substituting (\[i\_T\]) into (\[ode+\]) lifts all those ODE Duffing solutions $x_n(t)$ to the delayed Duffing DDE (\[dde\]), for fixed delay $T<2\pi$. Note how $p_n\searrow 0$ implies unbounded amplitudes $A_n \nearrow \infty$, for $n\rightarrow \infty$: as the minimal periods $p_n$ of the periodic solutions of the Duffing ODE (\[ode+\]) decrease to zero, their amplitudes $A_n$ increase to infinity. Thus we obtain an unbounded sequence of more and more rapidly oscillating periodic solutions, with minimal periods $T, T/2, T/3, \dots$, which are also periodic with (non-minimal) period $T$. This proves our claim, for even $n$. Odd $n$ {#nodd} ------- For odd $n$, the general Duffing ODE (\[ode\]) reduces to the double-well case $$\ddot{x}(t) -x(t) + x(t)^3 = 0.
\label{ode-}$$ Any solution conserves the Hamiltonian energy $$H =\tfrac{1}{2}\,{\dot{x}^2}- \tfrac{1}{2}\,x^2 + \tfrac{1}{4}x^4\,.$$ We recall how the phase portrait of the double-well Duffing ODE (\[ode-\]) is characterized by a figure-8 shaped separatrix $H=0$; see the blue curve in Figure \[fig:02\]b,d. For positive energy $H>0$, the (red) solutions of (\[ode-\]) oscillate around the exterior of the separatrix. Again, minimal periods $p$ decrease monotonically: this time from $p=\infty$, at the separatrix amplitude $A = \sqrt{2}$, to $p = 0$, for $A \nearrow \infty$. ![Solutions of the double-well Duffing ODE (\[ode-\]), alias odd $n$ in the delayed Duffing DDE (\[dde\]). Solid red: solutions $x_{n}(t)$ of (\[ode-\]). Dotted blue: shifted delayed solutions $x_{n}(t-T)$, $T=2$. []{data-label="fig:05"}](Figure_05.pdf){width=".95\textwidth"} Since each level of positive energy $H>0$ consists of a single periodic orbit $(x,\dot{x})$, with odd force law, the time taken to travel from any point $(x,\dot{x})$ on a level set to its antipode $(-x,-\dot{x})$ is half its minimal period, $p/2$. Indeed, this fact holds for any odd force law, by time reversibility of the oscillator. Therefore, every solution of the double-well Duffing ODE (\[ode-\]) with positive energy $H$ and minimal period $p$ satisfies the oddness symmetry $$x(t)=-x(t-p/2)\,, \label{oddx}$$ for all $t$. To perform the lift from the double-well Duffing ODE to the delayed Duffing DDE, we now fix any time delay $T>0$ (black dot $T=2$ in Figure \[fig:05\]), this time without any further constraint. For $p_1:=2T$, the delay $T$ coincides with half the minimal period of the solution $x_1(t)$ of the non-delayed double-well Duffing ODE.
The oddness symmetry at half period $p_1/2=T$ therefore implies that $x_1(t)$ also solves our original delayed Duffing DDE, $$\ddot{x}+x(t-p/2)+{{x}^{3}}=0 \label{x4} \,.$$ Analogously, we can perform the lift from the non-delayed double-well Duffing ODE to the delayed Duffing DDE, for any odd $n=1,3,5,\dots$, as follows. Let $x_n$ denote the ODE solution with minimal period $p_n:=2T/n$. Then oddness symmetry implies $$x_{n}(t-T) = -x_{n}(t),\qquad \textrm{for all odd }n. \label{j_T}$$ Substitution then shows that $x_n(t)$ also solves the delayed Duffing DDE. See Figure \[fig:05\]a,b for illustrations of the cases $n=1,3$. Note how $p_n\searrow 0$ implies unbounded amplitudes $\sqrt{2}<A_n \nearrow \infty$, for $n\rightarrow \infty$. Thus we obtain an unbounded sequence of more and more rapidly oscillating periodic solutions to the delayed Duffing DDE, with minimal periods $2T, 2T/3, 2T/5, \dots$, which are also periodic with (non-minimal) period $2T$. This proves our claim, for odd $n$. By (\[sol\]), the exact periodic solutions $x_n(t)$ are expressed as Jacobi elliptic functions $$x_{n}(t) = A_{n}\, \mathrm{cn}(\omega_{n}\,t, m_{n}),$$ where (\[init\_4\]) becomes $$m_{n} = \frac{A_{n}^2}{2(A_{n}^2-1)}\qquad\textrm{and}\qquad \omega_{n} = \sqrt{A_{n}^2-1}\,. \label{init_2}$$ As we have mentioned in subsection \[DuffingODE\], the positivity and symmetry condition $H>0$ is equivalent to $A>\sqrt{2}$. Figure \[fig:06\] schematically illustrates the lift from the non-delayed Duffing ODE (\[ode\]) to the delayed Duffing DDE (\[dde\]), for both even and odd $n$. This lift will be used in the next section to numerically determine the amplitudes $A_n$ of the lifted, rapidly oscillating periodic solutions of the DDE (\[dde\]). Amplitudes {#ampl} ========== We sketch two practical approaches to determine the amplitudes $A_n$ of the rapidly oscillating periodic solutions $x_n(t)$ in the delayed Duffing equation (\[dde\]).
One approach is essentially numerical; the other approach is analytic, based on an exact series expansion at $n=\infty$ and at infinite amplitude. The amplitudes $A_n$ of the lifted solutions $x_n(t)$ arise from the closed curves $H>0$ in the non-delayed Duffing ODE (\[ode\]) with specific values $$p \equiv p_n = 2T/n,\qquad n = 1, 2, 3, \dots, \label{p_2}$$ of their minimal period. See section \[lift\] for details. Substitution into the explicit elliptic integral provides the implicit equation $$\label{A_nK} 2T/n = p = 4\,K(m(A_n))/ \omega(A_n)\,$$ for $A_n$, given $T$ and $n$. Here the functions $m(A_n)$ and $\omega(A_n)$ are as specified above; we have suppressed explicit dependence on the parity of $n$ in this abbreviated notation. For high-precision numerical solutions $A_n$ of (\[A\_nK\]) we rely on the Python-based Newton solver `fsolve`. The Newton method requires initial approximations for the desired solution $A_n$; for initial guesses we use the formal expansions in [@Davidow], Eq. (4). The complete elliptic integral $K(m)$ is evaluated using the Python-based quadrature `quad`. The integration is performed using a Clenshaw–Curtis method based on Chebyshev moments. For $T=3$, for example, the reference amplitudes $A_{n}$ corresponding to the red marked points in Figure \[fig:03\] are found to be $A_1=1.74566491$…, $A_{2}=2.16089536$…, $A_{3}=3.90053028$…, and $A_{4}=4.79499435$…. Note that time delays $T$ and $\bar{T}$ share the same reference amplitudes, if the relation $T/n=\bar{T}/\bar{n}$ holds. Here $n$ and $\bar{n}$ are required to be both odd, or both even. For example, the amplitude $A_n=A_n(T)$, for $n=1$ and $T=0.1$, coincides with the amplitude $A_{\bar{n}}(\bar{T})$, for $\bar{n}=3$ and $\bar{T}=0.3$. Our second approach is analytic in nature. We start with an exact Taylor expansion of $p(A):=4K(m(A))/ \omega(A)$, at $A=\infty$, with respect to $1/A$. For even $n$, the functions $m(A)$ and $\omega(A)$ have been specified in (\[init\_1\]).
Up to errors of order 13 in $1/A$ we obtain $$\begin{aligned} \label{p(A)even} \begin{split} p = \frac{\gamma}{\sqrt{\pi}} \Big( & A^{-1} - \left(\tfrac{1}{2} + 4 \pi^2/\gamma^2 \right) A^{-3} + \left(\tfrac{1}{2} + 6 \pi^2/\gamma^2 \right) A^{-5} - \left(\tfrac{5}{8} + 9 \pi^2/\gamma^2 \right) A^{-7} + \\ &+ \left(\tfrac{85}{96}+ 14 \pi^2/\gamma^2 \right) A^{-9} - \left(\tfrac{87}{64} + \tfrac{903}{40} \pi^2/\gamma^2 \right) A^{-11}\Big) + \mathcal{O}\left(A^{-13}\right) \,. \end{split}\end{aligned}$$ Here $\gamma := \Gamma(1/4)^2$ denotes the square of the Euler Gamma function, evaluated at $1/4$. Note $p=0$ at $A=\infty$. Inverting the above series provides an expansion of the inverse function $A(p)$. Specifically, the Taylor expansion of $A$ as a function of $p$ at $p=0$, up to errors of order 11 in $p$, reads $$\begin{aligned} \label{A(p)even} \begin{split} A = \frac{\gamma}{\sqrt{\pi}} \Big(& p^{-1} - \pi \left(\tfrac{1}{2}\gamma^2 + 4 \pi^2 \right)\gamma^{-4} p - 2\pi^4 \left(\gamma^2 + 16 \pi^2 \right)\gamma^{-8} p^3 -\\ &-8\pi^7\left(3\gamma^2 + 56 \pi^2 \right) \gamma^{-12} p^5 + \tfrac{1}{96} \pi^4 \left(\gamma^8 - 36\,864\,\gamma^2\pi^6 - 737\,280\, \pi^8 \right) \gamma^{-16} p^7 +\\ &+\tfrac{1}{960} \pi^5 \left(5\gamma^{10} + 328 \gamma^8\pi^2-6\,758\,400\,\gamma^2\pi^8-140\,574\,720\,\pi^{10} \right) \gamma^{-20} p^9 \Big) + \\ &+\mathcal{O}\left(p^{11}\right) \,. \end{split}\end{aligned}$$ Inserting $p=2T/n$ readily provides Taylor expansions of $A$ with respect to $1/n$, in the limit of large $n\rightarrow \infty$ and for any fixed delay $T>0$. Alternatively, of course, we may consider $n$ fixed and read the result as an expansion with respect to small delays $T>0$, or with respect to the small quotient $T/n$. For odd $n$, the analogous expansions have to be based on the functions $m(A)$ and $\omega(A)$ specified in (\[init\_2\]).
With the same notation as above, we obtain $$\begin{aligned} \label{p(A)odd} \begin{split} p = \frac{\gamma}{\sqrt{\pi}} \Big(& A^{-1} + \left(\tfrac{1}{2} + 4 \pi^2/\gamma^2 \right) A^{-3} + \left(\tfrac{1}{2} + 6 \pi^2/\gamma^2 \right) A^{-5} + \left(\tfrac{5}{8} + 9 \pi^2/\gamma^2 \right) A^{-7} + \\ &+ \left(\tfrac{85}{96}+ 14 \pi^2/\gamma^2 \right) A^{-9} + \left(\tfrac{87}{64} + \tfrac{903}{40} \pi^2/\gamma^2 \right) A^{-11}\Big) + \mathcal{O}\left(A^{-13}\right) \,. \end{split}\end{aligned}$$ Inversion now yields $$\begin{aligned} \label{A(p)odd} \begin{split} A = \frac{\gamma}{\sqrt{\pi}} \Big(& p^{-1} + \pi \left(\tfrac{1}{2}\gamma^2 + 4 \pi^2 \right)\gamma^{-4} p - 2\pi^4 \left(\gamma^2 + 16 \pi^2 \right)\gamma^{-8} p^3 +\\ &+8\pi^7\left(3\gamma^2 + 56 \pi^2 \right) \gamma^{-12} p^5 + \tfrac{1}{96} \pi^4 \left(\gamma^8 - 36\,864\,\gamma^2\pi^6 - 737\,280\, \pi^8 \right) \gamma^{-16} p^7 -\\ &-\tfrac{1}{960} \pi^5 \left(5\gamma^{10} + 328 \gamma^8\pi^2-6\,758\,400\,\gamma^2\pi^8-140\,574\,720\,\pi^{10} \right) \gamma^{-20} p^9 \Big) + \\ &+\mathcal{O}\left(p^{11}\right) \,. \end{split}\end{aligned}$$ Comparing the even and odd cases, we observe how their sign patterns are related by the complex linear transformation $p\mapsto \mathrm{i}p,\ A \mapsto \mathrm{i}A$. This is in agreement with the corresponding scaling of the Duffing ODE. We emphasize that all Taylor expansions above are convergent and hence can be carried out to any order. Worries like secular terms, and other nuisances ubiquitous in formal asymptotics, disappear. In summary, analytic expansions work best for small $T/n$, e.g., for large $n$, where numerical methods face increasing difficulties. The numerical approach, on the other hand, is the method of choice for larger $T/n$, e.g., for small $n$.
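The numerical route can be sketched in a few lines. The following pure-Python illustration is ours, not the authors' code: bisection replaces the Newton solver `fsolve`, and $K(m)$ is evaluated by the arithmetic-geometric mean instead of the quadrature `quad`. For $T=3$ it reproduces the reference amplitudes $A_1,\dots,A_4$ quoted above.

```python
import math

def agm_K(m):
    # complete elliptic integral K(m) = pi / (2 * AGM(1, sqrt(1 - m)))
    a, b = 1.0, math.sqrt(1.0 - m)
    for _ in range(80):
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def period(A, n):
    """p(A) = 4 K(m(A)) / omega(A); the parity of n selects the
    single-well (n even) or double-well (n odd) case of (init_4)."""
    s = 1.0 if n % 2 == 0 else -1.0
    omega = math.sqrt(A * A + s)
    return 4.0 * agm_K(A * A / (2.0 * omega * omega)) / omega

def amplitude(T, n, tol=1e-12):
    """Solve p(A_n) = 2T/n for A_n by bisection;
    p(A) is strictly decreasing in A on the admissible range."""
    target = 2.0 * T / n
    lo = 1e-9 if n % 2 == 0 else math.sqrt(2.0) + 1e-9  # H > 0 needs A > sqrt(2)
    hi = 10.0
    while period(hi, n) > target:       # grow the upper bracket if needed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if period(mid, n) > target:     # period too long: amplitude too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

amps = [amplitude(3.0, n) for n in (1, 2, 3, 4)]
```

Since $p(A)$ is strictly monotone on each parity branch, bisection is guaranteed to converge; Newton iterations are faster but need the initial guesses discussed above.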
Stability {#stab} ========= In this section, we summarize results from [@Fieetal] on local asymptotic stability and instability of the rapidly oscillating periodic solutions $x_n(t), \ n=1,2,3,\dots$, of the delayed Duffing DDE, as constructed in section \[lift\]. We recall how the ODE solutions $x_n$ with positive energy $H$ are uniquely determined by their minimal periods $p_n=2T/n$, where $T>0$ denotes the delay. To be precise, we recall that a periodic reference orbit $x_*$ is called a *stable limit cycle*, or also *locally asymptotically stable*, if any other solution $x(t)$ which starts sufficiently nearby remains near the set $x_*$ and converges to that set, for $t \rightarrow\infty$. A sufficient (but not necessary) condition for local asymptotic stability is *linear asymptotic stability*. In other words, all *Floquet* (alias *Lyapunov*) *exponents* $\eta$ of the periodic orbit $x_*$ possess strictly negative real part (except for the algebraically simple trivial exponent $\eta=0$). We speak of *linear instability*, in contrast, if $x_*$ possesses any Floquet (alias Lyapunov) exponent with strictly positive real part. Deeper results on unstable manifolds then imply *nonlinear instability*. In fact, there exists a solution $x(t)$ which is defined for all $t\leq 0$ and converges to $x_*$ in backward time $t\rightarrow -\infty$. The stability results of [@Fieetal] specialize to our present context as follows. \[thmodd\] Let $n$ be odd and assume $$0<T^2<\tfrac{3}{2}\pi^2. \label{Ttorus}$$ Moreover assume that $n\geq n_0(T)$ is chosen large enough. Then the periodic orbit $x_n$ of the delayed Duffing equation is asymptotically stable, both linearly and locally. \[thmeven\] Let $n$ be even, $T>0$, and assume $n\geq n_0(T)$ is chosen large enough. Then the periodic orbit $x_n$ of the delayed Duffing equation is linearly and nonlinearly unstable. For the leading Floquet exponent $\eta$, i.e.
the nontrivial exponent with real part closest to zero, the precise asymptotics $$\label{Floq} \eta = \tfrac{2}{3} (-1)^{n}T^2 + \dots$$ has been derived, for even and odd $n \rightarrow \infty$. Towards the stability boundary $T^2=\tfrac{3}{2}\pi^2$ of Theorem \[thmodd\], the periodic orbits $x_n$ with odd $n$ lose stability, and undergo a torus bifurcation of Neimark-Sacker-Sell type. In particular, rational rotation numbers on the bifurcating torus will indicate periodic orbits of the delayed Duffing DDE which are *not* lifts of the ODE Duffing orbits $x_n$ studied in the present paper. We caution the reader that Floquet theory for delay differential equations is not an entirely trivial matter. Therefore we only illustrate our stability results numerically, in the next section. For detailed mathematical proofs we have to refer to [@Fieetal]. Discussion {#disc} ========== ![ Time histories (a) and phase plane plots (b) for delay $T=0.5$. Red: exact periodic solution $x_1(t)$ for $n=1$, with reference amplitude $A_{1}=7.5139958\dots$ and minimal period $2T$. Blue: simulated solution of the delayed Duffing DDE (\[dde\]) with initial history function (\[init\_3\]) and initial amplitude $A = 4.3$. Green: initial history function (\[init\_3\]). Black: final state of the history function (\[init\_3\]). Note the convergence of the blue solution to the locally asymptotically stable red limit cycle $x_{1}$, for large times $t$.[]{data-label="fig:07"}](Figure_07.pdf){width="\textwidth"} Figure \[fig:07\] plots two solutions of the delayed Duffing equation (\[dde\]) with delay $T= 0.5$: a numerical solution $x(t)$ (blue), and the lifted exact solution $x_1(t)$ (red) specified in (\[sol\]). The minimal period $p_1$ of $x_1(t)$ coincides with $2T$. Figure \[fig:07\] contains the time history (a) and the phase plane (b).
The green curve denotes the initial history function $$\left(x(t), \dot{x}(t)\right) = \left( A\, \mathrm{cn}(\omega\,t,m), -A\,\omega\, \mathrm{sn}(\omega\,t,m)\, \mathrm{dn}(\omega\,t,m) \right), \label{init_3}$$ for $-T<t<0$ and with initial amplitude $A = 4.3$. The values of $m$ and $\omega$ are obtained from (\[init\_4\]) with $n=1$. Note how $x(t)$ is a solution of the non-delayed Duffing ODE with minimal period $p = p(A) = 1.7972608\dots$. However, the initial history function $x(t)$ is *not* a solution of the delayed Duffing DDE (\[dde\]), because $T = 0.5$ is *not* an integer multiple of the larger ODE period $p = 1.7972608\dots$ . Therefore the simulated solution $x(t)$ of the delayed Duffing DDE (blue), is *not* periodic. Instead, the simulated solution (blue), with initial amplitude $A = 4.3$, approaches the exact periodic solution $x_1(t)$ (red) of minimal period $2 T$ and with amplitude $A_1=7.5139958\dots$ . Indeed, the black curve indicates the history function, for $100-T<t<100$, of the final state of the blue solution $x(t)$ at $t=100$. The stability result of Theorem \[thmodd\] only asserts local convergence to $x_n$ for large odd $n$, but not for $n=1$. The convergence to $x_1$ indicates how that stability result might actually extend, all the way, down to the smallest possible choice $n=1$. Moreover, “local” attraction to $x_1$ holds sway over quite a distance, down to an initial amplitude $A=4.3$ significantly smaller than the asymptotic amplitude $A_1=7.5139958\dots$ of $x_1$. ![ Time histories of $x(t)$, top (a), and of $\dot{x}(t)$, bottom (b), for delay $T=0.5$. Exact solutions $x_n(t)$ for $n=1$ (red) and $n=2$ (teal); see (\[sol\]). Their amplitudes are $A_1=7.5139958\dots$ and $A_2 = 14.7834172\dots$, respectively. The numerical solution of the delayed Duffing DDE (\[dde\]) with initial amplitude $A= 1.42$ (blue) illustrates wide asymptotic stability of the stable limit cycle $x_1$. 
The numerical solution with initial amplitude $A= 14.77$ (violet), quite close to $A_2$, indicates a heteroclinic orbit from the unstable periodic orbit $x_2$ to the stable limit cycle $x_1$.[]{data-label="fig:08"}](Figure_08.pdf){width="\textwidth"} Figure \[fig:08\] compares two lifted exact periodic solutions, $x_1(t)$ (red) and $x_2(t)$ (teal). Two numerical solutions of the delayed Duffing DDE (\[dde\]) for $T=0.5$ are included, which arise from the two initial history functions (\[init\_3\]) with initial amplitudes $A= 1.42$ (blue) and $A= 14.77$ (violet), respectively. The reference amplitudes corresponding to the exact $n=1$ (red) and $n=2$ (teal) periodic solutions (\[sol\]) are $A_1=7.5139958\dots$ and $A_2 = 14.7834172\dots$, respectively. Figure \[fig:08\] indicates how both simulated solutions (blue and violet) approach the exact stable limit cycle $x_1$ (red); see Theorem \[thmodd\]. Also note how the simulated solution with initial condition $A=14.77$ (violet) starts very close to the exact, but linearly unstable, periodic solution (teal) of $A_2 = 14.7834172\dots$, but eventually departs from it as time $t$ increases. See Theorem \[thmeven\]. This indicates the presence of a heteroclinic orbit $x(t)$, from $x_2$ to $x_1$, which is defined for all positive and negative times $t$ and converges to $x_2$, for decreasing $t \searrow -\infty$, and to $x_1$, for increasing $t \nearrow +\infty$. Our periodicity Ansatz requires half minimal periods $p/2=T/n$ to be integer fractions $n=1,2,3, \dots$ of the delay $T$. Of course we have to caution the reader that there may be many periodic solutions of the DDE (\[dde\]) which are not captured by this Ansatz. Conclusion {#conc} ========== In this work we showed how the Duffing equation (\[dde\]) with time delay $T$ possesses an unbounded sequence of infinitely many rapidly oscillating periodic solutions $x_n(t),\ n=1,2,3,\dots$ .
Each solution $x_n$ arises from a periodic solution $x_n(t)$ of the non-delayed classical Duffing equation (\[ode\]) with minimal period $p_n=2T/n$. In particular, the classical non-delayed Duffing oscillator provides an unbounded sequence of exact periodic solutions of the delayed Duffing equation. Based on the Hamiltonian energy of the classical Duffing equation, and standard Jacobi elliptic integrals, we have also derived high-precision reference amplitudes of these periodic solutions $x_n$. For delays $T$ such that $0<T^2<\tfrac{3}{2} \pi^2$, and for odd $n$ large enough, the solutions $x_n$ are locally asymptotically stable limit cycles. For large even $n$, in contrast, the solutions $x_n$ are linearly and nonlinearly unstable. We have illustrated our results with numerical simulations, for low $n=1,2$. [99]{} P. Bogacki, L. F. Shampine. A 3(2) pair of Runge–Kutta formulas. *Applied Mathematics Letters* 2(4), 321–325 (1989). M. Davidow, B. Shayak, R. H. Rand. Analysis of a remarkable singularity in a nonlinear DDE. [*Nonlinear Dynamics*]{}, (2017) 90:317–323. B. Fiedler, A. López Nieto, R.H. Rand, S.M. Sah, I. Schneider, B. de Wolff. Coexistence of infinitely many large, stable, rapidly oscillating periodic solutions in time-delayed Duffing oscillators. arXiv:1906.06602 (2019). V. Flunkert. Pydelay: a simulation package. In: *Delay-Coupled Complex Systems*. Springer Theses. Springer, Berlin, Heidelberg (2011). I. Kovacic and M.J. Brennan (eds.). *The Duffing Equation: Nonlinear Oscillators and their Behaviour.* John Wiley & Sons, Chichester (2011). R.K. Mitra, S. Chatterjee, A.K. Banik. Limit cycle oscillation and multiple entrainment phenomena in a Duffing oscillator under time-delayed displacement feedback. [*J. Vibration and Control*]{}, (2017) 23:2742–2756. R.H. Rand. *Topics in Nonlinear Dynamics with Computer Algebra, Computation in Education: Mathematics, Science and Engineering.* Vol. 1, Gordon and Breach, Langhorne, PA (1994). P. Wahi, A.
Chatterjee. Averaging oscillations with small fractional damping and delayed terms. [*Nonlinear Dynamics*]{}, (2004) 38: 3–22.
--- abstract: | We investigate some basic questions about the interaction of regular and rational relations on words. The primary motivation comes from the study of logics for querying graph topology, which have recently found numerous applications. Such logics use conditions on paths expressed by regular languages and relations, but they often need to be extended by rational relations such as subword or subsequence. Evaluating formulae in such extended graph logics boils down to checking nonemptiness of the intersection of rational relations with regular or recognizable relations (or, more generally, to the generalized intersection problem, asking whether some projections of a regular relation have a nonempty intersection with a given rational relation). We prove that for several basic and commonly used rational relations, the intersection problem with regular relations is either undecidable (e.g., for subword or suffix, and some generalizations), or decidable with non-primitive-recursive complexity (e.g., for subsequence and its generalizations). These results are used to rule out many classes of graph logics that freely combine regular and rational relations, as well as to provide the simplest problem related to verifying lossy channel systems that has non-primitive-recursive complexity. We then prove a dichotomy result for logics combining regular conditions on individual paths and rational relations on paths, by showing that the syntactic form of formulae classifies them into either efficiently checkable or undecidable cases. We also give examples of rational relations for which such logics are decidable even without syntactic restrictions. 
address: - 'Department of Computer Science, University of Chile' - 'Laboratory for Foundations of Computer Science, University of Edinburgh' author: - 'Pablo Barceló' - 'Diego Figueira' - 'Leonid Libkin' title: Graph Logics with Rational Relations --- [^1] Introduction {#sec:intro} ============ The motivation for the problems investigated in this paper comes from the study of logics for querying graphs. Such logics form the basis of query languages for graph databases, which have recently found numerous applications in areas including biological networks, social networks, Semantic Web, crime detection, etc. (see [@AG-survey] for a survey) and led to multiple systems and prototypes. In such applications, data is usually represented as a labeled graph. For instance, in social networks, people are nodes, and labeled edges represent different types of relationship between them; in RDF – the underlying data model of the Semantic Web – data is modeled as a graph, with RDF triples naturally representing labeled edges. The questions that we address are related to the interaction of various classes of relations on words, for instance, rational relations (examples of those include subword and subsequence) or regular relations (such as prefix, or equality of words). An example of a question we are interested in is as follows: is it decidable whether a given regular relation contains a pair $(w,w')$ so that $w$ is a subword/subsequence of $w'$? Problems like this are very basic and deserve a study on their own, but they are also necessary to answer questions on the power and complexity of querying graph databases. We now explain how they arise in that setting. Logical languages for querying graph data have been developed since the late 1980s (and some of them became precursors of languages later used for XML). They query the [*topology*]{} of the graph, often leaving querying data that might be stored in the nodes to a standard database engine.
Such logics are quite different in their nature and applications from another class of graph logics based on spatial calculi [@CGG02; @DGG07]. Their formulae combine various reachability patterns. The simplest form is known as [*regular path queries (RPQs)*]{} [@CMW87; @CM90]; they check the existence of a path whose label belongs to a regular language. Those are typically used as atoms and then closed under conjunction and existential quantification, resulting in the class of [*conjunctive regular path queries ([[CRPQ]{}]{}s)*]{}, which have been the subject of much investigation [@CGLV00; @DT01; @FLS98]. For instance, a [[CRPQ]{}]{} may ask for a node $v$ such that there exist nodes $v_1$ and $v_2$ and paths from $v$ to $v_i$ with the label in a regular language $L_i$, for $i=1,2$. The expressiveness of these queries, however, became insufficient in applications such as the Semantic Web or biological networks due to their inability to [*compare*]{} paths. For instance, it is a common requirement in RDF languages to compare paths based on specific semantic associations [@AS03]; biological sequences often need to be compared for similarity, based, for example, on the edit distance. To address this, an extension of [[CRPQ]{}]{}s with relations on paths was proposed [@pods10]. It used [*regular*]{} relations on paths, i.e., relations given by synchronized automata [@oldstuff; @frenchreinvention]. Equivalently, these are the relations definable in automatic structures on words [@jacm2003; @graedel2000; @bruyere]. They include prefix, equality, equal length of words, or fixed edit distance between words. The extension of [[CRPQ]{}]{}s with them, called [[ECRPQ]{}]{}s, was shown to have acceptable complexity ([[NLogSpace]{}]{} with respect to data, [[[PSpace]{}]{}]{} with respect to query). However, the expressive power of [[ECRPQ]{}]{}s is still short of the expressiveness needed in many applications. 
For instance, semantic associations between paths used in RDF applications often deal with [subwords]{} or [subsequences]{}, but these relations are [*not*]{} regular. They are [*rational*]{}: they are still accepted by automata, but those whose heads move asynchronously. Adding them to a query language must be done with extreme care: simply replacing regular relations with rational in the definition of [[ECRPQ]{}]{}s makes query evaluation undecidable! So we set out to investigate the following problem: given a class of graph queries, e.g., [[CRPQ]{}]{}s or [[ECRPQ]{}]{}s, what happens if one adds the ability to test whether pairs of paths belong to a rational relation $S$, such as subword or subsequence? We start by observing that this problem is a generalization of the [*intersection problem*]{}: given a regular relation $R$, and a rational relation $S$, is $R\cap S\neq\emptyset$? It is well known that there exist rational relations $S$ for which it is undecidable [@berstel]; however, we are not interested in artificial relations obtained by encoding PCP instances, but rather in very concrete relations used in querying graph data. The intersection problem captures the essence of graph logics [[ECRPQ]{}]{}s and [[CRPQ]{}]{}s (for the latter, when restricted to the class of recognizable relations [@berstel; @choffrut-bulletin]). In fact, query evaluation can be cast as the [*generalized intersection problem*]{}. Its input includes an $m$-ary regular relation $R$, a binary rational relation $S$, and a set $I$ of pairs from $\{1,\ldots,m\}$. It asks whether there is a tuple $(w_1,\ldots,w_m)\in R$ so that $(w_i,w_j)\in S$ whenever $(i,j)\in I$. For $m=2$ and $I=\{(1,2)\}$, this is the usual intersection problem. Another motivation for looking at these basic problems comes from verification of lossy channel systems (finite-state processes that communicate over unbounded, but lossy, FIFO channels). 
Their reachability problem is known to be decidable, although the complexity is not bounded by any multiply-recursive function [@CS-lics08]. In fact, a “canonical” problem used in reductions showing this enormous complexity [@CS-fsttcs07; @CS-lics08] can be restated as follows: given a binary rational relation $R$, does it have a pair $(w,w')$ so that $w$ is a subsequence of $w'$? This naturally leads to the question whether the same bounds hold for the simpler instance of the intersection problem when we use [regular]{} relations instead of rational ones. We actually show that this is true. ### {#section .unnumbered} We start by showing that evaluating [[CRPQ]{}]{}s and [[ECRPQ]{}]{}s extended with a rational relation $S$ can be cast as the generalized intersection problem for $S$ with recognizable and regular relations respectively. Moreover, the complexity of the basic intersection problem is a lower bound for the complexity of query evaluation. We then study the complexity of the intersection problem for fixed relations $S$. For recognizable relations, it is well known to be efficiently decidable for every rational $S$. For regular relations, we show that if $S$ is the subword, or the suffix relation, then the problem is undecidable. That is, it is undecidable to check, given a binary regular relation $R$, whether it contains a pair $(w,w')$ so that $w$ is a subword of $w'$, or even a suffix of $w'$. We also present a generalization of this result. The analogous problem for the subsequence relation is known to be decidable, and, if the input is a rational relation $R$, then the complexity is non-multiply-recursive [@CS-fsttcs07]. We extend this in two ways. First, we show that the lower bound remains true even for regular relations $R$. 
Second, we extend decidability to the class of all rational relations for which one projection is closed under subsequence (the subsequence relation itself is trivially such, obtained by closing the first projection of the equality relation). In addition to establishing some basic facts about classes of relations on words, these results tell us about the infeasibility of adding rational relations to [[ECRPQ]{}]{}s: in fact adding subword makes query evaluation undecidable, and while it remains decidable with subsequence, the complexity is prohibitively high. So we then turn to the generalized intersection problem with recognizable relations, corresponding to the evaluation of [[CRPQ]{}]{}s with an extra relation $S$. We show that the shape of the relation $I$ holds the key to decidability. If its underlying undirected graph is acyclic, then the problem is decidable in [[[PSpace]{}]{}]{} for every rational relation $S$ (and for a fixed formula the complexity drops to [[NLogSpace]{}]{}). In the cyclic case, the problem is undecidable for some rational relation $S$. For relations generalizing subsequence, we have decidability when $I$ is a DAG, and for subsequence itself, as well as for suffix, query evaluation is decidable regardless of the shape of [[CRPQ]{}]{}s. Thus, under the mild syntactic restriction of acyclicity of comparisons with respect to rational relations, such relations can be added to the common class [[CRPQ]{}]{} of graph queries, without incurring a high complexity cost. ### {#section-1 .unnumbered} We give basic definitions in Section \[sec:prelim\] and define the main problems we study in Section \[sec:gip\]. Section \[gl:sec\] introduces graph logics and establishes their connection with the (generalized) intersection problem. Section \[intp:sec\] studies decidable and undecidable cases of the intersection problem. 
Section \[restr:sec\] looks at the case of recognizable relations and [[CRPQ]{}]{}s and establishes decidability results based on the intersection pattern. Preliminaries {#sec:prelim} ============= Let ${\mathbb{N}}= {\{1,2,\dotsc\}}$, $[i..j] = {\{i, i+1, \dotsc, j\}}$ (if $i > j$, $[i..j]=\emptyset$), $[i] = [1..i]$. Given $A,B \subseteq {\mathbb{N}}$, an *increasing* function $f : A \to B$ is one such that $f(i) \geq f(j)$ whenever $i>j$; if $f(i) > f(j)$ whenever $i > j$, we call it *strictly* increasing. ### {#section-2 .unnumbered} We shall use letters $\Sigma$, $\Gamma$ to denote finite alphabets. The set of all finite words over an alphabet $\Sigma$ is denoted by ${\Sigma^*}$. We write ${\varepsilon}$ for the empty word, $w\cdot w'$ for the concatenation of two words, and $|w|$ for the length of a word $w$. Given a word $w \in \Sigma^*$, $w[i..j]$ stands for the substring in positions $[i..j]$, $w[i]$ for $w[i..i]$, and $w[i..]$ for $w[i..|w|]$. Positions in the word start with $1$. If $w=w' \cdot u \cdot w''$, then [$\bullet$]{} $u$ is a [*subword*]{} of $w$ (also called *factor* in the literature, written as $u {\preceq}w$), $w'$ is a [*prefix*]{} of $w$ (written as $w' {\preceq_{{\rm pref}}}w$), and $w''$ is a [*suffix*]{} of $w$ (written as $w'' {\preceq_{{\rm suff}}}w$). We say that $w'$ is a [*subsequence*]{} of $w$ (also called *subword embedding* or *scattered subword* in the literature, written as $w' {\sqsubseteq}w$) if $w'$ is obtained by removing some letters (perhaps none) from $w$, i.e., $w=a_1\ldots a_{n}$, and $w'=a_{i_1}a_{i_2}\ldots a_{i_k}$, where $1 \leq i_1 < i_2 < \ldots < i_k\leq n$. If $\Sigma \subset \Gamma$ and $w\in\Gamma^*$, then by $w_\Sigma$ we denote the projection of $w$ on $\Sigma$. That is, if $w=a_1\ldots a_n$ and $a_{i_1},\ldots, a_{i_k}$ are precisely the letters from $\Sigma$, with $i_1 < \ldots < i_k$, then $w_\Sigma=a_{i_1}\ldots a_{i_k}$. 
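The relations just defined are easy to test on concrete words. The following sketch (ours, purely illustrative; the function names are ad hoc) implements the subword (factor), prefix, suffix, and subsequence relations, together with the projection $w_\Sigma$:

```python
def is_subword(u, w):
    """u is a factor of w: w = w' . u . w'' for some w', w''."""
    return u in w  # Python's substring test is exactly the factor relation

def is_prefix(u, w):
    return w.startswith(u)

def is_suffix(u, w):
    return w.endswith(u)

def is_subsequence(u, w):
    """u is obtained from w by removing some (perhaps no) letters."""
    it = iter(w)
    # each letter of u must be found in w strictly after the previous match
    return all(any(c == x for x in it) for c in u)

def project(w, sigma):
    """The projection w_Sigma: keep only the letters of w lying in sigma."""
    return "".join(a for a in w if a in sigma)
```

For instance, `is_subsequence("bd", "abcd")` holds while `is_subword("bd", "abcd")` does not, matching the fact that ${\sqsubseteq}$ is coarser than ${\preceq}$.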
Recall that a [*monoid*]{} $M=\langle U,\cdot,1\rangle$ has an associative binary operation $\cdot$ and a neutral element $1$ satisfying $1x=x1=x$ for all $x$ (we often write $xy$ for $x\cdot y$). The set $\Sigma^*$ with the operation of concatenation and the neutral element ${\varepsilon}$ forms a [monoid]{} $\langle \Sigma^*,\cdot,{\varepsilon}\rangle$, the free monoid generated by $\Sigma$. A function $f: M\to M'$ between two monoids is a [*morphism*]{} if it sends the neutral element of $M$ to the neutral element of $M'$, and if $f(xy)=f(x)f(y)$ for all $x,y\in M$. Every morphism $f: \langle \Sigma^*,\cdot,{\varepsilon}\rangle \to M$ is uniquely determined by the values $f(a)$, for $a\in \Sigma$, as $f(a_1\ldots a_n)=f(a_1)\cdots f(a_n)$. A morphism $f: \langle \Sigma^*,\cdot,{\varepsilon}\rangle \to \langle \Gamma^*,\cdot,{\varepsilon}\rangle$ is called [*alphabetic*]{} if $f(a)\in \Gamma\cup\{{\varepsilon}\}$, and [*strictly alphabetic*]{} if $f(a)\in \Gamma$ for each $a\in \Sigma$, see [@berstel]. A language $L$ is a subset of $\Sigma^*$, for some finite alphabet $\Sigma$. It is [*recognizable*]{} if there is a finite monoid $M$, a morphism $f: \langle \Sigma^*,\cdot,{\varepsilon}\rangle \to M$, and a subset $M_0$ of $M$ such that $L=f^{-1}(M_0)$. A language $L$ is [*regular*]{} if there exists an NFA (non-deterministic finite automaton) ${{{\mathcal{A}}}}=\langle Q,\Sigma,q_0,\delta,F\rangle$ such that $L={\mathcal{L}({{{\mathcal{A}}}})}$, the language of words accepted by ${{{\mathcal{A}}}}$. We use the standard notation for NFAs, where $Q$ is the set of states, $q_0$ is the initial state, $F$ is the set of final states, and $\delta\subseteq Q\times\Sigma\times Q$ is the transition relation. A language is [*rational*]{} if it is denoted by a regular expression; such expressions are built from $\emptyset$, ${\varepsilon}$, and alphabet letters by using operations of concatenation ($e\cdot e'$), union ($e \cup e'$), and Kleene star ($e^*$). 
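As a toy illustration of recognizability (ours, not part of the development): the language of words over $\{a,b\}$ with an even number of $a$'s is $f^{-1}(\{0\})$ for the morphism $f$ into the two-element monoid $(\mathbb{Z}_2,+,0)$ determined by $f(a)=1$, $f(b)=0$.

```python
def op(x, y):
    """The operation of the finite monoid (Z_2, +) with neutral element 0."""
    return (x + y) % 2

F = {"a": 1, "b": 0}  # values on the generators determine the morphism

def f(w):
    """f(a_1 ... a_n) = f(a_1) * ... * f(a_n), with f(epsilon) = 0."""
    val = 0
    for letter in w:
        val = op(val, F[letter])
    return val

def in_L(w):
    """Membership in L = f^{-1}(M_0) for M_0 = {0}: an even number of a's."""
    return f(w) in {0}
```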
It is of course a classical result of formal language theory that the classes of recognizable, regular, and rational languages coincide. ### {#section-3 .unnumbered} While the notions of recognizability, regularity, and rationality coincide over languages $L\subseteq \Sigma^*$, they differ over [relations]{} over $\Sigma$, i.e., subsets of $\Sigma^* \times \ldots \times \Sigma^*$. We now define those (see [@berstel; @carton; @choffrut-bulletin; @oldstuff; @frenchreinvention; @thomas92]). Since $\langle \Sigma^*,\cdot,{\varepsilon}\rangle$ is a monoid, the product $(\Sigma^*)^n$ has the structure of a monoid too. We can thus define [*recognizable $n$-ary relations*]{} over $\Sigma$ as subsets $R\subseteq (\Sigma^*)^n$ so that there exists a finite monoid $M$ and a morphism $f: ({\Sigma^*})^n\to M$ such that $R=f^{-1}(M_0)$ for some $M_0\subseteq M$. The class of $n$-ary recognizable relations will be denoted by ${{\sf REC}}_n$; when $n$ is clear or irrelevant, we write just ${{\sf REC}}$. It is well-known that a relation $R \subseteq ({\Sigma^*})^n$ is in ${{\sf REC}}_n$ iff it is a finite union of the sets of the form $L_1\times\ldots\times L_n$, where each $L_i$ is a regular language over $\Sigma$, see [@berstel; @oldstuff]. Next, we define the class of regular relations. Let $\bot\not\in\Sigma$ be a new alphabet letter, and let ${\Sigma_\bot}$ be $\Sigma\cup\{\bot\}$. Each tuple $\bar w=(w_1,\ldots,w_n)$ of words from ${\Sigma^*}$ can be viewed as a word over ${\Sigma_\bot}^n$ as follows: pad words $w_i$ with $\bot$ so that they all are of the same length, and use as the $k$th symbol of the new word the $n$-tuple of the $k$th symbols of the padded words. Formally, let ${\ell}=\max_i |w_i|$. 
Then $w_1\otimes \ldots \otimes w_n$ is a word of length ${\ell}$ whose $k$th symbol is $(a_1,\ldots,a_n)\in {\Sigma_\bot}^n$ such that $$a_i = \begin{cases} \text{the }k\text{th letter of }w_i & \text{ if }|w_i| \geq k \\ \bot & \text{ otherwise.}\end{cases}$$ We shall also write $\otimes\bar w$ for $w_1\otimes \ldots \otimes w_n$. We define ${\pi}_i (u_1 \otimes \dotsb \otimes u_k) = u_i$ for all $i \in [k]$. A relation $R\subseteq (\Sigma^*)^n$ is called a [*regular $n$-ary relation*]{} over $\Sigma$ if there is a finite automaton ${{{\mathcal{A}}}}$ over ${{\Sigma_\bot}}^n$ that accepts $\{\otimes\bar w\ | \ \bar w\in R\}$. The class of $n$-ary regular relations is denoted by ${{\sf REG}}_n$; as before, we write ${{\sf REG}}$ when $n$ is clear or irrelevant. Finally, we define rational relations. There are two equivalent ways of doing it. One uses regular expressions, which are now built from tuples $\bar a \in (\Sigma\cup\{{\varepsilon}\})^n$ using the same operations of union, concatenation, and Kleene star. Binary relations ${\preceq_{{\rm suff}}}$, ${\preceq}$, and ${\sqsubseteq}$ are all rational: the expression $\big(\bigcup_{a\in\Sigma}({\varepsilon},a)\big)^*\cdot \big(\bigcup_{a\in\Sigma}(a,a)\big)^*$ defines ${\preceq_{{\rm suff}}}$, the expression $\big(\bigcup_{a\in\Sigma}({\varepsilon},a)\big)^*\cdot \big(\bigcup_{a\in\Sigma}(a,a)\big)^* \cdot \big(\bigcup_{a\in\Sigma}({\varepsilon},a)\big)^*$ defines ${\preceq}$, and the expression $\big(\bigcup_{a\in\Sigma}({\varepsilon},a) \cup (a,a)\big)^*$ defines ${\sqsubseteq}$. Alternatively, $n$-ary rational relations can be defined by means of $n$-tape automata, that have $n$ heads for the tapes and one additional control; at every step, based on the state and the letters it is reading, the automaton can enter a new state and move some (but not necessarily all) tape heads. 
The classes of $n$-ary relations so defined are called [ *rational $n$-ary relations*]{}; we use the notation ${{\sf RAT}}_n$ or just ${{\sf RAT}}$, as before. ### {#section-4 .unnumbered} While it is well known that ${{\sf REC}}_1={{\sf REG}}_1={{\sf RAT}}_1$, we have strict inclusions $${{\sf REC}}_k \ \subsetneq\ {{\sf REG}}_k\ \subsetneq\ {{\sf RAT}}_k$$ for every $k>1$ (see for example [@berstel]). For instance, ${{\preceq_{{\rm pref}}}} \in {{\sf REG}}_2 - {{\sf REC}}_2$ and ${{\preceq_{{\rm suff}}}}\in{{\sf RAT}}_2-{{\sf REG}}_2$. The classes of recognizable and regular relations are closed under intersection; however the class of rational relations is not. In fact, one can find $R\in{{\sf REG}}_2$ and $S\in{{\sf RAT}}_2$ so that $R\cap S\not\in{{\sf RAT}}_2$. However, if $R\in{{\sf REC}}_m$ and $S\in{{\sf RAT}}_m$, then $R\cap S\in{{\sf RAT}}_m$. Binary rational relations can be characterized as follows [@berstel; @nivat68]. A relation $R\subseteq \Sigma^*\times\Sigma^*$ is rational iff there is a finite alphabet $\Gamma$, a regular language $L\subseteq \Gamma^*$ and two alphabetic morphisms $f,g: \Gamma^*\to\Sigma^*$ such that $R=\{(f(w),g(w)) \ | \ w\in L\}$. If we require $f$ and $g$ to be strictly alphabetic morphisms, we get the class of [*length-preserving*]{} regular relations, i.e., $R\in{{\sf REG}}_2$ so that $(w,w')\in R$ implies $|w|=|w'|$. Regular binary relations are then finite unions of relations of the form $\{(w\cdot u,w')\ | \ (w,w')\in R, \ u\in L\}$ and $\{(w,w'\cdot u)\ | \ (w,w')\in R, \ u\in L\}$, where $R$ ranges over length-preserving regular relations, and $L$ over regular languages. ### {#section-5 .unnumbered} Since relations in ${{\sf REC}}$ and ${{\sf REG}}$ are given by NFAs, they inherit all the closure/decidability properties of regular languages. If $R\in{{\sf RAT}}$, then each of its projections is a regular language, and can be effectively constructed (e.g., from the description of $R$ as an $n$-tape automaton). 
Hence, the nonemptiness problem is decidable for rational relations. However, testing nonemptiness of the intersection of two rational relations is undecidable [@berstel]. Also, for $R, R' \in {{\sf RAT}}$, the following are undecidable: checking whether $R\subseteq R'$ or $R=R'$, universality ($R={\Sigma^*}\times{\Sigma^*}$), and checking whether $R\in{{\sf REG}}$ or $R\in{{\sf REC}}$ [@berstel; @carton; @lisovik]. ### Remark {#remark .unnumbered} We defined recognizable, regular, and rational relations over the same alphabet, i.e., as subsets of $({\Sigma^*})^n$. Of course it is possible to define them as subsets of $\Sigma_1 \times \ldots \times \Sigma_n$, with the $\Sigma_i$’s not necessarily distinct. Technically, there are no differences and all the results will continue to hold. Indeed, one can simply consider a new alphabet $\Sigma$ as the disjoint union of $\Sigma_i$’s, and enforce the condition that the $i$th projection only use the letters from $\Sigma_i$ (this is possible for all the classes of relations we consider). In fact, in the proofs we shall be using both types of relations. ### {#section-6 .unnumbered} A well-quasi-order ${\leq} \subseteq A \times A$ is a reflexive and transitive relation such that for every infinite sequence $(a_i)_{i \in {\mathbb{N}}}$ over $A$ there are $i<j$ with $a_i \leq a_j$. We will make use of the following two lemmas. For every alphabet $\Sigma$, the subsequence relation ${{\sqsubseteq}} \subseteq \Sigma^* \times \Sigma^*$ is a well quasi-order. For every well-quasi-order ${\leq} \subseteq A \times A$, the product order ${\leq^k} \subseteq A^k \times A^k$ (where $(a_1, \dotsc, a_k) \leq^k (a'_1, \dotsc, a'_k)$ if[f]{} $a_i \leq a'_i$ for all $i\in [k]$) is a well-quasi-order. Generalized intersection problem {#sec:gip} ================================ We now formalize the main technical problem we study. 
Let ${{{\mathcal{R}}}}$ be a class of relations over $\Sigma$, and ${{{\mathcal{S}}}}$ a class of binary relations over $\Sigma$. We use the notation $[m]$ for $\{1,\ldots,m\}$. If $R$ is an $m$-ary relation, $S$ is a binary relation, and $I \subseteq [m]^2$, we write $R \cap_I S$ for the set of tuples $(w_1,\ldots, w_m)$ in $R$ such that $(w_i,w_j)\in S$ whenever $(i,j)\in I$. The [*generalized intersection problem*]{} ${({{{{\mathcal{R}}}}} \mathrel{\cap_{I}} {{{{\mathcal{S}}}}}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is defined as follows: given an $m$-ary relation $R\in{{{\mathcal{R}}}}$, a binary relation $S\in{{{\mathcal{S}}}}$, and a set $I\subseteq[m]^2$, decide whether $R\cap_I S\neq\emptyset$. If ${{{\mathcal{S}}}}=\{S\}$, we write $S$ instead of $\{S\}$. We write ${\text{\sc GenInt}}_S({{{\mathcal{R}}}})$ for the class of all problems ${({{{{\mathcal{R}}}}} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ where $S$ is fixed, i.e., the input consists of $R\in{{{\mathcal{R}}}}$ and $I$. As was explained in the introduction, this problem captures the essence of evaluating queries in various graph logics, e.g., [[CRPQ]{}]{}s or [[ECRPQ]{}]{}s extended with rational relations $S$. The classes ${{{\mathcal{R}}}}$ will typically be ${{\sf REC}}$ and ${{\sf REG}}$. If $m=2$ and $I=\{(1,2)\}$, the generalized intersection problem becomes simply the [*intersection problem*]{} for the classes ${{{\mathcal{R}}}}$ and ${{{\mathcal{S}}}}$ of binary relations: given $R\in{{{\mathcal{R}}}}$ and $S\in{{{\mathcal{S}}}}$, decide whether $R\cap S\neq\emptyset$. The problem ${({{{\sf REC}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable for every rational relation $S$, simply by constructing $R\cap S$, which is a rational relation, and testing its nonemptiness. However, ${({{{\sf REG}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ could already be undecidable (we shall give one particularly simple example later). 
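Over explicitly listed finite relations, the definition of $R\cap_I S$ can be transcribed directly. The following sketch (ours) is only meant to pin down the semantics; the problem proper, of course, concerns automaton-presented and generally infinite relations.

```python
def intersect_I(R, S, I):
    """R: finite set of m-tuples of words; S: finite set of word pairs;
    I: set of index pairs over [m] (1-based, as in the text).
    Returns the set of tuples of R whose I-designated pairs all lie in S."""
    return {
        tup for tup in R
        if all((tup[i - 1], tup[j - 1]) in S for (i, j) in I)
    }
```

For $m=2$ and $I=\{(1,2)\}$ this computes the ordinary intersection of $R$ with $S$.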
Graph logics and the generalized intersection problem {#gl:sec} ===================================================== In this section we show how the (generalized) intersection problems provide us with upper and lower bounds on the complexity of evaluating a variety of logical queries over graphs. We start by recalling the basic classes of logics used in querying graph data, and show that extending them with rational relations allows us to cast the query evaluation problem as an instance of the generalized intersection problem. The key observations are that: [$\bullet$]{} the complexity of ${\text{\sc GenInt}}_S({{\sf REC}})$ and ${({{{\sf REC}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ provide an upper and a lower bound for the complexity of evaluating [[CRPQ]{}]{}($S$) queries; and for [[ECRPQ]{}]{}($S$), these bounds are provided by the complexity of ${\text{\sc GenInt}}_S({{\sf REG}})$ and of ${({{{\sf REG}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. The standard abstraction of graph databases [@AG-survey] is finite $\Sigma$-labeled graphs $G=\langle V,E\rangle$, where $V$ is a finite set of nodes, or vertices, and $E\subseteq V \times \Sigma \times V$ is a set of labeled edges. A [*path*]{} $\rho$ from $v_0$ to $v_m$ in $G$ is a sequence of edges $(v_0,a_0,v_1)$, $(v_1,a_1,v_2), \cdots, (v_{m-1},a_{m-1},v_m)$ from $E$, for some $m \geq 0$. The [*label*]{} of $\rho$, denoted by $\lambda(\rho)$, is the word $a_0 \cdots a_{m-1} \in \Sigma^*$. The main building blocks for graph queries are [*regular path queries*]{}, or [*RPQ*]{}s [@CMW87]; they are expressions of the form $x {\stackrel{L}{\to}} y$, where $L$ is a regular language. We normally assume that $L$ is represented by a regular expression or an NFA. Given a $\Sigma$-labeled graph $G=\langle V,E\rangle$, the answer to an RPQ above is the set of pairs of nodes $(v,v')$ such that there is a path $\rho$ from $v$ to $v'$ with $\lambda(\rho)\in L$. 
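The answer to an RPQ can be computed by the standard product construction: view $G$ as an NFA and intersect it on the fly with an NFA for $L$, checking reachability in the product. The sketch below (ours; all names are illustrative) decides whether some path from $v$ to $v'$ has its label in the language of the given NFA.

```python
from collections import deque

def eval_rpq(edges, nfa_delta, nfa_init, nfa_final, v, v_prime):
    """edges: set of labeled graph edges (u, a, u'); nfa_delta: set of
    NFA transitions (q, a, q'). Returns True iff some path from v to
    v_prime has its label accepted by the NFA (BFS over the product)."""
    start = (v, nfa_init)
    seen, queue = {start}, deque([start])
    while queue:
        node, state = queue.popleft()
        if node == v_prime and state in nfa_final:
            return True
        for (u, a, u2) in edges:
            if u != node:
                continue
            for (q, b, q2) in nfa_delta:
                if q == state and b == a and (u2, q2) not in seen:
                    seen.add((u2, q2))
                    queue.append((u2, q2))
    return False
```

Since the product has $|V|\cdot|Q|$ states and the search keeps only one state at a time on the frontier, this also reflects the [[NLogSpace]{}]{} data complexity of RPQs mentioned earlier.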
[*Conjunctive RPQs*]{}, or [*[[CRPQ]{}]{}s*]{} [@CGLV00; @CGLV00b; @CM90] are the closure of RPQs under conjunction and existential quantification. Formally, they are expressions of the form $$\label{crpq-eq} {\varphi}(\bar x) \ \ =\ \ \exists \bar y\ \bigwedge_{i=1}^m (u_i {\stackrel{L_i}{\longrightarrow}} u_i')$$ where the variables $u_i,u_i'$ come from $\bar x, \bar y$. The semantics naturally extends the semantics of RPQs: ${\varphi}(\bar a)$ is true in $G$ iff there is a tuple $\bar b$ of nodes such that, for every $i\leq m$, there is a path $\rho_i$ between the nodes $v_i$ and $v_i'$ interpreting $u_i$ and $u_i'$, respectively, whose label $\lambda(\rho_i)$ is in $L_i$. [[CRPQ]{}]{}s can further be extended to [*compare*]{} paths. For that, we need to name path variables, and choose a class of allowed relations on paths. The simplest such extension is the class of [[CRPQ]{}]{}$(S)$ queries, where $S$ is a binary relation over $\Sigma^*$. Its formulae are of the form $$\label{crpqs-eq} {\varphi}(\bar x) \ \ =\ \ \exists \bar y\ \Big( \bigwedge_{i=1}^m (u_i {\stackrel{\chi_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\chi_i,\chi_j)\Big)$$ where $I\subseteq [m]^2$. We use variables $\chi_1,\ldots,\chi_m$ to denote paths; these are quantified existentially. That is, the semantics of $G\models {\varphi}(\bar a)$ is that there is a tuple $\bar b$ of nodes and paths $\rho_k$, for $k\leq m$, between $v_k$ and $v_k'$ (where, as before, $v_k,v_k'$ are elements of $\bar a, \bar b$ interpreting $u_k,u_k'$) such that $(\lambda(\rho_i),\lambda(\rho_j))\in S$ whenever $(i,j)\in I$. For instance, the query $$\exists y, y'\ \big( (x {\stackrel{\chi:\Sigma^*a}{\longrightarrow}} y) \wedge (x{\stackrel{\chi':\Sigma^*b}{\longrightarrow}} y') \wedge \chi {\sqsubseteq}\chi'\big)$$ finds nodes $v$ so that there are two paths starting from $v$, one ending with an $a$-edge and one ending with a $b$-edge, such that the label of the former is a subsequence of the label of the latter. 
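To make the semantics of [[CRPQ]{}]{}($S$) concrete, the example query above can be evaluated by brute force on small graphs by enumerating path labels up to a length bound. This is a sketch of ours with ad hoc names; actual query evaluation, of course, does not enumerate paths.

```python
from collections import deque

def labels_from(edges, v, max_len):
    """All labels of paths from v of length at most max_len (brute force)."""
    out, queue = set(), deque([(v, "")])
    while queue:
        node, lab = queue.popleft()
        out.add((node, lab))
        if len(lab) < max_len:
            for (u, a, u2) in edges:
                if u == node:
                    queue.append((u2, lab + a))
    return out

def is_subsequence(u, w):
    it = iter(w)
    return all(any(c == x for x in it) for c in u)

def example_query(edges, v, max_len=6):
    """The example CRPQ(subsequence) query at node v: is there a path from v
    with label in Sigma*a that is a subsequence of the label of a path from v
    with label in Sigma*b? (Only paths of length <= max_len are considered.)"""
    labs = [lab for (_, lab) in labels_from(edges, v, max_len)]
    return any(
        w.endswith("a") and w2.endswith("b") and is_subsequence(w, w2)
        for w in labs for w2 in labs
    )
```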
The input to the [*query evaluation problem*]{} consists of a graph $G$, a tuple $\bar v$ of nodes, and a query ${\varphi}(\bar x)$; the question is whether $G\models{\varphi}(\bar v)$. This corresponds to the [*combined complexity*]{} of query evaluation. In the context of query evaluation, one is often interested in [*data complexity*]{}, when the typically small formula ${\varphi}$ is fixed, and the input consists of the typically large graph $(G,\bar v)$. We now relate it to the complexity of ${\text{\sc GenInt}}_S({{\sf REC}})$. \[crpqs-lemma-one\] Fix a [[CRPQ]{}]{}($S$) query ${\varphi}$ as in (\[crpqs-eq\]). Then there is a [[DLogSpace]{}]{} algorithm that, given a graph $G$ and a tuple $\bar v$ of nodes, constructs an $m$-ary relation $R \in {{\sf REC}}$ so that the answer to the generalized intersection problem ${({R} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is ‘yes’ iff $G\models{\varphi}(\bar v)$. Given a $\Sigma$-labeled graph $G=\langle V, E\rangle$ and two nodes $v,v'$, we write ${{{\mathcal{A}}}}(G,v,v')$ for $G$ viewed as an NFA with the initial state $v$ and the final state $v'$ (that is, the set of states is $V$, the transition relation is $E$, and the alphabet is $\Sigma$). The language of such an automaton, ${{{\mathcal{L}}}}({{{\mathcal{A}}}}(G,v,v'))$, is the set of labels of all paths between $v$ and $v'$. Now consider a [[CRPQ]{}]{}($S$) query ${\varphi}(\bar x)$ given by $$\exists \bar y\ \Big( \bigwedge_{i=1}^m (u_i {\stackrel{\chi_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\chi_i,\chi_j)\Big),$$ as in (\[crpqs-eq\]). Suppose we are given a graph $G$ as above and a tuple of nodes $\bar v$, of the same length as the length of $\bar x$. The [[DLogSpace]{}]{} algorithm works as follows. First we enumerate all tuples $\bar b$ of nodes of $G$ of the same length as $\bar y$; since ${\varphi}$ is fixed, this can be done in [[DLogSpace]{}]{}. 
For each $\bar b$, we construct an $m$-ary relation $R_{\bar b}$ in ${{\sf REC}}$ as follows. Let $n_i$ and $n_i'$ be the interpretations of $u_i$ and $u_i'$, when $\bar x$ is interpreted as $\bar v$ and $\bar y$ as $\bar b$. Then $$R_{\bar b} \ = \ \prod_{i=1}^m ({{{\mathcal{L}}}}({{{\mathcal{A}}}}(G,n_i,n_i')) \cap L_i).$$ Note that it can be constructed in [[DLogSpace]{}]{}; indeed each coordinate of $R_{\bar b}$ is simply a product of the automaton ${{{\mathcal{A}}}}(G,n_i,n_i')$ and a fixed automaton defining $L_i$. Next, let $R=\bigcup_{\bar b} R_{\bar b}$. This is constructed in [[DLogSpace]{}]{} too. Now it follows immediately from the construction that $R\cap_I S\neq \emptyset$ iff for some $\bar b$, there exist paths $\rho_i$ between $n_i,n_i'$, for $i\leq m$, such that $(\lambda(\rho_l),\lambda(\rho_j))\in S$ whenever $(l,j)\in I$, i.e., iff $G\models {\varphi}(\bar v)$. Conversely, the intersection problem for recognizable relations and $S$ can be encoded as answering [[CRPQ]{}]{}($S$) queries. \[crpqs-lemma-two\] For any given binary relation $S$, there is a [[CRPQ]{}]{}($S$) query ${\varphi}(x,x')$ and a [[DLogSpace]{}]{} algorithm that, given a relation $R\in{{\sf REC}}_2$, constructs a graph $G$ and two nodes $v,v'$ so that $G\models{\varphi}(v,v')$ iff $R\cap S\neq \emptyset$. Let $R$ be in ${{\sf REC}}_2$. It is given as $\bigcup_{i=1}^n (L_i \times K_i)$, where the $L_i$s and the $K_i$s are regular languages over $\Sigma$. These languages are given by their NFAs which we can view as $\Sigma$-labeled graphs. Let $\langle V_i, E_i\rangle$ be the underlying graph of the NFA defining $L_i$, such that $v^i_0$ is the initial state, and $F_i$ is the set of final states. Likewise, let $\langle W_i, H_i\rangle$ be the underlying graph of the NFA defining $K_i$, such that $w^i_0$ is the initial state, and $C_i$ is the set of final states. We now construct the graph $G$. Its labeling alphabet is the union of $\Sigma$ and $\{\#, \$, !\}$. 
Its set of vertices is the disjoint union of all the $V_i$s, $W_i$s, as well as two distinguished nodes [*start*]{} and [*end*]{}. Its edges include all the edges from $E_i$s and $H_i$s, and the following: [$\bullet$]{} $\#$-labeled edges from [*start*]{} to each initial state, i.e., to each $v_i^0$ and $w_i^0$ for all $i \leq n$. $\$$-labeled edges between the initial states of automata with the same index, i.e., edges $(v^i_0,\$,w^i_0)$ for all $i \leq n$. $!$-labeled edges from final states to [*end*]{}, i.e., edges $(v,!,\text{\em end})$, where $v \in \bigcup_{i\leq n} F_i \cup \bigcup_{i \leq n} C_i$. We now define a [[CRPQ]{}]{}($S$) query ${\varphi}(x,y)$ (omitting path variables for paths that are not used in comparisons): $$\exists x_1, x_2, z_1, z_2\ \left( \begin{array}{cccc} & x {\stackrel{\#}{\to}} x_1 & \wedge & x {\stackrel{\#}{\to}} x_2\\ \wedge & x_1 {\stackrel{\$}{\to}} x_2 & & \\ \wedge & x_1 {\stackrel{\chi:{\Sigma^*}}{\to}} z_1 & \wedge & x_2 {\stackrel{\chi':{\Sigma^*}}{\to}} z_2\\ \wedge & z_1 {\stackrel{!}{\to}} y & \wedge & z_2 {\stackrel{!}{\to}} y\\ \wedge & S(\chi,\chi') \end{array} \right)$$ The query says that from [*start*]{}, we have $\#$-edges to the initial states $v^i_0$ and $w^i_0$: they must have the same index since there is a $\$$-edge between them. From there we have two paths, $\rho$ and $\rho'$, corresponding to the variables $\chi$ and $\chi'$, which are $\Sigma$-labeled, and thus are paths in the automata for $L_i$ and $K_i$, respectively. From the end nodes of those paths we have $!$-edges to [*end*]{}, so they must be final states; in particular, $\lambda(\rho)\in L_i$ and $\lambda(\rho')\in K_i$. We finally require $(\lambda(\rho),\lambda(\rho'))\in S$, i.e., $(\lambda(\rho),\lambda(\rho'))\in (L_i\times K_i)\cap S$. Hence, if $G\models{\varphi}(\text{\em start},\text{\em end})$ then for some $i\leq n$ we have two words $(w,w')$ that belong to $(L_i\times K_i)\cap S$, i.e., $R\cap S\neq \emptyset$. 
Conversely, if $R\cap S\neq \emptyset$, then $(L_i\times K_i)\cap S\neq \emptyset$ for some $i\leq n$, and the witnessing paths of the nonemptiness of $(L_i\times K_i)\cap S$ will witness the formula ${\varphi}(\text{\em start},\text{\em end})$ (together with initial states of the automata of $L_i$ and $K_i$ and some of their final states). Combining the lemmas, we obtain: \[crpq-thm\] Let ${{{\mathcal{K}}}}$ be a complexity class closed under [[DLogSpace]{}]{} reductions. Then: 1. If the problem ${\text{\sc GenInt}}_S({{\sf REC}})$ is in ${{{\mathcal{K}}}}$, then data complexity of [[CRPQ]{}]{}($S$) queries is in ${{{\mathcal{K}}}}$; and 2. If the problem ${({{{\sf REC}}} \cap {S}) \stackrel{\text{\tiny ?}}{=}\emptyset}$ is hard for ${{{\mathcal{K}}}}$, then so is data complexity of [[CRPQ]{}]{}($S$) queries. We now consider [*extended CRPQs*]{}, or [*ECRPQs*]{}, which enhance CRPQs with regular relations [@pods10], and prove a similar result for them, with the role of ${{\sf REC}}$ now played by ${{\sf REG}}$. Formally, [[ECRPQ]{}]{}s are expressions of the form $$\label{ecrpq-eq} {\varphi}(\bar x) \ \ =\ \ \exists \bar y\ \Big(\bigwedge_{i=1}^m (u_i {\stackrel{\chi_i:L_i}{\longrightarrow}} u_i') \ \ \wedge\ \ \bigwedge_{j=1}^k R_j(\bar\chi_j)\Big)$$ where each $R_j$ is a relation from ${{\sf REG}}$, and $\bar\chi_j$ a tuple from $\chi_1,\ldots,\chi_m$ of the same arity as $R_j$. The semantics of course extends the semantics of [[CRPQ]{}]{}s: the witnessing paths $\rho_1,\ldots,\rho_m$ should also satisfy the condition that for every atom $R(\rho_{i_1},\ldots,\rho_{i_l})$ in (\[ecrpq-eq\]), the tuple $(\lambda(\rho_{i_1}),\ldots,\lambda(\rho_{i_l}))$ is in $R$. 
Finally, we obtain [[ECRPQ]{}]{}($S$) queries by adding comparisons with respect to a relation $S\in{{\sf RAT}}$, getting a class of queries ${\varphi}(\bar x)$ of the form $$\label{ecrpqs-eq} \exists \bar y\ \Big(\!\bigwedge_{i=1}^m (u_i {\stackrel{\chi_i:L_i}{\longrightarrow}} u_i') \wedge \bigwedge_{j=1}^k R_j(\bar\chi_j) \wedge\! \bigwedge_{(i,j)\in I} S(\chi_i,\chi_j)\Big)$$ Similarly to the case of [[CRPQ]{}]{}s, we can establish a connection between data complexity of [[ECRPQ]{}]{}($S$) queries and the complexity of the generalized intersection problem: \[ecrpq-thm\] Let ${{{\mathcal{K}}}}$ be a complexity class closed under [[DLogSpace]{}]{} reductions. Then: 1. If the problem ${\text{\sc GenInt}}_S({{\sf REG}})$ is in ${{{\mathcal{K}}}}$, then data complexity of [[ECRPQ]{}]{}($S$) queries is in ${{{\mathcal{K}}}}$; and 2. If the problem ${({{{\sf REG}}} \cap {S}) \stackrel{\text{\tiny ?}}{=}\emptyset}$ is hard for ${{{\mathcal{K}}}}$, then so is data complexity of [[ECRPQ]{}]{}($S$) queries. Similarly to the proof of Theorem \[crpq-thm\], the result will be an immediate consequence of two lemmas. First, evaluation of [[ECRPQ]{}]{}($S$) queries is reducible to the generalized intersection problem for regular relations. \[ecrpqs-lemma-one\] Fix an [[ECRPQ]{}]{}($S$) query ${\varphi}$ as in (\[ecrpqs-eq\]). Then there is a [[DLogSpace]{}]{} algorithm that, given a graph $G$ and a tuple $\bar v$ of nodes, constructs an $m$-ary relation $R \in {{\sf REG}}$ so that the answer to the generalized intersection problem ${({R} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is ‘yes’ iff $G\models{\varphi}(\bar v)$. Conversely, the intersection problem for regular relations and $S$ can be encoded as answering [[ECRPQ]{}]{}($S$) queries. 
\[ecrpqs-lemma-two\] For each binary relation $S$, there is an [[ECRPQ]{}]{}($S$) query ${\varphi}(x,x')$ and a [[DLogSpace]{}]{} algorithm that, given a relation $R\in{{\sf REG}}_2$, constructs a graph $G$ and two nodes $v,v'$ so that $G\models{\varphi}(v,v')$ iff $(R\cap S)\neq \emptyset$. The proof of Lemma \[ecrpqs-lemma-one\] is almost the same as the proof of Lemma \[crpqs-lemma-one\]: as before, we enumerate tuples $\bar b$, construct relations $R_{\bar b}$ and $R=\bigcup_{\bar b} R_{\bar b}$, but this time we take the product of this recognizable relation with regular relations mentioned in the query. Since the query is fixed, and hence we take a product with a fixed number of fixed automata, such a product construction can be done in [[DLogSpace]{}]{}. The result is now a regular $m$-ary relation. The rest of the proof is exactly the same as in Lemma \[crpqs-lemma-one\]. We now prove Lemma \[ecrpqs-lemma-two\]. Let $R\in{{\sf REG}}_2$ be given by an NFA over ${{\Sigma_\bot}}\times{{\Sigma_\bot}}$ whose underlying graph is $G_R=\langle V_R, E_R\rangle$, where $E_R\subseteq V_R \times ({{\Sigma_\bot}}\times{{\Sigma_\bot}}) \times V_R$. Let $v_0$ be its initial state, and let $F$ be the set of final states. We now define the graph $G$. Its labeling alphabet $\Gamma$ is the disjoint union of ${{\Sigma_\bot}}\times{{\Sigma_\bot}}$, the alphabet $\Sigma$ itself, and a new symbol $\#$. Its nodes $V$ include all nodes in $V_R$ and two extra nodes, $v_f$ and $v'$. The edges are: [$\bullet$]{} all the edges in $E_R$; edges $(v,\#,v_f)$ for every $v\in F$; edges $(v',a,v')$ for every $a\in\Sigma$. We now define two regular relations over $\Gamma$. The first, $R_1$, consists of pairs $(w, w')$, where $w \in ({{\Sigma_\bot}}\times{{\Sigma_\bot}})^*$ and $w'\in {\Sigma^*}$. Furthermore, $w$ is of the form $w' \otimes w''$ for some $w''\in{\Sigma^*}$. It is straightforward to check that this relation is regular. 
The second one, $R_2$, is the same except $w$ is of the form $w''\otimes w'$. In other words, the first component is $w_1\otimes w_2$, and the second is either $w_1$ or $w_2$, for $R_1$ or $R_2$, respectively. Next, we define the [[ECRPQ]{}]{}($S$) query ${\varphi}(x,y)$: $$\exists x_1, y_1, x_2, y_2, z\ \left( \begin{array}{cccccc} & x {\stackrel{\chi: {{\Sigma_\bot}}\times{{\Sigma_\bot}}}{\to}} z & \wedge & z {\stackrel{\#}{\to}} y && \\ \wedge & x_1 {\stackrel{\chi_1:{\Sigma^*}}{\to}} y_1 & \wedge & x_2 {\stackrel{\chi_2:{\Sigma^*}}{\to}} y_2 &&\\ \wedge & R_1(\chi,\chi_1) & \wedge & R_2(\chi,\chi_2) & \wedge & S(\chi_1,\chi_2) \end{array} \right)$$ Note that when this formula is evaluated over $G$, with $x$ interpreted as $v_0$ and $y$ interpreted as $v_f$, the paths $\chi_1$ and $\chi_2$ can have arbitrary labels from $\Sigma^*$. Paths $\chi$ can have arbitrary labels over ${{\Sigma_\bot}}\times{{\Sigma_\bot}}$; however, since they start in $v_0$ and must be followed by a $\#$-edge, they end in a final state of the automaton for $R$, and hence labels of these paths are precisely words in ${{\Sigma_\bot}}\times{{\Sigma_\bot}}$ of the form $w_1\otimes w_2$, where $(w_1,w_2)\in R$. Now $R_1$ ensures that the label of $\chi_1$ is $w_1$, and $R_2$ ensures that the label of $\chi_2$ is $w_2$. Hence the labels of $\chi_1$ and $\chi_2$ are precisely the pairs of words in $R$, and the query asks whether such a pair belongs to $S$. Therefore, $G\models {\varphi}(v_0,v_f)$ iff $R\cap S\neq \emptyset$. It is straightforward to check that the construction of $G$ can be carried out in [[DLogSpace]{}]{}. This proves the lemma and the theorem. Thus, our next goal is to understand the behavior of the generalized intersection problem for various rational relations $S$ that are of interest in graph logics; these include the subword, suffix, and subsequence relations. In fact, to rule out many undecidable or infeasible cases it is often sufficient to analyze the intersection problem.
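The pieces of this construction are easy to make concrete. The following Python sketch (entirely ours; the tuple representation of the NFA is an assumption made for illustration) shows the $\otimes$-encoding of word pairs that the relations $R_1$ and $R_2$ unpack, and the construction of the graph $G$ from the automaton for $R$:

```python
# Sketch of the construction in the proof of Lemma [ecrpqs-lemma-two].
# An NFA for a regular relation R is given by (states, edges, finals),
# with edges labelled by pairs over Sigma extended with a padding symbol.

BOT = "_"  # plays the role of the padding symbol written as bottom in the text

def convolve(w1, w2):
    """w1 (x) w2: pad the shorter word with BOT and zip position-wise."""
    n = max(len(w1), len(w2))
    return list(zip(w1.ljust(n, BOT), w2.ljust(n, BOT)))

def proj(word_of_pairs, component):
    """Undo the (x)-encoding: the role played by R_1 (component=0)
    and R_2 (component=1) in the query."""
    return "".join(p[component] for p in word_of_pairs if p[component] != BOT)

def build_graph(states, edges, finals, sigma):
    """The graph G: the NFA states plus fresh nodes v_f and v'; the NFA
    edges, a '#'-edge from every final state to v_f, and a-loops on v'."""
    v_f, v_loop = "v_f", "v'"
    nodes = set(states) | {v_f, v_loop}
    g_edges = set(edges)                              # all edges of the NFA
    g_edges |= {(q, "#", v_f) for q in finals}        # final states -> v_f
    g_edges |= {(v_loop, a, v_loop) for a in sigma}   # loops reading Sigma
    return nodes, g_edges
```

For instance, `convolve("ab", "abba")` is a word over pairs whose two projections, recovered by `proj`, are the original words again, which is exactly the interplay between the path labeled $\chi$ and the paths labeled $\chi_1,\chi_2$ in the query.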
We do this in the next section, and then analyze the decidable cases to come up with graph logics that can be extended with rational relations. The intersection problem: decidable and undecidable cases {#intp:sec} ========================================================= We now study the problem ${({{{\sf REG}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ for binary rational relations $S$ such as subword and subsequence, and for classes of relations generalizing them. The input is a binary regular relation $R$ over $\Sigma$, given by an [NFA]{} over ${{\Sigma_\bot}}\times{{\Sigma_\bot}}$. The question is whether $R\cap S\neq\emptyset$. We also derive results about the complexity of [[ECRPQ]{}]{}($S$) queries. For all lower-bound results in this section, we assume that the alphabet contains at least two symbols. As already mentioned, there exist rational relations $S$ such that ${({{{\sf REG}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is undecidable. However, we are interested in relations that are useful in graph querying and are among the most commonly used rational relations; for these, the status of the problem was unknown. Note that the problem ${({{{\sf REC}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is tractable: given $R\in{{\sf REC}}$, the relation $R\cap S$ is rational, can be efficiently constructed, and checked for nonemptiness. Undecidable cases: subword and relatives {#subword-subsec} ---------------------------------------- We now show that even for such simple relations as subword and suffix, the intersection problem is undecidable. That is, given an NFA over ${{\Sigma_\bot}}\times{{\Sigma_\bot}}$ defining a regular relation $R$, the problem of checking for the existence of a pair $(w,w')\in R$ with $w {\preceq_{{\rm suff}}}w'$ or $w{\preceq}w'$ is undecidable. \[suffix-thm\] The problems ${({{{\sf REG}}} \cap {\mbox{${\preceq_{{\rm suff}}}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ and ${({{{\sf REG}}} \cap {\mbox{${\preceq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ are undecidable.
As an immediate consequence of this, we obtain: The query evaluation problem for [[ECRPQ]{}]{}(${\preceq_{{\rm suff}}}$) and [[ECRPQ]{}]{}(${\preceq}$) is undecidable. Thus, some of the most commonly used rational relations cannot be added to [[ECRPQ]{}]{}s without imposing further restrictions. We skip the proof of Theorem \[suffix-thm\] for the time being and concentrate first on how to obtain a more general undecidability result out of it. As we will see below, the essence of the undecidability result is that relations such as ${\preceq_{{\rm suff}}}$ and ${\preceq}$ can be decomposed in such a way that one of the components of the decomposition is the graph of a nontrivial strictly alphabetic morphism. More precisely, let $R\cdot R'$ be the binary relation $\{(w\cdot w', u\cdot u') \ | \ (w,u)\in R \text{ and }(w',u')\in R'\}$. Let $\text{Graph}(f)$ be the graph of a function $f:\Sigma^*\to\Sigma^*$, i.e., $\{(w,f(w))\ |\ w\in{\Sigma^*}\}$. \[gen-subw-cor\] Let $R_0,R_1$ be binary relations on $\Sigma$ such that $R_0$ is recognizable and its second projection is $\Sigma^*$. Let $f$ be a strictly alphabetic morphism that is not constant (i.e., the image of $f$ contains at least two letters). Then, for $S=R_0\cdot\text{\rm Graph}(f)\cdot R_1$, the problem ${({{{\sf REG}}} \cap {S})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is undecidable. Note that both ${\preceq_{{\rm suff}}}$ and ${\preceq}$ are of the required shape: suffix is $(\{{\varepsilon}\}\times\Sigma^*) \cdot \text{Graph(id)} \cdot (\{{\varepsilon}\}\times\{{\varepsilon}\})$, and subword is $(\{{\varepsilon}\}\times\Sigma^*) \cdot \text{Graph(id)} \cdot (\{{\varepsilon}\}\times\Sigma^*)$, where id is the identity alphabetic morphism. We present the proof for the suffix relation ${\preceq_{{\rm suff}}}$.
The proofs for the subword relation, and more generally, for the relations containing the graph of an alphabetic morphism follow the same idea and will be explained after the proof for ${\preceq_{{\rm suff}}}$. The proof is by encoding nonemptiness for linearly bounded automata (LBA). Recall that an LBA ${{{\mathcal{A}}}}$ has a tape alphabet $\Gamma$ that contains two distinguished symbols, $\alpha$ and $\beta$, which are the left and the right end markers. The input word $w\in (\Gamma-\{\alpha,\beta\})^*$ is written between them, i.e., the content of the input tape is $\alpha \cdot w\cdot \beta$. The LBA behaves just like a Turing machine, except that when it is reading $\alpha$ or $\beta$, it cannot rewrite them, and it cannot move left of $\alpha$ or right of $\beta$. The problem of checking whether the language of a given LBA is nonempty is undecidable. We encode this as follows. The alphabet $\Sigma$ is the disjoint union of the tape alphabet $\Gamma$ of the LBA ${{{\mathcal{A}}}}$, the set of its states $Q$, and the designated symbol $\$$ (we assume, of course, that these are disjoint). A configuration $C$ of the LBA consists of the tape content $a_0\ldots a_n$ (where $a_0=\alpha$, $a_n=\beta$, and each $a_i$ with $0< i < n$ is a letter from $\Gamma-\{\alpha,\beta\}$), the state $q$, and the position $i$, with $0 \leq i \leq n$, that the head is pointing to. We encode this as a word $$w_C\ =\ \$a_0\ldots a_{i-1}qa_i\ldots a_n\$ \in \Sigma^*$$ of length $n+4$. Of course, if the head is pointing to $\alpha$, the encoding is $\$qa_0\ldots a_n\$$. Note that if we have a run of the LBA with configurations $C_0, C_1, \ldots$, then the lengths of all the $w_{C_i}$s are the same. Next, note that the relation $$R^{{{\mathcal{A}}}}_{{\rm imm}}\ = \ \{(w_C,w_{C'})\ | \ C' \text{ is an immediate successor of }C\}$$ is regular (in fact such a relation is well-known to be regular even for arbitrary Turing machines [@jacm2003; @graedel2000; @bruyere]).
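The configuration encoding $w_C$ is straightforward to implement; a minimal sketch (entirely ours), with `<` and `>` standing in for the end markers $\alpha$ and $\beta$:

```python
ALPHA, BETA = "<", ">"  # stand in for the end markers alpha and beta

def encode_config(tape, state, head):
    """Encode an LBA configuration as $ a_0 .. a_{i-1} q a_i .. a_n $:
    the state symbol is inserted just before the cell the head points to.
    `tape` is the full tape a_0 .. a_n, including both end markers."""
    assert tape[0] == ALPHA and tape[-1] == BETA
    assert 0 <= head < len(tape)
    return "$" + tape[:head] + state + tape[head:] + "$"
```

With a tape of $n+1$ cells the resulting word has length $n+4$, and when the head points to $\alpha$ the state symbol comes right after the opening $\$$, matching the two cases above.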
Since all configurations are of the same length, we obtain that the relation $$R_{{{{\mathcal{A}}}}}' \ =\ \{(w_{C_0}w_{C_1}\ldots w_{C_m},w_{C_1'}\ldots w_{C_m'}) \ | \ C_{i+1}'\ \text{is an immediate successor of }C_i\text{ for } i<m\}$$ is regular too (since only one configuration in the first projection does not correspond to a configuration in the second projection). By taking the product with a regular language that ensures that the first symbol from $Q$ in a word is $q_0$, and the last such symbol is from $F$, we have a regular relation $$R_{{{{\mathcal{A}}}}} \ =\ \biggl\{(w_{C_0}w_{C_1}\ldots w_{C_m},w_{C_1'}\ldots w_{C_m'}) \ \biggl| \begin{array}{l} C_{i+1}'\ \text{is an immediate successor of }C_i\text{ for } i<m;\\ C_0 \text{ is an initial configuration };\\ C_m \text{ is a final configuration } \end{array} \biggr\}$$ which can be effectively constructed from the description of the LBA. Now assume that $R_{{{{\mathcal{A}}}}}\cap\mbox{${\preceq_{{\rm suff}}}$}$ is nonempty. Then, since all encodings of configurations are of the same length, it must contain a pair $ (w_{C_0}w_{C_1}\ldots w_{C_m},w_{C_1}\ldots w_{C_m})$ such that $C_{i+1}$ is an immediate successor of $C_i$ for all $i<m$. Since $C_0$ is an initial configuration and $C_m$ is a final configuration, this implies that the LBA has an accepting computation. Conversely, if there is an accepting computation with a sequence of configurations $C_0,C_1,\ldots,C_m$ of the LBA, then the pair $ (w_{C_0}w_{C_1}\ldots w_{C_m},w_{C_1}\ldots w_{C_m})$ is both in $R_{{{{\mathcal{A}}}}}$ and in the suffix relation. Hence, $R_{{{{\mathcal{A}}}}}\cap\mbox{${\preceq_{{\rm suff}}}$}$ is nonempty iff there is an accepting computation of the LBA, proving undecidability. The proof for the subword relation is practically the same. 
We change the definition of relation $R_{{{{\mathcal{A}}}}}$ so that there is an extra \$ symbol inserted between $w_{C_0}$ and $w_{C_1}$, and two extra \$ symbols after $w_{C_m}$ in the first projection; in the second projection we insert two extra \$ symbols before $w_{C_1'}$ and after $w_{C_m'}$. Note that the relation remains regular: even if the components are not fully synchronized, at every point there is a constant delay between them (either 2 or 1), and this can be captured by simply encoding one or two alphabet symbols into the state. Since in each word there are precisely two places where the subword \$\$\$ appears, the subword relation in this case becomes the suffix relation, and the previous proof applies. The same proof can be applied to deduce Proposition \[gen-subw-cor\]. Note that we can encode letters of alphabet $\Sigma$ within the alphabet $\{0,1\}$ so that the encodings of each letter of $\Sigma$ will have the same length, namely $\lceil \log_2 (|\Gamma|+|Q|+1)\rceil$. Then the same proof as before will apply to show undecidability over the alphabet $\{0,1\}$, since the encodings of configurations still have the same length. Since $R_0$ is recognizable, it is of the form $\bigcup_i L_i \times K_i$, and by the assumption, $\bigcup_i K_i={\Sigma^*}$. Thus, the encoding of the initial configuration will belong to one of the $K_i$s, say $K_j$. We then take a fixed word $w_0\in L_j$ and assume that the second component of the relation starts with $w_0$ (which can be enforced by the regular relation). Likewise, we take a fixed pair $(w_1,w_2)\in R_1$, and assume that $w_1$ is the suffix of the first component of the relation, and $w_2$ is the suffix of the second. This too can be enforced by the regular relation. Now if we have a non-constant alphabetic morphism $f$, we have two letters, say $a$ and $b$, so that $f(a)\neq f(b)$.
We now simply use these letters, with $a$ playing the role of $0$, and $b$ playing the role of $1$ in the first projection of relation $R$, and $f(a), f(b)$ playing the roles of $0$ and $1$ in the second projection, to encode the run of an LBA as we did before. The only difference is that instead of a sequence of \$ symbols to specify the positions of the encoding we use a (fixed-length) sequence that is different from $w_0,w_1,w_2$ above, to identify its position uniquely. Then the proof we have presented above applies verbatim. Decidable cases: subsequence and relatives {#subseq-subsec} ------------------------------------------ We now show that the intersection problem is decidable for the subsequence relation ${\sqsubseteq}$ and, much more generally, for a class of relations that, unlike the relations considered in the previous section, do not have a “rigid” part. More precisely, the problem is also decidable for any relation that is closed under taking subsequences in its first component. However, the complexity bounds are extremely high. In fact, we show that the complexity of checking whether $(R \cap \mbox{${\sqsubseteq}$}) \neq \emptyset$, when $R$ ranges over ${{\sf REG}}_2$, is not bounded by any multiply-recursive function. This was previously known for $R$ ranging over ${{\sf RAT}}_2$, and was viewed as the simplest problem with non-multiply-recursive complexity [@CS-fsttcs07]. We now push it further and show that this high complexity is already achieved with regular relations. Some of the ideas for showing this come from a decidable relaxation of the Post Correspondence Problem (PCP), namely the *regular Post Embedding Problem*, or ${\textup{PEP}^{\textit{reg}}}$, introduced in [@CS-fsttcs07].
An instance of this problem consists of two morphisms $\sigma,\sigma': \Sigma^* \to \Gamma^*$ and a regular language $L \subseteq \Sigma^*$; the question is whether there is some $w \in L$ such that $\sigma(w) {\sqsubseteq}\sigma'(w)$ (recall that in the case of the PCP the question is whether $\sigma(w) = \sigma'(w)$, with $L = \Sigma^+$). We call $w$ a *solution* to the instance $(\sigma,\sigma',L)$. The ${\textup{PEP}^{\textit{reg}}}$ problem is known to be decidable, and as hard as the reachability problem for lossy channel systems [@CS-fsttcs07], whose complexity cannot be bounded by any primitive-recursive function, and in fact not by any multiply-recursive function (multiply-recursive functions generalize the primitive recursive ones, reaching hyper-Ackermannian complexity; see [@rose]). More precisely, it is shown in [@schsch] to be precisely at the level $\textup F_{\omega^\omega}$ of the fast-growing hierarchy of recursive functions [@fast; @rose].[^2] The problem ${\textup{PEP}^{\textit{reg}}}$ is just a reformulation of the problem ${({{{\sf RAT}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. Indeed, relations of the form $\{(f(w), g(w))\ | \ w\in L\}$, where $L\subseteq {\Sigma^*}$ ranges over regular languages and $f,g$ over morphisms $\Sigma^*\to\Gamma^*$, are precisely the relations in ${{\sf RAT}}_2$ [@berstel; @nivat68]. Hence, ${({{{\sf RAT}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable, with non-multiply-recursive complexity. \[prop:subseq-rat-dec-nmr\] ${({{{\sf RAT}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable, with non-multiply-recursive complexity. We show that the lower bound already applies to regular relations. \[subseq-dec\] The problem ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable, and its complexity is not bounded by any multiply-recursive function.
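For illustration, a ${\textup{PEP}^{\textit{reg}}}$ instance can be probed by brute force. The sketch below is entirely ours, with $L$ given as a membership predicate; it is only a bounded search (a semi-decision procedure with an artificial length cap) and has nothing to do with the actual well-quasi-order arguments behind decidability:

```python
from itertools import product

def is_subsequence(u, v):
    """u is a (scattered, order-preserving) subsequence of v."""
    it = iter(v)
    return all(c in it for c in u)

def image(h, w):
    """Apply a morphism given letter-wise as a dict."""
    return "".join(h[a] for a in w)

def pep_solution(sigma, sigma_prime, in_L, alphabet, max_len):
    """Look for w in L with sigma(w) a subsequence of sigma'(w) and
    |w| <= max_len. Enumeration alone can only confirm solutions;
    it can never refute an instance."""
    for n in range(1, max_len + 1):
        for letters in product(alphabet, repeat=n):
            w = "".join(letters)
            if in_L(w) and is_subsequence(image(sigma, w),
                                          image(sigma_prime, w)):
                return w
    return None
```

For example, with $\sigma(a)=a$ and $\sigma'(a)=aa$ the single-letter word $a$ is already a solution, while $\sigma(a)=ab$, $\sigma'(a)=a$ admits none.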
The proof of the theorem above is given further down, after some preparatory definitions and lemmas are introduced. It is worth noticing that one cannot solve the problem ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ by simply reducing to nonemptiness of rational relations, due to the following. \[prop:reg-subseq-nonrat\] There is a binary regular relation $R$ such that $(R \cap {{\sqsubseteq}})$ is not rational. Let $\Sigma = {\{ a, b\}}$, and consider the following regular relation, $$R = {\{(a^m,b^m \cdot a^{m'}) \mid m, m' \in {\mathbb{N}}\}} .$$ Note that the relation $R \cap {{\sqsubseteq}}$ is then ${\{(a^m,b^m \cdot a^{m'} ) \mid m, m' \in {\mathbb{N}}, m' \geq m\}}$. We show that $R \cap {{\sqsubseteq}}$ is not rational by contradiction. Suppose that it is, and let ${{\mathcal{A}}}$ be an NFA over ${\{a,b,{\varepsilon}\}} \times {\{a,b,{\varepsilon}\}}$ that recognizes $R \cap {{\sqsubseteq}}$. Suppose $Q$ is the set of states of ${{\mathcal{A}}}$, and $|Q|=n$. Consider the following pair $$(a^{n+1}, b^{n+1} \cdot a^{n+1}) \quad\in\quad R \cap {{\sqsubseteq}}.$$ Then there must be some $u \in ({\{a,b,{\varepsilon}\}} \times {\{a,b,{\varepsilon}\}})^*$ such that $$({\pi}_1(u),{\pi}_2(u)) = (a^{n+1}, b^{n+1} \cdot a^{n+1})$$ and $u \in {\mathcal{L}({{\mathcal{A}}})}$. Let ${\rho_{{\mathcal{A}}}}: [0..|u|] \to Q$ be the accepting run of ${{\mathcal{A}}}$ on $u$, and let $1 \leq i_1< \dotsb < i_{n+1} \leq |u|$ be such that ${\pi}_2(u[i_j]) = a$ for all $j \in [n+1]$. By the pigeonhole principle, some state must repeat among ${\rho_{{\mathcal{A}}}}(i_1), \dotsc, {\rho_{{\mathcal{A}}}}(i_{n+1})$. Let $1\leq j_1<j_2\leq n+1$ be indices such that ${\rho_{{\mathcal{A}}}}(i_{j_1})={\rho_{{\mathcal{A}}}}(i_{j_2})$. Hence $u' = u[1..i_{j_1}-1] \cdot u[i_{j_2}..]
\in {\mathcal{L}({{\mathcal{A}}})}$, and therefore $$\big({\pi}_1(u'), {\pi}_2(u')\big) \quad\in\quad R \cap {{\sqsubseteq}}.$$ Notice that ${\pi}_2(u') = b^{n+1} \cdot a^{n+1 - (j_2 - j_1)}$. But ${\pi}_1(u') = a^{n+1}$, so by the definition of $R \cap {{\sqsubseteq}}$ we would need $n+1 - (j_2 - j_1) \geq n+1$, which is clearly false. The contradiction comes from the assumption that $R \cap {{\sqsubseteq}}$ is rational. As already mentioned, the decidability part of Theorem \[subseq-dec\] follows from Proposition \[prop:subseq-rat-dec-nmr\]. We prove the lower bound by reducing ${\textup{PEP}^{\textit{reg}}}$ into ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. This reduction is done in two phases. First, we show that there is a reduction from ${\textup{PEP}^{\textit{reg}}}$ into the problem of finding solutions of ${\textup{PEP}^{\textit{reg}}}$ of a certain shape, which we call *strict codirect solutions* (Lemma \[lemma:instrumental\]). Second, we show that there is a reduction from the problem of finding strict codirect solutions of a ${\textup{PEP}^{\textit{reg}}}$ instance into ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ (Proposition \[prop:reducRPEP-inter-REG-subseq\]). Both reductions are elementary, and thus the hardness result of Theorem \[subseq-dec\] follows. In the next section we define strict codirect solutions for ${\textup{PEP}^{\textit{reg}}}$, showing that we can restrict attention to this kind of solution. In the succeeding section we show how to reduce the problem into ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. ### Codirect solutions of ${\textup{PEP}^{\textit{reg}}}$ There are some variations of the ${\textup{PEP}^{\textit{reg}}}$ problem that turn out to be equivalent to it. These variations restrict the solutions to have certain properties.
Given a ${\textup{PEP}^{\textit{reg}}}$ instance $(\sigma,\sigma',L)$, we say that $w \in L$ with $|w|=m$ is a *codirect solution* if there are (possibly empty) words $v_1, \dotsc, v_m$ such that 1. \[item:codir:1\] $v_k {\sqsubseteq}\sigma'(w[k])$ for all $1 \leq k \leq m$, 2. \[item:codir:2\] $\sigma(w[1..m]) = v_1 \dotsb v_m$, and 3. \[item:codir:3\] $|\sigma(w[1..k])| \geq |v_1 \dotsb v_k|$ for all $1 \leq k \leq m$. If furthermore 4. \[item:codir:4\] $|\sigma(w[1..k])| > |v_1 \dotsb v_k|$ for all $1 \leq k < m$, we say that it is a *strict codirect solution*. In this case we say that the solution $w$ is *witnessed by* $v_1, \dotsc, v_m$. In [@CS-fsttcs07] it has been shown that the problem of whether an instance of the ${\textup{PEP}^{\textit{reg}}}$ problem has a codirect solution is equivalent to the problem of whether it has a solution. Moreover, it can be shown that this also holds for strict codirect solutions. \[lemma:instrumental\] The problem of whether a ${\textup{PEP}^{\textit{reg}}}$ instance has a strict codirect solution is as hard as whether a ${\textup{PEP}^{\textit{reg}}}$ instance has a solution. We only show how to reduce the problem of finding a codirect solution to that of finding a strict codirect solution. The other direction is trivial, since a strict codirect solution is in particular a solution. Let $(\sigma, \sigma', L)$ be a ${\textup{PEP}^{\textit{reg}}}$ instance, and $w \in L$ be a codirect solution with $|w|=m$, minimal in size, and witnessed by $v_1, \dotsc, v_m$. Let ${\mathcal{A}}=(Q,\Sigma,q_0,\delta,F)$ be an [NFA]{} representing $L$, where $|Q|=n$. Let $\rho : [0..m] \to Q$ be an accepting run of ${\mathcal{A}}$ on $w$. Let $0 \leq k_1 < \dotsb < k_{t} \leq m$ be all the elements of ${\{ s \geq 0 : |\sigma(w[1..s])| = |v_1 \dotsb v_{s}|\}}$. Observe that $k_1=0$, and $k_t = m$ by condition \[item:codir:2\]. It is not difficult to show that, by minimality of $m$, there cannot be more than $n$ such indices.
\[claim:bound-codir-sol\] $t \leq n$. Suppose for the sake of contradiction that $t \geq n+1$. Then there must be two indices $k_l < k_{l'}$ such that $\rho(k_l)=\rho(k_{l'})$. Hence, $w' = w[1..k_l] \cdot w[k_{l'}+1..] \in L$ is also a codirect solution, contradicting the fact that $w$ is a minimal-size solution. Let $L[q,q']$ be the regular language denoted by the [NFA]{} $(Q,\Sigma,q,\delta,{\{q'\}})$. For every $i<t$, $(\sigma, \sigma', L[\rho(k_i),\rho(k_{i+1})])$ has a strict codirect solution. We show that for every $i<t$, $w[k_i+1 .. k_{i+1}]$ is a solution for $(\sigma, \sigma', L[\rho(k_i),\rho(k_{i+1})])$, witnessed by $v_{k_i+1}, \dotsc, v_{k_{i+1}}$. Clearly, condition \[item:codir:1\] still holds. Further, since $$|\sigma(w[1..k_i])| = |v_1 \dotsb v_{k_i}| \qquad\text{and}\qquad |\sigma(w[1..k_{i+1}])| = |v_1 \dotsb v_{k_{i+1}}|,$$ we have that $|\sigma(w[k_i+1..k_{i+1}])| = |v_{k_i+1} \dotsb v_{k_{i+1}}|$ and then $$\sigma(w[k_i+1..k_{i+1}]) = v_{k_i+1} \dotsb v_{k_{i+1}},$$ verifying condition \[item:codir:2\]. Finally, by the fact that $k_i$ and $k_{i+1}$ are consecutive indices, we cannot have some $k'$ with $k_i < k' < k_{i+1}$ so that $|\sigma(w[k_i +1..k'])| = |v_{k_i+1} \dotsb v_{k'}|$, since it would imply $|\sigma(w[1..k'])| = |v_1 \dotsb v_{k'}|$, and in this case $k' \geq k_{i+1}$. Then, conditions \[item:codir:3\] and \[item:codir:4\] hold. Therefore, we obtain the following reduction. $(\sigma, \sigma', L)$ has a codirect solution if, and only if, there exist states ${\{q_1, \dotsc, q_t\}} \subseteq Q$ with $q_1 = q_0$ and $q_t \in F$, such that for every $i<t$, $(\sigma,\sigma',L[q_i,q_{i+1}])$ has a strict codirect solution. That this reduction is exponential does not matter here: the complexity of the problem we are dealing with is far beyond exponential. With the help of Lemma \[lemma:instrumental\] we prove Theorem \[subseq-dec\] in the next section. ### Proof of Theorem \[subseq-dec\] Since decidability follows from Proposition \[prop:subseq-rat-dec-nmr\], we only show the lower bound.
To this end, we show how to code the existence of a strict codirect solution as an instance of ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. \[prop:reducRPEP-inter-REG-subseq\] There is an elementary reduction from the existence of strict codirect solutions of ${\textup{PEP}^{\textit{reg}}}$ into ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. Given a ${\textup{PEP}^{\textit{reg}}}$ instance $(\sigma,\sigma',L)$, recall that the existence of a strict codirect solution enforces that if there is a pair $(u,v)=(\sigma(w),\sigma'(w))$ with $w \in L$ and $u {\sqsubseteq}v$, then for every proper prefix $u'$ of $u$, the smallest prefix $v'$ of $v$ such that $u' {\sqsubseteq}v'$ satisfies $|v'| > |u'|$. In the proof, we convert the *rational* relation $R = {\{(\sigma(w),\sigma'(w)) \mid w \in L\}}$ into a length-preserving *regular* relation $R'$ over an extended alphabet $\Gamma\cup {\{\#\}}$, defined as the set of all pairs $(u,v) \in (\Gamma\cup {\{\#\}})^* \times (\Gamma\cup {\{\#\}})^*$ so that $|u|=|v|$ and $(u_\Gamma, v_\Gamma) \in R$. If we now let $R''$ be the regular relation $R' \cdot {\{({\varepsilon},v) \mid v \in {\{\#\}}^*\}}$, we obtain that: (i) if $w \in {R'' \cap {{\sqsubseteq}}}$ then $w' \in {R \cap {{\sqsubseteq}}}$, where $w'$ is the projection of $w$ onto $\Gamma^* \times \Gamma^*$; and (ii) if there is some strict codirect solution $w' \in R \cap {{\sqsubseteq}}$, then there is some $w \in R'' \cap {{\sqsubseteq}}$ such that $w'$ is the projection of $w$ onto $\Gamma^* \times \Gamma^*$. Whereas (i) is trivial, (ii) follows from the fact that $w'$ is a strict codirect solution. If $w' = (u,v) \in R''$, where $\sigma(w) =(u)_\Gamma$ and $\sigma'(w)= (v)_\Gamma$, the complication is now that, since $u \in (\Gamma\cup {\{\#\}})^*$, it could be that $u \not{\sqsubseteq}v$ just because there is some $\#$ in $u$ that does not appear in $v$.
But we build $(u,v)$ so that whenever $u[i] = \#$ forces $v[j] = \#$ for some $j > i$, we also have $u[j] = \#$. This repeats, forcing $v[k] = \#$ for some $k>j$, and so on, until we reach the tail of $v$, which has sufficiently many $\#$’s to satisfy all the accumulated demands for occurrences of $\#$. Let $(\sigma,\sigma',L)$ be a ${\textup{PEP}^{\textit{reg}}}$ instance. For every $a \in \Sigma$, consider the binary relation $R_a$ consisting of all pairs $(u,u') \in (\Gamma\cup{\{\#\}})^*\times (\Gamma\cup{\{\#\}})^*$ such that $u_\Gamma = \sigma(a)$, $u'_\Gamma = \sigma'(a)$ and $|u|=|u'|$. Note that $R_a$ is a length-preserving regular relation. Let $R'$ be the set of pairs $(u_1 \dotsb u_m, u'_1 \dotsb u'_m)$ such that there exists $w \in L$ where $|w|=m$ and $(u_i,u'_i) \in R_{w[i]}$ for all $i$. Note that $R'$ is still a length-preserving regular relation. Finally, we define $R$ as the set of pairs $(u,u' \cdot u'')$ such that $(u,u') \in R'$ and $u'' \in {\{\#\}}^*$. $R$ is no longer a length-preserving relation, but it is regular. Observe that if $R \cap {{\sqsubseteq}} \neq \emptyset$, then $(\sigma,\sigma',L)$ has a solution. Conversely, we show that if $(\sigma,\sigma',L)$ has a strict codirect solution, then $R \cap {{\sqsubseteq}} \neq \emptyset$. Suppose that the ${\textup{PEP}^{\textit{reg}}}$ instance $(\sigma, \sigma', L)$ has a strict codirect solution $w \in L$ with $|w|=m$, witnessed by $v_1, \dotsc, v_m$. Assume, without loss of generality, that $\sigma$ and $\sigma'$ are alphabetic morphisms and that $m>1$. We exhibit a pair $(u,u') \in R$ such that $u {\sqsubseteq}u'$. We define $(u,u') = (u_1 \dotsb u_m, u'_1 \dotsb u'_m \cdot u'_{m+1})$, where $(u_i,u'_i) \in R_{w[i]}$ for every $i \leq m$, and $u'_{m+1} \in {\{\#\}}^*$. In order to give the precise definition of $(u,u')$, we need to introduce some concepts first.
Let $\sigma_\#(a) \in \Gamma\cup{\{\#\}}$ be $\#$ if $\sigma(a) = {\varepsilon}$, or $\sigma(a)$ otherwise; likewise for $\sigma'_\#$. By the definition of a strict codirect solution, we have the following. \[claim:fst-elem-B\] $\sigma(w[1]) \in \Gamma$. Indeed, if $\sigma(w[1]) \notin \Gamma$, then $\sigma(w[1])={\varepsilon}$ and $|\sigma(w[1])|=0$, and then condition \[item:codir:4\] of strict codirectness, stating that $|\sigma(w[1])| > |v_1|$, would be falsified. Let us define the function $g: [m] \to [m]$ so that $g(i)$ is the minimum $j$ such that $v_1 \dotsb v_j = \sigma(w[1..i])$. Note that there is always such a $j$, since $|\sigma(w[1..i])| > 0$ by Claim \[claim:fst-elem-B\]. Now we show some easy properties of $g$, necessary to correctly define the witnessing pair $(u,u') \in R$ such that $u {\sqsubseteq}u'$. \[claim:g\] $g(i) > i$ for all $1 \leq i<m$, and $g(m)=m$. Let $g(i)=j$ and hence $|\sigma(w[1..i])| = |v_1 \dotsb v_j|$. First, notice that $|v_1 \dotsb v_j| = |\sigma(w[1..i])| \geq |v_1 \dotsb v_i|$ by condition \[item:codir:3\] of codirectness, and hence that $j \geq i$. If $i < m$, then $|v_1 \dotsb v_i| < |\sigma(w[1..i])|$ by condition \[item:codir:4\], and thus $|v_1 \dotsb v_i| < |v_1 \dotsb v_j|$, which implies $i < j$. If $i = m$, then $j=i$ by the fact that $j \geq i=m$. \[claim:g-monotone\] $g$ is monotone: $g(i) \geq g(j)$ if $i \geq j$. Given $m \geq i \geq j \geq 1$, we have that $$\begin{aligned} |v_1 \dotsb v_{g(i)}| & =|\sigma(w[1..i])| \tag{\text{by definition of $g$}}\\ & \geq |\sigma(w[1..j])| \tag{\text{since $i\geq j$}}\\ & =|v_1 \dotsb v_{g(j)}| \tag{\text{by definition of $g$}}\end{aligned}$$ which implies that $g(i) \geq g(j)$. \[rem:1\] For all $i \leq m$, if $\sigma(w[i]) \in \Gamma$ then $\sigma(w[i]) = \sigma'(w[g(i)])$. The most important pairs of positions $(i,j) \in [m] \times [m]$ that witness $u {\sqsubseteq}u'$ are those for which $j = g(i)$ and $\sigma(w[i]) \neq {\varepsilon}$.
Once those are fixed, the remaining elements in the definition of $g$ are also fixed. Let us call this set ${G}$, and let us state some simple facts for later use. $${G}= {\{ (i,g(i)) \in [m] \times [m] \mid \sigma(w[i])\in \Gamma \}}$$ \[rem:g-injective\] For every $(i,j), (i',j') \in {G}$, if $i\neq i'$ then $j\neq j'$. In other words, $g$ restricted to ${\{i \mid \sigma(w[i]) \in \Gamma\}}$ is injective. \[claim:g-two-elem\] If $(i,j) \in {G}$ and $i<m$, then $|\sigma(w[i..j])|\geq 2$. This is because $i < j$ by Claim \[claim:g\], $\sigma(w[i]) \in \Gamma$ by definition of ${G}$, and $\sigma(w[j])=\sigma(w[g(i)]) \in \Gamma$ by definition of $g$. Since our coding uses the letter $\#$ as a kind of blank symbol, it will be useful to define the factors $\tilde u_1, \tilde u_2, \dotsc$ of $u$ that contain exactly one letter from $\Gamma$. We define $\tilde u_i$ as the maximal prefix of $u_i \dotsb u_m$ matching the regular expression $\Gamma \cdot {\{\#\}}^*$. We are now in good shape to define $u_j, u'_j$ precisely for every $j \in [m]$. For every $j < m$, [$\bullet$]{} if $(i,j) \in {G}$ for some $i$, then $$u'_j = \tilde u_i \quad \text{and} \quad u_j = \sigma_\#(w[j]) \cdot u'_j[2..]; \text{ and}$$ if there is no $i$ so that $(i,j) \in G$, then $$(u_j,u'_j) = (\sigma_\#(w[j]), \sigma'_\#(w[j]) ).$$ And on the other hand, $(u_m,u'_m) = (\sigma_\#(w[m]), \sigma'_\#(w[m]))$ and $u'_{m+1} = \#^{|u_1 \dotsb u_m|}$. Figure \[fig:example-reg-npr\] contains an example with all the previous definitions. Notice that the definition of $u_j$ makes use of $\tilde u _j$, while the definition of $\tilde u_j$ seems to make use of $u_j$. We next show that in fact $\tilde u_j$ does not depend on $u_j$, and that the strings above are well defined.
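As an aside, the witnessing conditions \[item:codir:1\]–\[item:codir:4\] and the function $g$ are mechanical to compute. A small sketch, entirely ours, with the morphisms given letter-wise as dictionaries (so implicitly assuming they are alphabetic):

```python
def is_strict_codirect_witness(sigma, sigma_p, w, vs):
    """Check conditions (1)-(4) of a strict codirect solution for w,
    witnessed by vs = [v_1, ..., v_m]."""
    def img(h, u):
        return "".join(h[a] for a in u)
    def subseq(u, v):
        it = iter(v)
        return all(c in it for c in u)
    m = len(w)
    if len(vs) != m:
        return False
    if not all(subseq(vs[k], sigma_p[w[k]]) for k in range(m)):  # (1)
        return False
    if img(sigma, w) != "".join(vs):                             # (2)
        return False
    for k in range(1, m + 1):                                    # (3) and (4)
        lhs, rhs = len(img(sigma, w[:k])), sum(map(len, vs[:k]))
        if lhs < rhs or (k < m and lhs == rhs):
            return False
    return True

def compute_g(sigma, w, vs):
    """g(i) = the least j with v_1 .. v_j = sigma(w[1..i])."""
    img = lambda u: "".join(sigma[a] for a in u)
    g = {}
    for i in range(1, len(w) + 1):
        acc = ""
        for j in range(1, len(vs) + 1):
            acc += vs[j - 1]
            if acc == img(w[:i]):
                g[i] = j
                break
    return g
```

On the toy witness $\sigma(a)=x$, $\sigma(b)={\varepsilon}$, $\sigma'(a)={\varepsilon}$, $\sigma'(b)=x$, $w=ab$, $v_1={\varepsilon}$, $v_2=x$, one can check $g(1)=2>1$ and $g(m)=m$, as Claim \[claim:g\] predicts.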
![Example of the reduction from ${\textup{PEP}^{\textit{reg}}}$ to ${{({{{{\sf REG}}}} \cap {{{\sqsubseteq}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$, for the case $\sigma(w) = abacaba$, $\sigma'(w) = aababacacbcba$. []{data-label="fig:example-reg-npr"}](img/ex-reg-subseq.pdf){width="\textwidth"} \[rem:tilde-prefix\] For $i<m$, $\tilde u_i$ is a prefix of $u_i \dotsb u_{g(i)-1}$. By Claim \[claim:g\] and Claim \[claim:g-two-elem\], $\sigma(w[i..g(i)])$ contains at least two letters, and hence $u_i \dotsb u_{g(i)}$ contains at least two letters from $\Gamma$, namely $u_i[1]$ and $u_{g(i)}[1]$. Then, $\tilde u_i$ cannot contain $u_i \dotsb u_{g(i)-1} \cdot (u_{g(i)}[1])$ as a prefix. By the above Observation \[rem:tilde-prefix\], to compute $\tilde u_i$ we only need the $u_j$’s and $u'_j$’s with $j<i$, and hence $(u,u')$ is well defined. \[rem:form-ui\] All the $u_i$’s, $u'_i$’s and $\tilde u_i$’s are of the form $a \cdot \# \dotsb \#$ or $\# \dotsb \#$, for $a \in \Gamma$. From the definition of $(u,u')$ we obtain the following. \[rem:size-of-ui’s-projected\] For every $n \leq m$, (1) \[item:size-of-ui’s-projected:1\] $|(u_1 \dotsb u_n)_\Gamma| = |{\{i \in [n] \mid \exists j . (i,j) \in {G}\}}| = |\sigma(w[1..n])|$, and (2) \[item:size-of-ui’s-projected:2\] $|(u'_1 \dotsb u'_n)_\Gamma| = |{\{j \in [n] \mid \exists i . (i,j) \in {G}\}}| = |\sigma'(w[1..n])|$. We now show that $(u, u') \in R$ and that $u {\sqsubseteq}u'$. \[claim:u-u’-in-R\] $(u,u') \in R$. Note that each $u_i$ is $\sigma_\#(w[i])$ followed by a possibly empty block of $\#$’s, and then $(u_i)_\Gamma = \sigma(w[i])$. We also show that $(u'_i)_\Gamma = \sigma'(w[i])$. If $u'_j$ is such that there is no $(i,j) \in {G}$, or $j=m$, then it is plain that $(u'_j)_\Gamma = \sigma'(w[j])$ by definition of $u'_j$.
On the other hand, if $u'_j = \tilde u_i$ for $(i,j) \in {G}$, then $$\begin{aligned} (u'_j)_\Gamma &= (\tilde u_i)_\Gamma = (u_i)_\Gamma = (u_i[1])_\Gamma \tag{by Observation~\ref{rem:form-ui}}\\ &=(\sigma(w[i]))_\Gamma \tag{by def.\ of $u_i$}\\ &=\sigma(w[i]) \tag{since $\sigma(w[i]) \in \Gamma$ by def.\ of ${G}$} \\ &= \sigma'(w[g(i)]) = \sigma'(w[j]). \tag{by Observation~\ref{rem:1}}\end{aligned}$$ Thus, every $(u_i,u'_i)$ with $i \leq m$ is such that $(u_i)_\Gamma = \sigma(w[i])$ and $(u'_i)_\Gamma = \sigma'(w[i])$, meaning that $(u_i,u'_i) \in R_{w[i]}$ for every $i \leq m$. Hence, we have that $(u_1 \dotsb u_m, u'_1 \dotsb u'_m) \in R'$ and since $u'_{m+1} \in {\{\#\}}^*$, $(u,u') \in R$. Next, we prove that $u {\sqsubseteq}u'$, but before doing so, we need an additional straightforward claim. Let ${\{i_1 < \dotsb < i_{|{G}|}\}} = {\{ i \mid (i,g(i)) \in {G}\}}$. Note that $i_1 = 1$ by Claim \[claim:fst-elem-B\]. \[claim:ij-gij\] $i_{j+1} \leq g(i_j)$. By means of contradiction, suppose $g(i_j) < i_{j+1}$. Then, $$\begin{aligned} |\sigma(w[1..g(i_j)])|&= |{\{i \in [g(i_j)] \mid \exists j . (i,j) \in {G}\}}| \tag{by Observation~\ref{rem:size-of-ui's-projected}.\ref{item:size-of-ui's-projected:1}}\\ &= |{\{j \in [g(i_j)] \mid \exists i . (i,j) \in {G}\}}| \tag{since $g(i_j) < i_{j+1}$} \\ &=|\sigma'(w[1..g(i_j)])|. \tag{by Observation~\ref{rem:size-of-ui's-projected}.\ref{item:size-of-ui's-projected:2}}\end{aligned}$$ In other words, there is some $k < m$ such that $|\sigma(w[1..k])| = |\sigma'(w[1..k])|$. This is in contradiction with condition \[item:codir:4\] of strict codirectness. Hence, $g(i_j) \geq i_{j+1}$. \[claim:u-subseq-u’\] $u {\sqsubseteq}u'$. We factorize $u = \hat u_1 \dotsb \hat u_{|{G}|}$ and we show that each $\hat u_j$ is a subsequence of a factor of $u'$, these factors appearing in increasing order. We define $\hat u_j = u_{i_j} \dotsb u_{i_{(j+1)}-1}$ for every $j<|{G}|$, and $\hat u_{|{G}|} = u_{i_{|{G}|}} \dotsb u_m$.
Hence, the $\hat u_i$’s form a factorization of $u$. Indeed, this is the unique factorization in which each $\hat u_i$ is of the form $b \cdot \# \dotsb \#$ for $b \in \Gamma$. For every $j<|{G}|$, we show that $\hat u_j {\sqsubseteq}u'_{g(i_j)}$. $$\begin{aligned} \hat u_j &= u_{i_j} \dotsb u_{i_{(j+1)}-1} \\ &{\sqsubseteq}u_{i_j} \dotsb u_{g(i_j)-1} \tag{by Claim~\ref{claim:ij-gij}}\\ &{\sqsubseteq}\tilde{u}_{i_j} \tag{by Observation~\ref{rem:tilde-prefix}}\\ &= \tilde u_{g^{-1}(g(i_j))} \tag{by Observation~\ref{rem:g-injective}} \\ &= u'_{g(i_j)} \tag{by def.\ of $u'$}\end{aligned}$$ On the other hand, $\hat u_{|{G}|} {\sqsubseteq}u'_{g(i_{|{G}|})} \cdot u'_{m+1} = u'_m \cdot u'_{m+1}$. By Claim \[claim:g-monotone\], $g$ is increasing. Hence, $u {\sqsubseteq}u'$. By Claims \[claim:u-u’-in-R\] and \[claim:u-subseq-u’\], we conclude that $R \cap {{\sqsubseteq}} \neq \emptyset$. ### Subsequence-closed relations The next question is how far we can extend the decidability of ${({{{\sf RAT}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. It turns out that if we allow one projection of a rational relation to be closed under taking subsequences, then we retain decidability. Let $R\subseteq\Sigma^*\times\Gamma^*$ be a binary relation. Define another binary relation $$R_{{\sqsubseteq}} = \{(u,w) \ | \ u {\sqsubseteq}u' \text{ and }(u',w)\in R \text{ for some }u'\}$$ Then the class of [*subsequence-closed relations*]{}, or ${{\sf SCR}}$, is the class $\{R_{{\sqsubseteq}} \ | \ R\in{{\sf RAT}}\}$. Note that the subsequence relation itself is in ${{\sf SCR}}$, since it is obtained by closing the (regular) equality relation under subsequence. That is, ${\sqsubseteq}\ \ = \ \{(w,w)\ |\ w\in{\Sigma^*}\}_{{\sqsubseteq}}$. Not all rational relations are subsequence-closed (for instance, subword is not). The following summarizes properties of subsequence-closed relations. \[scr-prop\] 1. ${{\sf SCR}}\subsetneq {{\sf RAT}}$. 2. 
${{\sf SCR}}\not\subseteq {{\sf REG}}$ and ${{\sf REG}}\not\subseteq {{\sf SCR}}$. 3. A relation $R$ is in ${{\sf SCR}}$ iff $\{ w \otimes w' \ | \ (w,w')\in R\}$ is accepted by an [NFA]{}${{{\mathcal{A}}}}=\langle Q,{{\Sigma_\bot}}\times{{\Gamma_\bot}},q_0,\delta,F\rangle$ such that $(q,(a,b),q')\in\delta$ implies $(q,(\bot,b),q')\in\delta$ for all $q,q'\in Q$, $a\in{{\Sigma_\bot}}$ and $b\in{{\Gamma_\bot}}$. We call an automaton with this property a *subsequence-closed automaton*. Note that $(3)$ is immediate by definition of $R_{\sqsubseteq}$, $(1)$ is a consequence of $(3)$, and $(2)$ is due to the fact that ${\sqsubseteq}$ is not regular and that, for example, the identity ${\{(u,u) \mid u \in \Sigma^*\}}$ is not a subsequence-closed relation. When an ${{\sf SCR}}$ relation is given as an input to a problem, we assume that it is represented as a subsequence-closed automaton as defined in item (3) of the above proposition. Note also that ${({{{\sf SCR}}} \cap {{{\sf SCR}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable in polynomial time: if $R,R'\in{{\sf SCR}}$ and $R\cap R'\neq\emptyset$, then $({\varepsilon},w)\in R\cap R'$ for some $w$, and hence the problem reduces to simple [NFA]{}nonemptiness checking. The main result about ${{\sf SCR}}$ relations generalizes decidability of ${({{{\sf RAT}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. \[scr-thm\] The problem ${({{{\sf RAT}}} \cap {{{\sf SCR}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable, with non-multiply-recursive complexity. In order to prove Theorem \[scr-thm\] we use Lemmas \[lem:interpb-&gt;synchronizedinterpb\] and \[lem:LDC-syn-decidable\], as shown below. But first we need to introduce some additional terminology.
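For intuition, the closure condition of item (3) is easy to operationalize: adding, for every transition on $(a,b)$, a companion transition on $(\bot,b)$ turns any pair-NFA into a subsequence-closed one. Below is a minimal Python sketch of this closure (the transition encoding, the marker `#` standing for $\bot$, and the toy identity automaton are our own illustration, not from the paper; as in the text, the relation represented is read off by deleting $\bot$ from each component of an accepted word):

```python
BOT = "#"  # stands for the padding symbol, written ⊥ in the text

def subsequence_close(delta):
    """Add, for every transition on (a, b), a companion transition on (BOT, b),
    as required of subsequence-closed automata in item (3)."""
    return delta | {(q, (BOT, b), q2) for (q, (_a, b), q2) in delta}

def accepts(delta, q0, finals, pairs):
    """Plain NFA simulation over a word of letter pairs."""
    current = {q0}
    for pair in pairs:
        current = {q2 for (q, lab, q2) in delta if q in current and lab == pair}
    return bool(current & finals)

# Closing the identity relation {(w, w)} yields the subsequence relation:
ident = {(0, ("a", "a"), 0), (0, ("b", "b"), 0)}
closed = subsequence_close(ident)
# One padded alignment of ("ab", "aab") is (a,a)(BOT,a)(b,b):
print(accepts(ident, 0, {0}, [("a", "a"), (BOT, "a"), ("b", "b")]))   # False
print(accepts(closed, 0, {0}, [("a", "a"), (BOT, "a"), ("b", "b")]))  # True
```

Closing the identity relation in this way matches the remark that ${\sqsubseteq}\ \ = \ \{(w,w)\ |\ w\in{\Sigma^*}\}_{{\sqsubseteq}}$.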
We say that $({{\mathcal{A}}_0},{{\mathcal{A}}_1})$ is an [*instance*]{} of ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$ over $\Sigma,\Gamma$ if ${{\mathcal{A}}_1}$ is a [subsequence-closed]{}automaton over ${\Sigma_\bot}\times {\Gamma_\bot}$, and ${{\mathcal{A}}_0}$ is an [NFA]{}over ${\Sigma_\bot}\times {\Gamma_\bot}$. Given a ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$ instance $({{\mathcal{A}}_0},{{\mathcal{A}}_1})$ over $\Sigma, \Gamma$, we say that $(w_0,w_1)$ is a [*solution*]{} if $w_0, w_1 \in ({\Sigma_\bot}\times {\Gamma_\bot})^*$, $w_0 \in {\mathcal{L}({{\mathcal{A}}_0})}$ and $w_1 \in {\mathcal{L}({{\mathcal{A}}_1})}$. We say that a solution $(w_0, w_1)$ of an instance $({{\mathcal{A}}_0},{{\mathcal{A}}_1})$ over $\Sigma,\Gamma$ is [*synchronized*]{} if ${\pi}_2(w_0)={\pi}_2(w_1)$. We write ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})^{\text{syn}}} \stackrel{?}{=} \emptyset}$ for the problem of whether there is a synchronized solution. \[lem:interpb-&gt;synchronizedinterpb\] There is a polynomial-time reduction from the problem ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$ into ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})^{\text{syn}}} \stackrel{?}{=} \emptyset}$. We show that ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$ is reducible to the problem of whether there exists a synchronized solution of ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$. Suppose that $({{\mathcal{A}}_0},{{\mathcal{A}}_1})$ is an instance of ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$ over the alphabets $\Sigma, \Gamma$.
Consider the automata ${\mathcal{A}}'_0, {\mathcal{A}}'_1$ obtained by adding to both automata all transitions $(q,(\bot, \bot), q)$, for every possible state $q$. It is clear that the relations recognized by these remain unchanged, and that ${\mathcal{A}}'_1$ is still a [subsequence-closed]{}automaton. Moreover, this new instance has a synchronized solution whenever the original instance has a solution, as stated in the following claim. There is a synchronized solution for $({\mathcal{A}}'_0, {\mathcal{A}}'_1)$ if, and only if, there is a solution for $({{\mathcal{A}}_0},{{\mathcal{A}}_1})$. The ‘only if’ part is immediate. For the ‘if’ part, let $(w_0,w_1)$ be a solution for $({{\mathcal{A}}_0},{{\mathcal{A}}_1})$. Let $w_0 = w_{0,1} \dotsb w_{0,n}$, $w_1 = w_{1,1} \dotsb w_{1,n}$ be factorizations of $w_0$ and $w_1$ such that for every $i \in {\{0,1\}}$, ${\pi}_2(w_{i,1})$ is in ${\{\bot\}}^*$; and for each $j>1$ and $i \in {\{0,1\}}$, ${\pi}_2(w_{i,j})$ is in $\Gamma \cdot {\{\bot\}}^*$. It is plain that there is always such a factorization and that it is unique. For every $j \in [n]$, we define $w'_{0,j} = w_{0,j} \cdot (\bot,\bot)^{k}$ and $w'_{1,j} = w_{1,j} \cdot (\bot,\bot)^{-k}$, with $k = |w_{1,j}| - |w_{0,j}|$, where we assume that $(\bot,\bot)^m$ with $m \leq 0$ is the empty string. We define $w'_0 = w'_{0,1} \dotsb w'_{0,n}$, $w'_1 = w'_{1,1} \dotsb w'_{1,n}$. Note that $(w'_0, w'_1)$ is a solution of $({\mathcal{A}}'_0,{\mathcal{A}}'_1)$ since it is the result of adding letters $(\bot, \bot)$ to $(w_0,w_1)$, which is also a solution of $({\mathcal{A}}'_0,{\mathcal{A}}'_1)$. We have that ${\pi}_2(w'_0) = {\pi}_2(w'_1)$, and therefore $(w'_0,w'_1)$ is a synchronized solution for $({\mathcal{A}}'_0, {\mathcal{A}}'_1)$. \[lem:LDC-syn-decidable\] There is a polynomial-time reduction from ${{({{{{\sf RAT}}}} \cap {{\ensuremath{\mathsf{SCR}}}})^{\text{syn}}} \stackrel{?}{=} \emptyset}$ into ${{({{{{\sf RAT}}}} \cap {{\sqsubseteq}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}}$.
The problem of finding a synchronized solution for ${\mathcal{A}}_0,{\mathcal{A}}_1$ can then be formulated as the problem of finding words $u_0, u_1 \in {\Sigma_\bot}^*$ and $v \in {\Gamma_\bot}^*$ with $|v|=|u_0|=|u_1|$, so that $(u_0 \otimes v, u_1 \otimes v)$ is a solution. We can compute an [NFA]{}${\mathcal{A}}$ over ${\Sigma_\bot}^2 \times {\Gamma_\bot}$ from ${\mathcal{A}}_0,{\mathcal{A}}_1$, such that $(u_0,u_1,v) \in {\mathcal{L}({\mathcal{A}})}$ if, and only if, $u_0 \otimes v \in {\mathcal{L}({\mathcal{A}}_0)}$ and $u_1 \otimes v\in {\mathcal{L}({\mathcal{A}}_1)}$. Consider now an automaton ${\mathcal{A}}'$ over ${\Sigma_\bot}^2$ such that ${\mathcal{L}({\mathcal{A}}')} = {\{(u_0,u_1) \mid \exists v \ (u_0,u_1,v) \in {\mathcal{L}({\mathcal{A}})} \}}$. It corresponds to the automaton of the projection onto the first and second components of the ternary relation of ${\mathcal{A}}$, and it can be computed from ${\mathcal{A}}$ in polynomial time. We then deduce that there exists $u_0 \otimes u_1 \in {\mathcal{L}({\mathcal{A}}')}$ so that $(u_0)_\Sigma {\sqsubseteq}(u_1)_\Sigma$ if, and only if, there are words $u_0, u_1$ and $v \in {\Gamma_\bot}^*$ with $|v|=|u_0|=|u_1|$ so that $u_0 \otimes v \in {\mathcal{L}({{{\mathcal{A}}}}_0)}$, $u_1 \otimes v \in {\mathcal{L}({{{\mathcal{A}}}}_1)}$, and $(u_0)_\Sigma {\sqsubseteq}(u_1)_\Sigma$.
But this condition is in fact equivalent to $R_0 \cap R_1 \neq \emptyset$ (where $R_i = {\{ ((u)_\Sigma, (v)_\Sigma) \mid u \otimes v \in {\mathcal{L}({{{\mathcal{A}}}}_i)}\}}$), since [$\bullet$]{} if $((u_1)_\Sigma, (v)_\Sigma) \in R_1$ and $(u_0)_\Sigma {\sqsubseteq}(u_1)_\Sigma$, then $((u_0)_\Sigma, (v)_\Sigma) \in R_1$ (since $R_1 \in {{\sf SCR}}$) and hence $((u_0)_\Sigma, (v)_\Sigma) \in R_0 \cap R_1$; and if $R_0 \cap R_1 \neq \emptyset$, then there exists a synchronized solution $(u_0 \otimes v, u_1 \otimes v)$ of ${\mathcal{A}}_0,{\mathcal{A}}_1$; in other words, there are words $u_0, u_1, v$ with $|v|=|u_0|=|u_1|$ so that $u_0 \otimes v \in {\mathcal{L}({{{\mathcal{A}}}}_0)}$, $u_1 \otimes v \in {\mathcal{L}({{{\mathcal{A}}}}_1)}$, and $(u_0)_\Sigma = (u_1)_\Sigma$. We have thus reduced the problem to an instance of ${({{{\sf RAT}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$: whether there is $(u,v)$ in the relation denoted by ${\mathcal{A}}'$ so that $u {\sqsubseteq}v$. The decidability part of Theorem \[scr-thm\] follows as a corollary of Lemmas \[lem:interpb-&gt;synchronizedinterpb\] and \[lem:LDC-syn-decidable\], and Proposition \[prop:subseq-rat-dec-nmr\]. Of course the complexity is non-multiply-recursive, since the problem subsumes ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ of Theorem \[subseq-dec\]. Coming back to graph logics, we obtain: \[ecrpq-subseq\] The complexity of evaluation of [[ECRPQ]{}]{}(${\sqsubseteq}$) queries is not bounded by any multiply-recursive function. Another corollary can be stated in purely language-theoretic terms. \[nonempty-rel-cor\] Let ${{{\mathcal{C}}}}$ be a class of binary relations on ${\Sigma^*}$ that is closed under intersection and contains ${{\sf REG}}$.
Then the nonemptiness problem for ${{{\mathcal{C}}}}$ is: [$\bullet$]{} undecidable if ${\preceq}$ or ${\preceq_{{\rm suff}}}$ is in ${{{\mathcal{C}}}}$; non-multiply-recursive if ${\sqsubseteq}$ is in ${{{\mathcal{C}}}}$. Discussion ---------- In addition to answering some basic language-theoretic questions about the interaction of regular and rational relations, and to providing the simplest problem yet with non-multiply-recursive complexity, our results also rule out logical languages for graph databases that freely combine regular relations and some of the most commonly used rational relations, such as subword and subsequence. With them, query evaluation becomes either undecidable or non-multiply-recursive (which means that no realistic algorithm will be able to solve the hard instances of this problem). This does not yet fully answer our questions about the evaluation of queries in graph logics. First, in the case of subsequence (or, more generally, ${{\sf SCR}}$ relations) we still do not know if query evaluation of [[ECRPQ]{}]{}s with such relations is decidable (i.e., what happens with ${\text{\sc GenInt}}_S({{\sf REG}})$ for such relations $S$). Even more importantly, we do not yet know what happens with the complexity of [[CRPQ]{}]{}s (i.e., ${\text{\sc GenInt}}_S({{\sf REC}})$) for various relations $S$. These questions are answered in the next section. Restricted logics and the generalized intersection problem {#restr:sec} ========================================================== The previous section already ruled out some graph logics with rational relations as either undecidable or decidable with extremely high complexity. This was done merely by analyzing the intersection problem for binary rational and regular relations. We now move to the study of the generalized intersection problem, and use it to analyze the complexity of graph logics in full generality.
We first deal with the generalization of the decidable case (${{\sf SCR}}$ relations), and then consider the problem ${\text{\sc GenInt}}_S({{\sf REC}})$, corresponding to [[CRPQ]{}]{}s extended with relations $S$ on paths. Generalized intersection problem and subsequence ------------------------------------------------ We know that ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$ is decidable, although not multiply-recursive. What about its generalized version? It turns out it remains decidable. \[genintp-subseq-thm\] The problem ${\text{\sc GenInt}}_{{\sqsubseteq}}({{\sf REG}})$ is decidable. That is, there is an algorithm that decides, for a given $m$-ary regular relation $R$ and $I \subseteq [m]^2$, whether $R \cap_I \mbox{${\sqsubseteq}$}\neq \emptyset$. Let $k \in {\mathbb{N}}$, $I \subseteq [k] \times [k]$ and $R \in {{{\sf REG}}_{k}}$ be an instance of the problem. Let us define $G = {\{(w_1, \dotsc, w_k) \mid \forall (i,j) \in I, w_i {\sqsubseteq}w_j\}}$. We show how to compute if $R \cap G$ is empty or not. Let ${\mathcal{A}} = (Q,({\Sigma_\bot})^k,q_0,\delta,F)$ be an NFA over $(\Sigma_\bot)^k$ corresponding to $R$; for simplicity, we assume that it is complete. Remember that every $w \in {\mathcal{L}({\mathcal{A}})}$ is such that ${\pi}_i(w)$ is in $\Sigma^* \cdot {\{\bot\}}^*$ for every $i \in [k]$. Given $u, v \in \Sigma^*$, we define $u \setminus v$ as $u[i..]$, where $i$ is the maximal index such that $u[1..i-1] {\sqsubseteq}v$. In other words, $u \setminus v$ is the result of removing from $u$ the maximal prefix that is a subsequence of $v$. We define a finite tree ${\mathbf{t}}$ in which every node is labeled with [$\bullet$]{} a depth $n \geq 0$, $k$ words $w_1, \dotsc, w_k \in {\Sigma_\bot}^n$, for every $(i,j) \in I$, a word $\alpha_{ij} \in \Sigma^*$, and a state $q \in Q$.
For a node $x$ we denote these labels by $x.n$, $x.w_1, \dotsc, x.w_k$, $x.\alpha_{ij}$ for every $(i,j) \in I$ and $x.q$ respectively. The tree is such that the following conditions are met. [$\bullet$]{} The root is labeled by $x.n=0$, $x.w_1 = \dotsb = x.w_k = {\varepsilon}$, for every $(i,j) \in I$, $x.\alpha_{ij} = {\varepsilon}$, and $x.q = q_0$. A node $x$ has a child $y$ in ${\mathbf{t}}$ if and only if [$-$]{} $y.n = x.n+1$, $x.w_i = y.w_i[1..y.n-1]$ for every $i \in [k]$, there is a transition $(x.q, \bar a , y.q) \in \delta$ with $\bar a = (y.w_i[y.n])_{i \in [k]}$, and $y.\alpha_{ij} = (y.w_i)_\Sigma \setminus (y.w_j)_\Sigma$ for every $(i,j) \in I$. A node $x$ is a leaf in ${\mathbf{t}}$ if and only if it is final or saturated (as defined below). A node $x$ is **final** if $x.q \in F$ and $x.\alpha_{ij} = {\varepsilon}$ for all $(i,j) \in I$. It is **saturated** if it is not final and there is an ancestor $y \neq x$ such that $y.q = x.q$ and $y.\alpha_{ij} {\sqsubseteq}x.\alpha_{ij}$ for all $(i,j) \in I$. \[lem:kary-tree-finite\] The tree ${\mathbf{t}}$ is finite and computable. The root is obviously computable, and for every branch, one can compute the list of children of the bottom-most node of the branch; indeed, every node has finitely many children. The tree ${\mathbf{t}}$ cannot have an infinite branch. If there was an infinite branch, then as a result of Higman’s Lemma [*cum*]{}Dickson’s Lemma (and the Pigeonhole principle) there would be two nodes $x\neq y$, where $x$ is an ancestor of $y$, $x.q = y.q$, and for all $(i,j) \in I$, $x.\alpha_{ij} {\sqsubseteq}y.\alpha_{ij}$. Then $y$ is either final or saturated, hence a leaf without children, contradicting the fact that $x$ and $y$ are in an infinite branch of ${\mathbf{t}}$. Since all the branches are finite and the children of any node are finite, by K[ő]{}nig’s Lemma, ${\mathbf{t}}$ is finite, and computable. \[lem:finite-nonempt\] If ${\mathbf{t}}$ has a final node, $R \cap G \neq \emptyset$.
If a leaf $x$ is final, consider the $x.n$ ancestors of $x$: $x_0, \dotsc, x_{x.n-1}$, where $x_i.n = i$ for every $i \in [0..x.n-1]$. Consider the run ${\rho}: [0..x.n] \to Q$ defined as ${\rho}(x.n) = x.q$ and ${\rho}(i) = x_i.q$ for $i < x.n$. It is easy to see that ${\rho}$ is an accepting run of ${\mathcal{A}}$ on $x.w_1 \otimes \dotsc \otimes x.w_k$ and therefore that $((x.w_1)_\Sigma, \dotsc, (x.w_k)_\Sigma) \in R$. On the other hand, for every $(i,j) \in I$, $(x.w_i)_\Sigma {\sqsubseteq}(x.w_j)_\Sigma$ since $x.\alpha_{ij} = {\varepsilon}$. Hence, $((x.w_1)_\Sigma, \dotsc, (x.w_k)_\Sigma) \in G$ and thus $R \cap G \neq \emptyset$. \[lem:allsat-empt\] If all the leaves of ${\mathbf{t}}$ are saturated, $R \cap G = \emptyset$. By means of contradiction suppose that there is $w = w_1 \otimes \dotsb \otimes w_k \in ({\Sigma_\bot}^k)^*$ such that $w \in {\mathcal{L}({\mathcal{A}})}$ through an accepting run ${\rho}: [0..n] \to Q$, and for every $(i,j) \in I$, $(w_i)_\Sigma {\sqsubseteq}(w_j)_\Sigma$. Let $w$ be of minimal length $n = |w|$. By construction of ${\mathbf{t}}$, the following claims follow. There is a maximal branch $x_0, \dotsc, x_m$ in ${\mathbf{t}}$ such that $x_\ell.n = \ell$, $x_\ell.w_j = w_j[1..\ell]$, $x_\ell.q = {\rho}(\ell)$ for every $\ell \in [0..m]$ and $j \in [k]$. \[cl:subseq-alphaij\] For every $\ell \in [0..m]$ and $(i,j) \in I$, $$\begin{aligned} x_\ell.\alpha_{ij} \cdot (w_i[\ell+1 ..])_\Sigma &{\sqsubseteq}(w_j[\ell+1 ..])_\Sigma ,\label{cl:subseq-alphaij:1} \\ (w_i[1.. \ell-|x_\ell.\alpha_{ij}|])_\Sigma &{\sqsubseteq}(w_j[1.. \ell])_\Sigma . \label{cl:subseq-alphaij:2} \end{aligned}$$ Since we assume that all the leaves of ${\mathbf{t}}$ are saturated, in particular $x_m$ is saturated and there must be some $m' < m$ such that $x_m$ and $x_{m'}$ verify the saturation conditions. Consider the following word. $$w' = w[1..m'] \cdot w[m+1..]$$ The run ${\rho}$ trimmed with the positions $[m'+1 ..
m]$ is still an accepting run on $w'$ (since ${\rho}(m')={\rho}(m)$), and therefore $(({\pi}_1(w'))_\Sigma, \dotsc, ({\pi}_k(w'))_\Sigma) \in R$. For an arbitrary $(i,j) \in I$, we show that $({\pi}_i(w'))_\Sigma {\sqsubseteq}({\pi}_j(w'))_\Sigma$. First, note that we have that $$\begin{aligned} ({\pi}_i(w')[1..m' - |x_{m'}.\alpha_{ij}|])_\Sigma &=(w_i[1..m' - |x_{m'}.\alpha_{ij}|])_\Sigma \\ &{\sqsubseteq}(w_j[1..m'])_\Sigma \tag{by \eqref{cl:subseq-alphaij:2}}\\ &=({\pi}_j(w')[1..m'])_\Sigma . \end{aligned}$$ Since $x_{m'}$ and $x_{m}$ verify the saturation conditions, $x_{m'}.\alpha_{ij} {\sqsubseteq}x_m.\alpha_{ij}$. Therefore, $$\begin{aligned} ({\pi}_i(w')[m' - |x_{m'}.\alpha_{ij}|+1 ..])_\Sigma &= ({\pi}_i(w')[m' - |x_{m'}.\alpha_{ij}|+1 .. m'])_\Sigma \cdot ({\pi}_i(w')[m' +1..])_\Sigma\\ &=x_{m'}.\alpha_{ij} \cdot (w_i[m +1..])_\Sigma\\ &{\sqsubseteq}x_m.\alpha_{ij} \cdot (w_i[m +1..])_\Sigma \tag{since $x_{m'}.\alpha_{ij} {\sqsubseteq}x_m.\alpha_{ij}$}\\ &{\sqsubseteq}(w_j[m+1 ..])_\Sigma \tag{by \eqref{cl:subseq-alphaij:1}}\\ &=({\pi}_j(w')[m'+1..])_\Sigma \end{aligned}$$ Hence, we showed that there are some $\ell, \ell'$ such that $({\pi}_i(w')[1..\ell])_\Sigma {\sqsubseteq}({\pi}_j(w')[1..\ell'])_\Sigma$ and $({\pi}_i(w')[\ell+1..])_\Sigma {\sqsubseteq}({\pi}_j(w')[\ell'+1..])_\Sigma$, for $\ell = m' - |x_{m'}.\alpha_{ij}|$ and $\ell' = m'$. Thus, $({\pi}_i(w'))_\Sigma {\sqsubseteq}({\pi}_j(w'))_\Sigma$. This means that $(({\pi}_1(w'))_\Sigma, \dotsc, ({\pi}_k(w'))_\Sigma) \in G$ and thus $(({\pi}_1(w'))_\Sigma, \dotsc, ({\pi}_k(w'))_\Sigma) \in R \cap G$. But this cannot be since $|w'| < |w|$ and $w$ is of minimal length. The contradiction arises from the assumption that $R \cap G \neq \emptyset$; hence, $R \cap G = \emptyset$. Hence, by Lemmas \[lem:kary-tree-finite\], \[lem:finite-nonempt\] and \[lem:allsat-empt\], $R \cap G \neq \emptyset$ if and only if ${\mathbf{t}}$ has a final node, and the latter can be effectively checked since ${\mathbf{t}}$ is finite and computable.
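To make the procedure concrete, here is a Python sketch for the special case $k=2$ and $I = \{(1,2)\}$, i.e., for ${({{{\sf REG}}} \cap {\mbox{${\sqsubseteq}$}})\ensuremath{\stackrel{\text{\tiny ?}}{=}}\emptyset}$. It explores the tree ${\mathbf{t}}$ depth-first, storing node words explicitly rather than maintaining them incrementally, pruning saturated nodes and reporting success on a final node; the pair-NFA encoding and the examples are our own, and the sketch ignores the shape constraint that $\bot$'s occur only as suffixes:

```python
BOT = "#"  # padding symbol ⊥

def is_subseq(u, v):
    """u ⊑ v: u embeds into v as a scattered subsequence."""
    it = iter(v)
    return all(c in it for c in u)

def residual(u, v):
    """The operation u \\ v: drop from u the longest prefix that is a
    subsequence of v (greedy left-most matching realizes the maximum)."""
    j = 0
    for i, c in enumerate(u):
        while j < len(v) and v[j] != c:
            j += 1
        if j == len(v):
            return u[i:]
        j += 1
    return ""

def strip(w):
    return w.replace(BOT, "")

def reg_meets_subseq(delta, q0, finals):
    """Explore the tree t for k = 2, I = {(1, 2)}: nodes carry (state, w1, w2);
    a node is final if its state is accepting and alpha = w1 \\ w2 is empty, and
    saturated if an ancestor has the same state and an alpha embedding into its
    alpha -- the well-quasi-order that makes the tree finite."""
    stack = [(q0, "", "", [])]        # state, w1, w2, ancestor (state, alpha) pairs
    while stack:
        q, w1, w2, anc = stack.pop()
        alpha = residual(strip(w1), strip(w2))
        if q in finals and alpha == "":
            return True               # final node found
        if any(p == q and is_subseq(a, alpha) for p, a in anc):
            continue                  # saturated leaf: prune
        for (p, (a, b)), targets in delta.items():
            if p != q:
                continue
            for q2 in targets:
                stack.append((q2, w1 + a, w2 + b, anc + [(q, alpha)]))
    return False

# R = {(a^n, b a^n) | n ≥ 1}, encoded with end-padding on the shorter word:
delta = {(0, ("a", "b")): {1}, (1, ("a", "a")): {1}, (1, (BOT, "a")): {2}}
print(reg_meets_subseq(delta, 0, {2}))                   # True: a^n ⊑ b a^n
print(reg_meets_subseq({(0, ("a", "b")): {1}}, 0, {1}))  # False: a ⋢ b
```

The greedy matching in `residual` realizes the maximal prefix in the definition of $u \setminus v$, and the ancestor check is exactly the saturation condition whose well-quasi-ordering (Higman) guarantees that the search is finite.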
The query evaluation problem for [[ECRPQ]{}]{}(${\sqsubseteq}$) queries is decidable. Of course the complexity is extremely high as we already know from Corollary \[ecrpq-subseq\]. Note that while the intersection problem of ${\sqsubseteq}$ with rational relations is decidable, as is ${\text{\sc GenInt}}_{{\sqsubseteq}}({{\sf REG}})$, we lose the decidability of ${\text{\sc GenInt}}_{{\sqsubseteq}}({{\sf RAT}})$ even in the simplest cases that go beyond the intersection problem (that is, for ternary relations in ${{\sf RAT}}$ and any $I$ that does not force two words to be the same). \[no-ternary-lemma\] The problem ${({{{\sf RAT}}} \mathrel{\cap_{I}} {\mbox{${\sqsubseteq}$}}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is undecidable even over ternary relations when $I$ is one of the following: 1. \[lem:k-ary:undec:1\] ${\{(1,2), (2,3)\}}$, 2. \[lem:k-ary:undec:2\] ${\{(1,2), (1,3)\}}$, or 3. \[lem:k-ary:undec:3\] ${\{(1,2), (3,2)\}}$. The three proofs use a reduction from the PCP problem. Recall that this is defined as follows. The input consists of two equally long lists $u_1,u_2,\dots,u_n$ and $v_1,v_2,\dots,v_n$ of strings over alphabet $\Sigma$. The PCP problem asks whether there exists a solution for this input, that is, a sequence of indices $i_1,i_2,\dots,i_k$ such that $1 \leq i_j \leq n$ ($1 \leq j \leq k$) and $u_{i_1} u_{i_2} \cdots u_{i_k} = v_{i_1} v_{i_2} \cdots v_{i_k}$. ${\{(1,2), (2,3)\}}$: The proof goes by reduction from an arbitrary PCP instance given by lists $u_1,\dots,u_n$ and $v_1, \dots,v_n$ of strings over alphabet $\Sigma$. The following relation $$R = {\{(u_{i_1} \dotsb u_{i_m},v_{i_1} \dotsb v_{i_m},u_{i_1} \dotsb u_{i_m}) \mid m\in {\mathbb{N}}\text{ and } i_1, \dotsc, i_m \in [n]\}}$$ is rational, and $R \cap {\{(x,y,z) \mid x {\sqsubseteq}y {\sqsubseteq}z\}}$ is non-empty if and only if the instance has a solution: since $x = z$, the conditions $x {\sqsubseteq}y$ and $y {\sqsubseteq}z$ force $x = y$ by antisymmetry of ${\sqsubseteq}$.
${\{(1,2), (1,3)\}}$: The proof again goes by reduction from an arbitrary PCP instance given by lists $u_1,\dots,u_n$ and $v_1, \dots,v_n$ of strings over alphabet $\Sigma$. For simplicity, and without any loss of generality, we assume that $|u_i|, |v_i| \leq 1$ for every $i$. Let $\hat \Sigma = {\{ \hat a \mid a \in \Sigma\}}$, and for every $w = a_1 \dotsb a_\ell \in \Sigma^*$, let $\hat w = \hat a_1 \dotsb \hat a_\ell$. Consider $$\begin{aligned} R &= \{ (x,y,z) \mid m\in {\mathbb{N}}\text{, } i_1, \dotsc, i_m \in [n] \text{, } w_1, w'_1, \dotsc, w_{m+1}, w'_{m+1}\in \Sigma^*\text{, }\\ &\hspace{14ex} x = u_{i_1} \hat v_{i_1} u_{i_2} \hat v_{i_2} \dotsb u_{i_m} \hat v_{i_m}, \\ &\hspace{14ex} y = w'_1 \hat u_{i_1} w'_2 \dotsb w'_{m} \hat u_{i_m} w'_{m+1}, \\ &\hspace{14ex} z = \hat w_1 v_{i_1} \hat w_2 \dotsb \hat w_{m} v_{i_m} \hat w_{m+1} \} \end{aligned}$$ which is a rational relation. Note that there is some $(x,y,z) \in R$ with $x {\sqsubseteq}y$ if and only if $v_{i_1} \dotsb v_{i_m} {\sqsubseteq}u_{i_1} \dotsb u_{i_m}$ for some indices $i_1, \dotsc, i_m$. Similarly for $x {\sqsubseteq}z$. Therefore, there is $(x,y,z) \in R$ with $x {\sqsubseteq}y$, $x {\sqsubseteq}z$ if and only if $v_{i_1} \dotsb v_{i_m} = u_{i_1} \dotsb u_{i_m}$ for some choice of $i_1, \dotsc, i_m$. ${\{(1,2), (3,2)\}}$: This is similar to the previous case, but this time we consider the following rational relation. $$\begin{aligned} R &= \{ (x,y,z) \mid m\in {\mathbb{N}}\text{, } i_1, \dotsc, i_m \in [n] \text{, } w_1, w'_1, \dotsc, w_{m+1}, w'_{m+1}\in \Sigma^*\text{, }\\ &\hspace{14ex} y = u_{i_1} \hat v_{i_1} u_{i_2} \hat v_{i_2} \dotsb u_{i_m} \hat v_{i_m}, \\ &\hspace{14ex} x = w'_1 \hat u_{i_1} w'_2 \dotsb w'_{m} \hat u_{i_m} w'_{m+1}, \\ &\hspace{14ex} z = \hat w_1 v_{i_1} \hat w_2 \dotsb \hat w_m v_{i_m} \hat w_{m+1} \}\end{aligned}$$ As before, there is $(x,y,z) \in R$ with $x {\sqsubseteq}y$, $z {\sqsubseteq}y$ if and only if the PCP instance has a solution.
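As a sanity check on the first of these reductions, one can enumerate index sequences of a small PCP instance and test membership in $R \cap {\{(x,y,z) \mid x {\sqsubseteq}y {\sqsubseteq}z\}}$ directly; since $x = z$, the two subsequence constraints hold exactly when the two concatenations coincide. A Python sketch (the toy instance and the bound on the sequence length are our own):

```python
from itertools import product

def is_subseq(u, v):
    """u ⊑ v: u embeds into v as a scattered subsequence."""
    it = iter(v)
    return all(c in it for c in u)

def witness(us, vs, max_len=6):
    """Search for indices i_1..i_m such that the tuple
    (u_{i_1}..u_{i_m}, v_{i_1}..v_{i_m}, u_{i_1}..u_{i_m}) from the first
    reduction satisfies x ⊑ y ⊑ z; with z = x this holds iff the two
    concatenations are equal, i.e. iff the indices solve PCP."""
    n = len(us)
    for m in range(1, max_len + 1):
        for idx in product(range(n), repeat=m):
            x = "".join(us[i] for i in idx)
            y = "".join(vs[i] for i in idx)
            if is_subseq(x, y) and is_subseq(y, x):  # x ⊑ y and y ⊑ z, z = x
                return [i + 1 for i in idx]          # 1-based indices
    return None

# Solvable instance: u = (a, ab), v = (aa, b); the sequence 1,2 gives "aab" twice.
print(witness(["a", "ab"], ["aa", "b"]))   # [1, 2]
# Unsolvable instance: the concatenation lengths can never match.
print(witness(["a"], ["aa"]))              # None
```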
Generalized intersection problem for recognizable relations ----------------------------------------------------------- We now consider the problem of answering [[CRPQ]{}]{}s with rational relations $S$, or, equivalently, the problem ${\text{\sc GenInt}}_S({{\sf REC}})$. Recall that an instance of such a problem consists of an $m$-ary recognizable relation $R$ and a set $I\subseteq [m]^2$. The question is whether $R \cap_I S\neq \emptyset$, i.e., whether there exists a tuple $(w_1,\ldots,w_m)\in R$ so that $(w_i,w_j)\in S$ whenever $(i,j)\in I$. It turns out that the decidability of this problem hinges on the graph-theoretic properties of $I$. In fact we shall present a *dichotomy result*, classifying problems ${\text{\sc GenInt}}_S({{\sf REC}})$ into [[PSpace]{}]{}-complete and undecidable depending on the structure of $I$. Before stating the result, we need to decide how to represent a recognizable relation $R$. Recall that an $m$-ary $R\in{{\sf REC}}$ is a union of relations of the form $L_1\times\ldots\times L_m$, where each $L_i$ is a regular language. Hence, as the representation of $R$ we take the set of all such $L_i$s involved, and as the measure of its complexity, the total size of NFAs defining the $L_i$s. With a set $I\subseteq [m]^2$ we associate an [*undirected*]{} graph $G_I$ whose nodes are $1,\ldots,m$ and whose edges are $\{i,j\}$ such that either $(i,j)\in I$ or $(j,i)\in I$. We call an instance of ${({{{\sf REC}}} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ [*acyclic*]{} if $G_I$ is an acyclic graph. Now we can state the dichotomy result. \[acyclic-thm\] [$\bullet$]{} Let $S$ be a binary rational relation. Then acyclic instances of ${\text{\sc GenInt}}_S({{\sf REC}})$ are decidable in [[[PSpace]{}]{}]{}. Moreover, there is a fixed binary relation $S_0$ such that the problem ${({{{\sf REC}}} \mathrel{\cap_{I}} {S_0}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is [[[PSpace]{}]{}]{}-complete. 
For every $I$ such that $G_I$ is not acyclic, there exists a binary rational relation $S$ such that the problem ${({{{\sf REC}}} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is undecidable. For [[[PSpace]{}]{}]{}-hardness we can do an easy reduction from nonemptiness of the intersection of $m$ given NFAs, which is known to be [[[PSpace]{}]{}]{}-complete [@kozen77]. Given $m$ NFAs ${{{\mathcal{A}}}}_1,\dots,{{{\mathcal{A}}}}_m$, define the (acyclic) relation $I = \{(i,i+1) \mid 1 \leq i < m\}$. Then $\bigcap_i {{{\mathcal{L}}}}({{{\mathcal{A}}}}_i)$ is nonempty if and only if $\prod_i {{{\mathcal{L}}}}({{{\mathcal{A}}}}_i) \cap_I S_0 \neq \emptyset$, where $S_0$ is the regular relation $\{(w,w)\ | \ w \in \Sigma^*\}$. For the upper bound, we use the following idea: first we show how to construct, in exponential time, for each $m$-ary recognizable relation $R$, binary rational relation $S$ and acyclic $I \subseteq [m]^2$, an $m$-tape automaton ${{{\mathcal{A}}}}(R,S,I)$ that accepts precisely those $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ such that $\bar w \in R$ and $(w_i,w_j) \in S$, for each $(i,j) \in I$. Intuitively, ${{{\mathcal{A}}}}(R,S,I)$ represents the “synchronization" of the transducer that accepts $R$ with a copy of the 2-tape automaton that recognizes $S$ over each projection defined by the pairs in $I$. Such synchronization is possible since $I$ is acyclic. Hence, in order to solve ${\text{\sc GenInt}}_{S}({{\sf REC}})$ we only need to check ${{{\mathcal{A}}}}(R,S,I)$ for nonemptiness. The latter can be done in [[[PSpace]{}]{}]{} by the standard “on-the-fly" reachability analysis. We proceed with the details of the construction below. Recall that rational relations are the ones defined by $n$-tape automata. We start by formally defining the class of $n$-tape automata that we use in this proof.
An $n$-tape automaton, $n > 0$, is a tuple ${{{\mathcal{A}}}}= (Q,\Sigma,Q_0,\delta,F)$, where $Q$ is a finite set of control states, $\Sigma$ is a finite alphabet, $Q_0 \subseteq Q$ is the set of initial states, $\delta : Q \times (\Sigma \cup \{{\varepsilon}\})^n \to 2^{Q \times ([n] \cup \{[n]\})}$ is the transition function with ${\varepsilon}$ a symbol not appearing in $\Sigma$, and $F \subseteq Q$ is the set of final states. Intuitively, the transition function specifies how ${{{\mathcal{A}}}}$ moves in a situation when it is in state $q$ reading a tuple of symbols $\bar a \in (\Sigma \cup \{{\varepsilon}\})^n$: if $(q',j) \in \delta(q,\bar a)$, where $j \in [n]$, then ${{{\mathcal{A}}}}$ is allowed to enter state $q'$ and move its $j$-th head one position to the right of its tape. If $(q',[n]) \in \delta(q,\bar a)$ then ${{{\mathcal{A}}}}$ is allowed to enter state $q'$ and move each one of its heads one position to the right of its tape. Given a tuple $\bar w = (w_1,\dots,w_n) \in (\Sigma^*)^n$ such that $w_i$ is of length $p_i \geq 0$, for each $1 \leq i \leq n$, a [*run*]{} of ${{{\mathcal{A}}}}$ over $\bar w$ is a sequence $q_0 \, P_0 \, q_1 \, P_1 \, \cdots \, q_{k-1} \, P_{k-1} \, q_{k}$, for $k \geq 0$, such that: (1) $q_i \in Q$, for each $0 \leq i \leq k$, (2) $q_0 \in Q_0$, (3) $P_i$ is a tuple in $([p_1] \cup \{0\}) \times \cdots \times ([p_n] \cup \{0\})$, for each $0 \leq i \leq k-1$ (intuitively, the $P_i$’s represent the positions of the $n$ heads of ${{{\mathcal{A}}}}$ at each stage of the run.
In particular, the $j$-th component of $P_i$ represents the position of the $j$-th head of ${{{\mathcal{A}}}}$ in stage $i$ of the run), (4) $P_0 = (b_1,\dots,b_n)$, where $b_i := 0$ if $w_i$ is the empty word ${\varepsilon}$ (that is, $p_i = 0$) and $b_i := 1$ otherwise (that is, the run starts by initializing each one of the $n$ heads to be in the initial position of its tape, if possible), (5) $P_{k-1} = (p_1,\dots,p_n)$, that is, the run ends when each head scans the last position of its tape, and (6) for each $0 \leq i \leq k-1$, if $P_i = (r_1,\dots,r_n)$ and $$\big(\,(\pi_1(\bar w))[r_1],\, \ldots \, ,(\pi_n(\bar w))[r_n]\,\big) \ = \ (a_1,\ldots,a_n),$$ where we assume by definition that $w[0] = {\varepsilon}$, then $\delta(q_i,(a_1,\dots,a_n))$ contains a pair of the form $(q_{i+1},j)$ such that: 1. if $i < k-1$ then $j \in [n]$ and $P_{i+1}$ is the tuple $(r_1,\dots,r_{j-1}, r_{j} + 1, r_{j+1},\dots,r_n)$. In such a case we say that $(q_{i+1},P_{i+1})$ is a [*valid transition from $(q_i,P_i)$ over $\bar w$ in the $j$-th head*]{}, and 2. if $i = k - 1$ then $j = [n]$. This is a technical condition that ensures that each head of ${{{\mathcal{A}}}}$ leaves its tape after the last transition in the run is performed. That is, each run is forced to respect the transition function $\delta$ when the $n$-tape automaton ${{{\mathcal{A}}}}$ is in state $q$ reading the symbols in the corresponding positions of its $n$ heads. Further, the positions of the $n$ heads are updated in the run also according to what is allowed by $\delta$. Notice that each transition in a run moves a single head, except for the last one that moves all of them at the same time. The run is [*accepting*]{} if $q_k \in F$ (that is, ${{{\mathcal{A}}}}$ enters an accepting state after each one of its heads scans the last position of its own tape).
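To make the run semantics concrete, the following Python sketch simulates a slightly simplified variant of this model (our own encoding: a transition reads the tuple of letters under the heads, with `EPS` read past the end of a tape, and advances a single chosen head; the final all-heads move of condition (6.2) is dropped, acceptance simply requiring a final state once every head has consumed its word):

```python
from collections import deque

EPS = None  # read past the end of a tape, playing the role of ε in the text

def accepts(q0, delta, finals, words):
    """Nondeterministic search over configurations (state, head positions).
    delta maps (state, letter tuple) to a set of (state, j) pairs, meaning
    'advance head j'; accept in a final state once every head is past its word."""
    n = len(words)
    start = (q0, (0,) * n)
    seen, queue = {start}, deque([start])
    while queue:
        q, pos = queue.popleft()
        if q in finals and all(pos[i] == len(words[i]) for i in range(n)):
            return True
        letters = tuple(words[i][pos[i]] if pos[i] < len(words[i]) else EPS
                        for i in range(n))
        for q2, j in delta.get((q, letters), ()):
            if pos[j] < len(words[j]):             # head j can still advance
                cfg = (q2, pos[:j] + (pos[j] + 1,) + pos[j + 1:])
                if cfg not in seen:
                    seen.add(cfg)
                    queue.append(cfg)
    return False

# A 2-tape automaton for the 'equal length over {a}' relation: alternate heads.
delta = {
    ("s", ("a", "a")): {("t", 0)},   # read matching a's, advance head 1 ...
    ("t", ("a", "a")): {("s", 1)},   # ... then head 2
    ("t", (EPS, "a")): {("s", 1)},   # head 1 already done, finish head 2
}
print(accepts("s", delta, {"s"}, ("aa", "aa")))  # True
print(accepts("s", delta, {"s"}, ("a", "aa")))   # False
```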
Each $n$-tape automaton ${{{\mathcal{A}}}}$ defines the language $L({{{\mathcal{A}}}}) \subseteq (\Sigma^*)^n$ of all those $\bar w =$ $(w_1,\dots,$ $w_n) \in (\Sigma^*)^n$ such that there is an accepting run of ${{{\mathcal{A}}}}$ over $\bar w$. It can be proved with standard techniques that languages defined by $n$-ary rational relations are precisely those defined by $n$-tape automata. Notice that there is an alternative, more general model of $n$-tape automata that allows each transition to move an arbitrary number of heads. It is easy to see that this model is equivalent in expressive power to the one we present here, as transitions that move an arbitrary number of heads can easily be encoded by a series of single-head transitions. We have decided to use this more restricted version of $n$-tape automata here, as it allows us to simplify some of the technical details in our proof. Now we continue with the proof that the problem ${\text{\sc GenInt}}_S({{\sf REC}})$ can be solved in [[[PSpace]{}]{}]{} if $I$ is acyclic (that is, it defines an acyclic undirected graph). The main technical tool for proving this is the following lemma: \[lemma:m-tape-aut-acyc\] Let $R$ be an $m$-ary relation in ${{\sf REC}}$, $S$ a binary rational relation, and $I$ a subset of $[m] \times [m]$ that defines an acyclic undirected graph. It is possible to construct, in exponential time, an $m$-tape automaton ${{{\mathcal{A}}}}(R,S,I)$ such that the language defined by ${{{\mathcal{A}}}}(R,S,I)$ is precisely the set of words $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ such that $\bar w \in R$ and $(w_i,w_j) \in S$ for all $(i,j) \in I$. We start by proving the lemma. The intuitive idea is that ${{{\mathcal{A}}}}(R,S,I)$ is an $m$-tape automaton that at the same time recognizes $R$ and represents the “synchronization” of the $|I|$ copies of the 2-tape automaton $S$ over the projections corresponding to the pairs in $I$. Since $I$ is acyclic, such synchronization is possible. 
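The acyclicity precondition on $I$ is cheap to verify. A minimal union-find sketch (our own encoding; we treat $I$ as its underlying simple undirected graph over $[m]$, so the two orientations of the same edge are identified) is:

```python
# Cycle test for the underlying undirected simple graph of I (our sketch).

def is_acyclic(m, I):
    """True iff the edges in I form a forest over vertices 1..m."""
    parent = list(range(m + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    seen = set()
    for (i, j) in I:
        if i == j:
            return False                    # a self-loop is a cycle
        edge = frozenset((i, j))
        if edge in seen:
            continue                        # same undirected edge as before
        seen.add(edge)
        ri, rj = find(i), find(j)
        if ri == rj:
            return False                    # i and j already connected
        parent[ri] = rj
    return True
```

For example, `is_acyclic(4, [(1, 2), (2, 3), (2, 4)])` holds (a star is a tree), while a triangle `[(1, 2), (2, 3), (3, 1)]` is rejected.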
Assume that $|I| = \ell$. Let $t_1,\dots,t_{\ell}$ be an arbitrary enumeration of the pairs in $I$. Also, assume that the recognizable relation $R$ is given as $$\bigcup_i {{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m},$$ where each ${{{\mathcal{N}}}}_{i_j}$ is an NFA over $\Sigma$ (without transitions on the empty word). Assume that the set of states of ${{{\mathcal{N}}}}_{i_j}$ is $U_{i_j}$, its set of initial states is $U^0_{i_j}$ and its set of final states is $U^F_{i_j}$. Further, assume that the 2-tape transducer $S$ is given by the tuple $(Q_S,\Sigma,Q^0_S,\delta_S,Q_S^F)$, where $Q_S$ is the set of states, the set of initial states is $Q^0_S$, the set of final states is $Q^F_S$, and $\delta_S : Q_S \times (\Sigma \cup \{{\varepsilon}\}) \times (\Sigma \cup \{{\varepsilon}\}) \to 2^{Q_S \times (\{1,2\} \cup \{\{1,2\}\})}$ is the transition function. We take $\ell$ disjoint copies $S_1,\dots,S_{\ell}$ of $S$, such that $S_i$, for each $1 \leq i \leq {\ell}$, is the tuple $(Q_{S_i},\Sigma,Q^0_{S_i},\delta_{S_i},Q^F_{S_i})$. Without loss of generality we assume that if $t_i = (j,j') \in [m] \times [m]$ then $\delta_{S_i}$ is a function from $Q_{S_i} \times (\Sigma \cup \{{\varepsilon}\}) \times (\Sigma \cup \{{\varepsilon}\})$ into $2^{Q_{S_i} \times (\{j,j'\} \cup \{\{j,j'\}\})}$. We can do this because $I$ is acyclic, and hence $j \neq j'$. 
The $m$-tape automaton ${{{\mathcal{A}}}}(R,S,I)$ is defined as the tuple $(Q,\Sigma,Q_0,\delta,F)$, where: (1) The set of states $Q$ is $$\bigcup_i \big( U_{i_1} \times \cdots \times U_{i_m} \times Q_{S_1} \times \cdots \times Q_{S_{\ell}} \big).$$ (2) The initial states in $Q_0$ are precisely those in $$\bigcup_i \big( U^0_{i_1} \times \cdots \times U^0_{i_m} \times Q^0_{S_1} \times \cdots \times Q^0_{S_{\ell}} \big).$$ (3) The final states in $F$ are precisely those in $$\bigcup_i \big( U^F_{i_1} \times \cdots \times U^F_{i_m} \times Q^F_{S_1} \times \cdots \times Q^F_{S_{\ell}} \big).$$ (4) The transition function $\delta : Q \times (\Sigma \cup \{{\varepsilon}\})^m \to 2^{Q \times ([m] \cup \{[m]\})}$ is defined as follows on state $\bar{q} \in Q$ and symbol $\bar a \in (\Sigma \cup \{{\varepsilon}\})^m$. Assume that $\bar q = (u_{i_1},\dots,u_{i_m},q_1,\dots,q_\ell)$, where $u_{i_j} \in U_{i_j}$ for each $1 \leq j \leq m$, and $q_{j} \in Q_{S_j}$ for each $1 \leq j \leq {\ell}$. Further, assume that $\bar a = (a_1,\dots,a_m)$, where $a_j \in (\Sigma \cup \{{\varepsilon}\})$ for each $1 \leq j \leq m$. Then $\delta(\bar q,\bar a)$ consists of all pairs of the form $\big((u'_{i_1},\dots,u'_{i_m},q'_1,\dots,q'_\ell), \,j\,\big)$, for $j \in [m]$, such that: 1. $u'_{i_k} = u_{i_k}$ for each $k \in [m] \setminus \{j\}$, and there is a transition in ${{{\mathcal{N}}}}_{i_j}$ from $u_{i_j}$ into $u'_{i_j}$ labeled $a_j$; and 2. for each $1 \leq k \leq {\ell}$, if $t_k$ is the pair $(k_1,k_2) \in [m] \times [m]$ then the following holds: (1) If $j \not\in \{k_1,k_2\}$ then $q_k = q'_k$, and (2) if $j \in \{k_1,k_2\}$ then $(q'_k,j)$ belongs to $\delta_{S_k}(q_k,(a_{k_1},a_{k_2}))$, plus all pairs of the form $\big((u'_{i_1},\dots,u'_{i_m},q'_1,\dots,q'_\ell), \,[m]\,\big)$ such that: 1. for each $1 \leq k \leq m$ there is a transition in ${{{\mathcal{N}}}}_{i_k}$ from $u_{i_k}$ into $u'_{i_k}$ labeled $a_k$; and 2. 
for each $1 \leq k \leq {\ell}$, if $t_k$ is the pair $(k_1,k_2) \in [m] \times [m]$ then $(q'_k,\{k_1,k_2\})$ belongs to $\delta_{S_k}(q_k,(a_{k_1},a_{k_2}))$. Intuitively, $\delta$ defines possible transitions of ${{{\mathcal{A}}}}(R,S,I)$ that respect the transition function of each one of the copies of $S$ over its respective projection. Further, while scanning its tapes the automaton ${{{\mathcal{A}}}}(R,S,I)$ also checks that there is an $i$ such that for each $1 \leq j \leq m$ the $j$-th tape contains a word in the language defined by ${{{\mathcal{N}}}}_{i_j}$. Clearly, ${{{\mathcal{A}}}}(R,S,I)$ can be constructed in exponential time from $R$, $S$ and $I$. Notice, however, that states of ${{{\mathcal{A}}}}(R,S,I)$ are of polynomial size. We prove next that for every $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ it is the case that $\bar w$ is accepted by ${{{\mathcal{A}}}}(R,S,I)$ if and only if $\bar w$ belongs to the language of $R$ and $(w_i,w_j) \in S$, for each $(i,j) \in I$. $\Longrightarrow$) Assume first that $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ is accepted by ${{{\mathcal{A}}}}(R,S,I)$. It is easy to see from the way ${{{\mathcal{A}}}}(R,S,I)$ is defined that, for some $i$, the projection of the accepting run of ${{{\mathcal{A}}}}(R,S,I)$ on each $1 \leq j \leq m$ defines an accepting run of ${{{\mathcal{N}}}}_{i_j}$ over $w_j$. Further, for each $(j,k) \in I$ it is the case that the projection of the accepting run of ${{{\mathcal{A}}}}(R,S,I)$ on $(j,k)$ defines an accepting run of $S$ over $(w_j,w_k)$. We conclude that $\bar w$ belongs to the language of $R$ and $(w_j,w_k) \in S$, for each $(j,k) \in I$. $\Longleftarrow$) Assume, on the other hand, that $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ belongs to the language of $R$ and $(w_i,w_j) \in S$, for each $(i,j) \in I$. Further, assume that the length of $w_i$ is $p_i \geq 0$, for each $1\leq i \leq m$. We prove next that $\bar w$ is accepted by ${{{\mathcal{A}}}}(R,S,I)$. 
Since $\bar w \in R$ it must be the case that $\bar w$ is accepted by ${{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m}$, for some $i$. Let us assume that $$\rho_{i_j} \ := \ u_{i_j,0} \, (1) \, u_{i_j,1} \, (2) \, \cdots \, u_{i_j,p_j-1} \, (p_j) \, \, u_{i_j,p_j}$$ is an accepting run of the 1-tape automaton ${{{\mathcal{N}}}}_{i_j}$ over $w_j$, for each $1 \leq j \leq m$. Since for every $t_j$ ($1 \leq j \leq \ell$) of the form $(k,k') \in [m] \times [m]$ it is the case that $(w_k,w_{k'}) \in S$, there is an accepting run $$\lambda_j \ := \ q_{j,0} \, P_{j,0} \, q_{j,1} \, P_{j,1} \, \cdots \, q_{j,r_j} \, P_{j,r_j} \, q_{j,r_j + 1}$$ of $S_j$ over $(w_k,w_{k'})$. We then inductively define a sequence $$\bar q_0 \, P_0 \, \bar q_1\, P_1 \, \cdots \,$$ where each $\bar q_j$ is a state of $Q$ and each $P_j$ is a tuple in $([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$, as follows: (1) $\bar q_0 := (u_{i_1,0},\dots,u_{i_m,0},q_{1,0},\dots,q_{\ell,0})$. (2) $P_0 = (b_1,\dots,b_m)$, where $b_i := 0$ if $w_i$ is the empty word and $b_i := 1$ otherwise. (3) Let $j \geq 0$. Assume that $\bar q_j = (u_{i_1},\dots,u_{i_m},q_{1},\dots,q_{\ell})$, where each $u_{i_k}$ is a state in ${{{\mathcal{N}}}}_{i_k}$ and each $q_k$ is a state in $S_k$, and that $P_j = (r_1,\dots,r_m) \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$. If for every $1 \leq k \leq m$ it is the case that $r_k = p_k$ then the sequence stops. Otherwise it proceeds as follows. If for some $1 \leq k \leq m$ it is the case that $u_{i_k} (r_k)$ is not a subword of the accepting run $\rho_{i_k}$,[^3] or that for some $1 \leq k \leq \ell$ such that $t_k = (k_1,k_2) \in [m] \times [m]$ it is the case that $q_k (r_{k_1},r_{k_2})$ is not a subword of the accepting run $\lambda_k$,[^4] then the sequence simply fails. Otherwise check whether there is a $1 \leq k \leq m$ such that the following holds: 1. $r_k \neq p_k$. 2. 
For each pair $t_{k_1} \in I$ of the form $(k,k') \in [m] \times [m]$ it is the case that if $q'_{k_1} (r'_{k},r'_{k'})$ is the subword in $Q_{S_{k_1}} \cdot ([p_{k}] \times [p_{k'}])$ that immediately follows $q_{k_1} (r_{k},r_{k'})$ in the run $\lambda_{k_1}$,[^5] then $r'_k = r_k + 1$, and $r'_{k'} = r_{k'}$. 3. For each pair $t_{k_1} \in I$ of the form $(k',k) \in [m] \times [m]$ it is the case that if $q'_{k_1} (r'_{k'},r'_{k})$ is the subword in $Q_{S_{k_1}} \cdot ([p_{k'}] \times [p_{k}])$ that immediately follows $q_{k_1} (r_{k'},r_{k})$ in the run $\lambda_{k_1}$, then $r'_k = r_k + 1$, and $r'_{k'} = r_{k'}$. Intuitively, this states that we can move the $k$-th head of ${{{\mathcal{A}}}}(R,S,I)$ and preserve the transitions on each run of the form $\lambda_{k_1}$ such that $S_{k_1}$ is a copy of $S$ that has one of its components reading tape $k$. If no such $k$ exists the sequence fails. Otherwise pick the least $1 \leq k \leq m$ that satisfies the conditions above, and continue the sequence by defining the pair $(\bar q_{j+1},P_{j+1})$ as $$\big(\,(u_{i_1},\cdots,u_{i_{k-1}},u'_{i_{k}},u_{i_{k+1}},\cdots,u_{i_m}, q'_1,\cdots,q'_{\ell}), \, (r_1,\cdots,r_{k-1},r_{k} + 1,r_{k+1},\cdots,r_m)\,\big),$$ where the following holds: 1. $u'_{i_{k}} (r_{k}+1)$ is the subword in $U_{i_{k}} \cdot [p_{k}]$ that immediately follows $u_{i_{k}} (r_{k})$ in $\rho_{i_{k}}$. 2. For each pair $t_{k_1} \in I$ of the form $(k,k') \in [m] \times [m]$, it is the case that $q'_{k_1}$ satisfies that $q'_{k_1} (r_{k} + 1,r_{k'})$ is the subword in $Q_{S_{k_1}} \cdot ([p_{k}] \times [p_{k'}])$ that immediately follows $q_{k_1} (r_{k},r_{k'})$ in the run $\lambda_{k_1}$. 3. For each pair $t_{k_1} \in I$ of the form $(k',k) \in [m] \times [m]$, it is the case that $q'_{k_1}$ satisfies that $q'_{k_1} (r_{k'},r_{k} + 1)$ is the subword in $Q_{S_{k_1}} \cdot ([p_{k'}] \times [p_{k}])$ that immediately follows $q_{k_1} (r_{k'},r_{k})$ in the run $\lambda_{k_1}$. 4. 
For each pair $t_{k_1} \in I$ of the form $(k',k'') \in [m] \times [m]$ such that $k' \neq k$ and $k'' \neq k$, it is the case that $q'_{k_1} = q_{k_1}$. In this case we say that $(\bar q_{j+1},P_{j+1})$ is [*obtained from $(\bar q_{j},P_{j})$ by performing a transition on the $k$-th head*]{}. We first prove by induction the following crucial property of the sequence $\bar q_0 P_0 \bar q_1 P_1 \cdots$: The sequence does not fail at any stage $j \geq 0$. Clearly, the sequence does not fail in stage 0 given by pair $(\bar q_0,P_0)$. Assume now by induction that the sequence has not failed until stage $j \geq 0$ given by pair $(\bar q_j,P_j)$, and, further, that the sequence does not stop in stage $j$. We prove next that the sequence does not fail in stage $j+1$. If the sequence stops in stage $j+1$ it clearly does not fail. Assume then that the sequence does not stop in stage $j+1$. Also, assume that $\bar q_j = (u_{i_1},\dots,u_{i_m},q_{1},\dots,q_{\ell})$, where each $u_{i_k}$ is a state in ${{{\mathcal{N}}}}_{i_k}$ and each $q_k$ is a state in $S_k$. Further, assume that $P_j = (r_1,\dots,r_m) \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$. Since the sequence did not stop in stage $j$ it must be the case that for every $1 \leq k \leq m$ the sequence $u_{i_k} (r_k)$ is a subword of the accepting run $\rho_{i_k}$, and that for every $1 \leq k \leq \ell$ such that $t_k = (k_1,k_2) \in [m] \times [m]$ the sequence $q_k (r_{k_1},r_{k_2})$ is a subword of the accepting run $\lambda_k$. Assume that $(\bar q_{j+1},P_{j+1})$ is obtained from $(\bar q_j,P_j)$ by performing a transition on the $k$-th head, for $1 \leq k \leq m$. 
Then the pair $(\bar q_{j+1},P_{j+1})$ is of the form: $$\big(\,(u'_{i_1},\cdots,u'_{i_{k}},\cdots,u'_{i_m}, q'_1,\cdots,q'_{\ell}), \, (r'_1,\cdots,r'_{k},\cdots,r'_m)\,\big),$$ where the following holds: (1) $u'_{i_{k'}} = u_{i_{k'}}$, for each $k' \in [m] \setminus \{k\}$, (2) $u'_{i_{k}} (r_{k}+1)$ is the subword in $U_{i_{k}} \cdot [p_{k}]$ that immediately follows $u_{i_{k}} (r_{k})$ in $\rho_{i_{k}}$, (3) $r'_{k'} = r_{k'}$, for each $k' \in [m] \setminus \{k\}$, (4) $r'_{k} = r_k + 1$, (5) for each pair $t_{k_1} \in I$ of the form $(k,k') \in [m] \times [m]$, it is the case that $q'_{k_1}$ satisfies that $q'_{k_1} (r_{k} + 1,r_{k'})$ is the subword in $Q_{S_{k_1}} \cdot ([p_{k}] \times [p_{k'}])$ that immediately follows $q_{k_1} (r_{k},r_{k'})$ in the run $\lambda_{k_1}$, (6) for each pair $t_{k_1} \in I$ of the form $(k',k) \in [m] \times [m]$, it is the case that $q'_{k_1}$ satisfies that $q'_{k_1} (r_{k'},r_{k} + 1)$ is the subword in $Q_{S_{k_1}} \cdot ([p_{k'}] \times [p_{k}])$ that immediately follows $q_{k_1} (r_{k'},r_{k})$ in the run $\lambda_{k_1}$, and (7) for each pair $t_{k_1} \in I$ of the form $(k',k'') \in [m] \times [m]$ such that $k' \neq k$ and $k'' \neq k$, it is the case that $q'_{k_1} = q_{k_1}$. Then, by inductive hypothesis, it is the case that for every $k' \in [m] \setminus \{k\}$ the sequence $u'_{i_{k'}} (r'_{k'})$ is a subword of the accepting run $\rho_{i_{k'}}$. For the same reason, for every $1 \leq k' \leq {\ell}$ such that $t_{k'} = (k_1,k_2) \in [m] \times [m]$, $k_1 \neq k$ and $k_2 \neq k$, it is the case that $q'_{k'} (r'_{k_1},r'_{k_2})$ is a subword of the accepting run $\lambda_{k'}$. Further, simply by definition $u'_{i_k} (r'_k)$ is a subword of the accepting run $\rho_{i_{k}}$. 
Also, by definition, for each pair $t_{k_1} \in I$ of the form $(k',k) \in [m] \times [m]$, it is the case that $q'_{k_1} (r'_{k'},r'_{k})$ is a subword of the accepting run $\lambda_{k_1}$, and, similarly, for each pair $t_{k_1} \in I$ of the form $(k,k') \in [m] \times [m]$, it is the case that $q'_{k_1} (r'_{k},r'_{k'})$ is a subword of the accepting run $\lambda_{k_1}$. Hence, in order to prove that the sequence does not fail in stage $j+1$ it is enough to show that there is an index $1 \leq h \leq m$ such that some pair of the form $(\bar q,P)$, where $\bar q \in Q$ and $P \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$, can be obtained from $(\bar q_{j+1},P_{j+1})$ by performing a transition on the $h$-th head. Since the sequence does not stop in stage $j+1$, the set ${\mathcal{H}}$ $= \{ 1 \leq h' \leq m \mid r'_{h'} \neq p_{h'}\}$ must be nonempty. Let $h_1$ be the least element in ${\mathcal{H}}$. Since the underlying undirected graph of $I$ is acyclic, the connected component of $I$ to which $h_1$ belongs is a tree $T$. Without loss of generality we assume that $T$ is rooted at $h_1$. We start by trying to prove that there is a pair of the form $(\bar q,P)$, where $\bar q \in Q$ and $P \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$, that can be obtained from $(\bar q_{j+1},P_{j+1})$ by performing a transition on the $h_1$-th head. If this is the case we are done and the proof finishes. Assume otherwise. Then we can assume without loss of generality that there is a pair $t_{k'} \in I$ of the form $(h_1,h_2) \in [m] \times [m]$ such that the subword in $Q_{S_{k'}} \cdot ([p_{h_1}] \times [p_{h_2}])$ that immediately follows $q'_{k'} (r'_{h_1},r'_{h_2})$ in the run $\lambda_{k'}$ is of the form $q''_{k'} (r'_{h_1},r'_{h_2} + 1)$. (That is, the run $\lambda_{k'}$ continues from $q'_{k'} (r'_{h_1},r'_{h_2})$ by moving its second head). 
The other possibility is that there is a pair $t_{k''} \in I$ of the form $(h_2,h_1) \in [m] \times [m]$ such that the subword in $Q_{S_{k''}} \cdot ([p_{h_2}] \times [p_{h_1}])$ that immediately follows $q'_{k''} (r'_{h_2},r'_{h_1})$ in the run $\lambda_{k''}$ is of the form $q''_{k''} (r'_{h_2} + 1,r'_{h_1})$. But this case is completely symmetric to the previous one. We then continue by trying to show that there is a pair of the form $(\bar q,P)$, where $\bar q \in Q$ and $P \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$, that can be obtained from $(\bar q_{j+1},P_{j+1})$ by performing a transition on the $h_2$-th head. If this is the case then we are done and the proof finishes. Assume otherwise. Then again we can assume without loss of generality that there is a pair $t_{k''} \in I$ of the form $(h_2,h_3) \in [m] \times [m]$ such that the subword in $Q_{S_{k''}} \cdot ([p_{h_2}] \times [p_{h_3}])$ that immediately follows $q'_{k''} (r'_{h_2},r'_{h_3})$ in the run $\lambda_{k''}$ is of the form $q''_{k''} (r'_{h_2},r'_{h_3} + 1)$. (That is, the run $\lambda_{k''}$ continues from $q'_{k''} (r'_{h_2},r'_{h_3})$ by moving its second head). Since $T$ is acyclic and finite, if we iteratively continue in this way from $h_2$ we will either have to find some $h \in {\mathcal{H}}$ such that there is a pair of the form $(\bar q,P)$, where $\bar q \in Q$ and $P \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$, that can be obtained from $(\bar q_{j+1},P_{j+1})$ by performing a transition on the $h$-th head, or we will have to stop in some $h \in {\mathcal{H}}$ that is a leaf in $T$. But clearly for this $h$ it must be possible to show that there is a pair of the form $(\bar q,P)$, where $\bar q \in Q$ and $P \in ([p_1] \cup \{0\}) \times \cdots \times ([p_m] \cup \{0\})$, that can be obtained from $(\bar q_{j+1},P_{j+1})$ by performing a transition on the $h$-th head. 
This shows that the sequence does not fail in stage $j+1$. We now conclude the proof of this direction of the lemma. Since the sequence does not fail, and from stage $j$ into stage $j+1$ the position of at least one head moves to the right of its tape, the sequence must stop in some stage $j \geq 0$ with associated pair $(\bar q_j,P_j)$. Then $P_j = (p_1,\dots,p_m)$. Assume that $\bar q_j = (u_{i_1},\dots,u_{i_m},q_{1},\dots,q_{\ell})$, where each $u_{i_k}$ is a state in ${{{\mathcal{N}}}}_{i_k}$ and each $q_k$ is a state in $S_k$. Then, from the properties of the sequence, it must be the case that $u_{i_k} (p_k)$ appears as a subword in the accepting run $\rho_{i_k}$, for each $1 \leq k \leq m$, and for each $1 \leq k \leq \ell$ such that $t_k = (k_1,k_2) \in [m] \times [m]$ it is the case that $q_k (p_{k_1},p_{k_2})$ appears as a subword in the accepting run $\lambda_k$. Hence $u_{i_k} = u_{i_k, p_k - 1}$ and $q_k = q_{k,r_k}$. It easily follows from the definition of the sequence $(\bar q_0,P_0) (\bar q_1,P_1) \cdots$ and the transition function $\delta$ of ${{{\mathcal{A}}}}(R,S,I)$, that the following holds for each $k< j$: If $(\bar q_{k+1},P_{k+1})$ is obtained from $(\bar q_k,P_k)$ by performing a transition on the $k'$-th head, $1 \leq k' \leq m$, then $(\bar q_{k+1},P_{k+1})$ is a valid transition from $(\bar q_k,P_{k})$ over $\bar w$ in the $k'$-th head. Further, assume that $$\bar a \ = \ \big(\,(\pi_1(\bar w))[p_1],\, \ldots \, ,(\pi_m(\bar w))[p_m]\,\big),$$ then $\delta(\bar q_j,\bar a)$ contains a pair of the form $(\bar q_{j+1},[m])$, where: $$\bar q_{j+1} \ := \ \big(\,u_{i_1,p_1},\cdots,u_{i_m,p_m},q_{1,r_1+1},\cdots,q_{{\ell},r_{\ell}+1} \, \big).$$ Clearly, $\bar q_{j+1} \in F$ (that is, $\bar q_{j+1}$ is a final state of ${{{\mathcal{A}}}}(R,S,I)$) and we conclude that $\bar q_0 P_0 \bar q_1 P_1 \cdots \bar q_j P_j \bar q_{j+1}$ is an accepting run of ${{{\mathcal{A}}}}(R,S,I)$ over $\bar w$, which was to be proved. 
We now explain how Theorem \[acyclic-thm\] follows from Lemma \[lemma:m-tape-aut-acyc\]. The lemma tells us that in order to solve acyclic instances of ${\text{\sc GenInt}}_S({{\sf REC}})$ we can construct, from the $m$-ary recognizable relation $R$, the binary rational relation $S$ and the acyclic $I \subseteq [m] \times [m]$, the $m$-tape automaton ${{{\mathcal{A}}}}(R,S,I)$, and then check ${{{\mathcal{A}}}}(R,S,I)$ for nonemptiness. The latter can be done in polynomial time in the size of ${{{\mathcal{A}}}}(R,S,I)$ by performing a simple reachability analysis in the states of ${{{\mathcal{A}}}}(R,S,I)$. This gives us a simple exponential time bound for the complexity of solving acyclic instances of ${\text{\sc GenInt}}_S({{\sf REC}})$. However, as we mentioned before, each state in ${{{\mathcal{A}}}}(R,S,I)$ is of polynomial size. Thus, checking whether ${{{\mathcal{A}}}}(R,S,I)$ is nonempty can be done in nondeterministic [[[PSpace]{}]{}]{} by using a standard “on-the-fly” construction of ${{{\mathcal{A}}}}(R,S,I)$ as follows: Whenever the reachability algorithm for checking emptiness of ${{{\mathcal{A}}}}(R,S,I)$ wants to move from a state $r_1$ of ${{{\mathcal{A}}}}(R,S,I)$ to a state $r_2$, it guesses $r_2$ and checks whether there is a transition from $r_1$ to $r_2$. Once this is done, the algorithm can discard $r_1$ and follow from $r_2$. Thus, at each step, the algorithm needs to keep track of at most two states, each one of polynomial size. From Savitch’s theorem, we know that [[[PSpace]{}]{}]{} equals nondeterministic [[[PSpace]{}]{}]{}. This shows that acyclic instances of ${\text{\sc GenInt}}_S({{\sf REC}})$ can be solved in [[[PSpace]{}]{}]{}. The proof of the second part of the theorem is by an easy reduction from the PCP problem (e.g. in the style of the proof of the second part of Theorem \[scr-dichotomy\]). 
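The on-the-fly reachability analysis described above can be sketched generically. In the Python sketch below (names are ours), the automaton is never materialized: we only assume a successor enumerator `succ` and a finality test. This deterministic BFS keeps a visited set for termination; the NPSpace algorithm would instead guess the path and retain only the current state, of polynomial size.

```python
# Sketch of an "on-the-fly" nonemptiness test over an implicitly given
# automaton: states are produced on demand by `succ`, never stored up front.
from collections import deque

def nonempty_on_the_fly(initial, succ, is_final):
    """True iff some final state is reachable from an initial state."""
    queue = deque(initial)
    visited = set(initial)
    while queue:
        q = queue.popleft()
        if is_final(q):
            return True
        for q2 in succ(q):
            if q2 not in visited:
                visited.add(q2)
                queue.append(q2)
    return False

# toy product-like state space: pairs (i, j), advancing one coordinate at a time
def succ(s):
    i, j = s
    return [(i + 1, j), (i, j + 1)] if max(i, j) < 3 else []
```

For example, `nonempty_on_the_fly([(0, 0)], succ, lambda s: s == (2, 3))` holds, and the search terminates even when no final state is reachable, since the visited set bounds the exploration.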
[[CRPQ]{}]{}s with rational relations
-------------------------------------

The acyclicity condition gives us a robust class of queries, with an easy syntactic definition, that can be extended with [*arbitrary*]{} rational relations. Note that acyclicity is a very standard restriction imposed on database queries to achieve better behavior, often with respect to complexity; it is in general known to be easy to enforce syntactically, and to yield benefits from both the semantic and the query evaluation points of view. This is the approach we follow here. Recall that [[CRPQ]{}]{}($S$) queries are those of the form $${\varphi}(\bar x) \ =\ \exists \bar y\ \Big( \bigwedge_{i=1}^m (u_i {\stackrel{\chi_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\chi_i,\chi_j)\Big),$$ see (\[crpqs-eq\]) in Sec.\[gl:sec\]. We call such a query [*acyclic*]{} if $G_I$, the underlying undirected graph of $I$, is acyclic. \[crpqs-thm\] The query evaluation problem for acyclic [[CRPQ]{}]{}($S$) queries is decidable for every binary rational relation $S$. Its combined complexity is [[[PSpace]{}]{}]{}-complete, and data complexity is [[NLogSpace]{}]{}-complete. We provide a nondeterministic [[[PSpace]{}]{}]{} algorithm that solves the query evaluation problem when we assume the query to be part of the input (i.e. combined complexity). Then the result will follow from Savitch’s theorem, which states that [[[PSpace]{}]{}]{} equals nondeterministic [[[PSpace]{}]{}]{}. Given a graph $G$, a tuple $\bar a$ of nodes, and an acyclic [[CRPQ]{}]{}($S$) query of the form $${\varphi}(\bar x) \ =\ \exists \bar y\ \Big( \bigwedge_{i=1}^m (u_i {\stackrel{\rho_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\rho_i,\rho_j)\Big),$$ the algorithm starts by guessing a polynomial size assignment $\bar b$ for the existentially quantified variables of ${\varphi}(\bar x)$, that is, the variables in $\bar y$. 
It then checks that $G \models \psi(\bar a,\bar b)$, assuming that $\psi(\bar x,\bar y)$ is the [[CRPQ]{}]{}($S$) formula $$\Big(\bigwedge_{i=1}^m (u_i {\stackrel{\rho_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\rho_i,\rho_j)\Big).$$ If this is the case the algorithm accepts and declares that $G \models {\varphi}(\bar a)$. Otherwise it rejects and declares that $G \not\models {\varphi}(\bar a)$. By using essentially the same techniques as in the proof of Lemma \[crpqs-lemma-one\], one can show that there is a polynomial time translation that, given $G$ and $\psi(\bar a,\bar b)$, constructs an acyclic instance of ${\text{\sc GenInt}}_S({{\sf REC}})$ such that the answer to this instance is ‘yes’ iff $G\models\psi(\bar a,\bar b)$. From Theorem \[acyclic-thm\] we know that acyclic instances of ${\text{\sc GenInt}}_S({{\sf REC}})$ can be solved in [[[PSpace]{}]{}]{}, and hence that the algorithm described above can be performed in nondeterministic [[[PSpace]{}]{}]{}. With respect to the data complexity, we start with the following observation. Acyclic instances of ${\text{\sc GenInt}}_S({{\sf REC}})$ can be solved in [[NLogSpace]{}]{} for $m$-ary relations in ${{\sf REC}}$, if we assume $m$ to be fixed. The proof of this fact mimics the proof of the [[[PSpace]{}]{}]{} upper bound in Theorem \[acyclic-thm\], but this time we assume the arity of $R$ to be fixed. In such a case ${{{\mathcal{A}}}}(R,S,I)$ is of polynomial size, and each one of its states is of logarithmic size. We can easily check ${{{\mathcal{A}}}}(R,S,I)$ for nonemptiness in [[NLogSpace]{}]{} in this case, by performing a standard “on-the-fly” reachability analysis. We provide an [[NLogSpace]{}]{} algorithm that solves the query evaluation problem when we assume the query to be fixed (i.e. data complexity). 
Consider a fixed acyclic [[CRPQ]{}]{}($S$) query of the form $${\varphi}(\bar x) \ =\ \exists \bar y\ \Big( \bigwedge_{i=1}^m (u_i {\stackrel{\rho_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\rho_i,\rho_j)\Big).$$ Given a graph $G$ and tuple $\bar a$ of nodes, the algorithm constructs (using the proof of Lemma \[crpqs-lemma-one\]) in deterministic logarithmic space an acyclic instance of ${\text{\sc GenInt}}_S({{\sf REC}})$, given by recognizable relation $R$ of [*fixed*]{} arity $m$ (this follows from the fact that ${\varphi}(\bar x)$ is fixed), and fixed $I \subseteq [m] \times [m]$, such that the answer to this instance is ‘yes’ iff $G \models {\varphi}(\bar a)$. Since the arity of $R$ is fixed, our previous observation tells us that we can solve the instance of ${\text{\sc GenInt}}_S({{\sf REC}})$ given by $R$ and $I$ in [[NLogSpace]{}]{}. But [[NLogSpace]{}]{} reductions compose, and hence the data complexity of the query evaluation problem for [[CRPQ]{}]{}($S$) queries is also [[NLogSpace]{}]{}. Thus, we get not only the possibility of extending [[CRPQ]{}]{}s with rational relations but also a good complexity of query evaluation. The [[NLogSpace]{}]{}-data complexity matches that of RPQs, [[CRPQ]{}]{}s, and [[ECRPQ]{}]{}s [@CM90; @CMW87; @pods10], and the combined complexity matches that of first-order logic, or [[ECRPQ]{}]{}s without extra relations. The next natural question is whether we can recover decidability for weaker syntactic conditions by putting restrictions on a class of relations $S$. The answer to this is positive if we consider [ *directed*]{} acyclicity of $I$, rather than acyclicity of the underlying undirected graph of $I$. Then we get decidability for the class of ${{\sf SCR}}$ relations. In fact, we have a dichotomy similar to that of Theorem \[acyclic-thm\]. \[scr-dichotomy\] [$\bullet$]{} Let $S$ be a relation from ${{\sf SCR}}$. 
Then ${({{{\sf REC}}} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is decidable in [[NExptime]{}]{} if $I$ is a directed acyclic graph. There is a relation $I$ with a directed cycle and $S\in{{\sf SCR}}$ such that ${({{{\sf REC}}} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$ is undecidable. We start by proving the first item. In order to do that, we first prove a small model property for the size of the witnesses of the instances of ${({{{\sf REC}}} \mathrel{\cap_{I}} {S}) \stackrel{\text{\tiny?}}{=}\emptyset}$, when $S$ is a relation in ${{\sf SCR}}$ and $I$ is a DAG. Let $R$ be an $m$-ary recognizable relation, $m > 0$, and let $I \subseteq [m] \times [m]$ be a relation that defines a DAG. Assume that both $R$ and $S$ are over $\Sigma$. Then the following holds: Assume $R \cap_I S \neq \emptyset$. There is $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ of at most exponential size that is accepted by $R$ and such that $(w_i,w_j) \in S$, for each $(i,j) \in I$. We prove this small model property by applying usual cutting techniques. Assume that $R$ is given as $$\bigcup_i {{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m},$$ where each ${{{\mathcal{N}}}}_{i_j}$ is an NFA over $\Sigma$. Further, assume that $S$ is given as one of the 2-tape NFAs used in the [[[PSpace]{}]{}]{} upper bound of Theorem \[acyclic-thm\]. That is, $S$ is defined by the tuple $(Q_S,\Sigma,Q^0_S,\delta_S,Q_S^F)$, where $Q_S$ is the set of states, the set of initial states is $Q^0_S$, the set of final states is $Q^F_S$, and $\delta_S : Q_S \times (\Sigma \cup \{{\varepsilon}\}) \times (\Sigma \cup \{{\varepsilon}\}) \to 2^{Q_S \times (\{1,2\} \cup \{\{1,2\}\})}$ is the transition function. Assume also that there is $\bar u = (u_1,\dots,u_m) \in (\Sigma^*)^m$ that is accepted by $R$ such that $(u_i,u_j) \in S$, for each $(i,j) \in I$. Then $\bar u$ is accepted by ${{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m}$, for some $i$. 
Since $I$ is a DAG it has a topological order on $[m]$. We assume without loss of generality that such topological order is precisely the linear order on $[m]$. We prove the following invariant on $1 \leq {\ell}\leq m$: There exists $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ such that (1) $\bar w$ is accepted by $R$, (2) $(w_j,w_k) \in S$, for each $(j,k) \in I$, and (3) each $w_{{\ell}'}$ with ${\ell}' \leq {\ell}$ is of at most exponential size. Clearly this proves our small model property on ${\ell}= m$. The proof is by induction. The basis case is ${\ell}= 1$. We start from $\bar u$ and “cut" its first component in order to satisfy the invariant. By using standard pumping techniques it is possible to show that there is a subsequence $w_1$ of $u_1$ of size at most $O(|{{{\mathcal{N}}}}_{i_1}|)$ that is accepted by ${{{\mathcal{N}}}}_{i_1}$. Clearly the tuple $(w_1,u_2,\dots,u_m)$ belongs to $R$. Further, for each pair of the form $(1,j)$ in $I$ it is the case that $(w_1,u_j) \in S$. This is the case because $(u_1,u_j) \in S$, $w_1 {\sqsubseteq}u_1$ and $S \in {{\sf SCR}}$. Notice that we do not need to consider pairs of the form $(j,1)$ since we are assuming that the linear order on $[m]$ is a topological order of $I$. This implies that $(w_1,u_2,\dots,u_m)$ satisfies our invariant on ${\ell}= 1$. Assume now that the invariant holds for ${\ell}< m$. Then there exists $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ such that (1) $\bar w$ is accepted by $R$, (2) $(w_j,w_k) \in S$, for each $(j,k) \in I$, and (3) each $w_{{\ell}'}$ with ${\ell}' \leq {\ell}$ is of at most exponential size. We proceed to “cut" $w_{{\ell}+1}$ while preserving the invariant. Let $I({\ell}+1)$ be $\{ 1 \leq j \leq {\ell}\mid (j,{\ell}+1) \in I\}$. Let $\rho_j$ be an accepting run of $S$ over $(w_j,w_{{\ell}+1})$, for each $j \in I({\ell}+1)$. 
Further, let ${\mathcal{P}}$ be the set of all positions $1 \leq k \leq |w_{{\ell}+1}|$ such that for some $j \in I({\ell}+1)$ the accepting run $\rho_j$ contains a subword of the form $q \, (k',k) \, q' \, (k'+1,k)$, where $q,q' \in Q_S$ and $1 \leq k' \leq |w_j|$. That is, ${\mathcal{P}}$ defines the set of positions over $w_{{\ell}+1}$, in which the accepting run $\rho_j$ of $S$ over $(w_j,w_{{\ell}+1})$, for some $j \in I({\ell}+1)$, makes a move on the head positioned over $w_j$. Intuitively, these are the positions of $w_{{\ell}+1}$ that should not be “cut" in order to maintain the invariant. Notice that the size of ${\mathcal{P}}$ is bounded by $s := \sum_{1 \leq {\ell}' \leq {\ell}} |w_{{\ell}'}|$, and hence from the inductive hypothesis the size of ${\mathcal{P}}$ is exponentially bounded. By using standard pumping techniques it is possible to show that there is a subsequence $w'_{{\ell}+1}$ of $w_{{\ell}+1}$ of size at most $|{{{\mathcal{N}}}}_{i_{{\ell}+1}}| \cdot |{\mathcal{P}}| \cdot |I({\ell}+1)| \cdot |Q_S| \cdot |\Sigma| + 2$, such that $w'_{{\ell}+1}$ is accepted by ${{{\mathcal{N}}}}_{i_{{\ell}+1}}$ and $(w_j,w'_{{\ell}+1})$ is accepted by $S$, for each $j \in I({\ell}+1)$. Assume this is not the case, and that the shortest subsequence $w'_{{\ell}+1}$ of $w_{{\ell}+1}$ that satisfies this condition is of length strictly bigger than $|{{{\mathcal{N}}}}_{i_{{\ell}+1}}| \cdot |{\mathcal{P}}| \cdot |I({\ell}+1)| \cdot |Q_S| \cdot |\Sigma| + 2$. Then there exist two positions $1 \leq i < j \leq |w'_{{\ell}+1}|$ such that (i) $k \not \in {\mathcal{P}}$, for each $i \leq k \leq j$, (ii) the labels of $i$ and $j$ in $w'_{{\ell}+1}$ coincide, (iii) the run $\rho_s$ assigns the same state to both $i$ and $j$, for each $s \in I({\ell}+1)$, and (iv) some accepting run of ${{{\mathcal{N}}}}_{i_{{\ell}+1}}$ assigns the same state to both $i$ and $j$. Let $w''_{{\ell}+1}$ be the subsequence of $w'_{{\ell}+1}$ that is obtained by cutting all positions $i \leq k \leq j-1$. 
Clearly, $w''_{{\ell}+1}$ is shorter than $w'_{{\ell}+1}$ and is accepted by ${{{\mathcal{N}}}}_{i_{{\ell}+1}}$. Further, $(w_s,w''_{{\ell}+1})$ is accepted by $S$, for every $s \in I({\ell}+1)$. This is because $(w_s,w'_{{\ell}+1})$ is invariant with respect to the accepting run $\rho_s$, for each $s \in I({\ell}+1)$, as the cutting does not include elements in ${\mathcal{P}}$ (that is, we only cut elements in which $\rho_s$ does not need to synchronize with the head positioned over $w_s$) and $\rho_s$ assigns the same state to both $i$ and $j$, which have, in addition, the same label. This is a contradiction. We claim that $\bar w' = (w_1,\dots,w_{\ell},w'_{{\ell}+1},w_{{\ell}+2},\cdots,w_m) \in (\Sigma^*)^m$ satisfies the invariant. Clearly, $\bar w'$ is accepted by $R$ since $w'_{{\ell}+1}$ is accepted by ${{{\mathcal{N}}}}_{i_{{\ell}+1}}$ and, by inductive hypothesis, $w_j$ is accepted by ${{{\mathcal{N}}}}_{i_j}$, for each $j \in [m] \setminus \{{\ell}+1\}$. Further, simply by definition it is the case that $(w_j,w'_{{\ell}+1}) \in S$, for each $j \in I({\ell}+1)$. Moreover, $(w'_{{\ell}+1},w_j) \in S$, for each $({\ell}+1,j) \in I$, simply because $w'_{{\ell}+1} {\sqsubseteq}w_{{\ell}+1}$ and $S \in {{\sf SCR}}$. The remaining pairs in $I$ are satisfied by induction hypothesis. Finally, $w'_{{\ell}+1}$ is of size at most $O(|{{{\mathcal{N}}}}_{i_{{\ell}+1}}| \cdot |{\mathcal{P}}| \cdot |I({\ell}+1)| \cdot |Q_S| \cdot |\Sigma|)$, and hence, by inductive hypothesis, it is of size at most exponential. By inductive hypothesis, each $w_{{\ell}'}$ with ${\ell}' \leq {\ell}$ is of size at most exponential. It is now simple to prove the first part of the theorem using the small model property. In fact, in order to check whether $R \cap_I S \neq \emptyset$, for $S \in {{\sf SCR}}$, we only need to guess an exponential size witness $\bar w$, and then check, in time polynomial in the size of the witness, that it satisfies $R$ and that each pair of components specified by $I$ satisfies $S$.
This algorithm clearly works in nondeterministic exponential time. Now we prove the second item. We reduce from the PCP problem. Assume that the input to PCP consists of two equally long lists $a_1,a_2,\dots,a_n$ and $b_1,b_2,\dots,b_n$ of strings over alphabet $\Sigma$. Recall that we want to decide whether there exists a solution for this input, that is, a sequence of indices $i_1,i_2,\dots,i_k$ such that $1 \leq i_j \leq n$ ($1 \leq j \leq k$) and $a_{i_1} a_{i_2} \cdots a_{i_k} = b_{i_1} b_{i_2} \cdots b_{i_k}$. Assume without loss of generality that $\Sigma$ is disjoint from $\mathbb{N}$. Corresponding to every input $a_1,a_2,\dots,a_n$ and $b_1,b_2,\dots,b_n$ of PCP over alphabet $\Sigma$, we define the following: an alphabet $\Sigma(n) := \Sigma \cup \{1,2,\dots,n\}$; a regular language $R_{a,n} := (\bigcup_{1 \leq i \leq n} a_i \cdot i)^*$; and a regular language $R_{b,n} := (\bigcup_{1 \leq j \leq n} b_j \cdot j)^*$. Consider a ternary recognizable relation $R$ over alphabet $\Sigma(n) \cup \{\star,\dagger\}$, where $\star$ and $\dagger$ are symbols not appearing in $\Sigma(n)$, defined as $$\big(\star \cdot \Sigma^*\big) \, \times \, \big(\dagger \cdot R_{a,n}\big) \, \times \, \big(\dagger \cdot R_{b,n}\big).$$ Further, consider a binary relation $S$ over $(\Sigma(n) \cup \{\star,\dagger\})^*$ defined as the union of the following sets: (1) $\{(w,w') \in (\dagger \cdot (\Sigma(n))^*) \times (\dagger \cdot (\Sigma(n))^*) \, \mid \, \text{$w_{\{1,\dots,n\}} {\sqsubseteq}w'_{\{1,\dots,n\}}$}\}$. (2) $\{(w,w') \in (\dagger \cdot (\Sigma(n))^*) \times (\star \cdot \Sigma^*) \, \mid \, \text{$w_{\Sigma} {\sqsubseteq}w'_\Sigma$}\}$. (3) $\{(w,w') \in (\star \cdot \Sigma^*) \times (\dagger \cdot (\Sigma(n))^*) \, \mid \, \text{$w_{\Sigma} {\sqsubseteq}w'_\Sigma$}\}$. The intuition is that $S$ takes care that indices in the sequences are consistent. It is easy to see that $S$ is a rational relation, which implies that $S_{\sqsubseteq}$ is in ${{\sf SCR}}$.
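To see the intended correspondence concretely, the following sketch (ours, purely illustrative and not part of the reduction) builds the words of $\dagger \cdot R_{a,n}$ and $\dagger \cdot R_{b,n}$ induced by a candidate index sequence and compares their two projections; `'!'` stands in for $\dagger$ and strings like `'#2'` encode the index symbols, so that they stay disjoint from $\Sigma$.

```python
# Illustrative encoding of the PCP words; instance and names are ours.

def build_words(a_list, b_list, indices):
    """Words in !.R_{a,n} and !.R_{b,n} induced by a 1-based index sequence."""
    w2 = ['!'] + [s for i in indices for s in list(a_list[i - 1]) + ['#%d' % i]]
    w3 = ['!'] + [s for i in indices for s in list(b_list[i - 1]) + ['#%d' % i]]
    return w2, w3

def proj(word, keep):
    """Projection of a word onto the symbols selected by `keep`."""
    return [s for s in word if keep(s)]

def solves_pcp(a_list, b_list, indices):
    """The index sequence solves PCP iff both projections agree."""
    w2, w3 = build_words(a_list, b_list, indices)
    is_idx = lambda s: s.startswith('#')
    is_sigma = lambda s: not s.startswith('#') and s != '!'
    return (proj(w2, is_idx) == proj(w3, is_idx) and
            proj(w2, is_sigma) == proj(w3, is_sigma))
```

For instance, with $a = (a, ab, bba)$ and $b = (baa, aa, bb)$ the sequence $(3,2,3,1)$ makes both projections agree, i.e., it is a PCP solution.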
From input $a_1,\dots,a_n$ and $b_1,\dots,b_n$ to the PCP problem, we construct an instance of ${\text{\sc GenInt}}_{S_{\sqsubseteq}}({{\sf REC}})$ defined by the recognizable relation $R$ and $$I \ = \ \{(1,2),(2,1),(1,3),(3,1),(2,3),(3,2)\}.$$ We claim that $R \cap_I S \neq \emptyset$ if and only if the PCP instance given by lists $a_1,\dots,a_n$ and $b_1,\dots,b_n$ has a solution. Assume first that $R \cap_I S \neq \emptyset$. Hence there are words $w_1 \in (\star \cdot \Sigma^*)$, $w_2 \in (\dagger \cdot R_{a,n})$ and $w_3 \in (\dagger \cdot R_{b,n})$, such that $(w_i,w_j)$ belongs to $S_{{\sqsubseteq}}$, for each $(i,j) \in I$. Since $(2,3) \in I$, it must be the case that $(w_2,w_3)$ belongs to $S_{\sqsubseteq}$. Thus, since the first symbol of both $w_2$ and $w_3$ is $\dagger$, it must be the case that $(w_2)_{\{1,\dots,n\}} {\sqsubseteq}(w_3)_{\{1,\dots,n\}}$. For the same reasons, and given that $(3,2) \in I$, it must be the case that $(w_3)_{\{1,\dots,n\}} {\sqsubseteq}(w_2)_{\{1,\dots,n\}}$. We conclude that $(w_2)_{\{1,\dots,n\}} = (w_3)_{\{1,\dots,n\}}$. Since $(1,2) \in I$, it must be the case that $(w_1,w_2)$ belongs to $S_{\sqsubseteq}$. Thus, since the first symbol of $w_1$ is $\star$ and the first symbol of $w_2$ is $\dagger$, it must be the case that $(w_1)_{\Sigma} {\sqsubseteq}(w_2)_{\Sigma}$. For the same reasons, and given that $(2,1) \in I$, it must be the case that $(w_2)_{\Sigma} {\sqsubseteq}(w_1)_{\Sigma}$. We conclude that $(w_1)_{\Sigma} = (w_2)_{\Sigma}$. Mimicking the same argument, but this time using the fact that $\{(1,3),(3,1)\} \subseteq I$, we conclude that $(w_1)_{\Sigma} = (w_3)_{\Sigma}$. But then $(w_2)_{\Sigma} = (w_3)_{\Sigma}$ (because $(w_1)_{\Sigma} = (w_2)_{\Sigma}$). Assume $(w_2)_{\{1,\dots,n\}} = (w_3)_{\{1,\dots,n\}} = i_1 i_2 \cdots i_k$, where each $i_j \in [n]$.
Then from the fact that $(w_2)_{\Sigma} = (w_3)_{\Sigma}$ we conclude that $a_{i_1} a_{i_2} \cdots a_{i_k} = b_{i_1} b_{i_2} \cdots b_{i_k}$, and hence that the instance of the PCP problem given by $a_1,\dots,a_n$ and $b_1,\dots,b_n$ has a solution. The other direction, namely that a solution to the PCP instance given by $a_1,\dots,a_n$ and $b_1,\dots,b_n$ implies $R \cap_I S \neq \emptyset$, can be proved using the same arguments. In particular, if we have a [[CRPQ]{}]{}($S$) query of the form $$\exists \bar y\ \Big( \bigwedge_{i=1}^m (u_i {\stackrel{\chi_i:L_i}{\longrightarrow}} u_i') \ \ \wedge \ \ \bigwedge_{(i,j)\in I} S(\chi_i,\chi_j)\Big),$$ where $I$ is acyclic (as a directed graph) and $S\in{{\sf SCR}}$, then query evaluation has [[NExptime]{}]{} combined complexity. The proof of this result is quite different from the upper bound proof of Theorem \[acyclic-thm\], since the set of witnesses for the generalized intersection problem is no longer guaranteed to be rational without the undirected acyclicity condition. Instead, here we establish the small model property, which implies the result. Also, as a corollary to the proof of Theorem \[scr-dichotomy\], we get the following result: \[po-scr-prop\] Let $S\in{{\sf SCR}}$ be a partial order. Then ${\text{\sc GenInt}}_S({{\sf REC}})$ is decidable in [[NExptime]{}]{}. As in the previous proof, we start by proving a small model property for the size of the witnesses of the instances in ${\text{\sc GenInt}}_S({{\sf REC}})$, for $S$ a partial order in ${{\sf SCR}}$. Let $R$ be an $m$-ary recognizable relation, $m > 0$, and $I \subseteq [m] \times [m]$. Assume that both $R$ and $S$ are over $\Sigma$. Then the following holds: Assume $R \cap_I S \neq \emptyset$. There is $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ of at most exponential size that is accepted by $R$ and such that $(w_i,w_j) \in S$, for each $(i,j) \in I$.
We prove this small model property by applying usual cutting techniques. Assume that $R$ is given as $$\bigcup_i {{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m},$$ where each ${{{\mathcal{N}}}}_{i_j}$ is an NFA over $\Sigma$. Further, assume that $S$ is given as the 2-tape transducer $S$ defined by the tuple $(Q_S,\Sigma,Q^0_S,\delta_S,Q_S^F)$, where $Q_S$ is the set of states, the set of initial states is $Q^0_S$, the set of final states is $Q^F_S$, and $\delta_S : Q_S \times (\Sigma \cup \{{\varepsilon}\}) \times (\Sigma \cup \{{\varepsilon}\}) \to 2^{Q_S \times (\{1,2\} \cup \{\{1,2\}\})}$ is the transition function. Assume also that there is $\bar u = (u_1,\dots,u_m) \in (\Sigma^*)^m$ that is accepted by $R$ and such that $(u_i,u_j) \in S$, for each $(i,j) \in I$. Then $\bar u$ is accepted by ${{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m}$, for some $i$. Let $I^+$ be the transitive closure of $I$. Notice, since $S$ defines a partial order over $\Sigma^*$, that $(u_j,u_k) \in S$, for each $(j,k) \in I^+$. Further, for every pair $(j,k) \in [m] \times [m]$ such that $\{(j,k),(k,j)\} \subseteq I^+$ we must have that $u_j = u_k$. We need to maintain such equality when applying our cutting techniques over $\bar u$. In order to do that we define an equivalence relation $\mathcal{E}_I$ over $[m]$ as follows: $$\mathcal{E}_I \ := \ \{(j,k) \in [m] \times [m] \mid j = k \text{ or } \{(j,k),(k,j)\} \subseteq I^+\}.$$ Hence $\mathcal{E}_I$ contains all pairs $(j,k) \in [m] \times [m]$ such that $I$ implies $u_j = u_k$. Take the quotient $[m]/\mathcal{E}_I$, and consider the restriction $I([m]/\mathcal{E}_I)$ of $I$ over $[m]/\mathcal{E}_I$, defined in the expected way: $([j]_{\mathcal{E}_I},[k]_{\mathcal{E}_I}) \in I([m]/\mathcal{E}_I)$ if and only if $(j',k') \in I$, for some $j'\in [j]_{\mathcal{E}_I}$ and $k' \in [k]_{\mathcal{E}_I}$. Notice that $I([m]/\mathcal{E}_I)$ defines a DAG over $[m]/\mathcal{E}_I$.
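The relation $\mathcal{E}_I$ is computable directly from $I$ by one transitive-closure pass; a minimal sketch (0-based indices; the function names are ours):

```python
def transitive_closure(I, m):
    """Transitive closure I+ of I over [m], Floyd-Warshall style."""
    reach = {(j, k): (j, k) in I for j in range(m) for k in range(m)}
    for t in range(m):
        for j in range(m):
            for k in range(m):
                if reach[(j, t)] and reach[(t, k)]:
                    reach[(j, k)] = True
    return {p for p, v in reach.items() if v}

def equivalence_E_I(I, m):
    """E_I = {(j,k) | j = k or {(j,k),(k,j)} in I+}: indices whose words
    are forced to be equal by the constraint pattern I."""
    Ip = transitive_closure(I, m)
    return {(j, k) for j in range(m) for k in range(m)
            if j == k or ((j, k) in Ip and (k, j) in Ip)}
```

For example, a directed 3-cycle in $I$ collapses all three indices into one class, while a single edge leaves $\mathcal{E}_I$ trivial.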
Consider now a new input to ${\text{\sc GenInt}}_S({{\sf REC}})$, given this time by $I([m]/\mathcal{E}_I) \subseteq ([m]/\mathcal{E}_I) \times ([m]/\mathcal{E}_I)$, and the recognizable relation $R'$ defined as $$\prod_{[j]_{\mathcal{E}_I} \in [m]/\mathcal{E}_I} {{{\mathcal{M}}}}_{i}^{[j]_{\mathcal{E}_I}},$$ where ${{{\mathcal{M}}}}_{i}^{[j]_{\mathcal{E}_I}} = \bigcap_{k \in [j]_{\mathcal{E}_I}} {{{\mathcal{N}}}}_{i_k}$. Notice that this new input may be of exponential size in the size of $R$. Assume that $[m]/\mathcal{E}_I$ consists of $p \leq m$ equivalence classes and, without loss of generality, that these correspond to the first $p$ indices of $[m]$. Hence each product in $R'$ is of the form ${{{\mathcal{M}}}}_{i_1} \times \cdots \times {{{\mathcal{M}}}}_{i_p}$, where ${{{\mathcal{M}}}}_{i_j}$ is defined as the intersection of all NFAs in the equivalence class $[j]_{\mathcal{E}_I}$. Also, $I([m]/\mathcal{E}_I)$ is the restriction of $I$ to $[p] \times [p]$. Then it must be the case that $(u_1,\dots,u_{p}) \in (\Sigma^*)^p$ belongs to $R'$ and $(u_j,u_k) \in S$, for each $(j,k) \in I([m]/\mathcal{E}_I)$. Further, from every witness to the fact that $R' \cap_{I([m]/\mathcal{E}_I)} S \neq \emptyset$ we can construct in polynomial time a witness to the fact that $R \cap_I S \neq \emptyset$. Hence, in order to prove our small model property it will be enough to prove the following: There is $\bar w = (w_1,\dots,w_p) \in (\Sigma^*)^p$ of at most exponential size (in $R$) that is accepted by $R'$ and such that $(w_j,w_k) \in S$, for each $(j,k) \in I([m]/\mathcal{E}_I)$. The latter can be done by mimicking the inductive proof of the first part of Theorem \[scr-dichotomy\]. We only have to deal now with the issue that some of the NFAs that define $R'$ may be exponential in the size of $R$. However, by following the inductive proof one observes that this is not a problem, and that the same exponential bound holds in this case.
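Each $\mathcal{M}_{i}^{[j]_{\mathcal{E}_I}}$ is an intersection of NFAs, built with the standard product construction; the sketch below (with toy automata and names of our own choosing) shows where the exponential blow-up mentioned above comes from, since iterating the product multiplies state spaces.

```python
def intersect_nfas(n1, n2):
    """Product construction: accepts exactly L(n1) intersected with L(n2).
    An NFA is (states, delta, initials, finals); delta: (state, letter) -> set."""
    Q1, d1, I1, F1 = n1
    Q2, d2, I2, F2 = n2
    delta = {}
    for (q1, a), S1 in d1.items():
        for q2 in Q2:
            S2 = d2.get((q2, a), set())
            if S2:
                delta[((q1, q2), a)] = {(p1, p2) for p1 in S1 for p2 in S2}
    return ({(q1, q2) for q1 in Q1 for q2 in Q2}, delta,
            {(i1, i2) for i1 in I1 for i2 in I2},
            {(f1, f2) for f1 in F1 for f2 in F2})

def accepts(nfa, word):
    """On-the-fly subset simulation of an NFA on a word."""
    _, delta, initials, finals = nfa
    current = set(initials)
    for a in word:
        current = {q for p in current for q in delta.get((p, a), set())}
    return bool(current & finals)
```

For example, intersecting "words over $\{a,b\}$ ending in $a$" with "words of even length" yields exactly the even-length words ending in $a$.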
It is now simple to prove the first part of the theorem using the small model property. In fact, in order to check whether $R \cap_I S \neq \emptyset$, for $S$ a partial order in ${{\sf SCR}}$, we only need to guess an exponential size witness $\bar w$, and then check in exponential time that it satisfies $R$ and each projection in $I$ satisfies $S$. This algorithm clearly works in nondeterministic exponential time. By applying similar techniques to those in the proof of Theorem \[crpqs-thm\] we obtain the following. \[crpq-subsec-cor\] If $S\in{{\sf SCR}}$ is a partial order, then [[CRPQ]{}]{}($S$) queries can be evaluated with [[NExptime]{}]{} combined complexity. In particular, [[CRPQ]{}]{}(${\sqsubseteq}$) queries have [[NExptime]{}]{} combined complexity. We do not have at this point a matching lower bound for the complexity of [[CRPQ]{}]{}(${\sqsubseteq}$) queries. Notice that an easy [[[PSpace]{}]{}]{} lower bound follows by a reduction from the intersection problem for NFAs, like the one presented in the proof of Theorem \[acyclic-thm\]. The last question is whether these results can be extended to other relations considered here, such as subword and suffix. We do not know the result for subword (which appears to be hard), but we do have a matching complexity bound for the suffix relation. \[crpq-suff-prop\] The problem ${\text{\sc GenInt}}_{{\preceq_{{\rm suff}}}}({{\sf REC}})$ is decidable in [[NExptime]{}]{}. In particular, [[CRPQ]{}]{}(${\preceq_{{\rm suff}}}$) queries can be evaluated with [[NExptime]{}]{} combined complexity. We only prove that ${\text{\sc GenInt}}_{{\preceq_{{\rm suff}}}}({{\sf REC}})$ is decidable in [[NExptime]{}]{}. The fact that [[CRPQ]{}]{}(${\preceq_{{\rm suff}}}$) queries can be evaluated with [[NExptime]{}]{} combined complexity follows easily from this by applying the same techniques as in the proof of Theorem \[crpqs-thm\].
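Verifying the suffix constraints on a guessed witness is immediate (the full check also runs each component through its NFA); a one-line sketch with 0-based indices and names of our own choosing:

```python
def satisfies_suffix_constraints(words, I):
    """True iff words[i] is a suffix of words[j] for every (i, j) in I.
    Note that the empty string is a suffix of every word, as for the
    reflexive partial order suff."""
    return all(words[j].endswith(words[i]) for (i, j) in I)
```

This runs in time linear in the total size of the witness and of $I$.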
We start by proving a small model property for the size of the witnesses of the instances in ${\text{\sc GenInt}}_{{\preceq_{{\rm suff}}}}({{\sf REC}})$. Let $R$ be an $m$-ary recognizable relation, $m > 0$, and $I \subseteq [m] \times [m]$. Assume that both $R$ and ${\preceq_{{\rm suff}}}$ are over $\Sigma$. Then the following holds: Assume it is the case that $R \cap_I \{{\preceq_{{\rm suff}}}\} \neq \emptyset$. There is $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ of at most exponential size that is accepted by $R$ and such that $w_i {\preceq_{{\rm suff}}}w_j$, for each $(i,j) \in I$. We prove this small model property by applying cutting techniques. Assume that $R$ is given as $$\bigcup_i {{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m},$$ where each ${{{\mathcal{N}}}}_{i_j}$ is an NFA over $\Sigma$. We assume, without loss of generality, that $I$ defines a DAG over $[m]$. In fact, assume otherwise; that is, $I$ does not define a DAG over $[m]$. Since ${\preceq_{{\rm suff}}}$ defines a partial order over $\Sigma^*$, we can always reduce in polynomial time the instance of ${\text{\sc GenInt}}_{{\preceq_{{\rm suff}}}}({{\sf REC}})$ given by $R$ and $I$ to an “equivalent" instance of ${\text{\sc GenInt}}_{{\preceq_{{\rm suff}}}}({{\sf REC}})$ given by recognizable relation $R'$ of arity $m' \leq m$ and $I' \subseteq [m'] \times [m']$ such that $I'$ defines a DAG. We already showed how to do this for an arbitrary partial order over $\Sigma^*$ in the proof of Proposition \[po-scr-prop\], so we prefer not to repeat the argument here, and simply assume that $I$ defines a DAG over $[m]$. Since $I$ defines a DAG it has a topological order over $[m]$. We assume without loss of generality that such topological order is precisely the linear order on $[m]$. Assume then that there is $\bar u = (u_1,\dots,u_m) \in (\Sigma^*)^m$ that is accepted by $R$ and such that $u_i {\preceq_{{\rm suff}}}u_j$, for each $(i,j) \in I$.
Then $\bar u$ is accepted by ${{{\mathcal{N}}}}_{i_1} \times \cdots \times {{{\mathcal{N}}}}_{i_m}$, for some $i$. Assume that the length of $u_j$ is $p_j \geq 0$, for each $1 \leq j \leq m$. Our goal is to “cut" $\bar u$ in order to obtain an exponential size witness to the fact that $R \cap_I \{{\preceq_{{\rm suff}}}\} \neq \emptyset$. We recursively define the set ${{{\mathcal{M}}}}_k$ of [*marked*]{} positions in string $u_k$, $1 \leq k \leq m$, as follows: No position in $u_1$ is marked. For each $1 < k \leq m$ the set $\mathcal{M}_k$ of marked positions in $u_k$ is defined as the union of the marked positions in $u_k$ [*with respect to $j$*]{}, for each $j < k$ such that $(j,k) \in I$, where the latter is defined as follows. Assume that $\mathcal{M}_j$ is the set of marked positions in $u_j$. Then the set of positions $1 \leq {\ell}\leq p_k$ that are marked in $u_k$ with respect to $j$ is $\{r + p_k - p_j \mid \text{$r = 1$ or $r \in \mathcal{M}_j$}\}$. (Notice that $p_k - p_j \geq 0$ since $u_j {\preceq_{{\rm suff}}}u_k$, and hence $1 \leq r + p_k - p_j \leq p_k$ for each $r \in {{{\mathcal{M}}}}_j$ and for $r = 1$). Intuitively, ${{{\mathcal{M}}}}_k$ consists of those positions $1 \leq {\ell}\leq p_k$ such that for some $j < k$ with $(j,k) \in I^+$, where $I^+$ is the transitive closure of $I$, it is the case that $u_k = u_k[1,{\ell}-1] \cdot u_j$. Or, in other words, the fact that $u_j {\preceq_{{\rm suff}}}u_k$ starts to be “witnessed" at position ${\ell}$ of $u_k$. We assume the ${{{\mathcal{M}}}}_k$’s to be linearly ordered by the restriction of the linear order $1 < 2 < \cdots < p_k$ to ${{{\mathcal{M}}}}_k$. By a simple inductive argument it is possible to prove that the size of ${{{\mathcal{M}}}}_k$ is polynomially bounded in $m$, for each $1 \leq k \leq m$.
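The recursion defining the marked positions can be sketched as follows (0-based string indices with 1-based positions; `lengths[k]` plays the role of $p_k$, and the example instance is ours, assuming the suffix constraints hold along $I$):

```python
def marked_positions(lengths, I):
    """M_k per the recursive definition above: position 1 of each earlier
    u_j with (j, k) in I, shifted by p_k - p_j, plus the shifted marks
    inherited from M_j. Assumes lengths are consistent with u_j being a
    suffix of u_k whenever (j, k) in I."""
    m = len(lengths)
    M = [set() for _ in range(m)]  # M[0] stays empty: u_1 has no marks
    for k in range(1, m):
        for j in range(k):
            if (j, k) in I:
                shift = lengths[k] - lengths[j]   # p_k - p_j >= 0
                M[k] |= {r + shift for r in ({1} | M[j])}
    return M
```

With lengths $(2, 4, 7)$ and the chain $u_1 \preceq_{\rm suff} u_2 \preceq_{\rm suff} u_3$, the copy of $u_1$ starts at position 3 of $u_2$, while $u_3$ carries marks at positions 4 (start of the $u_2$ copy) and 6 (start of the $u_1$ copy).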
Since $u_j {\preceq_{{\rm suff}}}u_k$, for each $(j,k) \in I$, the labels in some positions of $u_j$ are preserved in the respective positions of $u_k$ that witness the fact that $u_j {\preceq_{{\rm suff}}}u_k$. The important thing to notice is that, since we are dealing with ${\preceq_{{\rm suff}}}$, the following holds: For each position $p$ that is “copied" from $u_j$ into $u_k$ in order to satisfy $u_j {\preceq_{{\rm suff}}}u_k$, the distance from $p$ to the last element of $u_j$ equals the distance from the copy of $p$ in $u_k$ to the last position of $u_k$. That is, distances to the last element of the string are preserved when copying positions (and labels) in order to satisfy $I$. We need to take care of this information when “cutting" $\bar u$ in order to obtain an exponential size witness for the fact that $R \cap_I \{{\preceq_{{\rm suff}}}\} \neq \emptyset$. In order to do this we define for each $0 \leq r \leq \max {\{p_k \mid 1 \leq k \leq m\}}$, a binary relation ${\stackrel{r}{\rightharpoonup}}$ on $\{u_1,\dots,u_m\}$ such that $u_j {\stackrel{r}{\rightharpoonup}} u_{k}$ if $p_j - r > 0$ and $(j,k) \in I$. This implies that position $p_j - r$ of $u_j$ is “copied" as position $p_{k} - r$ of $u_{k}$ in order to satisfy the fact that $u_j {\preceq_{{\rm suff}}}u_{k}$. But in order to consistently “cut" $\bar u$, we need to preserve the suffix relation both with respect to forward and backward edges of the graph defined by $I$. In order to do that we define ${\stackrel{r}{\rightleftharpoons}}$ as $({\stackrel{r}{\rightharpoonup}} \cup \, ({\stackrel{r}{\rightharpoonup}})^{-1})$. Further, since ${\preceq_{{\rm suff}}}$ is a partial order over $\Sigma^*$, and hence it defines a transitive relation, it is important for us also to consider the transitive closure $({\stackrel{r}{\rightleftharpoons}})^+$ of the binary relation ${\stackrel{r}{\rightleftharpoons}}$.
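In effect, the symmetric-transitive closure of $\stackrel{r}{\rightharpoonup}$ just partitions the strings into groups that must be cut together at distance $r$ from their right ends; a sketch of that partition (0-based indices; example instance ours):

```python
def linked_at_distance(r, lengths, I):
    """Connected components of the symmetric closure of the relation
    u_j -r-> u_k, which holds when p_j - r > 0 and (j, k) in I.
    Strings in one component share the position at distance r from
    their right end and must be cut in lockstep."""
    m = len(lengths)
    adj = {j: set() for j in range(m)}
    for (j, k) in I:
        if lengths[j] - r > 0:
            adj[j].add(k)
            adj[k].add(j)
    comps, seen = [], set()
    for s in range(m):           # DFS over the undirected graph
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            seen.add(v)
            stack.extend(adj[v] - comp)
        comps.append(comp)
    return comps
```

For lengths $(2, 4, 7)$ and the chain $I = \{(1,2),(2,3)\}$, distance $r = 1$ links all three strings, while $r = 3$ exceeds $p_1 - 1$ and detaches the first string from the other two.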
Intuitively, $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k$, for $1 \leq j,k \leq m$, if position $p_j - r$ of $u_j$ has to be “copied” into position $p_k - r$ of $u_k$ in order for $\bar u$ to satisfy the pairs in $I$ with respect to ${\preceq_{{\rm suff}}}$. Let $t := |{{{\mathcal{N}}}}_{i_1}| \cdot |{{{\mathcal{N}}}}_{i_2}| \cdots |{{{\mathcal{N}}}}_{i_m}|$ and $s := (\sum_{1 \leq k \leq m} |{{{\mathcal{M}}}}_k|) + 1$. We claim the following: There is $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ such that: (1) $\bar w$ is accepted by $R$, (2) $w_i {\preceq_{{\rm suff}}}w_j$, for each $(i,j) \in I$, and (3) for each $1 \leq k \leq m$ the number of positions in $w_k$ between any two consecutive positions in ${{{\mathcal{M}}}}_k$ is bounded by $s \cdot t \cdot 2^m \cdot |\Sigma|^m$. This clearly implies our small model property. Assume that $\bar u$ does not satisfy this. Then there exists $1 \leq j \leq m$ and two consecutive positions $p$ and $p'$ in ${{{\mathcal{M}}}}_j$, such that the number of positions in $u_j$ between $p$ and $p'$ is bigger than $s \cdot t \cdot 2^m \cdot |\Sigma|^m$. But this implies that there are two positions $p_j - r$ and $p_j - r'$ ($r > r'$) between $p$ and $p'$ in $u_j$ such that the following hold: (1) $\{1 \leq k \leq m \mid u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k\} = \{1 \leq k \leq m \mid u_j ({\stackrel{r'}{\rightleftharpoons}})^+ u_k\}$. Intuitively, this says that the set of strings in which position $p_j - r$ of $u_j$ is “copied” coincides with the set of strings in which position $p_j - r'$ of $u_j$ is “copied”. (2) For each $k$ such that $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k$ it is the case that neither $p_k - r$ nor $p_{k} - r'$ is a marked position in ${{{\mathcal{M}}}}_k$, and there is no marked position in ${{{\mathcal{M}}}}_k$ in between $p_k - r$ and $p_{k} - r'$ in $u_k$. 
(3) The state assigned by the accepting run of ${{{\mathcal{N}}}}_{i_j}$ over $u_j$ to position $p_j - r$ of $u_j$ is the same as the one assigned to position $p_j - r'$. (4) The state assigned by the accepting run of ${{{\mathcal{N}}}}_{i_k}$ over $u_k$ to the “copy" $p_k - r$ of position $p_j - r$ over $u_k$, for each $k$ such that $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k$, is the same as the one assigned to the “copy" $p_k - r'$ of position $p_j - r'$ over $u_k$. (5) The symbol in position $p_j - r$ of $u_j$ is the same as the symbol in position $p_j - r'$ of $u_j$. (6) For each $k$ such that $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k$ it is the case that the symbol in position $p_k - r$ of $u_k$ is the same as the symbol in position $p_k - r'$ of $u_k$. Intuitively, this states that if we “cut" the string $u_j$ from position $p_j - r + 1$ to $p_j - r'$, and string $u_k$ from position $p_k - r + 1$ to $p_k - r'$, for each $k$ such that $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k$, then the resulting $\bar u' = (u'_1,\dots,u'_m) \in (\Sigma^*)^m$ satisfies the following: (1) $\bar u'$ is accepted by $R$, and (2) for each $(j,k) \in I$ it is the case that $u'_j {\preceq_{{\rm suff}}}u'_k$. We formally prove this below. Notice for the time being that this implies our small model property. Indeed, if we recursively apply this procedure to $\bar u$ we will end up with $\bar w = (w_1,\dots,w_m) \in (\Sigma^*)^m$ such that: (1) $\bar w$ is accepted by $R$, (2) $w_j {\preceq_{{\rm suff}}}w_k$, for each $(j,k) \in I$, and (3) for each $1 \leq k \leq m$ the number of positions in $w_k$ between any two consecutive positions in ${{{\mathcal{M}}}}_k$ is bounded by $s \cdot t \cdot 2^m \cdot |\Sigma|^m$. Let $\bar u' = (u'_1,\dots,u'_m) \in (\Sigma^*)^m$ be the result of applying once the cutting procedure described above to $\bar u = (u_1,\dots,u_m)$, starting from string $u_j$ by cutting positions from $p_j - r + 1$ to $p_j - r'$ ($r > r'$).
It is not hard to see that $\bar u'$ is accepted by $R$, since each $u_k$ has been cut in a way that is invariant with respect to the accepting run of ${{{\mathcal{N}}}}_{i_k}$ over $u_k$. Assume that $({\ell},k) \in I$. We need to prove that $u'_{\ell}{\preceq_{{\rm suff}}}u'_k$. If $u_{\ell}= u'_{\ell}$ and $u_k = u'_k$ then $u'_{\ell}{\preceq_{{\rm suff}}}u'_k$ by assumption. Assume then that at least one of $u_{\ell}$ and $u_k$ has been cut. Suppose first that $u_{\ell}$ has been cut from position $p_{\ell}- r + 1$ to position $p_{\ell}- r'$ in order to obtain $u'_{\ell}$. Then $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_{\ell}$ and $u_j ({\stackrel{r'}{\rightleftharpoons}})^+ u_{\ell}$. Clearly, it is also the case that $u_{\ell}{\stackrel{r}{\rightleftharpoons}} u_k$ and $u_{\ell}{\stackrel{r'}{\rightleftharpoons}} u_k$, which implies that $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_k$ and $u_j ({\stackrel{r'}{\rightleftharpoons}})^+ u_k$. Thus, $u_k$ is also cut from position $p_k - r + 1$ to $p_k - r'$ in order to obtain $u'_k$, and hence $u'_{\ell}{\preceq_{{\rm suff}}}u'_k$. Suppose, on the other hand, that $u_{\ell}$ has not been cut but $u_k$ has been cut from position $p_k - r + 1$ to position $p_k - r'$ in order to obtain $u'_k$. We consider four cases: (1) $r' > p_{\ell}- 1$. Then clearly $u'_{\ell}{\preceq_{{\rm suff}}}u'_k$. (2) $r' \leq p_{\ell}- 1$ and $r > p_{\ell}- 1$. This cannot be the case since then either $p_k - r'$ is a marked position in ${{{\mathcal{M}}}}_k$ (when $r' = p_{\ell}- 1$), or $p_k - r$ and $p_k - r'$ have a marked position in ${{{\mathcal{M}}}}_k$ in between (namely, $p_k - p_{\ell}+ 1$). Any of these contradicts the fact that a cutting of $u_k$ could be applied from position $p_k - r$ to position $p_k - r'$ in order to obtain $u'_k$. (3) $r' < p_{\ell}- 1$ and $r \geq p_{\ell}- 1$. Similar to the previous one. (4) $r < p_{\ell}- 1$.
But then clearly $u_{\ell}{\stackrel{r}{\rightleftharpoons}} u_k$ and $u_{\ell}{\stackrel{r'}{\rightleftharpoons}} u_k$, which implies that $u_j ({\stackrel{r}{\rightleftharpoons}})^+ u_{\ell}$ and $u_j ({\stackrel{r'}{\rightleftharpoons}})^+ u_{\ell}$. This implies that $u_{\ell}$ should have also been cut from position $p_{\ell}- r + 1$ to position $p_{\ell}- r'$ in order to obtain $u'_{\ell}$, which is a contradiction. We can finally prove the theorem using the small model property. In fact, in order to check whether $R \cap_I \{{\preceq_{{\rm suff}}}\} \neq \emptyset$ we only need to guess an exponential size witness $\bar w$, and then check, in time polynomial in the size of the witness, that it satisfies $R$ and that each pair of components specified by $I$ satisfies ${\preceq_{{\rm suff}}}$. This algorithm clearly works in nondeterministic exponential time.

Conclusions {#concl:sec}
===========

|  | $R\in{{\sf REC}}$ | $R\in{{\sf REG}}$ | $R\in{{\sf RAT}}$ |
| --- | --- | --- | --- |
| ${({R} \cap {\mbox{${\preceq}$}}) \stackrel{\text{\tiny ?}}{=}\emptyset}$ | ? | undecidable | undecidable |
| ${({R} \cap {\mbox{${\preceq_{{\rm suff}}}$}}) \stackrel{\text{\tiny ?}}{=}\emptyset}$ | [[Ptime]{}]{} (cf. [@berstel]) | undecidable | undecidable |
| ${({R} \cap {\mbox{${\sqsubseteq}$}}) \stackrel{\text{\tiny ?}}{=}\emptyset}$ |  | decidable, NMR | decidable, NMR [@CS-fsttcs07] |
| ${({R} \mathrel{\cap_{I}} {\mbox{${\preceq}$}}) \stackrel{\text{\tiny?}}{=}\emptyset}$ | ? | undecidable | undecidable |
| ${({R} \mathrel{\cap_{I}} {\mbox{${\preceq_{{\rm suff}}}$}}) \stackrel{\text{\tiny?}}{=}\emptyset}$ | [[NExptime]{}]{} | undecidable | undecidable |
| ${({R} \mathrel{\cap_{I}} {\mbox{${\sqsubseteq}$}}) \stackrel{\text{\tiny?}}{=}\emptyset}$ | [[NExptime]{}]{} | decidable, NMR |  |

|  | $S\ =\ {\sqsubseteq}$ | $S\ =\ {\preceq_{{\rm suff}}}$ | $S\ =\ {\preceq}$ | $S$ arbitrary in ${{\sf RAT}}$ |
| --- | --- | --- | --- | --- |
| [[ECRPQ]{}]{}($S$) | decidable, NMR | undecidable | undecidable | undecidable |
| [[CRPQ]{}]{}($S$) | [[NExptime]{}]{} | [[NExptime]{}]{} | ? | undecidable |
| acyclic [[CRPQ]{}]{}($S$) | [[[PSpace]{}]{}]{} | [[[PSpace]{}]{}]{} | [[[PSpace]{}]{}]{} | [[[PSpace]{}]{}]{} |

Motivated by problems arising in studying logics on graphs (as well as some verification problems), we studied the intersection problem for rational relations with recognizable and regular relations over words. We have looked at rational relations such as subword ${\preceq}$, suffix ${\preceq_{{\rm suff}}}$, and subsequence ${\sqsubseteq}$, which are often needed in graph querying tasks. The main results on the complexity of the intersection and generalized intersection problems, as well as the combined complexity of evaluating different classes of logical queries over graphs, are summarized in Fig. \[summary-fig\]. Several results generalizing those (e.g., to the class of ${{\sf SCR}}$ relations) were also shown. Two problems related to the interaction of the subword relation with recognizable relations remain open and appear to be hard. From the practical point of view, as rational-relation comparisons are demanded by many applications of graph data, our results essentially say that such comparisons should not be used together with regular-relation comparisons, and that they need to form acyclic patterns (easily enforced syntactically) for efficient evaluation.
So far we dealt with the classical setting of graph data [@AG-survey; @CGLV00; @CGLV00b; @CM90; @CMW87] in which the model of data is that of a graph with labels from a finite alphabet. In both graph data and verification problems it is often necessary to deal with the extended case of infinite alphabets (say, with graphs holding data values describing their nodes), and languages that query both topology and data have been proposed recently [@1-in-3-bitch; @icdt12]. A natural question is to extend the positive results shown here to such a setting. [^1]: Partial support provided by Fondecyt grant 1110171 for Barceló and EPSRC grants G049165 and J015377 for Figueira and Libkin. [^2]: In this hierarchy (also known as the Extended Grzegorczyk Hierarchy), the classes of functions $\textup F_\alpha$ are closed under elementary-recursive reductions, and are indexed by ordinals. Ackermannian complexity corresponds to level $\alpha = \omega$, and level $\alpha = \omega^\omega$ corresponds to some hyper-Ackermannian complexity. [^3]: Notice that $\rho_{i_k}$ is a word in the language defined by $(U_{i_k} \cdot [p_k])^* \cdot U_{i_k}$, and hence it is completely well-defined whether a word in $U_{i_k} \cdot [p_k]$ is a subword of $\rho_{i_k}$ or not. [^4]: This is well-defined for essentially the same reasons given in the previous footnote. [^5]: Notice, since ${{{\mathcal{A}}}}(R,S,I)$ does not allow empty transitions, that $q'_{k_1} (r'_{k},r'_{k'})$ is well-defined since the subword $q_{k_1} (r_{k},r_{k'})$ appears exactly once in the run $\lambda_{k_1}$ and, further, $q_{k_1} (r_{k},r_{k'})$ is followed in $\lambda_{k_1}$ by a subword in $Q_{S_{k_1}} \cdot ([p_{k}] \times [p_{k'}])$ because $r_k \neq p_k$.
--- abstract: 'We introduce a novel Earth-like planet surface temperature model (ESTM) for habitability studies based on the spatial-temporal distribution of planetary surface temperatures. The ESTM adopts a surface Energy Balance Model complemented by: radiative-convective atmospheric column calculations, a set of physically-based parameterizations of meridional transport, and descriptions of surface and cloud properties more refined than in standard EBMs. The parameterization is valid for rotating terrestrial planets with shallow atmospheres and moderate values of axis obliquity ($\epsilon \la 45^\circ$). Comparison with a 3D model of atmospheric dynamics from the literature shows that the equator-to-pole temperature differences predicted by the two models agree within $\approx 5$K when the rotation rate, insolation, surface pressure and planet radius are varied in the intervals $0.5 \la \Omega/\Omega_\oplus \la 2$, $0.75 \la S/S_\circ \la 1.25$, $0.3 \la p/(\mathrm{1\,bar}) \la 10$, and $0.5 \la R/R_\oplus \la 2$, respectively. The ESTM has an extremely low computational cost and can be used when the planetary parameters are scarcely known (as for most exoplanets) and/or whenever many runs for different parameter configurations are needed. Model simulations of a test-case exoplanet (Kepler-62e) indicate that an uncertainty in surface pressure within the range expected for terrestrial planets may impact the mean temperature by $\sim 60\,$K. Within the limits of validity of the ESTM, the impact of surface pressure is larger than that predicted by uncertainties in rotation rate, axis obliquity, and ocean fractions. We discuss the possibility of performing a statistical ranking of planetary habitability taking advantage of the flexibility of the ESTM.' 
author: - | Giovanni Vladilo, Laura Silva, Giuseppe Murante,\ Luca Filippi, Antonello Provenzale, bibliography: - 'exoclimates.bib' title: 'Modeling the surface temperature of Earth-like planets' --- Introduction ============ The large amount of exoplanet data collected with the Doppler and transit methods [e.g. @Mayor14; @Batalha13 and refs. therein] indicates that Earth-size planets are intrinsically more frequent than giant ones, in spite of the fact that they are more difficult to detect. Small planets are found in a relatively broad range of metallicities [@Buchhave12] and, at variance with giant planets, their detection rate drops slowly with decreasing metallicity [@Wang15]. These observational results indicate that Earth-like planets are quite common around other stars [e.g. @Farr14] and are expected to be detected in large numbers in the future. Their potential similarity to the Earth makes them primary targets in the quest for habitable environments outside the Solar System. Unfortunately, small planets are quite difficult to characterize with experimental methods and a significant effort of modelization is required to cast light on their properties. The aim of the present work is to model the surface temperature of these planets as a contribution to the study of their surface habitability. The capability of an environment to host life depends on many factors, such as the presence of liquid water, nutrients, energy sources, and shielding from cosmic ionizing radiation [e.g. @Seager13; @Guedel14]. A knowledge of the surface temperature is essential to apply the liquid water criterion of habitability and can also be used to assess the potential presence of different life forms according to other types of temperature-dependent biological criteria [e.g. @Clarke14].
Here we are interested in modeling the latitudinal and seasonal variations of surface temperature, $T(\varphi,t)$, as a tool to calculate temperature-dependent indices of fractional habitability [e.g. @SMS08]. Modeling $T(\varphi,t)$ is a difficult task since many of the physical and chemical quantities that govern the exoplanet surface properties are currently not measurable. A way to cope with this problem is to treat the unknown quantities as free parameters and use fast climate calculations to explore how variations of such parameters affect the surface temperature. General Circulation Models (GCMs) are not suited for this type of exploratory work since they require large amounts of computational resources for each single run as well as a detailed knowledge of many planetary characteristics. Two types of fast climate tools are commonly employed in studies of planetary habitability: single atmospheric column calculations and energy balance models. Atmospheric column calculations treat in detail the physics of vertical energy transport, taking into account the influence of atmospheric composition on the radiative transfer [e.g., @Kasting88]. This is the type of climate tool that is commonly employed in studies of the “habitable zone” [e.g., @Kasting88; @Kasting93; @vonParis13; @Kopparapu13; @Kopparapu14]. Energy balance models (EBMs) calculate the zonal and seasonal energy budget of a planet using a heat diffusion formalism to describe the horizontal transport and simple analytical functions of the surface temperature to describe the vertical transport [e.g., @North81]. EBMs have been employed to address the climate impact induced by variations of several planet parameters, such as axis obliquity, rotation period and stellar insolation [@SMS08; @SMS09; @Dressing10; @Spiegel10; @Forgan12; @Forgan14]. 
By feeding classic EBMs with multi-parameter functions extracted from atmospheric column calculations one can obtain an upgraded type of EBM that takes into account the physics of vertical transport [@WK97]. Following a similar approach, in a previous paper we investigated the impact of surface pressure on the habitability of Earth-like planets by incorporating a physical treatment of the thermal radiation and a simple scaling law for the meridional transport [@Vladilo13 hereafter Paper I]. Here we include the transport of the short-wavelength radiation and we present a physically-based treatment of the meridional transport tested with 3D experiments. In this way we build up an Earth-like planet Surface Temperature Model ([ESTM]{}) in which a variety of unknown planetary properties can be treated as free parameters for a fast calculation of the surface habitability. The [ESTM]{} is presented in the next section. In Section 3 we describe the model calibration and validation. Examples of model applications are presented in Section 4 and the conclusions summarized in Section 5. ![image](fig1.pdf){width="12.5cm"} The model ========= The [ESTM]{} consists of a set of climate tools and algorithms that interchange physical quantities expressed in parametric form. The core of the model is a zonal and seasonal EBM fed by multi-parameter physical quantities. The parameterization is obtained using physically-based climate tools that deal with the meridional and vertical energy transport. The relationship between these ingredients is shown in the scheme of Fig. \[figScheme1\]. In the following we present the components of the model, starting from the description of the EBM. In zonal EBMs the surface of the planet is divided into zones delimited by latitude circles. The surface quantities of interest are averaged in each zone over one rotation period. In this way, the spatial dependence is determined by a single coordinate, the latitude $\varphi$.
Since the temporal dependence is “smoothed” over one rotation period, the time, $t$, represents the seasonal evolution during the orbital period. The thermal state is described by a single temperature, $T=T(\varphi,t)$, representative of the surface conditions. By assuming that the heating and cooling rates are balanced in each zone, one obtains an energy balance equation that is used to calculate $T(\varphi,t)$. The most common form of EBM equation [@North81; @WK97; @SMS08; @Pierrehumbert10; @Gilmore14] is $$C \frac{\partial T}{\partial t} - \frac{\partial}{\partial x} \left[ D \, (1-x^2) \, \frac{ \partial T}{\partial x} \right] + I = S \, (1-A) , \label{diffusionEq}$$ where $x=\sin \varphi$ and all terms are normalized per unit area. The first term of this equation represents the zonal heat storage and describes the temporal evolution of each zone; $C$ is the zonal heat capacity per unit area [@North81]. The second term represents the amount of heat per unit time and unit area leaving each zone along the meridional direction [@North81 Eq. (21)]. It is called the “diffusion term” because the coefficient $D$ is defined on the basis of the analogy with heat diffusion, i.e. $$\Phi \equiv - D \frac{\partial T}{\partial \varphi}, \label{eq:DiffA}$$ where $2 \pi R^2 \Phi \cos \varphi$ is the net rate of energy transport[^1] across a circle of constant latitude and $R$ is the planet radius [see @Pierrehumbert10]. The term $I$ represents the thermal radiation emitted by the zone, also called Outgoing Longwave Radiation (OLR). The right side of the equation represents the fraction of stellar photons that heat the surface of the zone; $S$ is the incoming stellar radiation and $A$ the planetary albedo at the top of the atmosphere. All coefficients of the equation depend, in general, on both time and latitude, either directly or indirectly, through their dependence on $T$. In classic EBMs the coefficients $D$, $I$ and $A$ are expressed in a very simplified form. 
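To make the numerical treatment concrete, the following minimal sketch integrates Eq. (\[diffusionEq\]) to a stationary state with an explicit finite-difference scheme, using exactly the kind of simplified closures discussed next: a constant $D$, a constant albedo, and a linear OLR fit. All numerical values are illustrative stand-ins, not the ESTM parameterizations.

```python
import numpy as np

def integrate_ebm(n=60, D=0.6, C=1.0, S0=340.0, n_steps=20000, dt=5e-4):
    """Relax a toy 1D EBM, C dT/dt = d/dx[D(1-x^2) dT/dx] - I + S(1-A),
    with x = sin(phi), to equilibrium by explicit Euler stepping.

    Illustrative closures only: linear OLR I = a + b*T, constant albedo
    A = 0.3, annual-mean insolation S = S0*(1 - 0.48*P2(x)); C is a
    normalized heat capacity, so dt is in arbitrary time units."""
    x = np.linspace(-1 + 1/n, 1 - 1/n, n)        # cell centres in x = sin(phi)
    dx = x[1] - x[0]
    xe = 0.5 * (x[:-1] + x[1:])                  # interior cell edges
    S = S0 * (1 - 0.48 * 0.5 * (3 * x**2 - 1))   # insolation, W/m^2
    A = 0.3
    a_olr, b_olr = -340.0, 2.2                   # toy linear OLR fit, W/m^2
    T = np.full(n, 250.0)                        # isothermal initial state, K
    for _ in range(n_steps):
        grad = D * (1 - xe**2) * np.diff(T) / dx # D(1-x^2) dT/dx at edges
        div = np.zeros(n)
        div[1:-1] = np.diff(grad) / dx           # divergence of the flux
        div[0] = grad[0] / dx                    # no-flux polar boundaries:
        div[-1] = -grad[-1] / dx                 # (1-x^2) -> 0 at the poles
        T = T + dt / C * (div - (a_olr + b_olr * T) + S * (1 - A))
    return x, T
```

With Earth-like numbers the resulting profile is warm at the equator and cold at the poles, with an equator-to-pole contrast controlled by the value of $D$.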
As an example, $D$ is often treated as a constant, in spite of the fact that the meridional transport is influenced by planetary quantities that do not appear in the formulation (\[eq:DiffA\]). The OLR and albedo are modelled as simple analytical functions, $I=I(T)$ and $A=A(T)$, while they should depend not only on $T$, but also on other physical/chemical quantities that influence the vertical transport. This simplified formulation of $D$, $I$ and $A$ prevents important planetary properties from appearing in the energy balance equation (\[diffusionEq\]). To obtain a physically-based parameterization we describe the vertical transport using single-column atmospheric calculations and the meridional transport using algorithms tested with 3D climate experiments. Thanks to this type of parameterization[^2] the [ESTM]{} features a dependence on surface pressure, $p$, gravitational acceleration, $g$, planet radius, $R$, rotation rate, $\Omega$, surface albedo, $a_s$, stellar zenith distance, $Z$, atmospheric chemical composition, and mean radiative properties of the clouds. By running the simulations described in Appendix A, the [ESTM]{} generates a “snapshot” of the surface temperature $T(\varphi,t)$ in a very short computing time, for any combination of planetary parameters that yield a stationary solution of Eq. (\[diffusionEq\]). We now describe the parameterization of the model. The meridional transport \[sectMeridionalTransport\] ---------------------------------------------------- The heat diffusion analogy (\[eq:DiffA\]) guarantees the existence of physical solutions and contributes to the high computational efficiency of EBMs. In order to keep these advantages and at the same time introduce a more realistic treatment of the latitudinal transport, here we derive $\Phi$ and $D$ in terms of planet properties relevant to the physics of the horizontal transport.
To keep the problem simple we focus on the atmospheric transport (the ocean transport is discussed below in §\[sectOceanTransport\]). The atmospheric flux can be derived by applying basic equations of fluid dynamics to the energy content of a parcel of atmospheric gas. The energy budget of the parcel is expressed in terms of the moist static energy (MSE) per unit mass, $$m=c_p T+ L_v r_v+ gz \label{eq:mseA}$$ where the terms $c_p T$, $L_v r_v$ and $gz$ measure the sensible heat, the latent heat and the potential energy content of the parcel at height $z$, respectively; $L_v$ is the latent heat of the phase transition between the vapor and the condensed phase; $r_v$ the mass mixing ratio of the vapor over the dry component; $g$ is the surface gravity acceleration. The MSE and the velocity of the parcel are a function of time, $t$, longitude, $\lambda$, latitude, $\varphi$, and height, $z$. The latitudinal transport is obtained by integrating the fluid equations in longitude and vertically, the height $z$ being replaced by the pressure coordinate, $p=p(z)$. Starting from a simplified mass continuity relation valid for the case in which condensation takes away a minimal atmospheric mass [@Pierrehumbert10 §9.2.1], one obtains the mean zonal flux $$\Phi (t,\varphi) = \frac{1}{R} \int_0^{2\pi} d\lambda \int_0^{p} v m \frac{dp'}{g} = \frac{1}{R} \frac{p}{g} \, \overline{ v \, m } \label{eq:FluxA}$$ where $p$ is the surface atmospheric pressure and $v$ the meridional velocity component of the parcel. The second equality of this expression is valid for a shallow atmosphere, where $g$ can be considered constant, as in the case of the Earth. To proceed further, we assume that (\[eq:FluxA\]) is valid when the physical quantities are averaged over one rotation period, since this is the approach used in EBMs. In this case, the time $t$ represents the seasonal (rather than instantaneous) evolution of the system: variability on time scales shorter than one planetary day is averaged out.
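For reference, Eq. (\[eq:mseA\]) in code form, with standard Earth-like constants quoted here purely for illustration:

```python
def moist_static_energy(T, r_v, z, c_p=1004.0, L_v=2.5e6, g=9.81):
    """Moist static energy per unit mass, m = c_p*T + L_v*r_v + g*z (J/kg).

    T   : temperature (K)
    r_v : vapor-over-dry mass mixing ratio (kg/kg)
    z   : height (m)
    c_p : specific heat of dry air at constant pressure (J kg^-1 K^-1)
    L_v : latent heat of vaporization of water (J/kg)
    g   : surface gravity (m s^-2)"""
    return c_p * T + L_v * r_v + g * z
```

Near the surface ($z \simeq 0$) the potential-energy term drops and $m \simeq c_p T + L_v r_v$, the form used below.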
At this point we split the problem in two parts. First we derive a relation for $\Phi$ and $D$ valid for the extratropical transport regime. Then we introduce a formalism to empirically improve the treatment of the transport inside the Hadley cells. ### Transport in the extratropical region \[sectEddiesTransport\] We consider an ideal planet with constant insolation and null axis obliquity, in such a way that we can neglect the (seasonal) dependence on $t$. We restrict our problem to the atmospheric circulation typical of fast-rotating terrestrial-type planets, i.e. with latitudinal transport dominated by eddies in the baroclinic zone. A commonly adopted formalism used to treat the eddies consists in dividing the variables of interest into a mean component and a perturbation from the mean[^3], representative of the eddies. By indicating the mean with an overbar and perturbations with a prime, we have for instance $v = \overline{v} + v'$ and $m = \overline{m} + m'$. It is easy to show that $\overline{vm}= \overline{v}\,\overline{m} + \overline{ v' m'}$. When the eddy transport dominates, the term $\overline{v}\,\overline{m}$ can be neglected so that $$\Phi \simeq \frac{1}{R} \frac{p}{g} \, \overline{ v' m' } \label{eq:fluxC}$$ and we obtain $$D = - \Phi \left( \frac{\partial T}{\partial \varphi} \right)^{-1} = - \frac{1}{R^2} \frac{p}{g} \, \left( \frac{\partial T}{\partial y} \right)^{-1} \, \overline{ v' m' } \label{eq:DiffB}$$ where $dy=R \, d\varphi$ is the infinitesimal meridional displacement. To calculate $\overline{ v' m' }$ we consider the surface value of moist static energy (the MSE is conserved under conditions of dry adiabatic ascent and is approximately conserved in saturated adiabatic ascent, and is therefore, to some extent, independent of $z$; results obtained by @Lapeyre03 suggest that lower-layer values of moist static energy are most appropriate for diffusive models of energy fluxes), $m = c_p T + L_v r_v$, from which we obtain $$\overline{v' m'}= c_p \overline{v'T'} + L_v \overline{v'r'_v} ~. \label{eq:vmfluctA}$$ We express the mean values of the perturbation products as $$\overline{ v' T' } = k_\mathrm{S} \, |v'| ~ |T'| \label{eq:vTfluctA}$$ and $$\overline{ v' r_v' } = k_\mathrm{L} |v'| ~ |r_v'| , \label{vpqp}$$ where $||$ denotes a root-mean-square magnitude (root-mean-square values must be introduced since the time mean of the linear perturbations is zero) and $k_\mathrm{S}$ and $k_\mathrm{L}$ are correlation coefficients [e.g. @Barry02]. At this point we need to quantify the perturbations of $T$ and $r_v$, i.e. of the quantities being mixed. In eddy diffusivity theories these perturbations can be written as a mixing length, $\ell_\mathrm{mix}$, times the spatial gradient of the quantity. We consider the gradient along the meridional coordinate $y$ and we write $$|T'| = - \ell_\mathrm{mix} \frac{\partial T }{\partial y} \label{eq:Tfluct}$$ and $$|r_v'| = - \ell_\mathrm{mix} \frac{\partial \, r_v}{\partial y} \label{eq:rvfluctA}$$ where $T$ and $r_v$ are mean zonal quantities; since the mixing is driven by turbulence we assume that the mixing length is the same for sensible and latent heat. To estimate $\partial r_v/\partial y$ we recall that $$r_v = \frac{\mu_v}{\mu_\mathrm{dry}} \frac{p_v}{p_\mathrm{dry}} = \frac{\mu_v}{\mu_\mathrm{dry}} \frac{ q \, p_v^* }{p_\mathrm{dry}} \label{eq:vmr}$$ where $\mu_v$ and $p_v$ are the molecular weight and pressure of the vapor, $\mu_\text{dry}$ and $p_\text{dry}$ the corresponding quantities of the dry air, $q$ is the relative humidity and $p_v^*=p_v^*(T)$ is the saturation vapor pressure.
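The mean-plus-perturbation algebra above can be checked numerically: the identity $\overline{vm}=\overline{v}\,\overline{m}+\overline{v'm'}$ is exact for sample means, and a correlation coefficient of the type $k_\mathrm{S}$ in Eq. (\[eq:vTfluctA\]) follows directly from the perturbations. A sketch with synthetic, purely illustrative data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
v = 1.0 + 2.0 * rng.standard_normal(n)               # synthetic meridional velocity
m = 5.0 + 0.8 * (v - 1.0) + rng.standard_normal(n)   # synthetic MSE, correlated with v

v_bar, m_bar = v.mean(), m.mean()
v_p, m_p = v - v_bar, m - m_bar                      # eddy perturbations

# Reynolds identity: mean(v*m) = mean(v)*mean(m) + mean(v'*m')
lhs = np.mean(v * m)
rhs = v_bar * m_bar + np.mean(v_p * m_p)

# correlation coefficient k = mean(v'm') / (|v'| |m'|), with || the rms value
rms = lambda a: np.sqrt(np.mean(a * a))
k = np.mean(v_p * m_p) / (rms(v_p) * rms(m_p))
```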
We assume constant relative humidity and we can write $$\frac{\partial r_v}{\partial y} = \left( \frac{\partial \, r_v}{\partial T} \cdot \frac{\partial T}{\partial y} \right)= \frac{\mu_v}{\mu_\mathrm{dry}} \frac{ q }{p_\mathrm{dry}} \frac{\partial p_v^*}{\partial T} \,\frac{\partial T}{\partial y} ~. \label{eq:rvy}$$ Combining the expressions from (\[eq:vmfluctA\]) to (\[eq:rvy\]) we obtain $$\begin{aligned} \overline{v' m'}= - \ell_\mathrm{mix} \, |v'| \, \frac{\partial T}{\partial y} \left( k_\mathrm{S} c_p + k_\mathrm{L} L_v \frac{ \mu_v }{\mu_\mathrm{dry} } \frac{q}{p_\mathrm{dry}} \frac{\partial p_v^*}{\partial T} \right) \label{eq:vmfluctC}\end{aligned}$$ and inserting this in (\[eq:DiffB\]) we derive $$\begin{aligned} D \simeq \frac{1}{R^2} \frac{p}{g} \ell_\mathrm{mix} \, |v'| \, \left( k_\mathrm{S} c_p + k_\mathrm{L} L_v \frac{ \mu_v }{\mu_\mathrm{dry} } \frac{q}{p_\mathrm{dry}} \frac{\partial p_v^*}{\partial T} \right) ~. \label{eq:DtermA}\end{aligned}$$ At this point, we need an analytical expression for $\ell_\mathrm{mix} \, |v'|$. Among a large number of analytical treatments of the baroclinic circulation [e.g., @Green70; @Stone72; @Gierasch73; @Held99], here we adopt a formalism proposed by @Barry02 which gives the best agreement with GCM experiments. According to @Barry02, the baroclinic zone works as a diabatic heat engine that obtains and dissipates energy in the process of transporting heat from a warm to a cold region. If we call $T_w$ and $T_c$ the temperatures of the warm and cold regions, the maximum possible thermodynamic efficiency of the engine is $\delta T/T_w$, where $\delta T = T_w-T_c$. The energy received by the atmosphere per unit time and unit mass, $Q$, represents the diabatic forcing of the engine. 
The rate of generation (and dissipation) of eddy kinetic energy per unit mass is given by $$\varepsilon = \eta \, \left( \frac{\delta T}{T_w} \right) \, Q \label{eq:diabaticForcing}$$ where $\eta$ is an efficiency factor representing the fraction of the generated kinetic energy used by heat-transporting eddies. Assuming that the average properties of the flow depend only on the length scale and the dissipation rate per unit mass (if the eddies exist in an inertial range, the average properties of the flow will depend only on the dissipation rate and the length scale [@Barry02]), dimensional arguments yield the velocity scaling law $$|v'| \propto \left( \varepsilon \, \ell_\mathrm{mix} \right)^{1/3} ~. \label{eq:vflucA}$$ As far as the mixing length is concerned, the Rhines scale is adopted, $$\ell_\mathrm{mix}= \left( \frac{2 |v'| }{\beta} \right)^{1/2} \label{eq:rhines}$$ where $\beta=\partial f/\partial y$ is the gradient of the Coriolis parameter, $f=2 \Omega \sin \varphi$, and $\Omega$ the angular rotation rate of the planet. The study of @Barry02 suggests that, among other types of length scales considered in the literature, the Rhines scale yields the best correlations in 3D atmospheric experiments. The adoption of the Rhines scale is also supported by a study of moist transport performed with GCM experiments [@Frierson07]. The Rhines scale must be calculated at the latitude $\varphi_\mathrm{m}$ of maximum kinetic energy, i.e. for $\beta=(2 \Omega \cos \varphi_\mathrm{m})/R $. From the above expressions we obtain $$\ell_\mathrm{mix} \, |v'| = \left( \eta \frac{ \delta T }{T_w } \, Q \right)^{3/5} \left( \frac{R}{\Omega \cos \varphi_\mathrm{m}} \right)^{4/5} ~.
\label{eq:ellvfluc}$$ Inserting this in (\[eq:DtermA\]) we obtain $$D = D_\mathrm{dry} ( 1 + \Lambda) \label{eq:DtermB}$$ where $$\begin{aligned} \lefteqn{ D_\mathrm{dry} = k_\mathrm{S} c_p \, \eta^{3/5} (\cos \varphi_\mathrm{m})^{-4/5} \times} \nonumber \\ & & {} \times \, R^{-6/5} \, \frac{p}{g} \, \Omega^{-4/5} \left( \frac{ \delta T }{T_w } Q \right)^{3/5} \label{eq:Ddry}\end{aligned}$$ is the dry component of the atmospheric eddy transport and $$\Lambda = \frac{k_\mathrm{L} L_v}{k_\mathrm{S} c_p } \frac{ \mu_v }{\mu_\mathrm{dry} } \frac{q}{p_\mathrm{dry}} \frac{\partial p_v^* }{\partial T} \label{eq:Lambda}$$ is the ratio of the moist over dry components. For the practical implementation of the analytical expressions (\[eq:DtermB\]), (\[eq:Ddry\]) and (\[eq:Lambda\]) in the EBM code, we proceed as follows. The maximum thermodynamic efficiency $\delta T/T_w$ is calculated by taking $T_w=\overline{T}(\varphi_1)$ and $T_c=\overline{T}(\varphi_2)$ where $\varphi_1$ and $\varphi_2$ are the borders of the mid-latitude region and overbars indicate zonal annual means. Following @Barry02, we adopt $\varphi_1=28^\circ$ and $\varphi_2=68^\circ$, after testing that the model predictions are virtually unaffected by the exact choice of these values (the GCM experiments by @Barry02 also indicate that the results are not sensitive to the choice of $\varphi_1$ and $\varphi_2$). We estimate the diabatic forcing term (W/kg) as $Q \simeq \left\{ \mathrm{ASR} \right\} /(p/g)$, where $\left\{ \mathrm{ASR}\right\} = \left\{ S(1-A) \right\}$ is the absorbed stellar radiation (W/m$^2$) averaged over one orbital period in the latitude range ($\varphi_1$, $\varphi_2$) and $p/g$ the atmospheric columnar mass (kg/m$^2$). We neglect the contribution of surface fluxes of sensible heat since they cannot be estimated within the EBM framework. This approximation is not critical because these fluxes yield a negligible contribution to $Q$ according to @Barry02.
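With the proportionality in Eq. (\[eq:vflucA\]) promoted to an equality, Eqs. (\[eq:vflucA\]) and (\[eq:rhines\]) close on each other and can be solved by fixed-point iteration; the result must coincide with the closed form (\[eq:ellvfluc\]). A sketch with illustrative Earth-like numbers (the value of $\eta$ below is arbitrary):

```python
import math

def ell_v_product(eta, dT, Tw, Q, R, Omega, phi_m, n_iter=100):
    """l_mix * |v'| from the coupled closures
    |v'| = (eps * l)**(1/3)    (Eq. vflucA, proportionality -> equality)
    l    = (2*|v'|/beta)**0.5  (Rhines scale, Eq. rhines)
    with eps = eta*(dT/Tw)*Q and beta = 2*Omega*cos(phi_m)/R."""
    eps = eta * (dT / Tw) * Q
    beta = 2.0 * Omega * math.cos(phi_m) / R
    v = 10.0                                 # initial guess, m/s
    for _ in range(n_iter):                  # contraction mapping: fast convergence
        l = math.sqrt(2.0 * v / beta)
        v = (eps * l) ** (1.0 / 3.0)
    return l * v

def ell_v_closed(eta, dT, Tw, Q, R, Omega, phi_m):
    """Closed form, Eq. (eq:ellvfluc)."""
    return (eta * dT / Tw * Q) ** 0.6 * (R / (Omega * math.cos(phi_m))) ** 0.8
```

For $\eta=0.5$, $\delta T=40\,$K, $T_w=290\,$K, $Q=0.024\,$W/kg and Earth's radius and rotation rate, both routes give $\ell_\mathrm{mix}|v'|$ of order $10^7\,\mathrm{m^2\,s^{-1}}$.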
Treating $k_\mathrm{L}$, $k_\mathrm{S}$, $\eta$, and $\varphi_\mathrm{m}$ as constants[^4], we obtain from Eq. (\[eq:Ddry\]) a scaling law for the dry term of the transport $$\begin{aligned} \lefteqn{ \mathcal{S}_\mathrm{dry} \propto c_p \, R^{-6/5} \left( \frac{p}{g} \right)^{2/5} \Omega^{-4/5} \times {} } \nonumber\\ & & {} \times \left( \frac{ \delta T }{T_w } \left\{ \mathrm{ASR} \right\} \right)^{3/5} . \label{eq:SLdry}\end{aligned}$$ We estimate the temperature gradient of saturated vapor pressure as $\partial p_v^* /\partial T \simeq \delta p_v^* / \delta T$, with $\delta p_v^* = \left[ p_v^*(T_w) -p_v^*(T_c) \right]$. Since $k_\mathrm{L}$, $k_\mathrm{S}$ and $L_v$ are constants, we obtain from Eq. (\[eq:Lambda\]) a scaling law for the ratio of the moist over dry components $$\mathcal{S}_\mathrm{md} \propto \frac{q}{c_p \, \mu_\mathrm{dry} \, p_\mathrm{dry}} \frac{\delta p_v^* }{\delta T} ~. \label{eq:SLdm}$$ Finally, by applying Eq. (\[eq:DtermB\]) and the scaling laws (\[eq:SLdry\]) and (\[eq:SLdm\]) to a generic terrestrial planet and to the Earth, indicated by the subscript $\circ$, we obtain $$\frac{D}{D_\circ} = \frac{ \mathcal{S}_\mathrm{dry} }{ \mathcal{S}_{\mathrm{dry},\circ} } \, \left[ \frac{1+\Lambda_\circ \cdot \left( \mathcal{S}_\mathrm{md} / \mathcal{S}_{\mathrm{md},\circ} \right) } {1+\Lambda_\circ} \right] ~. \label{eq:DtermF}$$ With the above expressions we calculate $D$ treating $R$, $\Omega$, $p$, $g$, $c_p$, $\mu_\mathrm{dry}$, $p_\mathrm{dry}$, $q$ as parameters that can vary from planet to planet, though remaining constant for any given planet. The ratio of moist over dry eddy transport of the Earth is set to $\Lambda_\circ=0.7$ [e.g. @KS14]. For the sake of self-consistency, we adopt the parameters $(\delta_T)_\circ$, $(T_w)_\circ$, $\left\{ \mathrm{ASR} \right\}_\circ$ and $(\delta p_v^*)_\circ$ obtained from the Earth’s reference model.
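For concreteness, the scaling laws (\[eq:SLdry\]) and (\[eq:SLdm\]) and the ratio (\[eq:DtermF\]) translate directly into code. The sketch below is ours: the dictionary-based bookkeeping and the key names are assumptions for illustration, not part of the ESTM implementation.

```python
def D_ratio(planet, earth, Lambda_earth=0.7):
    """D/D_earth from Eqs. (SLdry), (SLdm) and (DtermF).

    `planet` and `earth` are dicts with keys (our naming):
    cp, R, p, g, Omega, dT, Tw, ASR, q, mu_dry, p_dry, dpv_dT."""
    def s_dry(d):   # Eq. (SLdry): dry eddy-transport scaling
        return (d['cp'] * d['R'] ** -1.2 * (d['p'] / d['g']) ** 0.4
                * d['Omega'] ** -0.8 * (d['dT'] / d['Tw'] * d['ASR']) ** 0.6)
    def s_md(d):    # Eq. (SLdm): moist-over-dry scaling
        return d['q'] * d['dpv_dT'] / (d['cp'] * d['mu_dry'] * d['p_dry'])
    moist = (1.0 + Lambda_earth * s_md(planet) / s_md(earth)) / (1.0 + Lambda_earth)
    return s_dry(planet) / s_dry(earth) * moist
```

By construction $D/D_\circ=1$ for the Earth itself; doubling only the rotation rate gives $D/D_\circ = 2^{-4/5} \simeq 0.57$, the weakening of eddy transport expected from Eq. (\[eq:SLdry\]).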
Since these parameters vary in the course of the simulation, we perform the calibration of the Earth model in two steps. First we calibrate the model excluding the ratios[^5] $\delta_T/(\delta_T)_\circ$, $T_w/T_{w,\circ}$, $\left\{ \mathrm{ASR} \right\}/\left\{ \mathrm{ASR} \right\}_\circ$ and $\delta p_v^*/(\delta p_v^*)_\circ$ from the scaling laws of Eq. (\[eq:DtermF\]). Then we reintroduce these ratios in the scaling law, adopting for $(\delta_T)_\circ$, $(T_w)_\circ$, $\left\{ \mathrm{ASR} \right\}_\circ$ and $(\delta p_v^*)_\circ$ the values $(\delta_T)$, $(T_w)$, $\left\{ \mathrm{ASR} \right\}$ and $(\delta p_v^*)$ obtained in the first step. The second step is repeated a few times, until convergence of the parameters $(\delta_T)_\circ$, $(T_w)_\circ$, $\left\{ \mathrm{ASR} \right\}_\circ$ and $(\delta p_v^*)_\circ$ is achieved. ### Transport in the Hadley Cell \[sectHadleyCell\] The derivation performed above ignores the existence of the Hadley Cells, since they do not contribute to the extratropical meridional transport. However, the Hadley circulation is extremely efficient in smoothing temperature gradients inside the tropical region. This aspect cannot be completely ignored in our treatment, since our goal is to estimate the planet surface temperature distribution. Unfortunately, the diffusion formalism of Eq. (\[eq:DiffA\]) is inappropriate inside the Hadley Cells and the only way we have to improve the description of the tropical temperature distribution is to correct the formalism with some empirical expression. We summarize the approach that we follow to cope with this problem. The global pattern of atmospheric circulation is influenced, among other factors, by the seasonal variation of the zenith distance of the star. In the case of the Earth, a well-known example of this type of influence is the seasonal shift of the Intertropical Convergence Zone (ITCZ), which moves to higher latitudes in the summer hemisphere.
The ITCZ is, in practice, a tracer of the thermal equator at the center of the system of the two Hadley Cells, where we want to improve the uniformity of the temperature distribution. A way to do this is to enhance the transport coefficient $D$ at this thermal equator. To incorporate this feature in our model, we scale $D$ according to the mean diurnal value of $\mu(\varphi,t)=\cos Z$, where $Z$ is the stellar zenith distance. In practice, we multiply $D$ by a dimensionless modulating factor, $\zeta(\varphi,t)$, that scales linearly with $\mu(\varphi,t)$, i.e. $\zeta(\varphi,t)=c_0+c_1 \mu(\varphi,t)$. We normalize this factor in such a way that its mean global annual value is $\widetilde{\zeta}(\varphi,t)=1$. Thanks to the normalization condition, it is possible to calculate the parameters $c_0$ and $c_1$ in terms of a single parameter, $\mathcal{R}= \max \left\{ \zeta(\varphi,t)\right\}/ \min \left\{ \zeta(\varphi,t)\right\}$, which represents the ratio between the maximum and minimum values of $\zeta$ at any latitude and orbital phase [see @Vladilo13 §A.2.1]. With the adoption of the modulation term, the complete expression for the transport coefficient becomes $$\begin{aligned} \frac{D}{D_\circ} = \zeta(\varphi,t) \, \frac{ \mathcal{S}_\mathrm{dry} }{ \mathcal{S}_{\mathrm{dry},\circ} } \, \left[ \frac{1+\Lambda_\circ \cdot \left( \mathcal{S}_\mathrm{md} / \mathcal{S}_{\mathrm{md},\circ} \right) } {1+\Lambda_\circ} \right] . \label{eq:DtermG}\end{aligned}$$ The mean global annual value of this expression equals Eq. (\[eq:DtermF\]) thanks to the normalization condition $\widetilde{\zeta}(\varphi,t)=1$. This formalism introduces a dependence on $t$ and on the axis obliquity[^6] in the transport coefficient. Empirical support for the adoption of the modulation term $\zeta(\varphi,t)$ comes from the improved match between the observed and predicted temperature-latitude profile of the Earth. In the left panel of Fig.
\[compareDR\] we show that it is not possible to accurately match the Earth profile by varying $D_\circ$ at constant $\zeta(\varphi,t)=1$ (i.e. $\mathcal{R}=1$). This is because the whole profile becomes flatter with increasing $D_\circ$: values of $D_\circ$ sufficiently high to provide the desired smooth temperature distribution inside the tropics yield a profile which is too flat in the polar regions. This problem can be solved with the introduction of the modulation factor $\zeta$. In the right panel of Fig. \[compareDR\] we show that by increasing $\mathcal{R}$ the profile declines faster at the poles while becoming slightly flatter at the equator. This behavior is different from that induced by changes of $D_\circ$ and provides an extra degree of freedom to match the observed profile. For the time being, the parameter $\mathcal{R}$ can be tuned to fit the Earth model, but cannot be validated with other planets. The validation of $\mathcal{R}$ in rocky planets different from the Earth could be addressed by future GCM calculations. Meanwhile, the uncertainty related to the choice of this parameter in other planets can be estimated by repeating the climate simulations for different values of $\mathcal{R}$. Given the lack of solid theoretical support for the adoption of the $\zeta(\varphi,t)$ formalism, it is safe to use the smallest possible value of $\mathcal{R}$ (i.e. closest to unity) that allows the Earth profile to be reproduced. With the upgraded calibration of the Earth model presented here (Appendix B) we have been able to adopt a lower value ($\mathcal{R}=2.2$, Table \[tabFiducialPar\]) than in Paper I ($\mathcal{R}=6$, Table 2 in @Vladilo13). ### Ocean transport \[sectOceanTransport\] The algorithm that describes the energy transport has been derived assuming that most of the meridional transport is performed by the atmosphere rather than the ocean (§\[sectMeridionalTransport\]).
This is a reasonable assumption in the Earth climate regime, where the atmosphere contributes 78% of the total transport in the Northern Hemisphere and 92% in the Southern Hemisphere at the latitude of maximum poleward transport [@Trenberth01]. In order to assess the importance of the ocean contribution in different planetary regimes one needs to run GCM simulations featuring the ocean component. This is a difficult task because the ocean circulation is extremely dependent on the [*detailed*]{} distribution of the continents and because the time scale of ocean response is much longer than that of the atmosphere. As a result, one should run GCMs with a detailed description of the geography for a large number of orbits in order to include the ocean transport in the modelization of exoplanets. With this type of climate simulation it would be impossible to perform an exploratory study of exoplanet surface temperature, which is the aim of our model. Moreover, the choice of a [*detailed*]{} description of the continental distribution in exoplanets is completely arbitrary. It is therefore desirable to find simplified algorithms able to include the ocean transport in zonal models, such as the [ESTM]{}. To this end, one should perform 3D numerical experiments aimed at investigating how the energy transport is partitioned between the atmosphere and the ocean in a variety of planetary conditions. Preliminary work of this type suggests that the energy transport of wind-driven ocean gyres[^7] varies in a roughly similar fashion to the energy transport of the atmosphere as external parameters vary [@Vallis09]. The existence of mechanisms of compensation that regulate the relative contribution of the atmosphere and the ocean to the [*total*]{} transport [@Bjerknes64; @Shaffrey06; @vanderSwaluw07; @Lucarini11] may also help build a simplified description of the atmosphere/ocean transport.
In the case of the Earth, we note that the [*total*]{} transport is remarkably similar in the Southern and Northern hemisphere (see Fig. \[annualLatProfiles\]) in spite of significant differences between the two hemispheres in terms of the relative contribution of the ocean and atmosphere [e.g. @Trenberth01 Fig. 7]. ![image](fig2a.pdf){width="7.5cm"} ![image](fig2b.pdf){width="7.5cm"} The vertical transport \[sectCRM\] ---------------------------------- The outgoing longwave radiation and the top-of-atmosphere albedo are parametrized using single atmospheric column calculations. In the present version of the [ESTM]{}, the single column calculations are performed with standard radiation codes developed at the National Center for Atmospheric Research (NCAR), as part of the Community Climate Model (CCM) project NCAR-CCM [@Kiehl98ccm3]. To access these codes we use the set of routines CliMT [@Pierrehumbert10; @Caballero12]. The CCM code employs an Earth-like atmospheric composition, with the possibility to change the amount of non-condensable greenhouse gases (i.e. CO$_2$ and CH$_4$). We adopt $p\mathrm{CO}_2=380$ppmv and $p\mathrm{CH}_4 =1.8$ppmv as the reference values for the Earth’s model. These values can be changed as long as they remain in trace abundances, as in the case of the Earth. The relative humidity, $q$, is fixed to limit the huge amount of calculations and the dimensions of the tables described below. We adopt $q=0.6$, a value consistent with the global relative humidity measured on Earth. A low effective humidity ($q\sim0.6$) is predicted self-consistently by 3D dynamic climate models as a result of subsidence in the Hadley circulation [e.g. @Ishiwatari02]. Adoption of saturated water vapor pressure ($q=1$) tends to underestimate the OLR at high temperatures, leading to excessive heating of the planet.
### Outgoing long-wavelength radiation We use a column radiation model scheme for a cloud-free atmosphere to calculate the Outgoing Long-wavelength Radiation (OLR), i.e. the thermal infrared emission that cools the planet. The OLR calculations are repeated a large number of times in order to cover a broad interval of surface temperature, $T$, background pressure, $p$, gravity acceleration, $g$, and partial pressure of non-condensable greenhouse gases. The results of these calculations are stored in tables OLR=OLR$(T,p,g,p\mathrm{CO}_2,p\mathrm{CH}_4)$. In the course of the simulation, these tables are interpolated at the zonal and instantaneous value of $T=T(\varphi,t)$. The long-wavelength forcing of the clouds is subtracted at this stage, taking into account the zonal cloud coverage, as we explain in §\[sectClouds\]. The total CPU time required to cover the parameter space $(T,p,g,p\mathrm{CO}_2,p\mathrm{CH}_4)$ is relatively large. However, once the tables are built up, the simulations are extremely fast. ### Incoming short-wavelength radiation \[sectTOAalbedo\] The top-of-atmosphere albedo, $A$, is calculated with the CCM code, to take into account the transfer of short-wavelength stellar photons in the planet atmosphere. In each atmospheric column we calculate the fraction of stellar photons that is reflected back in space for different values of $T$, $p$, $g$, $p\mathrm{CO_2}$, surface albedo, $a_s$, and zenith distance of the star, $Z$. In practice, for each set of values ($g$, $p\mathrm{CO}_2$, $p\mathrm{CH}_4$), we calculate the temperature and pressure dependence of $A$. Then, for each set of values $(T,p,g,p\mathrm{CO_2},p\mathrm{CH}_4)$ the calculations are repeated to cover the complete intervals of surface albedo, $0 < a_\mathrm{s} < 1$, and zenith distance, $0^\circ < Z < 90^\circ$. The results of these calculations are stored in multidimensional tables. 
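At run time, both the OLR and the top-of-atmosphere albedo lookups reduce to multilinear interpolation of the precomputed tables. As a minimal two-dimensional sketch (the actual tables span $(T,p,g,p\mathrm{CO}_2,p\mathrm{CH}_4)$ and more; the function name and interface are ours), a bilinear lookup with clamped indices at the grid edges can be written as:

```python
from bisect import bisect_right

def interp_bilinear(table, T_grid, p_grid, T, p):
    """Bilinear interpolation of a precomputed table at (T, p).
    table[i][j] holds the tabulated value at (T_grid[i], p_grid[j]);
    outside the grid the nearest cell is extrapolated linearly."""
    i = min(max(bisect_right(T_grid, T) - 1, 0), len(T_grid) - 2)
    j = min(max(bisect_right(p_grid, p) - 1, 0), len(p_grid) - 2)
    tT = (T - T_grid[i]) / (T_grid[i + 1] - T_grid[i])
    tp = (p - p_grid[j]) / (p_grid[j + 1] - p_grid[j])
    return ((1 - tT) * (1 - tp) * table[i][j]
            + tT * (1 - tp) * table[i + 1][j]
            + (1 - tT) * tp * table[i][j + 1]
            + tT * tp * table[i + 1][j + 1])
```

Because the interpolation is exact for functions linear in each coordinate, the accuracy of the lookup is controlled entirely by the grid resolution chosen when the tables are built.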
In the course of the [ESTM]{} simulations these tables are interpolated to calculate $A$ as a function of the zonal and instantaneous values of $(T,p,g,p\mathrm{CO_2},a_\mathrm{s},Z)$. Each single column calculation of $A$ is relatively fast, compared to the corresponding calculation of $I$. However, due to the necessity of covering a larger parameter space, the preparation of the tables $A=A(T,p,g,p\mathrm{CO_2},a_\mathrm{s},Z)$ requires a comparable CPU time. ### Caveats \[sectCCMcaveats\] The CCM calculations that we use include pressure broadening [@Kiehl98ccm3 and refs. therein], but not collision-induced absorption. As a result, the model may underestimate the atmospheric absorption at the highest values of pressure. To avoid physical conditions not considered in the calculations we limit the surface pressure at $p \la 10$bar. The calculations are valid for a solar-type spectral distribution. The spectral type of the central star affects the vertical transport because of the wavelength dependence of the atmospheric albedo [e.g., @Selsis07]. The present version of the [ESTM]{} should be applied to planets orbiting stars with spectral distributions not very different from the solar one. Surface and cloud properties \[sectSurfaceProperties\] ------------------------------------------------------ ### Zonal coverage of oceans, lands, ice and clouds \[sectCoverage\] The zonal coverage of oceans is a free parameter, $f_o$, that also determines the fraction of continents, $f_l=1-f_o$. In this way, the planet geography is specified in a schematic way by assigning a set of $f_o$ values, one for each zone. The zonal coverage of ice and clouds is parametrized using algorithms calibrated with Earth experimental data. Following WK97, the zonal coverage of ice is a function of the mean diurnal temperature, $$\begin{aligned} f_i (T)= \max \left\{ 0, \left[ 1 - e^{ (T-273.15\,\mathrm{K} )/ 10\,\mathrm{K} } \right] \right\} . 
\label{fice}\end{aligned}$$ One problem with this formulation is that the ice melts completely and instantaneously as soon as $T > 273.15$K. To minimize this effect, we introduced an algorithm that mimics the formation of permanent ice when a latitude zone is below the freezing point for more than half the orbital period. In this case, we adopt a constant ice coverage for the full orbit, $f_i=f_i (\overline{T})$, where $\overline{T}$ is the mean [*annual*]{} zonal temperature. As far as the clouds are concerned, we adopt specific values of zonal coverage for clouds over oceans and continents. The dependence of the cloud coverage on the type of underlying surface has long been known [e.g. @Kondratev69] and has been quantified in recent studies [e.g. @Sanroma12; @Stubenrauch13]. Based on the results obtained by @Sanroma12, we adopt $0.70$ and $0.60$ for the cloud coverage over oceans and lands, respectively. In this way, the reference Earth model (Appendix B) predicts a mean annual global cloud coverage ${{\textstyle <}}f_{c,\circ} {{\textstyle >}}=0.67$, in excellent agreement with most recent Earth data [@Stubenrauch13]. With our formalism the cloud coverage is automatically adjusted for planets with cloud properties similar to those of the Earth, but different fractions of continents and oceans. Since the coverage of ice, $f_i$, depends on the temperature, the model simulates the feedback between temperature and albedo. ### Cloud radiative properties \[sectClouds\] The albedo and infrared absorption of the clouds have cooling and warming effects on the planet surface, respectively. Even with specifically designed 3D models it is hard to predict which of these two opposite effects dominates. The single-column radiative calculations used in studies of habitability usually assume cloud-free radiative transfer and tune the results by adjusting the albedo [@Kasting88; @Kasting93; @Kopparapu13; @Kopparapu14].
The approach that we adopt with the [ESTM]{} is to parametrize the albedo and the long-wavelength forcing of the clouds assuming that their global properties are similar to those measured on the present-day Earth. Following WK97, we express the albedo of the clouds as $$a_c = \alpha + \beta Z \label{cloudAlbedo}$$ where the parameters $\alpha$ and $\beta$ are tuned to fit Earth experimental data of cloud albedo as a function of stellar zenith distance [@Cess76]. For clouds over ice, we adopt the same albedo as frozen surfaces (see Table \[tabFiducialPar\]). To take into account the long wavelength forcing of the clouds, we subtract ${{\textstyle <}}\mathrm{OLR} {{\textstyle >}}_\mathrm{cl,\circ} \, (f_c/{{\textstyle <}}f_{c,\circ} {{\textstyle >}})$ from the clear-sky OLR obtained from the radiative calculations, where ${{\textstyle <}}\mathrm{OLR} {{\textstyle >}}_\mathrm{cl,\circ}= 26.4$ W m$^{-2}$ is the mean global long wavelength forcing of the clouds on Earth [@Stephens12], $f_c$ is the mean cloud coverage in each latitude zone, and ${{\textstyle <}}f_{c,\circ} {{\textstyle >}}=0.67$ is the mean global cloud coverage of the reference Earth model. The fact that the [ESTM]{} accounts for the mean radiative properties of the clouds is an improvement over classic EBMs, but one should be aware that the adopted parameterization is only valid for planets with global cloud properties similar to those of the Earth. This is a critical point because the cloud radiative properties may change with planetary conditions, as suggested by 3D simulations of terrestrial planets [e.g. @Leconte13; @Yang13]. To some extent, we can simulate this situation by changing the [ESTM]{} cloud-forcing parameters. An example of this exercise is provided in Fig. \[mapsOLRclouds\]. If the predictions of 3D experiments become more robust, it could be possible in the future to introduce a new [ESTM]{} recipe for expressing the cloud forcing as a function of relevant planetary parameters.
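The two cloud corrections above can be sketched in a few lines. The numerical values of $\alpha$ and $\beta$ are placeholders of roughly the WK97 magnitude, not the tuned ESTM values, and we assume $Z$ is expressed in radians; the forcing constants are those quoted in the text.

```python
# Placeholder cloud-albedo parameters (assumption; the ESTM values are
# tuned to the Cess 1976 data and may differ).
ALPHA, BETA = -0.078, 0.65
OLR_CLOUD_EARTH = 26.4   # W m^-2, mean global LW cloud forcing (Stephens et al. 2012)
FC_EARTH = 0.67          # mean global cloud coverage of the reference Earth model

def cloud_albedo(Z):
    """Cloud albedo as a linear function of stellar zenith distance Z (radians)."""
    return ALPHA + BETA * Z

def olr_with_cloud_forcing(olr_clear, f_c):
    """Subtract the zonal long-wavelength cloud forcing from the clear-sky OLR,
    scaled by the zonal cloud coverage f_c."""
    return olr_clear - OLR_CLOUD_EARTH * (f_c / FC_EARTH)
```

With the Earth-mean coverage $f_c=0.67$ the correction is exactly the global value of 26.4 W m$^{-2}$; zones with more clouds are forced proportionally more.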
### The surface albedo \[sectionAlbedo\] The mean surface albedo of each latitude zone is calculated by averaging the albedo of each type of surface present in the zone, weighted according to its zonal coverage. For the surface albedo of continents and ice we adopt the fiducial values listed in Table \[tabFiducialPar\]. The albedo of the oceans is calculated as a function of the stellar zenith distance, $Z$, using an expression calibrated with experimental data [@Briegleb86; @Enomoto07] $$\begin{aligned} \lefteqn{ a_o = { 0.026 \over (1.1 \, \mu^{1.7} + 0.065)} + {} } \nonumber\\ & & {} + 0.15 (\mu-0.1) \, (\mu-0.5) \, (\mu-1.0) ~,\end{aligned}$$ where $\mu = \cos Z$. Clouds are also treated as surface features, with zonal coverage and albedo parametrized as explained above (§\[sectClouds\]). ### Thermal capacity of the surface \[sectThermalCapacity\] The term $C$ is calculated by averaging the thermal capacity per unit area of each type of surface present in the corresponding zone according to its zonal coverage (§\[sectCoverage\]). The parameters used in these calculations are representative of the thermal capacities of oceans and solid surface (Table \[tabFiducialPar\]). For the reference Earth model the ocean contribution is calculated assuming a 50 m, wind-mixed ocean layer[^8] [@WK97; @Pierrehumbert10]. The atmospheric contribution is calculated as $$\left( \frac{ C_\mathrm{atm}}{C_{\mathrm{atm},\circ} } \right) = \left( \frac{ c_p}{c_{p,\circ} } \right) \, \left( \frac{ p}{p_\circ} \right) \, \left( \frac{ g_\circ}{g} \right)~~~, \label{Catm}$$ where $c_p$ and $p$ are the specific heat capacity and total pressure of the atmosphere, respectively [@Pierrehumbert10]. The atmospheric term enters as an additive contribution to the ocean and solid surface terms. Its impact is generally small, the ocean contribution being the dominant one.
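Both the zonal surface albedo and the zonal thermal capacity are coverage-weighted averages of the same form. A minimal sketch follows; the fiducial land/ice albedos are placeholders rather than the Table values, clouds are omitted for brevity, and letting ice replace the underlying surface in proportion $f_i$ is our simplifying assumption.

```python
import math

def ocean_albedo(Z):
    """Ocean albedo vs stellar zenith distance Z (radians), from the
    calibrated expression quoted in the text (Briegleb et al. 1986 type)."""
    mu = math.cos(Z)
    return (0.026 / (1.1 * mu**1.7 + 0.065)
            + 0.15 * (mu - 0.1) * (mu - 0.5) * (mu - 1.0))

def ice_fraction(T):
    """Zonal ice coverage, Eq. (fice): f_i = max{0, 1 - exp[(T - 273.15 K)/10 K]}."""
    return max(0.0, 1.0 - math.exp((T - 273.15) / 10.0))

def zonal_surface_albedo(T, Z, f_o, a_land=0.2, a_ice=0.6):
    """Coverage-weighted zonal surface albedo over ocean, land and ice."""
    f_i = ice_fraction(T)
    return (1.0 - f_i) * (f_o * ocean_albedo(Z) + (1.0 - f_o) * a_land) + f_i * a_ice

def catm_scaling(cp=1.0, p=1.0, g=1.0):
    """Atmospheric thermal-capacity scaling of Eq. (Catm), in Earth units:
    C_atm/C_atm_earth = (c_p/c_p_earth) (p/p_earth) (g_earth/g)."""
    return cp * p / g
```

The ice term makes the zonal albedo a strong function of temperature, which is the ingredient behind the ice-albedo feedback discussed in §\[sectCoverage\].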
The strong thermal inertia of the oceans implies that the mean zonal $C$ has an “ocean-like” value even when the zonal fraction of lands is comparable to that of the oceans [@WK97]. This weak point of the longitudinally-averaged model can be bypassed by adopting an idealized orography with continents covering all longitudes (see §\[sectOceanLand\]). The insolation term $S$ ----------------------- The zonal, instantaneous stellar radiation $S=S(\varphi,t)$ is calculated from the stellar luminosity, the keplerian orbital parameters and the inclination of the planet rotation axis. The model calculates $S$ for eccentric orbits as well. Details on the implementation of $S$ can be found in Paper I [@Vladilo13 §A.5]. At variance with that paper, the [ESTM]{} also takes into account the vertical transport of short-wavelength photons (see §\[sectTOAalbedo\]). Limitations of the model ------------------------ In spite of the above-mentioned improvements over classic EBMs, the adoption of the zonal energy balance formalism at the core of the [ESTM]{} leads to well known limitations intrinsic to EBMs. One is that zonally averaged models cannot be applied to tidally-locked planets that always expose the same side to their central star: such cases require specifically designed models [e.g., @Kite11; @Menou13; @Mills13; @Yang13]. Also, it should be clear that the [ESTM]{} does not track climatic effects that develop in the vertical direction, even though the atmospheric response is adjusted according to latitudinal and seasonal variations of $T$ and $Z$. In spite of these limitations, the EBM at the core of our climate tools provides the flexibility that is required when many runs are needed or when one wants to compare the impact of different parameters unconstrained by the observations. At the present time this is still unfeasible with GCMs and even with Intermediate Complexity Models.
While GCMs are invaluable tools for climate change studies on Earth, they are heavily parameterized on current Earth conditions, and their use in significantly different conditions is a matter of concern. In particular, the paper by @Stevens13 raised serious concerns about the use of GCMs in “unconstrained” situations such as those encountered in habitability studies. Model calibration and validation ================================ The [ESTM]{} is implemented in two stages. First, a reference Earth model is built up by tuning the parameters to match the present Earth climate properties (see Appendix B). Then we use results obtained from 3D climate experiments to tune parameters or validate algorithms that are meant to be applied in Earth-like planets. Here we present a validation test of the algorithms that describe the meridional transport. This test is a concrete example of how results obtained by GCMs can be used to validate the model. Validation of the meridional transport \[sectValidation\] --------------------------------------------------------- To perform this test we used a study of the atmospheric dynamics of terrestrial exoplanets performed by @KS14. These authors employed a moist atmospheric general circulation model to test the response of the atmospheric dynamics over a wide range of planet parameters. Specifically, they used an idealized aquaplanet with surface covered by a uniform slab of water 1 m thick; only vapor-liquid phase change was considered; the albedo was fixed at 0.35 and insolation was imposed equally between hemispheres; the remaining parameters were set to mimic an Earth-like climate. To validate the [ESTM]{} with the results found by @KS14 we modified the Earth reference model as follows. The axis obliquity was set to zero; the temperature-ice feedback was excluded; the albedo was fixed at $A=0.35$; the fraction of oceans was set to 1, adopting a thermal capacity corresponding to a mixing layer 1 m thick.
With this idealized planet model we performed several sets of simulations, varying the planet rotation rate, surface flux, radius, and surface pressure. To validate the [ESTM]{} we analyze the mean annual equator-to-pole temperature difference, ${\Delta T_\text{EP}}$, which is critical for a correct estimate of the latitude temperature profile and of the surface habitability. The results of the tests are shown in Fig. \[figKS14validation\], where we compare the ${\Delta T_\text{EP}}$ values predicted by the 3D model (diamonds) with those obtained from the [ESTM]{} (solid lines). We also plot the predictions of a “dry” transport model (dashed lines) obtained by setting $\Lambda=0$ in Eq. (\[eq:DtermB\]). Finally, for the sake of comparison with previous EBMs, we plot the results obtained from a “basic [ESTM]{}” without moist term ($\Lambda=0$) and without diabatic forcing term[^9] (dotted lines). In using this “basic” model, we test some alternative scaling laws for the parameterization of the rotation rate, surface pressure and radius, as we explain below. ![image](fig3a.pdf){width="7.cm"} ![image](fig3b.pdf){width="7.cm"} ![image](fig3c.pdf){width="7.cm"} ![image](fig3d.pdf){width="7.cm"} ### Rotation rate In this experiment all parameters were fixed, with the exception of the planet rotation rate, $\Omega$, that was gradually increased from 1/10 to 10 times the Earth value, $\Omega_\oplus$. The results of this test are shown in the top-left panel of Fig. \[figKS14validation\]. One can see that the [ESTM]{} and GCM results show a similar trend, with a good quantitative agreement at $\Omega \ga 0.3 \, \Omega_\oplus$, but not at low rotation rate. This result is expected since our parameterization is appropriate to simulate planets with horizontal transport dominated by mid-latitude eddies, i.e. planets with relatively high rotation rate (see §\[sectMeridionalTransport\]). 
The dotted line in this figure shows the results of the basic model obtained by replacing the term $\Omega^{-4/5}$ in Eq. (\[eq:SLdry\]) with the stronger dependence $\Omega^{-2}$ adopted in previous work [e.g. @WK97; @Vladilo13]. One can see that this strong dependence on rotation rate is not supported by the 3D model, while the more moderate dependence $D \propto \Omega^{-4/5}$ adopted in the [ESTM]{} yields a much better agreement with the GCM experiments. ### Stellar flux In the top-right panel of Fig. \[figKS14validation\] we show the results obtained by varying the insolation from 100 to 2000 Wm$^{-2}$, i.e. from 0.07 to 1.47 times the present-day Earth’s insolation. The behavior predicted by the 3D model is bimodal, with a rise of ${\Delta T_\text{EP}}$ up to an insolation of $\simeq 800$ Wm$^{-2}$ and a decline at higher values of stellar flux. According to @KS14 the decline is triggered by the rise of the moist transport efficiency resulting from the increase of temperature and water vapor content. The moist [ESTM]{} is able to capture this bimodal behavior, even though a reasonable agreement with the 3D experiments is only found in a range of insolation $\pm 25$% around the present-day Earth value (shaded area in the figure). The dry model (dashed line) is unable to capture the bimodal behavior of ${\Delta T_\text{EP}}$ versus flux. The basic model is even more discrepant (dotted line). ### Surface pressure or atmospheric columnar mass In the bottom-left panel of Fig. \[figKS14validation\] we show the results obtained by varying the surface pressure $p$ of the idealized aquaplanet from 0.2 to 20 bar. Since the surface gravity is not varied, this experiment is equivalent to varying the atmospheric columnar mass[ In this experiment, @KS14 adopted a constant optical depth of the atmosphere to focus on horizontal transport, rather than vertical transfer effects.
For the sake of comparison with their experiment, we used a constant value of atmospheric columnar mass in the OLR and TOA-albedo calculations, while changing $p/g$ in the diffusion term. ]{}, $p/g$, from 0.2 to 20 times that of the Earth. Theoretical considerations indicate that the efficiency of the horizontal transport must increase with increasing $p/g$ \[e.g. Eq. (\[eq:fluxC\])\], and equator-pole temperature differences should decrease as a result. The 3D model predicts a monotonic decrease of ${\Delta T_\text{EP}}$, in line with this expectation. However, the decrease is milder than expected by the basic model with a simple law $D \propto p/g$ (dotted line). The models with diabatic forcing (solid and dashed lines) predict a more moderate decrease, $D \propto (p/g)^{2/5}$ \[Eq. (\[eq:SLdry\])\], and are in much better agreement with the results of the 3D experiments. The agreement of the moist [ESTM]{} (solid line) is remarkable in the range of high columnar mass. ### Planet radius or mass In this experiment all planetary parameters, including the columnar mass $p/g$, are fixed while changing the planet radius. Assuming a constant mean density, $\rho=\rho_\oplus$, this is equivalent to scaling the planet mass as $M \propto R^3$. The results are shown in the bottom-right panel of Fig. \[figKS14validation\], where 3D models predict an increase of ${\Delta T_\text{EP}}$ with increasing radius and mass, indicating that the horizontal transport becomes less efficient in larger planets. This is in line with theoretical expectations which suggest that the transport coefficient decreases with increasing radius, possibly with a quadratic law \[e.g. Eq. (\[eq:DtermA\])\]. However, the increase of ${\Delta T_\text{EP}}$ appears to be too sharp if we adopt the basic model with $D \propto R^{-2}$ (dotted line). The models with diabatic forcing (solid and dashed lines) predict a moderate decrease, $D \propto R^{-6/5}$ \[Eq. (\[eq:Ddry\])\], in line with the 3D predictions.
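The scaling behaviour probed by these tests can be collected in a few lines. The exponents are those quoted above for the dry diabatic-forcing model ($D \propto \Omega^{-4/5} (p/g)^{2/5} R^{-6/5}$) and for the basic WK97-style model ($D \propto \Omega^{-2} (p/g) R^{-2}$); treating them as a single multiplicative relative scaling is our simplification of Eqs. (\[eq:SLdry\]) and (\[eq:Ddry\]).

```python
def dry_transport_scaling(omega=1.0, colmass=1.0, radius=1.0):
    """Relative diffusion coefficient D/D_earth for the dry diabatic-forcing
    model: D ~ Omega^(-4/5) (p/g)^(2/5) R^(-6/5). Inputs in Earth units."""
    return omega**-0.8 * colmass**0.4 * radius**-1.2

def classic_ebm_scaling(omega=1.0, colmass=1.0, radius=1.0):
    """Stronger dependences adopted in earlier EBMs (WK97-style), for comparison:
    D ~ Omega^(-2) (p/g) R^(-2)."""
    return omega**-2.0 * colmass * radius**-2.0
```

At ten times the Earth rotation rate the classic law suppresses the transport by a factor of 100, against roughly a factor of 6 for the milder ESTM exponent, which is what drives the disagreement of the dotted lines with the GCM points.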
In the range of masses typical of terrestrial planets (shaded area in the bottom-right panel of Fig. \[figKS14validation\]) the predictions of the [ESTM]{} are very similar to those obtained by @KS14. ![image](fig4a.pdf){width="7.5cm"} ![image](fig4b.pdf){width="7.5cm"} Applications ============ After the calibration and validation, we apply the model to explore the dependence of $T(\varphi,t)$ and the mean global surface temperature[^10], ${\widetilde{T}}$, on a variety of planet parameters. At variance with the validation tests, we now consider all the features of the model, including the ice-albedo feedback. In Appendix C we present simulations of idealized Earth-like planets. Here we describe a test study of exoplanet habitability. Exoplanets ---------- The modelization of the surface temperature of exoplanets is severely constrained by the limited amount of observational data. Typically, one can measure the stellar and orbital parameters and a few planetary quantities, such as the radius and/or mass. From the stellar and orbital data one can estimate the planet insolation and its seasonal evolution. From the radius and mass one can estimate the surface gravity, which enters the parameterization of the atmospheric columnar mass. Unfortunately, many planet quantities that are required for the modelization are currently not observable. These include the atmospheric composition[^11], surface pressure, ocean/land distribution, axis obliquity and rotation period. Taking advantage of the flexibility of the [ESTM]{}, we can perform a fast exploration of the space of the unknown quantities, treating them as free parameters. From the application of this methodology we can assess the relative climate impact of the planet quantities that are not measurable. In addition, we can constrain the ranges of parameter values that yield habitable solutions. We show two examples of application of this methodology.
First we consider a specific exoplanet chosen as a test case, then we introduce a statistical ranking of planetary habitability. We adopt an index of habitability, ${h_\text{lw}}$, based on the liquid water criterion[^12]. ![ Average equator-pole temperature difference, ${{\textstyle <}}{\Delta T_\text{EP}}{{\textstyle >}}$, obtained from [ESTM]{} simulations of Kepler-62e (§\[sectKepler62e\]), plotted as a function of surface pressure, $p$. Each curve is calculated at constant $p$CO$_2$, as specified in the legend. []{data-label="fig2Kepler62e"}](fig5.pdf){width="8.2cm"} ### Kepler-62e as a test case \[sectKepler62e\] The test-case exoplanet was chosen using three criteria. The first is that the planet should be of terrestrial type, i.e. rocky and without an extended atmosphere, in order to be suitable for the application of the [ESTM]{}. We used the radius for a preliminary characterization of the planet, since evidence is accumulating for the existence of a gradual transition, correlated with radius, between planets of terrestrial type and planets with rocky cores but extended gas envelopes [e.g. @Wu13; @Marcy14]. We restricted our search to planets with $R \la 1.7 R_\oplus$, the threshold for terrestrial planets found in a statistical study of size, host-star metallicity and orbital period [@Buchhave14]. As a second criterion, we required the orbital semimajor axis to be larger than the tidal lock radius, since the [ESTM]{} cannot be applied to tidally locked planets. Finally, given the extreme dependence of habitability on insolation (see e.g. Fig. \[mapsFlux\]), we selected planets with an insolation within $\pm 50\%$ of the present-day Earth value. By querying the Exoplanet Orbit Database [@Wright11] at exoplanets.org, we found that only Kepler-62e [@Borucki13K62] satisfies the above criteria. The radius, $R=1.61 R_\oplus$, and orbital period, $P=122\,$d, suggest that Kepler-62e is probably of terrestrial type [see @Buchhave14 Fig. 2].
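The three selection criteria can be collected into a simple filter (the function name and interface are ours; the thresholds are those quoted in the text):

```python
def terrestrial_candidate(radius, semimajor_axis, tidal_lock_radius, insolation):
    """Selection criteria for ESTM test-case planets.
    radius and insolation in Earth units, distances in AU:
    R <= 1.7 R_earth (Buchhave et al. 2014 threshold),
    a > tidal lock radius, insolation within +/-50% of Earth's."""
    return (radius <= 1.7
            and semimajor_axis > tidal_lock_radius
            and 0.5 <= insolation <= 1.5)
```

Kepler-62e, with $R=1.61\,R_\oplus$, $a=0.427$ AU against $r_\mathrm{tl}=0.31$ AU, and an insolation 1.19 times the Earth's, passes all three cuts.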
Its insolation is only 19% higher than the Earth’s value, and its semimajor axis, $a=0.427$AU, is larger than the tidal lock radius[^13], $r_\mathrm{tl}=0.31$AU. To run the [ESTM]{} simulations of Kepler-62e we adopted at face value the radius, semimajor axis, eccentricity and stellar flux provided by the observations [@Borucki13K62]. Unfortunately, only a loose upper limit ($M<36\,M_\oplus$) is available for the mass, so that the surface gravity $g$ is poorly constrained at the present time. For illustrative purposes, we adopt $g=1.5\,g_\oplus$ ($M=3.9\,M_\oplus$), corresponding to a mean density $5.1$gcm$^{-3}$, similar to that of the Earth ($\rho_\oplus=5.5$gcm$^{-3}$). As far as the atmosphere is concerned, we vary the surface pressure in the range $p \in (0.03,8)$bar and the CO$_2$ partial pressure in the range $p$CO$_2$/($p$CO$_2$)$_\oplus \in (1,100)$. We adopt 3 representative values of rotation rate, $\Omega/\Omega_\oplus \in (0.5,1,2)$, axis obliquity, $\epsilon \in (0^\circ,22.5^\circ,45^\circ)$, and ocean coverage, $f_o \in (0.5,0.75,1.0)$. For the remaining parameters we adopt the Earth’s reference values. For each value of CO$_2$ partial pressure we run simulations covering all possible combinations of background pressure, rotation rate, axis obliquity and ocean coverage listed above. Part of the results of these simulations are shown in Figs. \[fig1Kepler62e\] and \[fig2Kepler62e\]. In Fig. \[fig1Kepler62e\] we plot the mean global temperature, ${\widetilde{T}}$, obtained for two different values of $p$CO$_2$, specified in the legends. At each value of $p$, we show the values of ${\widetilde{T}}$ obtained from all possible combinations of $\Omega$, $\epsilon$, and $f_o$. The typical scatter of ${\widetilde{T}}$ due to a random combination of these 3 parameters is $\simeq$ 10-20K. On top of this scatter, the most remarkable feature is a positive trend of ${\widetilde{T}}$ versus $p$ extending over an interval of $\approx 60$K.
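The surface gravity and mean density adopted above follow from the assumed mass through elementary relations, and can be checked in a few lines (constants and interface ours):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

def gravity_and_density(mass_me, radius_re):
    """Surface gravity (m s^-2) and mean density (g cm^-3) for a planet of
    given mass and radius in Earth units, assuming a homogeneous sphere."""
    M = mass_me * M_EARTH
    R = radius_re * R_EARTH
    g = G * M / R**2
    rho = M / (4.0 / 3.0 * math.pi * R**3) / 1000.0  # kg m^-3 -> g cm^-3
    return g, rho
```

For $M=3.9\,M_\oplus$ and $R=1.61\,R_\oplus$ this gives $g \approx 14.8$ m s$^{-2} \approx 1.5\,g_\oplus$ and $\rho \approx 5.1$ g cm$^{-3}$, reproducing the values quoted above.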
Within the limits of application of the model, these results indicate that an uncertainty in $p$ within the range expected for terrestrial planets[^14] has stronger effects on ${\widetilde{T}}$ than uncertainties of rotation rate, axis obliquity and ocean coverage. In fact, variations of $p$ have strong effects both on $T(\varphi,t)$ and ${\widetilde{T}}$ because they are equivalent to variations of atmospheric columnar mass, $p/g$, which affect both the latitudinal transport (i.e. the surface temperature distribution), and the radiative transfer (i.e. the global energy budget). The results shown in Fig. \[fig1Kepler62e\] constrain the interval of $p$ that allows Kepler-62e to be habitable. At high $p$, the habitability is limited by the rise of ${\widetilde{T}}$, which eventually leads to a runaway greenhouse instability. The red diamonds in Fig. \[fig1Kepler62e\] indicate cases where the water vapor column exceeds the critical value that we tentatively adopt as a limit for the onset of such instability (§\[sectClimateSimulations\]). At low $p$, two factors combine to limit the habitability. One is the onset of large temperature excursions and the other is the decrease of the water boiling point (dotted lines in Fig. \[fig1Kepler62e\]). As a result, the fraction of planet surface outside the liquid water temperature range becomes larger at low $p$. To highlight this effect, we have scaled the size of the symbols in Fig. \[fig1Kepler62e\] according to the value of ${h_\text{lw}}$. One can see that ${h_\text{lw}}$ tends to become smaller at low $p$, especially when ${\widetilde{T}}$ approaches the temperature regime where the ice-albedo feedback becomes important. In some cases, not shown in the figure, the planet undergoes a complete snowball transition and the habitability becomes zero. The effect of temperature excursions is shown in Fig.
\[fig2Kepler62e\], where we plot as a function of $p$ the average value of ${\Delta T_\text{EP}}$ obtained from all possible combinations of $\Omega$, $\epsilon$, and $f_o$. One can see that the temperature excursions become large with decreasing level of CO$_2$; this happens because at low CO$_2$ the temperature is sufficiently low for the development of the ice-albedo feedback, and because the lowered IR optical depth of the atmosphere is less effective in reducing the effect of the geometrically-induced meridional insolation variation at the surface. Fig. \[fig1Kepler62e\] shows that the equilibrium temperature of Kepler-62e, $T_\text{eq}=270 \pm 15$K [@Borucki13K62 dashed line], lies at the lower end of the predicted ${\widetilde{T}}$ values. The difference ${\widetilde{T}}-T_\mathrm{eq}$ increases with atmospheric columnar mass because the estimate of $T_\mathrm{eq}$ does not consider the greenhouse effect. ![ Ranking index of habitability, $r_h$, obtained from [ESTM]{} simulations of Kepler-62e and an Earth twin (§\[sectRanking\]) plotted as a function of surface pressure, $p$. Symbols for Kepler-62e as in Fig. \[fig2Kepler62e\]. Crossed circles: Earth twin. []{data-label="figRanking"}](fig6.pdf){width="8.2cm"} ### Statistical ranking of planetary habitability \[sectRanking\] By performing a large number of simulations for a wide combination of parameters we can quantify the habitability in a statistical way. To illustrate this possibility with an example we consider again the test case of Kepler-62e. We perform a statistical analysis of the results obtained from all combinations of $\Omega$, $\epsilon$, and $f_o$ values adopted at a given $p$. We tag as “non habitable” the cases with a snowball transition and those with a critical value of water vapor. For each set of parameters {$\Omega$, $\epsilon$, $f_o$} we count the number of cases that are found to be habitable, $n_h$, over the total number of simulations, $n_t$. 
From this we calculate the fraction $\psi_h=n_h/n_t$, which represents the probability for the planet to be habitable if the adopted parameter values are equally plausible a priori. We then call ${{\textstyle <}}{h_\text{lw}}{{\textstyle >}}=(1/n_h)\sum_{i=1}^{n_h} ({h_\text{lw}})_i$ the mean surface habitability of the $n_h$ sets that yield a habitable solution. At this point we define a “ranking index of habitability” $$r_h \equiv \psi_h \times {{\textstyle <}}{h_\text{lw}}{{\textstyle >}}= {1 \over n_t}\sum_{i=1}^{n_h}\, ({h_\text{lw}})_i ~~. \label{HabitabilityIndex}$$ This index simultaneously accounts for the probability that the planet is habitable and the average fraction of habitable surface. As an example, in Fig. \[figRanking\] we plot $r_h$ versus $p$ for the different values of $p$CO$_2$ considered in our simulations of Kepler-62e. One can see that $r_h \simeq 1$ only in a limited range of surface pressure. At very low pressure, the index becomes lower because the fraction of habitable surface decreases and because $\psi_h$ drops below unity when a snowball transition is encountered. At high pressure $\psi_h$ drops when the water vapor limit is encountered. As an example of application, we can constrain the interval of $p$ suitable for the habitability of Kepler-62e. For $p$CO$_2$= ($p$CO$_2$)$_\oplus$ (triangles) the requirement of habitability yields the limit $p \la 3$bar. As $p$CO$_2$ increases (squares and pentagons), the upper limit becomes more stringent ($p \la 1$bar), but the planet has a higher probability of being habitable at relatively low pressure. Clearly, the index $r_h$ does not have an absolute meaning since its value depends on the choice of the set of parameters. However, by choosing a common set, the index $r_h$ can be used to rank the relative habitability of different planets. As an example, in Fig.
\[figRanking\] we plot $r_h$ versus $p$ obtained for an Earth twin[^15] with the same sets of parameters {$\Omega$, $\epsilon$, $f_o$} adopted for Kepler-62e. One can see that at $p \ga 1$bar the Earth twin (crossed circles) is more habitable than Kepler-62e for the adopted set of parameters, while at $p \la 1$bar Kepler-62e is more habitable than the Earth twin. Summary and conclusions ======================= We have assembled the ESTM set of climate tools (Figure 1) to model the latitudinal and seasonal variation of the surface temperature, $T(\varphi,t)$, on Earth-like planets. The motivation for building the ESTM is twofold. From the general point of view of exoplanet research, Earth-size planets are expected to be rather common, but difficult to characterize with experimental methods. From the astrobiological point of view, Earth-like planets are excellent candidates in the quest for habitable environments. A fast simulation of $T(\varphi,t)$ enables us to characterize the surface properties of these planets by sampling the wide parameter space representative of the physical quantities not measured by observations. The detailed modelization of the surface temperature is essential to estimate the habitability of these planets using the liquid water criterion or a proper set of thermal limits of life [e.g. @Clarke14]. The [ESTM]{} consists of an upgraded type of EBM featuring a multi-parameter description of the physical quantities that dominate the vertical and horizontal energy transport (Fig. \[figScheme1\]). The functional dependence of the physical quantities is derived using single-column atmospheric calculations and algorithms tested with 3D climate experiments. Special attention has been dedicated to improve (§\[sectMeridionalTransport\]) and validate (§\[sectValidation\]) the description of the meridional transport, a weak point of classic EBMs.
The functional dependence of the meridional transport on atmospheric columnar mass and rotation rate is significantly milder \[see Eq. (\[eq:SLdry\])\] than the one adopted in previous EBMs. The reference Earth model obtained from the calibration process is able to accurately reproduce the average surface temperature properties of the Earth and to capture the main features of the Earth albedo and meridional energy transport (Figs. \[annualLatProfiles\] and \[TempLatMaps\]). Once calibrated, the [ESTM]{} is able to reproduce the mean equator-pole temperature difference, ${\Delta T_\text{EP}}$, predicted by 3D aquaplanet models (Fig. \[figKS14validation\]). The [ESTM]{} simulations provide a fast “snapshot” of $T(\varphi,t)$ and temperature-based indices of habitability for any set of input parameters that yields a stationary solution. The planet parameters that can be changed include radius, $R$, surface pressure, $p$, gravitational acceleration, $g$, rotation rate, $\Omega$, axis tilt, $\epsilon$, ocean/land coverage and partial pressure of non-condensable greenhouse gases. The approximate limits of validity of the present version of the [ESTM]{} can be summarized as follows: $0.5 \la R/R_\oplus \la 2$, $p \la 10$bar, $0.5 \la \Omega/\Omega_\oplus \la 5$, $\epsilon \la 45^\circ$; $p$CO$_2$ and $p$CH$_4$ can be changed, but should remain in trace abundances with respect to an Earth-like atmospheric composition. The requirement of a relatively high rotation rate is inherent to the simplified treatment of the horizontal transport. However, the ESTM can be applied to explore the early habitability of slowly rotating planets that had an initially fast rotation. We have performed [ESTM]{} simulations of idealized Earth-like planets to evaluate the impact on the planet temperature and habitability resulting from variations of rotation rate, insolation, atmospheric columnar mass, radius, axis obliquity, ocean/land distribution, and long-wavelength cloud forcing (Figs. 
\[mapsRotation\] - \[mapsOLRclouds\]). Most of these quantities can easily induce $\sim 30$-$40$% changes of the mean annual habitability for parameter variations well within the range expected for terrestrial planets. Variations of insolation within $\pm 10\%$ of the Earth value can affect the habitability by up to $100\%$. The land/ocean distribution mainly affects the seasonal habitability, rather than its mean annual value. The impact of rotation rate is weaker than predicted by classic EBMs, without evidence of a snowball transition at $\Omega/\Omega_\oplus \ga 3$. A general result of these numerical experiments is that the ice-albedo feedback amplifies changes of $T(\varphi,t)$ resulting from variations of planet parameters. We have tested the capability of the [ESTM]{} to explore the habitability of a specific exoplanet in the presence of a limited amount of observational data. For the exoplanet chosen for this test, Kepler-62e, we used the stellar flux, orbital parameters and planet radius provided by the observations [@Borucki13K62], together with a surface gravity $g=1.5\,g_\oplus$ adopted for illustrative purposes. We treated the surface pressure, $p$CO$_2$, rotation rate, axis tilt and ocean coverage as free parameters. We find that ${\widetilde{T}}$ increases from $\approx 280$K to $\approx 340$K as the surface pressure increases from $p\simeq 0.03$bar to $3$bar; this trend dominates the scatter of $\simeq 10$-20K resulting from variations of rotation rate, axis tilt and ocean coverage at each value of $p$. We also find that the surface pressure of Kepler-62e should lie above $p \approx 0.05$bar to avoid the presence of a significant ice cover and below $\approx 2$bar to avoid the onset of a runaway greenhouse instability; this upper limit is confirmed for different values of $p$CO$_2$ and surface gravity. 
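The approximate validity limits quoted above lend themselves to a simple parameter check before launching a batch of simulations. The sketch below (function and argument names are ours, not part of the ESTM) encodes the ranges $0.5 \la R/R_\oplus \la 2$, $p \la 10$bar, $0.5 \la \Omega/\Omega_\oplus \la 5$ and $\epsilon \la 45^\circ$:

```python
def estm_validity_warnings(R, p, Omega, eps, R_earth=1.0, Omega_earth=1.0):
    """Return a list of warnings for parameters outside the approximate
    ESTM validity limits quoted in the text.

    R and Omega are in Earth units, p in bar, eps in degrees.
    Thresholds are the approximate values stated in the summary."""
    w = []
    if not 0.5 <= R / R_earth <= 2.0:
        w.append("radius outside ~0.5-2 R_earth")
    if p > 10.0:
        w.append("surface pressure above ~10 bar")
    if not 0.5 <= Omega / Omega_earth <= 5.0:
        w.append("rotation rate outside ~0.5-5 Omega_earth")
    if eps > 45.0:
        w.append("obliquity above ~45 deg")
    return w
```

An Earth-like configuration passes with no warnings, while a planet such as a large, slowly rotating, high-pressure world triggers all four checks.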
These results demonstrate the [ESTM]{} capability to evaluate the climate impact of unknown planet quantities and to constrain the range of values that yield habitable solutions. The test case of Kepler-62e also shows that the equilibrium temperature commonly published in exoplanet studies represents a sort of lower limit to the mean global temperature of more realistic models. We have shown that the flexibility of the [ESTM]{} makes it possible to quantify the habitability in a statistical way. As an example, we have introduced a ranking index of habitability, $r_h$, that can be used to compare the overall habitability of different planets for a given set of reference parameters (§\[sectRanking\]). For instance, we find that at $p \la 1$bar Kepler-62e is more habitable than an Earth twin for the combination of rotation rates, axis tilts and ocean fractions considered in our test, whereas the comparison favours the Earth twin at $p \ga 1$bar. The index $r_h$ can be applied to select the best potential cases of habitable exoplanets for follow-up searches of biomarkers. The results of this work indicate the level of accuracy required to estimate the surface habitability of terrestrial planets. The quality of exoplanet orbital data and host star fluxes should be improved to measure the insolation with an accuracy of $\approx 1\%$. In spite of the difficulty of characterizing terrestrial atmospheres [e.g., @Misra13], an effort should be made to constrain the atmospheric pressure, possibly within a factor of two. We thank Yohai Kaspi for providing results in advance of publication. The comments and suggestions received from an anonymous referee have significantly improved the presentation of this work. This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org. We thank Rodrigo Caballero and Raymond Pierrehumbert for suggestions concerning the use of their climate utilities. 
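To make the statistical machinery of §\[sectRanking\] concrete, the ranking index can be computed directly from a list of simulation outcomes. The following sketch (function name and array layout are ours) implements $r_h = \psi_h \times {{\textstyle <}}{h_\text{lw}}{{\textstyle >}}$ together with the equivalent single-sum form of Eq. (\[HabitabilityIndex\]):

```python
import numpy as np

def ranking_index(h_lw, habitable):
    """Ranking index of habitability, r_h = psi_h * <h_lw>.

    h_lw      : mean surface habitability of each simulated parameter
                set (n_t sets in total).
    habitable : boolean array, True where the set yielded a habitable
                stationary solution (n_h sets).
    Returns (psi_h, <h_lw>, r_h); r_h equals sum(h_lw[habitable]) / n_t.
    """
    h_lw = np.asarray(h_lw, dtype=float)
    habitable = np.asarray(habitable, dtype=bool)
    n_t = h_lw.size
    n_h = int(habitable.sum())
    psi_h = n_h / n_t                       # probability of habitability
    mean_h = h_lw[habitable].mean() if n_h else 0.0
    return psi_h, mean_h, psi_h * mean_h
```

For instance, four parameter sets of which two are habitable with $h_\text{lw}=0.8$ and $0.6$ give $\psi_h=0.5$, ${{\textstyle <}}{h_\text{lw}}{{\textstyle >}}=0.7$ and $r_h=0.35$.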
Running the simulations\[sectClimateSimulations\] ================================================= The [ESTM]{} simulations consist of a search for a stationary solution $T(\varphi,t)$ of Eq. (\[diffusionEq\]). We solve the spatial derivatives with the Euler method, with the boundary condition that the flux of horizontal heat into the pole vanishes. The temporal derivatives are solved with the Runge-Kutta method. The solution is searched for by iterations, starting from an assigned initial temperature, equal in each zone, $T(\varphi,t_\circ) \equiv T_\mathrm{start}$. Every 10 orbits we calculate the mean global orbital temperature, $\widetilde{T}$. The simulation is stopped when $\widetilde{T}$ converges within a prescribed accuracy. In practice, we calculate the increment $\delta \widetilde{T}$ every 10 orbits and stop the simulation when $| \delta \widetilde{T}| < 0.01$K. In most cases the convergence is achieved in fewer than 100 orbits. After checking that the simulations converge to the same solution starting from widely different values of $T_\mathrm{start}> 273$K, we adopted $T_\mathrm{start}=275$K. The choice of this “cold start” allows us to study atmospheres with very low pressure, where the boiling point is just a few kelvins above the freezing point; in these cases, the adoption of a higher $T_\mathrm{start}$ would force most of the planet surface to evaporate at the very start of the simulation. The adoption of a lower $T_\mathrm{start}$, on the other hand, would trigger artificial episodes of glaciation. In addition to the regular exit based on the convergence criterion, we interrupt the simulation in the presence of water vapor effects that may lead to a condition of non-habitability. Specifically, two critical conditions are monitored in the course of the simulation. The first takes place when $T(\varphi,t)$ exceeds the water boiling point, $T_\mathrm{b}$. 
The second, when the columnar mass of water vapor[^16] exceeds 1/10 of the total atmospheric columnar mass (see next paragraph). In the first case, the [*long term*]{} habitability might be compromised due to evaporation of the surface water. The second condition might lead to the onset of a runaway greenhouse instability [@Hart78; @Kasting88] with a complete loss of water from the planet surface. The [ESTM]{} does not track variations of relative humidity and is not suited to describe these two cases. By interrupting the simulation when one of these two conditions is met, we limit the range of application of the simulations to cases with a modest content of water vapor that can be safely treated by the model. The limit of water vapor columnar mass that we adopt is inspired by the results of a study of the Earth climate variation induced by a rise of insolation [@Leconte13]. When the insolation attains 1.1 times the present Earth value, the 3D moist model of @Leconte13 predicts an energy imbalance that would lead to a runaway greenhouse instability. The mixing ratio of water vapor over moist air predicted at the critical value of insolation is $\simeq 0.1$ [@Leconte13 Fig. 3b]. The limit of water columnar mass that we adopt is based on this mixing ratio. For the simulations presented in this work, we adopt a grid of $N=54$ latitude zones. The orbital period is sampled at $N_s=48$ instants of time to investigate the seasonal evolution of the surface quantities of interest (e.g. temperature, albedo, ice coverage). With this set-up, the simulation of the Earth model presented below attains a stationary solution after 50 orbital periods, with a CPU time of $\sim 70$s on a 2.3 GHz processor. This extremely high computational efficiency is the key to performing the large number of simulations required to cover the broad parameter space of exoplanets. 
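The search strategy and the two interruption criteria described in this appendix can be condensed into a short sketch. The integrator itself is abstracted as a callable `step_orbit`, a plain zonal mean stands in for the area- and orbit-averaged $\widetilde{T}$, and all names are illustrative, not part of the ESTM code:

```python
import numpy as np

def run_simulation(step_orbit, T_boil, M_w_of, M_atm,
                   T_start=275.0, n_zones=54, tol=0.01, max_orbits=500):
    """Search for a stationary zonal temperature profile.

    step_orbit(T) advances T(phi) by one orbit, standing in for the
    Euler-in-space / Runge-Kutta-in-time integrator of the paper.
    M_w_of(T) returns the water-vapor columnar mass at temperature T;
    the run is interrupted if any zone exceeds the boiling point T_boil
    or if M_w exceeds 1/10 of the total columnar mass M_atm.
    Convergence: |delta T_tilde| < tol (in K), checked every 10 orbits.
    """
    T = np.full(n_zones, T_start)          # "cold start", equal in each zone
    T_tilde_prev = T.mean()
    for orbit in range(1, max_orbits + 1):
        T = step_orbit(T)
        if T.max() > T_boil:
            return T, orbit, "boiling"
        if M_w_of(T.max()) > 0.1 * M_atm:
            return T, orbit, "runaway-greenhouse risk"
        if orbit % 10 == 0:
            T_tilde = T.mean()
            if abs(T_tilde - T_tilde_prev) < tol:
                return T, orbit, "converged"
            T_tilde_prev = T_tilde
    return T, max_orbits, "no convergence"
```

With a well-behaved integrator the loop exits through the convergence branch; a profile relaxing toward a boiling-hot state is instead caught by the interruption criteria within a few orbits.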
![image](fig7a.pdf){width="7.5cm"} ![image](fig7b.pdf){width="7.5cm"} ![image](fig7c.pdf){width="7.5cm"} ![image](fig8a.pdf){width="7.5cm"} ![image](fig8b.pdf){width="7.5cm"} ![image](fig8c.pdf){width="7.5cm"} ![image](fig8d.pdf){width="7.5cm"} The reference Earth model \[sectEarthModel\] ============================================ For the calibration of the reference Earth model we adopted orbital parameters, axis tilt, and rotation period from @Allen00. For the solar constant we adopted $S_0 = 1360$ Wm$^{-2}$ and $g=9.8$ms$^{-2}$ for the surface gravity acceleration. The zonal coverage of continents and oceans was taken from Table III in WK97. We adopted a relative humidity $q= 0.6$ (see §\[sectCRM\]) and volumetric mixing ratios of CO$_2$ and CH$_4$ of 380 ppmV and 1.8 ppmV, respectively. The surface pressure of dry air, $p_\mathrm{dry} = 1.0031 \times 10^5$Pa, was tuned to match the moist surface pressure of the Earth, $p_\mathrm{tot} = 1.0132 \times 10^5$Pa. The remaining parameters of the model are shown in Table \[tabFiducialPar\]. Some parameters were fine-tuned to match the mean annual global quantities of the northern hemisphere of the Earth, as specified in the Table. We avoided using the southern hemisphere as a reference since its climate is strongly affected by the altitude of Antarctica, while orography is not included in the model. In Table \[GlobalEarthModel\], column 3, we show the experimental data of the northern hemisphere used as a guideline to tune the model parameters. In column 4 of the same table we show the corresponding predictions of the Earth reference model. In Fig. \[annualLatProfiles\] we show the mean annual latitude profiles of surface temperature, top-of-atmosphere albedo and meridional energy flux predicted by the reference model (solid line). The temperature profile is compared with ERA Interim 2m temperatures [@Dee11] averaged over the period 2001-2013 (crosses in the top panel). 
Area-weighted temperature differences between observed and predicted profiles have an rms of 1.1K in the northern hemisphere. The albedo profiles are compared with the CERES short-wavelength albedo [@Loeb05; @Loeb07] averaged over the same period (crosses in the middle panel). The [ESTM]{} is able to reproduce reasonably well the rise of albedo with increasing latitude. This rise is due to two factors: the dependence of the atmosphere, ocean, and cloud albedo on zenith distance (§§\[sectTOAalbedo\],\[sectionAlbedo\]) and the increasing coverage of ice at low temperature (§\[sectCoverage\]). The meridional flux in the bottom panel is compared with the total flux (dashed line) and the atmospheric flux (dotted line) obtained from EC-Earth model [@Hazeleger10]. In spite of the simplicity of the transport formalism intrinsic to Eq. (\[diffusionEq\]), the model is able to capture remarkably well the latitude dependence of the meridional transport. The [*seasonal*]{} variations of the temperature and albedo latitudinal profiles are compared with the experimental data in Fig. \[TempLatMaps\]. One can see that the reference model is able to capture the general patterns of seasonal evolution. Even if the reference model has been tuned using northern hemisphere data, the predictions shown in Figs. \[annualLatProfiles\] and \[TempLatMaps\] are in general agreement with the data also for most of the southern hemisphere, with the exception of Antarctica. It is remarkable that the atmospheric transport in both hemispheres is reproduced well, in spite of significant differences in the ocean contribution between the two hemispheres (see §\[sectOceanTransport\]). Once the reference model is calibrated, some of the parameters that have been tuned to fit the present-day Earth’s climate can be changed for specific applications of the [ESTM]{} to exoplanets. 
As an example, even though we adopt $a_s=0.18$ for the surface albedo of continents in the reference model, we may adopt lower values, typical of forests, or higher values, typical of sandy deserts, for specific applications. More information on the parameters that can be changed is given in Table \[tabFiducialPar\]. Model simulations of idealized Earth-like planets \[sectEarthLike\] =================================================================== In this set of experiments we study the effects of varying a single planet quantity while assigning Earth’s values to all the remaining parameters. We consider variations of rotation period, insolation, atmospheric columnar mass, radius, obliquity, land distribution, and long-wavelength cloud forcing. Rotation rate \[sectRotMap\] ---------------------------- In Fig. \[mapsRotation\] we show how $T(\varphi,t)$ is affected by variations of rotation rate, the left and right panels corresponding to the cases $\Omega=0.5 \,\Omega_\oplus$ and $\Omega=4 \, \Omega_\oplus$, respectively. One can see that the change of the surface temperature distribution is quite dramatic in spite of the modest dependence of the transport coefficient on rotation rate that we adopt, $D \propto \Omega^{-4/5}$ \[see Eq.(\[eq:Ddry\])\]. The mean global habitability changes from ${h_\text{lw}}=0.94$ in the slow-rotating case to ${h_\text{lw}}=0.71$ in the fast-rotating case. The corresponding change of mean global temperature is relatively small, from $\widetilde{T}=284$K to 290K. These results show the importance of estimating $T(\varphi,t)$, rather than $\widetilde{T}$, in order to quantify the habitability. The behavior of the mean equator-pole difference, ${\Delta T_\text{EP}}$, is useful to interpret the results of this test. We find ${\Delta T_\text{EP}}= 28$K in the slow-rotating case and ${\Delta T_\text{EP}}= 80$K in the fast-rotating case. 
This variation of ${\Delta T_\text{EP}}$ is much higher than that found for the same change of rotation rate in the case of the KS14 aquaplanet (top-left panel in Fig. \[figKS14validation\]). We interpret this strong variation of ${\Delta T_\text{EP}}$ in terms of the ice-albedo feedback, which is positive and tends to amplify variations of the surface temperature. This feedback is accounted for in the present experiment, but not in the case of the aquaplanet. These results illustrate the importance of using climate models with latitude temperature distribution and ice-albedo feedback in order to estimate the fraction of habitable surface. The analysis of the ice cover highlights another important difference between the [ESTM]{} and classic EBMs. With the [ESTM]{} the ice cover increases from $\simeq 3\%$ at $\Omega=0.5 \,\Omega_\oplus$ to $\simeq 23\%$ at $\Omega=4 \, \Omega_\oplus$. This increase is less dramatic than the transition to a complete “snowball” state (i.e. ice cover $\simeq 100\%$) found with classic EBMs at $\Omega=3 \, \Omega_\oplus$ [e.g. @SMS08]. This difference is due to two factors. One is the strong dependence of the transport on rotation rate adopted in most EBMs ($D \propto \Omega^{-2}$), which is not supported by the validation tests discussed above (top-left panel of Fig. \[figKS14validation\]). Another factor is the algorithm adopted for the albedo, which in classic EBMs is a simple analytical function $A=A(T)$, while in the [ESTM]{} it is a multi-parameter function $A=A(T,p,g,p\mathrm{CO_2},a_\mathrm{s},Z)$ that takes into account the vertical transport of stellar radiation (§\[sectTOAalbedo\]). These results show the importance of adopting algorithms calibrated with 3D experiments and atmospheric column calculations. ![image](fig9a.pdf){width="7.5cm"} ![image](fig9b.pdf){width="7.5cm"} ![image](fig10a.pdf){width="7.5cm"} ![image](fig10b.pdf){width="7.5cm"} Insolation \[sectFluxMap\] -------------------------- In Fig. 
\[mapsFlux\] we show how $T(\varphi,t)$ is affected by variations of stellar insolation, the left and right panels corresponding to an insolation of 0.9 and 1.1 times the present-day Earth’s insolation ($S/4\simeq 341$Wm$^{-2}$), respectively. In the first case the [ESTM]{} finds a complete snowball with ${\widetilde{T}}=223$K and ${h_\text{lw}}=0$, while in the latter ${\widetilde{T}}=306$K and ${h_\text{lw}}=1$ with no ice cover. These results demonstrate the extreme sensitivity of the surface temperature to variations of insolation and the need to incorporate feedbacks in global climate models in order to define the limits of insolation of a habitable planet. Our model is able to capture the ice-albedo feedback, but is not suited for treating hot atmospheres and the runaway greenhouse instability. To test the limits of the [ESTM]{} we have gradually increased the insolation and compared our results with those obtained by the 3D model of @Leconte13. We found that the [ESTM]{} tracks the rise of ${\widetilde{T}}$ with insolation predicted by the 3D model up to $S/4 \simeq 365$Wm$^{-2}$. At higher insolation, the 3D model predicts a faster rise of ${\widetilde{T}}$ due to an increase of the radiative cloud forcing. By decreasing the insolation with respect to the Earth’s value, the [ESTM]{} finds solutions characterized by increasing ice cover. At $S/4 \simeq 310$Wm$^{-2}$ the simulation displays a runaway ice-albedo feedback that leads to a complete snowball configuration. This result sets the [ESTM]{} limit of minimum insolation for the liquid-water habitability of an Earth-twin planet. ![image](fig11a.pdf){width="7.5cm"} ![image](fig11b.pdf){width="7.5cm"} Atmospheric columnar mass \[sectColumnarMass\] ---------------------------------------------- In Fig. \[mapsPressure\] we show how $T(\varphi,t)$ is affected by variations of surface pressure, the left and right panels corresponding to the cases $p=0.5$bar and $4$bar, respectively. 
Since the surface gravity is kept fixed at $g=g_\oplus$, this experiment also investigates the climate impact of variations of atmospheric columnar mass, $p/g$. We find a significant difference in mean temperature and habitability between the two cases, with ${\widetilde{T}}=274$K and ${h_\text{lw}}=0.62$ in the low-pressure case and ${\widetilde{T}}=310$K and ${h_\text{lw}}=1.00$ in the high-pressure case. The mean equator-pole difference decreases from ${\Delta T_\text{EP}}=$55K to 33K between the two cases. The rise of mean temperature is due to the existence of a positive correlation between columnar mass and intensity of the greenhouse effect. The decrease of temperature gradient results from the correlation between $p/g$ and the efficiency of the horizontal transport. Both effects have already been discussed in Paper I. Here we find a more moderate trend with $p/g$, as a result of the new formulation of $D$ that we adopt. ![image](fig12a.pdf){width="7.5cm"} ![image](fig12b.pdf){width="7.5cm"} Planet radius or mass \[sectRadius\] ------------------------------------ In Fig. \[mapsRadius\] we show how $T(\varphi,t)$ is affected by variations of planet radius, the left and right panels corresponding to the cases $R=0.5\,R_\oplus$ and $1.5\,R_\oplus$, respectively. In this ideal experiment we keep the planet mean density fixed, $\rho=\rho_\oplus$, so that the planet mass and gravity scale as $M \propto R^3$ and $g \propto R$, respectively. We also keep the columnar mass fixed, $p/g=(p/g)_\oplus$, by scaling $p$ and $g$ with $R$. With this experimental setup, the radius is the only parameter that varies in the transport coefficient $D$ \[Eqs. (\[eq:SLdry\]), (\[eq:SLdm\]), and (\[eq:DtermF\])\]. The left panel corresponds to the case $M=0.125\,M_\oplus$, $p=0.5$bar and $g=0.5\,g_\oplus$ and the right panel to the case $M=3.375\,M_\oplus$, $p=1.5$bar and $g=1.5\,g_\oplus$. 
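The parameter scalings of this fixed-density experiment can be written compactly as follows (Earth units; the function name is ours):

```python
def scaled_planet(R, R_earth=1.0, p_earth=1.0, g_earth=1.0):
    """Scalings of the fixed-density radius experiment.

    rho = rho_earth implies M ∝ R^3 and g ∝ R; keeping the columnar
    mass fixed, p/g = (p/g)_earth, then implies p ∝ R.
    R, M and g are in Earth units, p in bar.
    """
    x = R / R_earth
    return {"M": x**3, "g": g_earth * x, "p": p_earth * x}
```

The two panels of the experiment follow directly: $R=0.5\,R_\oplus$ gives $M=0.125\,M_\oplus$, $p=0.5$bar, $g=0.5\,g_\oplus$, and $R=1.5\,R_\oplus$ gives $M=3.375\,M_\oplus$, $p=1.5$bar, $g=1.5\,g_\oplus$.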
We find that the planet cools significantly when the radius, mass and gravity increase, with a variation of the mean global temperature from ${\widetilde{T}}=294$K to 281K. The equator-to-pole temperature difference increases dramatically, from ${\Delta T_\text{EP}}=$18K to 64K. This change of ${\Delta T_\text{EP}}$ is much higher than that found for the same change of radius in the case of the aquaplanet (bottom-right panel in Fig. \[figKS14validation\]). The inclusion of the ice-albedo feedback in the present experiment amplifies variations of ${\Delta T_\text{EP}}$. In fact, the ice cover increases from 0% to 21% between $R=0.5\,R_\oplus$ and $1.5\,R_\oplus$. As a result of the variations of ${\widetilde{T}}$ and ${\Delta T_\text{EP}}$, the mean global habitability is significantly affected, changing from ${h_\text{lw}}=1.00$ to 0.71 with increasing planet size. ![image](fig13a.pdf){width="7.5cm"} ![image](fig13b.pdf){width="7.5cm"} Axis obliquity \[sectObliquity\] -------------------------------- In Fig. \[mapsObliquity\] we show how $T(\varphi,t)$ is affected by variations of axis obliquity, the left and right panels corresponding to the cases $\epsilon=0^\circ$ and $45^\circ$, respectively. We find a modest decrease of ${\widetilde{T}}$, from $291$K to 289K, and a significant decrease of ${\Delta T_\text{EP}}$, from 42K to 15K. The mean annual temperature at the poles is lower at $\epsilon=0^\circ$ than at $\epsilon=45^\circ$, because in the first case the poles have a constant, low temperature, while in the latter they alternate between cool and warm seasons. As a result, the ice cover decreases and the habitability increases in the range from $\epsilon=0^\circ$ to $\epsilon=45^\circ$. For the conditions considered in this experiment, the initial ice cover is relatively small ($\simeq 7\%$) and the increase of habitability relatively modest (from $0.87$ to 0.95). Larger variations of habitability are found starting from a higher ice cover. 
These results confirm the necessity of determining $T(\varphi,t)$ and accounting for the ice-albedo feedback in order to estimate the habitability. At $\epsilon > 45^\circ$ EBM studies predict a stronger climate impact of obliquity, with possible formation of equatorial ice belts [@WK97; @SMS09; @Vladilo13]. The physically based derivation of the coefficient $D$ prevents using the [ESTM]{} when the equatorial-polar gradient is negative, because Eqs. (\[eq:Ddry\]) and (\[eq:SLdry\]) require $\delta T>0$. In the Earth model this condition is satisfied when $\epsilon \leq 52^\circ$. Clearly, the climate behavior at high obliquity should be tested with 3D climate experiments [e.g. @Williams03; @Ferreira14 and refs. therein], being cautious with predictions obtained with EBMs. ![image](fig14a.pdf){width="7.5cm"} ![image](fig14b.pdf){width="7.5cm"} Ocean/land distribution \[sectOceanLand\] ----------------------------------------- In Fig. \[mapsContinents\] we show how $T(\varphi,t)$ is affected by variations of the geographical distribution of the continents. In these experiments we consider a single continent covering all longitudes, but located at different latitudes in each case. The global ocean coverage is fixed at 0.7, as in the case of the Earth. In the left panel we show the case of a continent centered on the equator, while in the right panel a continent centered on the southern pole. The variation of $T(\varphi,t)$ is quite remarkable given the small change of mean global annual temperature (${\widetilde{T}}=288$K and 289K for the equatorial and polar case, respectively). The mean annual habitability is almost the same in the two continental configurations ($0.86$ and $0.85$), but in the case of the polar continent the fraction of habitable surface shows strong seasonal oscillations. This behaviour is due to the low thermal capacity of the continents and the large excursions of polar insolation. 
![image](fig15a.pdf){width="7.5cm"} ![image](fig15b.pdf){width="7.5cm"} Long-wavelength cloud forcing \[sectOLRclouds\] ----------------------------------------------- In Fig. \[mapsOLRclouds\] we show how $T(\varphi,t)$ is affected by variations of the long-wavelength forcing of clouds. Analysis of Earth data indicates a mean value of 26.4 W/m$^2$ [@Stephens12], but with large excursions [e.g. @Hartmann92]. To illustrate the impact of this quantity on the surface temperature we have adopted $15$W/m$^2$ (left panel) and $35$W/m$^2$ (right panel) since this range brackets most of the Earth values. The impact on the mean global temperature is relatively high, with a rise from ${\widetilde{T}}=277$K to 295K between the two cases. The ice cover correspondingly decreases from 24% to 2%, while the habitability increases from ${h_\text{lw}}=0.68$ to 0.95. The mean equator-pole temperature difference shows a decrease from ${\Delta T_\text{EP}}= 52$K to 36K. This decrease of ${\Delta T_\text{EP}}$ is more moderate than that found for variations of rotation rate, radius and axis tilt. 
[clll]{} $C_\text{ml50}$ & $210 \times 10^6$ J m$^{-2}$ K$^{-1}$ & Thermal inertia of the oceans$^\text{a}$ (§\[sectThermalCapacity\]) & @Pierrehumbert10\ $C_{\text{atm},\circ}$ & $10.1 \times 10^6$ J m$^{-2}$ K$^{-1}$ & Thermal inertia of the atmosphere$^\text{a}$ (§\[sectThermalCapacity\])& @Pierrehumbert10\ $C_\text{solid}$ & $1 \times 10^6$ J m$^{-2}$ K$^{-1}$ & Thermal inertia of the solid surface(§\[sectThermalCapacity\]) & @Vladilo13\ $D_\circ$ & 0.66 W m$^{-2}$ K$^{-1}$ & Coefficient of latitudinal transport (§\[sectEddiesTransport\]) & Tuned$^b$ to match $T$-latitude profile (Fig.\[annualLatProfiles\])\ $\mathcal{R}$ & 2.2 & Modulation of latitudinal transport (§\[sectHadleyCell\]) & Tuned to match $T$-latitude profile (Fig.\[annualLatProfiles\])\ $a_l$ & 0.18 & Albedo of lands$^\text{a}$ & Tuned to match albedo-latitude profile (Fig.\[annualLatProfiles\])\ $a_{il}$ & 0.70 & Albedo of frozen surfaces and overlooking clouds & Tuned to match albedo-latitude profile (Fig.\[annualLatProfiles\])\ $\alpha$ & $-0.11$ & Cloud albedo \[Eq. (\[cloudAlbedo\])\] & Tuned$^\text{c}$ using Fig. 2 in @Cess76\ $\beta$ & $7.98 \times 10^{-3}$ $(^\circ)^{-1}$ & Cloud albedo \[Eq. (\[cloudAlbedo\])\] & Tuned using Fig. 
2 in @Cess76\ ${{\textstyle <}}$OLR${{\textstyle >}}_\mathrm{cl,\circ}$ & 26.4 W m$^{-2}$ & Long wavelength forcing of clouds$^\text{a}$ & @Stephens12\ $f_{cw}$ & 0.70 & Cloud coverage on water & @Sanroma12 [@Stubenrauch13]\ $f_{cl}$ & 0.60 & Cloud coverage on land and frozen surface & @Sanroma12 [@Stubenrauch13]\ $\Lambda_\circ$ & 0.7 & Ratio of moist over dry eddy transport & @KS14 [Fig.2]\ \[tabFiducialPar\] [llccl]{} ${{\textstyle <}}T {{\textstyle >}}_\mathrm{NH}$ & Surface temperature & 288.61$^a$ & 288.60 & K\ $\Delta T_\mathrm{PE}$ & Pole-equator temperature difference & 40.3$^a$ & 41.7 & K\ ${{\textstyle <}}h{{\textstyle >}}_\mathrm{NH}$ & Fraction of habitable surface & 0.851$^b$ & 0.858 & ...\ ${{\textstyle <}}A {{\textstyle >}}_\mathrm{NH}$ & Top-of-atmosphere albedo & 0.322$^c$ & 0.323 & ...\ ${{\textstyle <}}OLR{{\textstyle >}}_\mathrm{NH}$ & Outgoing longwave radiation & 240.3$^c$ & 237.6 & Wm$^{-2}$\ $\Phi_\mathrm{max}$ & Peak atmospheric transport at mid latitude & 5.0$^d$ & 4.9 & PW\ \[GlobalEarthModel\] [^1]: With the adopted definitions of $D$ and $\Phi$ it is easy to show that ${\partial \over \partial x} \left( \Phi \cos \varphi \right)= - {\partial \over \partial x} \left( D \cos \varphi \frac{\partial T}{\partial \varphi} \right)= - {\partial \over \partial x} \left[ (1-x^2) D \frac{\partial T}{\partial x} \right]$ represents the latitudinal transport per unit area [see @North81 Eq. 21]. [^2]: A simpler parameterization, not tested with 3D climate calculations, was adopted by @WK97 and in Paper I. [^3]: In general, the “mean” and the “perturbations” refer to time and/or space variations. [^4]: Numerical experiments performed with simplified GCMs suggest that the correlation coefficients $k_\mathrm{L}$, $k_\mathrm{S}$ and the efficiency factor $\eta$ can be treated as constants with good approximation [@Barry02; @Frierson07]. 
[^5]: Excluding these ratios is equivalent to setting them equal to unity in the Earth’s model, as they should be by definition. [^6]: The mean diurnal value of $\mu(\varphi,t)$ is a function of the axis obliquity. [^7]: In oceanography the term gyre refers to major ocean circulation systems driven by the wind surface stress. [^8]: The thermal capacity of the ocean can be changed to simulate idealized aquaplanets with a thin layer of surface water (e.g., §\[sectValidation\]). [^9]: To ignore the diabatic forcing term we set $\frac{ \delta T }{T_w } \left\{ \mathrm{ASR} \right\} =1$ in the scaling law (\[eq:SLdry\]). [^10]: The average in latitude is weighted in area and the average in time is performed over one orbital period. [^11]: At present time it is possible to measure the atmospheric composition of selected giant planets, but not of Earth-size terrestrial planets. [^12]: We define a function $H_\text{lw}(T)=1$ when $T(\varphi,t)$ is inside the liquid-water temperature range; $H_\text{lw}(T)=0$ outside the range [@SMS08; @Vladilo13]. The index ${h_\text{lw}}$ is the global and orbital average value of $H_\text{lw}(T)$. [^13]: The tidal lock radius was calculated with the expression $r_\mathrm{TL}=0.027 (P_\circ t / Q)^{1/6} M_{\star}^{1/3}$ (Kasting 1993), where $P_\circ$ is the original rotation period, $t$ is the amount of time during which the planet has been slowed down, $Q^{-1}$ is the specific dissipation function, and $M_\star$ is the stellar mass; we adopted $P_\circ=0.5$days, $t=10^9$yr, and $Q=100$. [^14]: Mars and Venus have a surface pressure of $\simeq 0.006$bar and $\simeq 90$bar, respectively. [^15]: Here an Earth twin is a planet with properties equal to those of the Earth (including insolation, radius, gravity and atmospheric composition), but with unknown values of $p$, $\Omega$, $\epsilon$ and $f_o$, which are treated as free parameters. 
[^16]: The water columnar mass is ${\cal M}_w \simeq (\mu_w/ \mu ) (q \, p^*_v(T)/g)$, where $\mu_w$ and $\mu$ are the molecular weights of water and air; $q$ is the relative humidity and $p^*_v(T)$ the saturated partial pressure of water vapor [e.g., @Pierrehumbert10].
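The two formulae quoted in footnotes 13 and 16 can be sketched numerically. The saturated vapor pressure $p^*_v(T)$ is not specified in the text, so the sketch below substitutes an August-Roche-Magnus fit (our assumption); the tidal-lock expression keeps the symbols of the footnote and inherits the units of the original reference:

```python
import math

def water_columnar_mass(T, q=0.6, g=9.8, mu_w=18.015, mu_air=28.97):
    """M_w ~ (mu_w/mu) * q * p*_v(T) / g, in kg m^-2, with T in kelvin.

    p*_v(T) is taken from an August-Roche-Magnus fit (our assumption;
    the paper does not state which vapor-pressure formula it uses).
    Defaults: relative humidity q = 0.6 and g = 9.8 m s^-2 as in the
    reference Earth model."""
    T_c = T - 273.15
    p_sat = 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))  # Pa
    return (mu_w / mu_air) * q * p_sat / g

def tidal_lock_radius(P0, t, Q, M_star):
    """r_TL = 0.027 (P0 t / Q)^(1/6) M_star^(1/3) (Kasting 1993);
    inputs must be in the units of the original reference."""
    return 0.027 * (P0 * t / Q) ** (1.0 / 6.0) * M_star ** (1.0 / 3.0)
```

At $T=288$K the water columnar mass is a few tens of kg m$^{-2}$, well below a tenth of the Earth's total columnar mass ($\simeq 10^4$ kg m$^{-2}$), consistent with the present Earth lying far from the runaway limit; the tidal-lock radius grows only as $t^{1/6}$, so the adopted braking time affects it weakly.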
--- abstract: | Change detection in multivariate time series has applications in many domains, including health care and network monitoring. A common approach to detect changes is to compare the divergence between the distributions of a reference window and a test window. When the number of dimensions is very large, however, the naïve approach has both quality and efficiency issues: to ensure robustness the window size needs to be large, which not only leads to missed alarms but also increases runtime. To this end, we propose , a linear-time algorithm for robustly detecting non-linear changes in massively high dimensional time series. Importantly, provides high flexibility in choosing the window size, allowing the domain expert to fit the level of detail required. To do so, we 1) perform scalable to reduce dimensionality, 2) perform scalable factorization of the joint distribution, and 3) scalably compute divergences between these lower dimensional distributions. Extensive empirical evaluation on both synthetic and real-world data shows that outperforms state of the art with up to 100% improvement in both quality and efficiency. author: - 'Hoang-Vu Nguyen[^1] Jilles Vreeken' bibliography: - 'bib/abbrev.bib' - 'bib/citation.bib' - 'bib/bib-jilles.bib' title: | [[Linear-time Detection of Non-linear Changes\ in Massively High Dimensional Time Series]{}]{} --- Acknowledgements {#acknowledgements .unnumbered} ================ The authors are supported by the Cluster of Excellence “Multimodal Computing and Interaction” within the Excellence Initiative of the German Federal Government. [^1]: Max Planck Institute for Informatics and Saarland University, Germany. Email: `{hnguyen,jilles}@mpi-inf.mpg.de`
--- author: - | M.-I. Trappe,$^{1,2,}$ P. Augenstein,$^{3,}$ M. DeKieviet,$^{3,}$ T. Gasenzer,$^{2,4,5,}$ O. Nachtmann$^{2,}$\ $^{1}$[*Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore*]{}\ $^{2}$[*Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany*]{}\ $^{3}$[*Physikalisches Institut, Universität Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany*]{}\ $^{4}$[*Kirchhoff-Institut für Physik, Universität Heidelberg, Im Neuenheimer Feld 227, 69120 Heidelberg, Germany*]{}\ $^{5}$[*ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f[ü]{}r Schwerionenforschung, Planckstra[ß]{}e 1, 64291 Darmstadt, Germany*]{} date: '()' --- Introduction ------------ [^1] Atoms exposed to an adiabatically varying external field can acquire geometric phases [@Ber84; @Bar83]. For metastable states, such geometric phases are in general complex. The imaginary part of such a phase influences the lifetime, see e.g. [@Garrison1988177; @Miniatura1990; @Massar1996; @Berry2004]. In Refs. [@BeGaNa07_I; @BeGaNa07_II; @BeGaMaNaTr08_I; @DeKGaNaTr11; @GaNaTr2012], we have presented studies of geometric phases for metastable states of hydrogen. Both parity-conserving (PC) and parity-violating (PV) geometric phases were identified. It was, in particular, shown in [@GaNaTr2012] that the lifetimes of metastable 2S hydrogen states can be influenced by geometric phases acquired by the atom in suitable external electric and magnetic fields. A concrete example of the influence of a complex geometric phase on the lifetime of atomic states was discussed in [@GaNaTr2012]. With the field configurations investigated there, geometric effects on the lifetimes at the per mille level were found. 
In the present paper we shall explore suitable field configurations which lead, in theory, to geometric effects on the lifetimes of metastable hydrogen states up to the level of several per cent. We propose to measure the lifetime shifts by means of an existing longitudinal atomic beam spin-echo interferometer that allows the initial and final fluxes of metastable atoms to be compared with each other. The results presented here were obtained by means of the theoretical formalism introduced in detail in Refs. [@BeGaMaNaTr08_I; @GaNaTr2012]. We refer to these papers for the discussion of the general context of our investigations and of the proposed experimental scheme, as well as for many further references. We will, in particular, make use of specific expressions and formulae from these papers, referring to them without repeating their derivations. Metastable hydrogen in the longitudinal atomic beam spin-echo apparatus {#SectionMetastable} ----------------------------------------------------------------------- ### Atomic-beam spin-echo interferometer ![Scheme of the atom interferometry experiment. The atom is prepared around $z_0$ and analysed around $z_a$. We start with a superposition ${|\psi(z_0)\rangle}$ of the two states ${\left|\right.\hspace{-0.5ex}{9}\left.\hspace{-0.5ex}\right)}$ and ${\left|\right.\hspace{-0.5ex}{11}\left.\hspace{-0.5ex}\right)}$. After passing the electric and magnetic fields the wave function is projected onto an analysing state ${|\psi(z_a)\rangle}$, for example, again onto a superposition of the states ${\left|\right.\hspace{-0.5ex}{9}\left.\hspace{-0.5ex}\right)}$ and ${\left|\right.\hspace{-0.5ex}{11}\left.\hspace{-0.5ex}\right)}$. 
The coordinate axes used, $x$, $y$, $z$, indexed in the formulae as 1, 2, and 3, respectively, are also indicated.[]{data-label="SchematiclABSE"}](LifeTimeModification_lABSE_schematic4.pdf){width="0.99\linewidth"} As in [@BeGaMaNaTr08_I] we consider metastable 2S hydrogen states in the spin-echo interferometer described in [@ABSE95]. Figure \[SchematiclABSE\] shows a schematic view of the atomic-beam spin-echo interferometer. An atomic state, in general a superposition of local energy eigenstates, enters the interferometer at $z_0$. The state is then subjected to electric and magnetic fields ${\mathbfcal{E}}(z)$ and ${\mathbfcal{B}}(z)$. Finally, it is analysed at $z_a$ by projection on a chosen final state. In reference to the experiment, we set in the following $$\begin{aligned} \begin{split} z_0&=0\,\mathrm{m}\ ,\\ z_a&=0.66\,\mathrm{m}\ . \end{split}\end{aligned}$$ First we consider field configurations of a general type, consisting of two regions I and II in space and/or time of the spin-echo setup [@ABSE95], in which the spins precess forward and backward, respectively (thus separated by an effective $\pi$-pulse). 
These regions have an electric field $$\begin{aligned} {\mathbfcal{E}}(z)&={\mathbfcal{E}}_{\mathrm{I}}(z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z){\nonumber}\\ &\phantom=+{\mathbfcal{E}}_{\mathrm{II}}(z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\ ,\label{E}\end{aligned}$$ and a magnetic field with the components $$\begin{aligned} {\mathbfcal{B}}(s;z)&={\mathbf{e}}_1\mathcal B_1(z)+{\mathbf{e}}_2\mathcal B_2(z)+{\mathbf{e}}_3\mathcal B_3(s;z)\ ,\label{B}\end{aligned}$$ where $$\begin{aligned} \mathcal B_{i}(z)&=\mathcal B_{i\mathrm{I}}(z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z){\nonumber}\\ &\phantom=+\mathcal B_{i\mathrm{II}}(z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\end{aligned}$$ for $i=1,2$, and $$\begin{aligned} \label{B3si} \mathcal B_3(s;z)&=\mathcal B_{3\mathrm{I}}(z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z){\nonumber}\\ &\phantom=+s\,\mathcal B_{3\mathrm{II}}(z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\ .\end{aligned}$$ We also require $$\begin{aligned} \begin{split}\label{zerofields} {\mathbfcal{E}}(0)&={\mathbfcal{E}}({\mbox{$\frac12$}}z_a)={\mathbfcal{E}}(z_a)=0\ ,\\ {\mathbfcal{B}}(s;0)&={\mathbfcal{B}}(s;{\mbox{$\frac12$}}z_a)={\mathbfcal{B}}(s;z_a)=0\ . \end{split}\end{aligned}$$ In (\[E\])-(\[B3si\]) $\Theta(\cdot)$ is the usual step function and $s$ is a parameter, which acts as a detuning between the spin precession regions I and II, and is varied around the spin echo point, $s=1$, by typically $$\begin{aligned} 0.4\le s\le1.6\ .\end{aligned}$$ The variation of $s$, that is, the variation of the magnetic field $\mathcal B_3$ in the second half of the interferometer produces the oscillations in the spin-echo signal; see [@BeGaMaNaTr08_I]. Explicit examples of external fields within this general form are given in Section \[SectionExamples\] below (see Figures \[example1fields\]–\[FieldConfigSymmCond\]). 
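The piecewise construction (\[E\])–(\[B3si\]) is easy to prototype numerically. Below is a minimal sketch, not the actual apparatus fields: `b3_I` and `b3_II` are hypothetical placeholder profiles, chosen only so that the boundary conditions (\[zerofields\]) hold, while the detuning parameter $s$ rescales the field in region II exactly as in (\[B3si\]).

```python
import numpy as np

def theta(x):
    """Heaviside step function Theta(.) appearing in Eqs. (E)-(B3si).
    The convention at x = 0 is immaterial here, since all fields vanish
    at the region boundaries by Eq. (zerofields)."""
    return np.heaviside(x, 1.0)

def b3(s, z, za, b3_I, b3_II):
    """B_3(s; z): region-I profile on 0 < z < za/2, region-II profile
    rescaled by the detuning parameter s on za/2 < z < za."""
    return (b3_I(z) * theta(0.5 * za - z) * theta(z)
            + s * b3_II(z) * theta(za - z) * theta(z - 0.5 * za))

za = 0.66  # m, length of the interferometer (z_a)
# Hypothetical placeholder profiles, vanishing at z = 0, za/2 and za:
b3_I = lambda z: np.sin(2.0 * np.pi * z / za) ** 2
b3_II = lambda z: np.sin(2.0 * np.pi * z / za) ** 2
```

Varying $s$ changes the field only in the second half of the interferometer: `b3(s, 0.25*za, ...)` is independent of $s$, while `b3(s, 0.75*za, ...)` scales linearly with it, which is precisely what produces the spin-echo oscillations.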
An atom travelling through the interferometer with field configuration (\[E\])-(\[zerofields\]) traces out, in parameter space, a closed path $C_s$, where $s$ is kept fixed. In fact, $C_s$ is composed of two successive paths in regions I and II, $$\begin{aligned} \label{path} C_s=C_{\mathrm{I}}+C_{\mathrm{II},s}\ .\end{aligned}$$ We shall now consider field configurations that, in parameter space, correspond to oppositely oriented paths, either along the reverse of the complete path $C$, or along the reverse of the paths $C_{\mathrm{I}}$ and $C_{\mathrm{II}}$ separately. For reversing the complete path $C$ we consider the fields $$\begin{aligned} \label{rev} \begin{split} {\mathbfcal{E}}^{\mathrm{rev}}(z)&={\mathbfcal{E}}(z_a-z)\ ,\\ \mathcal B^{\mathrm{rev}}_i(z)&=\mathcal B_i(z_a-z)\ ,\quad \mbox{for }i=1,2\ ,\\ \mathcal B^{\mathrm{rev}}_{3}(s;z)&=s\,\mathcal B_{3\mathrm{II}}(z_a-z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z)\\ &\quad+\mathcal B_{3\mathrm{I}}(z_a-z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\ . \end{split}\end{aligned}$$ From (\[E\])–(\[zerofields\]) and (\[rev\]) we see that, in the reverse field configuration, the atomic system traces out the path which is the reversed one of (\[path\]), $$\begin{aligned} \overline C_s=\overline C_{\mathrm{II},s}+\overline C_{\mathrm{I}}\ .\end{aligned}$$ Note that for the reverse field configuration the magnetic field component $\mathcal B_3$ is varied with $s$ in the first half of the interferometer. 
For the second case of reversing the paths in regions I and II of the interferometer separately, we consider the following fields: $$\begin{aligned} \label{rev2} \begin{split} \tilde{{\mathbfcal{E}}}^{\mathrm{rev}}(z)&={\mathbfcal{E}}_{\mathrm{I}}({\mbox{$\frac12$}}z_a-z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z)\\ &\phantom=+{\mathbfcal{E}}_{\mathrm{II}}({\mbox{$\frac32$}}z_a-z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\ ,\\ \tilde{\mathcal B}^{\mathrm{rev}}_i(z)&=\mathcal B_{i\mathrm{I}}({\mbox{$\frac12$}}z_a-z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z)\\ &\phantom=+\mathcal B_{i\mathrm{II}}({\mbox{$\frac32$}}z_a-z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\ ,\\ \mbox{for }i&=1,2\ ,\\ \tilde{\mathcal B}^{\mathrm{rev}}_3(s;z)&=\mathcal B_{3\mathrm{I}}({\mbox{$\frac12$}}z_a-z)\Theta({\mbox{$\frac12$}}z_a-z)\Theta(z)\\ &\phantom=+s\,\mathcal B_{3\mathrm{II}}({\mbox{$\frac32$}}z_a-z)\Theta(z_a-z)\Theta(z-{\mbox{$\frac12$}}z_a)\ . \end{split}\end{aligned}$$ Here the path of the atom in parameter space in relation to (\[path\]) is $$\begin{aligned} \overline C'_s=\overline C_{\mathrm{I}}+\overline C_{\mathrm{II},s}\ .\end{aligned}$$ ### Hydrogen spin-echo observables The hydrogen states under investigation are 2S states that are admixed with 2P states in external electric fields. Our numbering of the 16 ($n=2$)-states of hydrogen is explained in detail in Appendix A, Table A.2, of [@GaNaTr2012]. The index set of metastable states is $$\begin{aligned} I=\{9,10,11,12\}\ .\end{aligned}$$ The initial state at $z=z_0$ is a superposition of metastable states $$\begin{aligned} \label{calpha} \begin{split} \left.{|\psi(0)\rangle}\right|_{\mathrm{internal}}&=\sum_{\alpha\in I}c_\alpha{\left|\right.\hspace{-0.5ex}{\alpha(z_0)}\left.\hspace{-0.5ex}\right)}\ ,\\ \sum_{\alpha\in I}|c_\alpha|^2&=1\ . \end{split}\end{aligned}$$ See (72) in [@BeGaMaNaTr08_I] for the complete state vector. Here and in the following we write out only the internal part of it. 
In (\[calpha\]) and in the following ${\left|\right.\hspace{-0.5ex}{\alpha(z)}\left.\hspace{-0.5ex}\right)}$ ($\alpha=1,\dots,16$) are the local energy right eigenstates corresponding to the fields ${\mathbfcal{E}}(z)$, ${\mathbfcal{B}}(z)$; see (13) of [@BeGaMaNaTr08_I]. As discussed in [@BeGaMaNaTr08_I], the effective potentials $\mathcal V_\alpha(z)$ entering the Schrödinger equation for the atomic states in the external fields are not equal to the local complex energy eigenvalues $E_\alpha(z)$, see (31)–(33) of [@BeGaMaNaTr08_I], as they include additional geometric-phase effects. But, as we shall show below, in our case this difference is negligible. Nonetheless, we work in the following with the effective potentials as this is the correct procedure. The value of the effective potential for the state $\alpha$ at point $z$ is in general complex $$\begin{aligned} \mathcal V_\alpha(z)=\mathrm{Re}\,\mathcal V_\alpha(z)-\frac{{\mathrm{i}}}{2}\Gamma_\alpha(z)\ .\end{aligned}$$ Here $$\begin{aligned} \Gamma_\alpha(z)=-2\mathrm{Im}\,\mathcal V_\alpha(z)\end{aligned}$$ is the local decay rate of the state $\alpha$; see (32), (33) of [@BeGaMaNaTr08_I]. For the field configurations considered in the present work, we find for $\alpha=9,11$ $$\begin{aligned} \label{ReVE} |\mathrm{Re}\,\big(\mathcal V_\alpha(z)-E_\alpha(z)\big)|\lesssim 10^{-16}\,\mathrm{eV}\ ,\end{aligned}$$ and $$\begin{aligned} \label{ImVE} \frac{|\mathrm{Im}\,\big(\mathcal V_\alpha(z)-E_\alpha(z)\big)|}{\underset{0\le z\le z_a}{\mathrm{max}}|\mathrm{Im}\,E_\alpha(z)|}\lesssim 10^{-10}\ ,\end{aligned}$$ that is, the numerical differences between $\mathcal V_\alpha(z)$ and $E_\alpha(z)$ are negligible since we shall deal with energies at the $\mu$eV scale; cf. Figure \[example1energies\] below. 
The atoms in the beam have typical longitudinal velocity $v_z$, wave number $k_z$ and de Broglie wavelength $\lambda$ (see (20) of [@BeGaMaNaTr08_I]) $$\begin{aligned} v_z&=\frac{k_z}{m}\approx 3500 \,\mathrm{m/s}\ ,{\nonumber}\\ k_z&\approx5.6\times10^{10}\,\mathrm m^{-1}\ ,{\nonumber}\\ \lambda&=\frac{2\pi}{k_z}\approx 1.1\times 10^{-10}\,\mathrm m\ .\end{aligned}$$ At the end of the interferometer, at $z=z_a$, the atomic state is projected onto a chosen state (see (90) of [@BeGaMaNaTr08_I]) $$\begin{aligned} \label{palpha} \begin{split} {\left|\right.\hspace{-0.5ex}{p}\left.\hspace{-0.5ex}\right)}&=\sum_{\alpha\in I}p_\alpha{\left|\right.\hspace{-0.5ex}{\alpha(0)}\left.\hspace{-0.5ex}\right)}\ ,\\ \sum_{\alpha\in I}|p_\alpha|^2&=1\ . \end{split}\end{aligned}$$ The integrated flux $\mathcal F_p$ for this state is the experimental observable $$\begin{aligned} \label{Fp} {\mathcal F}_p=\sum_{\alpha,\beta\in I}&p_\beta p^*_\alpha c^*_\beta c_\alpha \exp [-(\Delta\tau_\beta-\Delta\tau_\alpha)^2/(8\sigma'^2_k)]{\nonumber}\\ &\quad\times U^*_\beta(z_a,z_0;\bar k_m)\,U_\alpha(z_a,z_0;\bar k_m)\ .\end{aligned}$$ All quantities occurring in (\[Fp\]) are defined and explained in the context of Eq. (105) in [@BeGaMaNaTr08_I]. We briefly recall them in the following. The $U_\alpha$ contain the dynamic and geometric phases, see (101) of [@BeGaMaNaTr08_I], $$\begin{aligned} \label{Ualpha} U_\alpha(z_a,z_0;\bar k_m)= \exp[-{\mathrm{i}}\varphi_\alpha(z_a)+{\mathrm{i}}\gamma_\alpha(z_a)]\,.\end{aligned}$$ Here $\bar k_m$ is the peak value of the wave-number distribution in the wave packet; see (78), (79) of [@BeGaMaNaTr08_I]. The $\Delta\tau_{\alpha,\beta}$ are the shifts of the reduced arrival times as defined in (99) of [@BeGaMaNaTr08_I]. The dynamic and geometric phases acquired by the state with label $\alpha$ from $z=0$ to $z$ are denoted by $\varphi_\alpha(z)$ and $\gamma_\alpha(z)$, respectively. 
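The beam parameters quoted above are mutually consistent; a quick numerical cross-check in SI units (the hydrogen mass and $\hbar$ are our insertions, the text itself uses $\hbar=1$ units in $v_z=k_z/m$):

```python
import math

hbar = 1.054571817e-34   # J s
m_H = 1.6735575e-27      # kg, mass of the hydrogen atom

v_z = 3500.0             # m/s, typical longitudinal beam velocity
k_z = m_H * v_z / hbar   # wave number (v_z = hbar k_z / m in SI units)
lam = 2.0 * math.pi / k_z  # de Broglie wavelength

z_a = 0.66               # m, analysing position
T = z_a / v_z            # flight time from z_0 to z_a
```

This reproduces $k_z\approx5.6\times10^{10}\,\mathrm m^{-1}$, $\lambda\approx1.1\times10^{-10}\,\mathrm m$ and a flight time $z_a/v_z\approx0.2\,\mathrm{ms}$.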
We have $$\begin{aligned} \varphi_\alpha(z)&=\frac{1}{v_z}\int_0^z{\mathrm{d}}z'\,\mathcal V_\alpha(z')\ ,\label{phialphanew}\\ \gamma_\alpha(z)&={\mathrm{i}}\int_{0}^{z}{\mathrm{d}}z'\, {\left(\right.\hspace{-0.5ex}\widetilde{\alpha(z')}\left.\hspace{-0.5ex}\right|{\frac{\partial}{\partial z'}}\left|\right.\hspace{-0.5ex}{\alpha(z')}\left.\hspace{-0.5ex}\right)}\ ,\end{aligned}$$ where ${\left(\right.\hspace{-0.5ex}\widetilde{\alpha(z)}\left.\hspace{-0.5ex}\right|}$ are the local energy left eigenstates. Note that we use a slightly different notation here, as compared to [@BeGaMaNaTr08_I]. To obtain (\[Ualpha\]) from (101)-(103) of [@BeGaMaNaTr08_I] the following replacements have to be made $$\begin{aligned} \begin{split} \phi_{\mathrm{dyn},\alpha}&\to\varphi_\alpha(z_a)\ ,\\ \phi_{\mathrm{geom},\alpha}&\to\gamma_\alpha(z_a)\ . \end{split}\end{aligned}$$ The main quantities of interest to us here are the effective decay rates of the metastable states, see (127) of [@GaNaTr2012], which depend on the path $C$ in parameter space. For a state $\alpha\in I$, these decay rates, multiplied by the flight time $T$ from $z_0$ to $z_a$, are given by $$\begin{aligned} \label{Gammaeff} T\,\Gamma_{\alpha,\mathrm{eff}}(C)=-2\,\mathrm{Im}\,\varphi_\alpha(z_a)+2\,\mathrm{Im}\,\gamma_\alpha(z_a)\ .\end{aligned}$$ The dynamic contribution to $T\,\Gamma_{\alpha,\mathrm{eff}}$ can be written as $$\begin{aligned} -2\,\mathrm{Im}\,\varphi_\alpha(z_a)&=-\frac{2}{v_z}\int_0^{z_a}{\mathrm{d}}z\,\mathrm{Im}\,\mathcal V_\alpha(z){\nonumber}\\ &=\frac{m}{\bar k_m}\int_0^{z_a}{\mathrm{d}}z\,\Gamma_\alpha(z)\label{imphialpha}\end{aligned}$$ and thus depends inversely on $v_z$ and $\bar k_m$, respectively. In (\[imphialpha\]) $m$ denotes the hydrogen mass. In contrast, the geometric contribution in (\[Gammaeff\]), $$\begin{aligned} 2\,\mathrm{Im}\,\gamma_\alpha(z_a)\ ,\end{aligned}$$ is independent of $v_z$. 
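The dynamic contribution (\[imphialpha\]) is simply a line integral of the local decay rate divided by the velocity. A minimal numerical sketch, with a purely hypothetical decay-rate profile $\Gamma(z)$ standing in for the one produced by the actual fields:

```python
import numpy as np

def dynamic_decay_contribution(gamma_of_z, z_grid, v_z):
    """-2 Im phi_alpha(z_a) = (1/v_z) * integral of Gamma_alpha(z) dz,
    cf. Eq. (imphialpha) with m / k_bar = 1 / v_z."""
    return np.trapz(gamma_of_z(z_grid), z_grid) / v_z

z_a, v_z = 0.66, 3500.0
z = np.linspace(0.0, z_a, 2001)

# Hypothetical profile (in s^-1), nonzero only where the fields are on:
gamma = lambda zz: 5.5e4 * np.sin(np.pi * zz / z_a) ** 2

T_gamma_dyn = dynamic_decay_contribution(gamma, z, v_z)
# For this profile the integral is gamma_max * z_a / 2, so
# T_gamma_dyn = 5.5e4 * 0.33 / 3500, of the same order as the
# dynamic term 2 * 2.599 found below for the realistic fields.
```

The $1/v_z$ prefactor makes the dynamic part velocity dependent, whereas the geometric part is not, which is the experimental handle for separating the two.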
This different dependence on $v_z$ allows us to experimentally distinguish between the dynamic and geometric contributions to $T\,\Gamma_{\alpha,\mathrm{eff}}$. For our setup the flight time is $$\begin{aligned} T=\frac{z_a}{v_z}\approx\frac{0.66}{3500}\,\mathrm{s}\approx 0.2\,\mathrm{ms}\ .\end{aligned}$$ Geometric-phase induced lifetime modification {#SectionExamples} --------------------------------------------- ### Exemplary field configuration In the following we shall discuss a concrete example of field configurations (\[E\])-(\[zerofields\]) and their reverse ones, (\[rev\]), and calculate the corresponding effective decay rates of metastable H states. We consider the fields shown in Figure \[example1fields\] (for $s=1$) leading to the path $C$ in parameter space. The magnetic part of $C$ is illustrated in Figure \[example1Bfields\]. We are looking here for a lifetime shift, that is, a parity conserving (PC), or even effect. We will, therefore, in the following and other than in our previous work [@BeGaNa07_I; @BeGaNa07_II; @BeGaMaNaTr08_I; @DeKGaNaTr11; @GaNaTr2012], neglect the very small parity violating (PV) interaction for the hydrogen atom. Hence, in all formulae taken from [@BeGaMaNaTr08_I] and [@GaNaTr2012], we leave out the PV contributions. ![The functions $z\mapsto\mathcal E_1(z)$ and $z\mapsto{\mathbfcal{B}}(s;z)$ with $s=1$, see Appendix \[AppendixField\] for details. Ideally, ${\mathbfcal{B}}^{\mathrm{rev}}(1,z)=-{\mathbfcal{B}}(1;z)$. Regions $\mathrm{I}$ and $\mathrm{II}$ are separated by $z=z_a/2$.[]{data-label="example1fields"}](LifeTimeModification_FieldConfig.pdf){width="0.99\linewidth"} As initial and as analysing state we choose the same superposition of the states 9 and 11: $$\begin{aligned} \label{cp} \begin{split} c_9=c_{11}&=\frac{1}{\sqrt{2}}\ ,\; c_{10}=c_{12}=0\ ;\\ p_9=p_{11}&=\frac{1}{\sqrt{2}}\ ,\; p_{10}=p_{12}=0\ . 
\end{split}\end{aligned}$$ The results shown in the following have been obtained with the help of the numerical software QABSE [@DissTB; @DissMIT]. The exemplary path $C$, which we choose in agreement with Eqs. (\[E\])-(\[zerofields\]), represents an external field configuration with electric field components $\mathcal E_1\not=0$, $\mathcal E_2=\mathcal E_3=0$ and magnetic components $\mathcal B_i\not=0$ ($i=1,2,3$). We consider the case where for $s=1$ we have $$\begin{aligned} \label{symmcond} \begin{split} \mathcal E_1(z)&=\mathcal E_1(z_a-z)\ ,\\ {\mathbfcal{B}}(1;z)&=-{\mathbfcal{B}}(1;z_a-z)\ . \end{split}\end{aligned}$$ That is, we choose $\mathcal E_1(z)$ to be a symmetric function and ${\mathbfcal{B}}(1;z)$ to be an antisymmetric function under a reflection at the point $z=z_a/2$. In Figures \[example1fields\] and \[example1Bfields\] we plot the components of these fields as functions of $z$. These fields are inspired by the realistic design of an actual experimental device, using a fit to calculated and measured field values. The electric field is given in units of V/cm while the magnetic field components are specified in units of $\mu$Tesla. The specific fit functions are listed in Appendix \[AppendixField\]. We emphasise that these realistic fields satisfy the symmetry conditions (\[symmcond\]) only to a certain accuracy. We choose the electric field such that $\mathcal E_1(z)=\mathcal E_1(z_a-z)$. The magnetic field is produced by fixed coils, in the regions I and II of the apparatus, one for $\mathcal B_3$ and one for $\mathcal B_1$ and $\mathcal B_2$. The magnetic fields can be varied by changing the currents through these coils. We illustrate the deviations of our field configuration from the ideal symmetric setup (\[symmcond\]) in Figure \[FieldConfigSymmCond\]. 
In addition to the small violations of (\[symmcond\]) by the fit functions of Appendix \[AppendixField\] we have introduced, by hand, a violation of (\[symmcond\]) by shifting the $z$-component of the magnetic field along the beam axis (dashed line). As a measure of deviation we use ![The path $z\mapsto{\mathbfcal{B}}(1;z)$ in magnetic field space, starting and ending at ${\mathbfcal{B}}={\mathbf{0}}$ for $z=0$ and $z_a$, respectively. The values of $\mathcal B_3(1;z)$ are color-encoded. Also $\mathcal E_1(z)$ varies with $z$ as shown in Figure \[example1fields\] and discussed in the text. The orientation of the path is chosen such that the imaginary parts of the geometric phases are maximised, given the experimental constraints to the magnetic field coils currently available.[]{data-label="example1Bfields"}](LifeTimeModification_projC1.pdf "fig:"){width="0.99\linewidth"}   $$\begin{aligned} \label{deviation} \Delta=\frac{1}{z_a}\int_0^{z_a}{\mathrm{d}}z\,\left\{\sum_{i=1}^3\big[b_i(z)\big]^2\right\}^{1/2}\ ,\end{aligned}$$ where $$\begin{aligned} b_i(z)=\frac{\mathcal B_i(1;z)+\mathcal B_i(1;z_a-z)}{\underset{0\le z\le z_a}{\mathrm{max}}\mathcal B_i(1;z)}\ .\end{aligned}$$ $\Delta$ vanishes if (\[symmcond\]) holds. For the field configuration in Figure \[example1fields\] the deviation (\[deviation\]) turns out to be $\Delta\approx 8.4\%$ and is mainly due to the asymmetry of $\mathcal B_3$. Note that we deliberately choose the deviations (\[deviation\]) here almost an order of magnitude larger than in the actual experiment, in order to demonstrate in the following the robustness of our method to this kind of experimental imperfection. ![Illustration of the deviations of our experimentally motivated field configuration from an ideal configuration satisfying the symmetry conditions (\[symmcond\]). 
The solid lines correspond to the configuration from Figure \[example1fields\] while the dashed lines indicate the reversed fields, with the sign of the magnetic field switched for presentational purposes, [i. e.]{}, $-{\mathbf{\mathcal{B}}}^{\mathrm{rev}}$.[]{data-label="FieldConfigSymmCond"}](LifeTimeModification_FieldConfigCompare.pdf){width="0.99\linewidth"} The reverse (\[rev\]) of the ideal field configuration (\[symmcond\]), for $s=1$, is obtained by leaving the electric field unchanged and reversing the current through the coils generating the magnetic field, $$\begin{aligned} \begin{split}\label{rev3} \mathcal E_1^{\mathrm{rev}}(z)&=\mathcal E_1(z)\ ,\\ {\mathbfcal{B}}^{\mathrm{rev}}(1;z)&=-{\mathbfcal{B}}(1;z)\ . \end{split}\end{aligned}$$ While the parameter space in our example is four-dimensional, spanned by $\mathcal E_1$, $\mathcal B_1$, $\mathcal B_2$, $\mathcal B_3$, we can illustrate the projection of the path into the three-dimensional space of the magnetic fields. Figure \[example1Bfields\] shows this projection of the path $C_s$ (\[path\]) for $s=1$. The corresponding $z$-dependence of $\mathcal E_1(z)$ is as shown in Figure \[example1fields\]. That is, $\mathcal E_1(z)$ starts at zero and is positive when ${\mathbfcal{B}}(z)$ traces out the upper loop in Figure \[example1Bfields\]. After this, $\mathcal E_1(z)$ goes to zero at $z=z_a/2$ before becoming positive again while ${\mathbfcal{B}}(z)$ traces out the lower loop in Figure \[example1Bfields\]. Finally, both $\mathcal E_1(z)$ and ${\mathbfcal{B}}(z)$ go back to zero before ending at $z=z_a$. The evolution of the states in the interferometer should be adiabatic wherever geometric phases are picked up for $0<z<z_a/2$ and $z_a/2<z<z_a$. We have made sure that this is true for all cases considered; see Appendix \[AppendixAdiabat\]. 
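The deviation measure $\Delta$ of (\[deviation\]) is straightforward to evaluate by numerical quadrature. A sketch under two assumptions of ours: the field is sampled on a uniform $z$-grid (so that reversing the sample order realises $z\to z_a-z$), and each component attains a positive maximum; the test field is hypothetical.

```python
import numpy as np

def asymmetry_deviation(B, z_grid, z_a):
    """Delta of Eq. (deviation): quantifies the violation of the antisymmetry
    condition B_i(1; z) = -B_i(1; z_a - z).  B has shape (n_points, 3) and is
    sampled on the uniform grid z_grid; each column must have a positive max."""
    B_mirrored = B[::-1]                       # B_i(1; z_a - z) on a uniform grid
    b = (B + B_mirrored) / np.max(B, axis=0)   # normalised residuals b_i(z)
    integrand = np.sqrt(np.sum(b ** 2, axis=1))
    return np.trapz(integrand, z_grid) / z_a
```

For a perfectly antisymmetric field, e.g. $\mathcal B_i(z)\propto\sin(2\pi z/z_a)$, this returns $\Delta=0$; for the realistic configuration of Figure \[example1fields\] the text quotes $\Delta\approx8.4\%$.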
The point $z=z_a/2$ is special since there we have ${\mathbfcal{E}}=0$ and ${\mathbfcal{B}}=0$ as required in (\[zerofields\]), implying that a degeneracy appears at this point. Making use of the numbering scheme as explained in Appendix A of [@GaNaTr2012] we find that a state with label $\alpha=9$ ($\alpha=11$) entering from $z<z_a/2$ will have the label $\alpha=11$ ($\alpha=9$) for $z>z_a/2$. In this way we make sure that the phases of the states are continuous for $z=z_a/2$ despite their renumbering. In the following we shall, therefore, label the states, energies, etc., with ${{9{;}\hspace{-0.05em}11}}$ and ${{11\hspace{-0.05em}{;}9}}$ where the first/second number corresponds to the label $\alpha$ in the first/second half of the interferometer. Note that for the states $\alpha=10$ and $12$ there is no relabelling at $z=z_a/2$. Note furthermore that, when switching from the path defined by the fields (\[E\]), (\[B\]) to the reverse path (\[rev\]), we have to compare the states ${{9{;}\hspace{-0.05em}11}}$ with ${{11\hspace{-0.05em}{;}9}}$ and, correspondingly, ${{11\hspace{-0.05em}{;}9}}$ with ${{9{;}\hspace{-0.05em}11}}$. This becomes particularly clear if in (\[E\]), (\[B\]) we consider a path with only $\mathcal B_3(s;z)\not=0$, of the form shown in Figure \[example1fields\] and with $\mathcal B^{\mathrm{rev}}_3(s;z)=\mathcal B_3(s;z_a-z)$. The states $\alpha=9$ ($\alpha=11$) are then those with spin parallel (antiparallel) to ${\mathbfcal{B}}$. The renumbering is illustrated in Figure \[spinflip\], for the system in state $\alpha={{9{;}\hspace{-0.05em}11}}$ within a field configuration path $C_s$ and in the corresponding state $\alpha={{11\hspace{-0.05em}{;}9}}$ within $\overline C_s$. ![Illustration of the renumbering of states in the case that only $\mathcal B_3(s;z)\not=0$. The double line arrows indicate the spin directions. 
In (a) the state starting at $z=0$ with label $\alpha=9$ is subject to the path $C_s$ in parameter space and arrives with label $\alpha=11$. In the reverse field configuration (b) the corresponding state to start with has label $\alpha=11$ and is relabeled as $\alpha=9$ for $z>z_{a}/2$.[]{data-label="spinflip"}](LifeTimeModification_Newspinflip.pdf){width="0.99\linewidth"} ### Dynamic and geometric phases The dynamical phases picked up by the states traversing the external field configurations are defined by the $z$-dependencies of their eigenenergies. In Figure \[example1energies\] we show, for $s=1$, the real parts of the energies $E_\alpha(z)$ for $\alpha={{9{;}\hspace{-0.05em}11}},10,{{11\hspace{-0.05em}{;}9}}$, exhibiting the Zeeman- and Stark-shifts according to the fields shown in Figure \[example1fields\]. As we can see from (73) of [@GaNaTr2012] the functional dependence of $E_\alpha(z)$ on the external fields is as follows: $$\begin{aligned} \label{Ealpha} E_\alpha(z)\equiv E_\alpha\big({\mathbfcal{E}}^2(z),{\mathbfcal{B}}^2(z),[{\mathbfcal{E}}(z)\cdot{\mathbfcal{B}}(z)]^2\big)\ .\end{aligned}$$ For our field configurations this can be simplified to $$\begin{aligned} \label{ourfield} E_\alpha(z)=E_\alpha\big([\mathcal E_1(z)]^2,{\mathbfcal{B}}^2(z),[\mathcal E_1(z)\mathcal B_1(z)]^2\big)\ .\end{aligned}$$ We find, therefore, that in the ideal case where (\[rev3\]) holds the eigenenergies are the same, taking $s=1$, for the field path $C_{1}$ and the reverse path $\overline C_1$, $$\begin{aligned} \left.E_\alpha(z)\right|_{C_1}=\left.E_\alpha(z)\right|_{\overline{C}_1}\ .\end{aligned}$$ The same holds for the effective potential $\mathcal V_\alpha(z)$ because the additional geometric contributions are negligible, see (\[ReVE\]) and (\[ImVE\]), $$\begin{aligned} \label{effpotCCbar} \left.\mathcal V_\alpha(z)\right|_{C_1}=\left.\mathcal V_\alpha(z)\right|_{\overline{C}_1}\ .\end{aligned}$$ For the dynamic phases $\varphi_\alpha(z)$ we have, therefore, from 
(\[phialphanew\]) and (\[effpotCCbar\]) again in the ideal case $$\begin{aligned} \label{phiCCbar} \left.\varphi_\alpha(z)\right|_{C_1}=\left.\varphi_\alpha(z)\right|_{\overline{C}_1}\ .\end{aligned}$$ In (\[Ealpha\])–(\[phiCCbar\]) we have $$\begin{aligned} \alpha\in\{{{9{;}\hspace{-0.05em}11}},{{11\hspace{-0.05em}{;}9}},10,12\}\ .\end{aligned}$$ ![The real parts of the energies $E_\alpha(z)$ of the atomic states $\alpha={{9{;}\hspace{-0.05em}11}}$, $10$, and ${{11\hspace{-0.05em}{;}9}}$ in the fields shown in Figure \[example1fields\], with $s=1$.[]{data-label="example1energies"}](LifeTimeModification_Energies_9_10_11.pdf){width="0.99\linewidth"} In Figure \[example1energies\] we show $\mathrm{Re}\,E_\alpha(z)$ for the realistic field configuration of Figure \[example1fields\] where the symmetry relations (\[symmcond\]) hold only approximately. If (\[symmcond\]) held exactly, the red curve ($\alpha={{9{;}\hspace{-0.05em}11}}$) would be the reflection of the blue curve ($\alpha={{11\hspace{-0.05em}{;}9}}$) about $z=z_a/2$. We see that this reflection symmetry holds to a good approximation. The observed asymmetry in Figure \[example1energies\] is caused mainly by the shift of $-\mathcal B_3^{\mathrm{rev}}$ with respect to $\mathcal B_3$, see Figure \[FieldConfigSymmCond\], but does not qualitatively affect the main findings of this work. The asymmetry should rather be regarded as a realistic complication which our methods can easily deal with. The difference ![The real part of the energy difference (\[Ediff\]) for the fields shown in Figure \[example1fields\], with $s=1$. 
The shaded areas indicate the regions of non-zero electric fields.[]{data-label="example1energydifferences"}](LifeTimeModification_EnergySeparation.pdf){width="0.99\linewidth"} ![The imaginary parts of the dynamic phase (\[imphialpha\]), $\mathrm{Im}\,\varphi_\alpha(z)$, as function of $z$, for $s=1$, for the states $\alpha={{9{;}\hspace{-0.05em}11}}$ and ${{11\hspace{-0.05em}{;}9}}$. The field configuration is given in Figure \[example1fields\]. The plot clearly shows where the imaginary parts of the dynamic phases are picked up along the $z$-axis.[]{data-label="example1imag"}](LifeTimeModification_ImDynPhase.pdf){width="0.8\linewidth"} $$\begin{aligned} \label{Ediff} \mathrm{Re}\,\big[E_{{{9{;}\hspace{-0.05em}11}}}(z)-E_{{{11\hspace{-0.05em}{;}9}}}(z)\big]\ ,\end{aligned}$$ again for $s=1$, is shown in Figure \[example1energydifferences\]. The adiabaticity conditions associated with these energy differences can be checked easily; see Appendix \[AppendixAdiabat\]. In Figure \[example1imag\] we show the $z$-dependent imaginary parts of the dynamic phase for the states $\alpha={{9{;}\hspace{-0.05em}11}}$ and ${{11\hspace{-0.05em}{;}9}}$ exposed to the fields in Figure \[example1fields\] where $s=1$. For these fields the imaginary parts of the dynamic phases are, within the accuracy of our numerical calculations, the same for $\alpha={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$. For the reverse field configuration (\[rev\]), again with $s=1$ and in the ideal case where (\[rev3\]) holds, the imaginary parts of the dynamic phases are the same as for the original field configuration; see (\[phiCCbar\]). ![The imaginary part $\mathrm{Im}\,\gamma_\alpha(z)$ of the geometric phase, as function of $z$, for $s=1$, for the state $\alpha={{9{;}\hspace{-0.05em}11}}$ in the field configuration path $C_1$ given in Figure \[example1fields\], and for the state $\alpha={{11\hspace{-0.05em}{;}9}}$ within the reversed configuration $\overline{C}_{1}$. 
The curves are identical for the states $\alpha={{11\hspace{-0.05em}{;}9}}$, with $C_{1}$, and for ${{9{;}\hspace{-0.05em}11}}$, with $\overline{C}_{1}$.[]{data-label="example1imagdiff"}](LifeTimeModification_ImGeom_9.pdf){width="0.82\linewidth"} In Figure \[example1imagdiff\] we show the imaginary parts of the geometric phases, $\mathrm{Im}\,\gamma_{\alpha}(z)$, as functions of $z$ for $\alpha={{9{;}\hspace{-0.05em}11}}$ and the curve $C_{1}$, and for $\alpha={{11\hspace{-0.05em}{;}9}}$ and $\overline{C}_{1}$. A clear difference in $\mathrm{Im}\,\gamma_{\alpha}(z)$ between these two cases can be seen. For the curve $C_{1}$ the results for $\alpha={{9{;}\hspace{-0.05em}11}}$ and ${{11\hspace{-0.05em}{;}9}}$ are the same. This is also the case for the curve $\overline{C}_{1}$. Note that here and in the following we compare $\alpha={{9{;}\hspace{-0.05em}11}}$ (${{11\hspace{-0.05em}{;}9}}$) in the field path $C_{s}$ to $\alpha={{11\hspace{-0.05em}{;}9}}$ (${{9{;}\hspace{-0.05em}11}}$) in the field path $\overline{C}_{s}$, thus taking into account the label change explained in Figure \[spinflip\]. The sign change of $\mathrm{Im}\,\gamma_{\alpha}(z_{a})$ when going from $\alpha={{9{;}\hspace{-0.05em}11}}$ and $C_{1}$ to $\alpha={{11\hspace{-0.05em}{;}9}}$ and $\overline{C}_{1}$ in Figure \[example1imagdiff\] is clear from the property of geometric phases as line integrals. The fact that we have the same result for $\mathrm{Im}\,\gamma_{\alpha}(z_{a})$ for $\alpha={{9{;}\hspace{-0.05em}11}}$ and ${{11\hspace{-0.05em}{;}9}}$ is due to the special configuration of fields chosen; see Figures \[example1fields\] and \[example1Bfields\]. We now turn to the difference of the imaginary parts of the dynamic and geometric phases. For the field configuration of Figure \[example1fields\], corresponding to $s=1$ and the path $C_1$ in parameter space, this difference is shown in Figure \[figInset\] for $\alpha={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$. 
For the path $C_1$ and the reverse path $\overline{C}_1$ we see a clear difference in $\mathrm{Im}\,\varphi_\alpha(z)-\mathrm{Im}\,\gamma_\alpha(z)$. For the effective decay rates multiplied by the flight times, see (\[Gammaeff\]), we get $$\begin{aligned} \label{example1Gammaeff} T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)&=T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(C_1) \nonumber\\ &=\left.\left(-2\mathrm{Im}\,\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)+2\mathrm{Im}\,\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\right)\right|_{C_1} \nonumber\\ &=2(2.599+0.0139)\ , \nonumber\\[0.6ex] T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(\overline{C}_1)&=T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(\overline{C}_1) \nonumber\\ &=\left.\left(-2\mathrm{Im}\,\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)+2\mathrm{Im}\,\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\right)\right|_{\overline{C}_1} \nonumber\\ &=2(2.599-0.0139)\ , $$ if the symmetry condition (\[symmcond\]) is satisfied. The latter implies that a maximum revival, that is, a spin echo, can be observed at $s=1$, and the maxima of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ are both found at $s=1$. Furthermore, the same decay rates (\[example1Gammaeff\]) are obtained for atomic states initially prepared in any superposition of $\alpha=9$ and $\alpha=11$. 
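The decay rates (\[example1Gammaeff\]) translate directly into the relative flux change between the two paths. As a minimal numerical cross-check (plain Python; the variable names are ours), note that the common dynamic part cancels in the ratio of the surviving fluxes:

```python
import math

# T*Gamma_eff for the path C_1 and the reversed path Cbar_1,
# taken from (example1Gammaeff)
t_gamma_C = 2 * (2.599 + 0.0139)
t_gamma_Cbar = 2 * (2.599 - 0.0139)

# Ratio of surviving fluxes: the common factor exp(-2*2.599) cancels,
# leaving only the geometric-phase contribution exp(4*0.0139)
R = math.exp(-t_gamma_Cbar) / math.exp(-t_gamma_C)
print(round(R, 3))  # → 1.057
```

The lifetime effect of $5$ to $6\%$ thus stems entirely from the imaginary geometric phase $0.0139$.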
From the values (\[example1Gammaeff\]) we obtain the ratio $R_\alpha$ of the fluxes of metastable hydrogen atoms for state $\alpha={{11\hspace{-0.05em}{;}9}}$ on the field path $\overline{C}_{1}$ and for state $\alpha={{9{;}\hspace{-0.05em}11}}$ on the path ${C}_1$ as $$\begin{aligned} \label{R} R_{{{9{;}\hspace{-0.05em}11}}}=\frac{\exp[-T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)]}=1.057\ .\end{aligned}$$ Similarly we get $$\begin{aligned} \label{Ra} R_{{{11\hspace{-0.05em}{;}9}}}=\frac{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(C_1)]}=1.057\ .\end{aligned}$$ ![Combining the data shown in Figures \[example1imag\] and \[example1imagdiff\], we depict $\mathrm{Im}\,\varphi_\alpha(z)-\mathrm{Im}\,\gamma_\alpha(z)$ for both the states $\alpha={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$ in $C_{1}$, together with the data obtained with the reversed path $\overline{C}_1$. The differences $\mathrm{Im}\,\varphi_\alpha(z_a)-\mathrm{Im}\,\gamma_\alpha(z_a)$ at the end of the interferometer are used in (\[example1Gammaeff\]) to extract the lifetime modification (\[R\]), (\[Ra\]).[]{data-label="figInset"}](LifeTimeModification_ImDyn-ImGeom_Inset.pdf){width="0.92\linewidth"} ![The difference $\mathrm{Im}\,\varphi_{{{9{;}\hspace{-0.05em}11}}}(z)-\mathrm{Im}\,\varphi_{{{9{;}\hspace{-0.05em}11}},\mathrm{rev}}(z)$ as a function of $z$, using the field configuration $C_1$ in Figure \[example1fields\] and its reverse $\overline C_1$. This difference vanishes for fields obeying the symmetry condition (\[symmcond\]) and is a measure of the violation of (\[phiCCbar\]). Since for spin echo signals, however, only the value at $z=z_a$ enters, the violation of (\[phiCCbar\]) is of no concern here.
This makes our method rather robust with respect to imperfections in the experimental field configurations of the type (\[deviation\]).[]{data-label="figimDynDiff9"}](LifeTimeModification_ImDyn-ImDynRev_9.pdf){width="0.92\linewidth"} We expect the effect on the atomic lifetimes, which is at the level of more than $5\%$, to be accessible in a realistic experiment. However, $R_\alpha$ in (\[R\]), (\[Ra\]) is an appropriate measure of geometric lifetime modification only if a *symmetric* field configuration according to (\[symmcond\]) is given. As we shall see below, the maxima of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ are in general found at different values of $s$ if (\[symmcond\]) is not satisfied exactly. Although the norm of the atomic states $\alpha={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$ decays as obtained from (\[example1Gammaeff\]), an initial superposition of $\alpha={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$ travelling through an asymmetric field configuration leads to interference patterns for which the maximal revival of the initial state is not reached at $s=1$. If the deviation from (\[symmcond\]) were large enough, even completely destructive interference could be observed, misleadingly indicating large decay rates. Therefore, we cannot extract the lifetime modification for our slightly asymmetric realistic fields by only comparing $\mathcal F_p(C_1)$ and $\mathcal F_p(\overline C_1)$. Deviations from the symmetry conditions (\[symmcond\]) occurring in realistic situations, however, do not affect the spin-echo measurements we are proposing here. To demonstrate this we show in Figure \[figimDynDiff9\] the difference of the imaginary parts of $\varphi_{{{9{;}\hspace{-0.05em}11}}}(z)$ and $\varphi_{{{9{;}\hspace{-0.05em}11}},\mathrm{rev}}(z)$ for $s=1$ where the reversed fields are the realistic ones fulfilling (\[symmcond\]) only approximately; see Figure \[FieldConfigSymmCond\].
We see that $\mathrm{Im}\,\varphi_{{{9{;}\hspace{-0.05em}11}}}(z)-\mathrm{Im}\,\varphi_{{{9{;}\hspace{-0.05em}11}},\mathrm{rev}}(z)$ *is* different from zero, but for $z=z_a$ the difference vanishes, since the integral over both regions I and II in (\[deviation\]) is the same. For our lifetime measurements only the value of these imaginary parts at $z=z_a$ matters and, therefore, our results (\[example1Gammaeff\]), (\[R\]) and (\[Ra\]) hold unchanged also for our realistic case where (\[symmcond\]) is satisfied only approximately. ### Spin-echo measurement procedure We now turn to the actual measurement to be done with the spin-echo apparatus in order to extract the lifetime differences calculated above. A direct measurement of (\[R\]), (\[Ra\]) with the spin-echo field configuration in Figure \[example1fields\] is possible by starting with hydrogen in the state $\alpha=9$ and projecting onto $\alpha=11$, [i. e.]{}, $c_9=p_{11}=1$. The results obtained should then be compared to the case with reversed fields, starting with state $\alpha=11$ and projecting onto $\alpha=9$ at $z=z_{a}$. Note the change of labeling of the states at $z=z_a/2$; see Figure \[spinflip\] and the discussion after (\[rev3\]). However, aiming at an actual spin-echo measurement, we propose to choose identical initial and analysing states, [i. e.]{}, the superpositions in (\[cp\]). Varying $s$, we obtain the spin-echo curves shown in Figures \[example1zeroE1SpinechoANDrev\] and \[example1SpinechoANDrev\]. These plots conveniently illustrate how lifetime modifications through geometric phases can be observed experimentally. The magnitude of this effect can be easily extracted by comparing the amplitudes of the spin-echo curves measured for $C_s$ and $\overline C_s$ as we discuss in more detail in the following.
![Spin echo integrated-flux curves for the paths $C_s$ (red solid line) and $\overline C_s$ (blue dotted line) using (\[cp\]) and the field configuration in Figure \[example1fields\], but with the electric field set to zero. Experimentally, $s$ can be varied by varying the current through the coil which generates the $\mathcal B_3$-field in the second (first) half of the interferometer for $C_s$ ($\overline C_s$). The vertical dashed line marks $s=1$. Without electric field the decay of the metastable states is negligible, and the spin echoes reach almost unit amplitude for several values of $s$. However, the amplitudes also depend on the real parts of the $s$-dependent dynamic and geometric phases, as does the separation of the maxima along the $s$-axis.[]{data-label="example1zeroE1SpinechoANDrev"}](LifeTimeModification_SpinEchoPuzzle_zeroE1.pdf){width="0.86\linewidth"} ![Spin echo curves for the paths $C_s$ (red solid line) and $\overline C_s$ (blue dotted line) using (\[cp\]) as in Figure \[example1zeroE1SpinechoANDrev\], but with the electric field turned on. The vertical dashed line marks $s=1$. The electric field results in decreased amplitudes of the spin echo curves, but the general shapes of the interference patterns are unchanged, cf. Figure \[example1zeroE1SpinechoANDrev\]. However, the presence of the electric field allows for the imaginary geometric phase to emerge after the closed path shown in Figure \[example1Bfields\] has been traced out in parameter space, resulting in different values of the heights of maxima when comparing $C_s$ with $\overline C_s$. See Figure \[example1SpinechoANDrevs1\] for an enlarged display of the region around $s=1$.[]{data-label="example1SpinechoANDrev"}](LifeTimeModification_SpinEchoPuzzle.pdf){width="0.81\linewidth"} Figure \[example1SpinechoANDrevs1\] shows the behaviour of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ near $s=1$ on an enlarged scale.
The lifetime differences due to the differing imaginary parts of the geometric phases for $C_1$ and $\overline C_1$ cause different spin echo curves for $C_1$ and $\overline C_1$. Note, however, that for a quantitative analysis we also have to take into account the real parts of the dynamic and geometric phases, as will be explained below. ![Magnification of the spin echo integrated-flux curves shown in Fig. \[example1SpinechoANDrev\], for the paths $C_s$ (red solid line) and $\overline C_s$ (blue dotted line) using (\[cp\]) near $s=1$. The differing lifetimes specified in (\[example1Gammaeff\]) show up as different values of the spin echo fluxes for $C_1$ and $\overline C_1$, respectively. However, while the reversed path gives a lower decay rate than the red path at $s=1$, the spin echo signal is not determined by this amplitude alone, and we have to take into account the frequency of oscillation due to the cosine in (\[Fpapprox\]).[]{data-label="example1SpinechoANDrevs1"}](LifeTimeModification_SpinEchoPuzzle_s1.pdf){width="0.9\linewidth" height="0.65\linewidth"} As our main result we predict that the amplitudes of the spin-echo signals obtained for $C_s$ and $\overline C_s$ differ due to imaginary geometric phases, and that this difference is large enough to be experimentally accessible. The effect is extracted from the main features of the interference patterns $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$, with and without the electric field component $\mathcal E_1$ as shown in Figure \[example1fields\]. Comparing Figures \[example1zeroE1SpinechoANDrev\] and \[example1SpinechoANDrev\], we observe a decreased amplitude as the most pronounced effect of the electric field, while the phase of the interference patterns is not visibly affected, [i. e.]{}, the electric field has negligible influence on the real parts of the geometric phases.
The frequencies of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ in Figure \[example1zeroE1SpinechoANDrev\] with respect to $s$ are distinctly different, and both are $s$-dependent. As we will discuss in the following, the behaviour of $\mathcal F_p$ as a function of $s$ is easily understood in terms of the $s$-dependent phases since the field configuration in Figure \[example1fields\] allows for simplifications of the general expression (\[Fp\]). It will become clear that the different $s$-dependences of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ result from an interference effect involving the real parts of the geometric phases, while the different values of the maxima of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ originate mainly from the differences in the imaginary parts of the geometric phases. As illustrated in Figure \[SpinEchoPuzzle20141121data2\], the approximation $$\begin{aligned} \exp\big[-(\Delta\tau_\beta-\Delta\tau_\alpha)^2/(8\sigma'^2_k)\big]\approx 1\end{aligned}$$ holds at the percent level. Here $\Delta\tau_\alpha$ and $\sigma'_k$ are the shifts of the reduced arrival times and the momentum-space widths of the wave packets defined in (99) and (86) of Ref. [@BeGaMaNaTr08_I], respectively. Furthermore, $$\begin{aligned} \label{eq45} \mathrm{Im}\,\big(\varphi_{{{9{;}\hspace{-0.05em}11}}}-\gamma_{{{9{;}\hspace{-0.05em}11}}}\big)\approx \mathrm{Im}\,\big(\varphi_{{{11\hspace{-0.05em}{;}9}}}-\gamma_{{{11\hspace{-0.05em}{;}9}}}\big)\end{aligned}$$ holds at the per-mille level.
Hence, the flux (\[Fp\]) can be approximated by $$\begin{aligned} \label{Fpapprox} \mathcal F_p(C_s)\approx\ &\frac12\exp\big\{2\,\mathrm{Im}\,\big[\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\big\}{\nonumber}\\ \times\big(1&+\cos\big\{\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]{\nonumber}\\ &\left.-\ \mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\big\}\big)\right|_{C_s}\end{aligned}$$ with a similar expression for $\mathcal F_p(\overline C_s)$, $$\begin{aligned} \label{FpapproxBar} \mathcal F_p(\overline C_s)\approx\ &\frac12\exp\big\{2\,\mathrm{Im}\,\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)\big]\big\}{\nonumber}\\ \times\big(1&+\cos\big\{\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]{\nonumber}\\ &\left.-\ \mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\big\}\big)\right|_{\overline C_s}\,;\end{aligned}$$ see Section 5.4 of [@DissMIT]. In (\[FpapproxBar\]) we again make use of (\[eq45\]) but we now write $\mathrm{Im}(\varphi-\gamma)$ with index ${{11\hspace{-0.05em}{;}9}}$ to recall the label change when going over from the curve $C_{s}$ to the reversed curve $\overline{C}_{s}$; see Figure \[spinflip\]. The functions occurring in (\[Fpapprox\]) and (\[FpapproxBar\]) have been calculated for the realistic field configurations of Figure \[example1fields\] and are shown in Figures \[SpinEchoPuzzle20141121data\] and \[SpinEchoPuzzle20141121reverseddata\] for $C_s$ and $\overline C_s$, respectively.
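To make the structure of (\[Fpapprox\]) and (\[FpapproxBar\]) concrete, the following sketch evaluates both expressions at $s=1$, using the values quoted in the text, namely $\mathrm{Im}\,[\varphi-\gamma](z_a)=-(2.599\pm0.0139)$ from (\[example1Gammaeff\]) together with $c_\varphi\approx1.079$ and $c_\gamma\approx-\overline c_\gamma\approx0.022$; the function and variable names are ours:

```python
import math

def flux_approx(im_phi_minus_gamma, re_dphi, re_dgamma):
    """Approximate integrated flux, cf. (Fpapprox):
    F_p ~ 1/2 * exp(2*Im[phi-gamma]) * (1 + cos(Re[dphi] - Re[dgamma]))."""
    return 0.5 * math.exp(2 * im_phi_minus_gamma) * (
        1 + math.cos(re_dphi - re_dgamma))

# s = 1: Im[phi-gamma](z_a) = -(2.599 + 0.0139) on C_1 and
# -(2.599 - 0.0139) on the reversed path Cbar_1
f_C = flux_approx(-(2.599 + 0.0139), 1.079, 0.022)
f_Cbar = flux_approx(-(2.599 - 0.0139), 1.079, -0.022)

# f_C ≈ 4.01e-3, f_Cbar ≈ 4.13e-3 and f_Cbar/f_C ≈ 1.03, reproducing
# the values quoted in (Rs1) up to the rounding of the inputs
print(f_C, f_Cbar, f_Cbar / f_C)
```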
The results are close to fulfilling the symmetry relations $$\begin{aligned} &\left.\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{C_s}{\nonumber}\\ &\qquad=\left.\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{\overline C_s}\ ,\label{44a}\\ &\left.\mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{C_s}{\nonumber}\\ &\qquad=-\left.\mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{\overline C_s}\ .\label{44b}\end{aligned}$$ In the ideal case where (\[symmcond\]) holds we would also expect $$\begin{aligned} &\left.\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{C_1}{\nonumber}\\ &\qquad=\left.\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{\overline C_1}=0\ ,\label{44c}\\ &\left.\mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{C_1}{\nonumber}\\ &\qquad=\left.\mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{\overline C_1}=0\ .\label{44d}\end{aligned}$$ ![The relevant quantities that allow for the approximation (\[Fpapprox\]) of (\[Fp\]), given as functions of $s$ for the field configuration in Figure \[example1fields\] and its reverse. The black dotted line shows $\exp [-(\Delta\tau_\beta-\Delta\tau_\alpha)^2/(8\sigma'^2_k)]-1$ for $C_s$ and $\beta={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$. 
The curves for $\beta={{11\hspace{-0.05em}{;}9}}$ and $\alpha={{9{;}\hspace{-0.05em}11}}$ as well as for the reversed path $\overline C_s$ are the same.[]{data-label="SpinEchoPuzzle20141121data2"}](LifeTimeModification_SpinEchoPuzzle_data2.pdf "fig:"){width="0.9\linewidth"}   ![The real and imaginary parts of all combinations of phase differences that can occur between the atomic states ${{9{;}\hspace{-0.05em}11}}$ and ${{11\hspace{-0.05em}{;}9}}$, using the approximate expression (\[Fpapprox\]) for $\mathcal F_p(C_s)$. The black dotted line shows $-(\Delta\tau_\beta-\Delta\tau_\alpha)^2/(8\sigma'^2_k)$ for $C_s$ and $\beta={{9{;}\hspace{-0.05em}11}}$ and $\alpha={{11\hspace{-0.05em}{;}9}}$. For our realistic field configurations $\varphi_{{9{;}\hspace{-0.05em}11}}(z_{a})=\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_{a})$ holds only approximately. The $s$-dependent deviations of the dynamic phases from the spin echo point $\varphi_{{9{;}\hspace{-0.05em}11}}(z_{a})=\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_{a})$ lead to the specific interference patterns in Figures \[example1zeroE1SpinechoANDrev\] and \[example1SpinechoANDrev\].[]{data-label="SpinEchoPuzzle20141121data"}](LifeTimeModification_SpinEchoPuzzle_data.pdf){width="0.865\linewidth"} ![The relevant quantities that compose the approximate expression (\[Fpapprox\]) of $\mathcal F_p(\overline C_s)$.[]{data-label="SpinEchoPuzzle20141121reverseddata"}](LifeTimeModification_SpinEchoPuzzle_reversed_data.pdf){width="0.9\linewidth"} We see from Figures \[SpinEchoPuzzle20141121data\] and \[SpinEchoPuzzle20141121reverseddata\] that for realistic fields, the symmetry relations (\[44a\]), (\[44b\]) and (\[44d\]) are rather well satisfied, but (\[44c\]) not so well. 
We shall now expand the relevant functions around $s=1$: $$\begin{aligned} &\left.\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{C_s}{\nonumber}\\ &\qquad=c_\varphi+m_\varphi\,(s-1)+\dots\ ,\\ &\left.\mathrm{Re}\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{\overline C_s}{\nonumber}\\ &\qquad=\overline c_\varphi+\overline m_\varphi\,(s-1)+\dots\ ,\\ &\left.\mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{C_s}{\nonumber}\\ &\qquad=c_\gamma+m_\gamma\,(s-1)+\dots\ ,\\ &\left.\mathrm{Re}\big[\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\right|_{\overline C_s}{\nonumber}\\ &\qquad=\overline c_\gamma+\overline m_\gamma\,(s-1)+\dots\ .\end{aligned}$$ From Figures \[SpinEchoPuzzle20141121data\] and \[SpinEchoPuzzle20141121reverseddata\] we take that $$\begin{aligned} \label{45e} \begin{split} c_\varphi&\approx\overline c_\varphi\ ,\\ m_\varphi&\approx\overline m_\varphi\ ,\\ m_\gamma&\approx-\overline m_\gamma\ ,\\ m_\varphi&>m_\gamma>0\ , \end{split}\end{aligned}$$ and $$\begin{aligned} \label{45ee} \begin{split} c_\gamma\approx\overline c_\gamma\approx 0\ . 
\end{split}\end{aligned}$$ Keeping only the constant terms and those linear in $s-1$, which is a valid approximation when $|s-1|\lesssim0.3$, we can approximate the cosine in (\[Fpapprox\]), for $C_s$, as $$\begin{aligned} \label{cos1} \cos\big[c_\varphi-c_\gamma+(m_\varphi-m_\gamma)(s-1)\big]\end{aligned}$$ and in (\[FpapproxBar\]), for $\overline C_s$, as $$\begin{aligned} \label{cos2} \cos\big[\overline c_\varphi-\overline c_\gamma+(\overline m_\varphi-\overline m_\gamma)(s-1)\big]\ .\end{aligned}$$ Let us first consider $s=1$ for which we get, from (\[Fpapprox\]), (\[FpapproxBar\]), (\[45e\]) and (\[45ee\]), $$\begin{aligned} \mathcal F_p(C_1)&\approx\left.\frac12\exp\big\{2\,\mathrm{Im}\,\big[\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\big\}\right|_{C_1}{\nonumber}\\ &\quad\times\big[1+\cos(c_\varphi)\big]\ ,\\ \mathcal F_p(\overline C_1)&\approx\left.\frac12\exp\big\{2\,\mathrm{Im}\,\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)\big]\big\}\right|_{\overline C_1}{\nonumber}\\ &\quad\times\big[1+\cos(\overline c_\varphi)\big]\ ,\end{aligned}$$ and for their ratio, using (\[example1Gammaeff\]), (\[R\]), (\[Ra\]), and $c_\varphi\approx\overline c_\varphi$, $$\begin{aligned} \label{R2} \frac{\mathcal F_p(\overline C_1)}{\mathcal F_p(C_1)} &\approx\ \frac{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)]} \nonumber\\ &=\ \frac{\exp[-T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)]} =1.057\ .\end{aligned}$$ The discrepancy between this value for the quotient and $$\begin{aligned} \label{Rs1} \frac{\mathcal F_p(\overline C_1)}{\mathcal F_p(C_1)}\approx\frac{4.1271}{4.0091}\approx1.0294\ ,\end{aligned}$$ extracted from Figure \[example1SpinechoANDrevs1\], is due to the violation of (\[45ee\]) by our realistic 
field configuration, $$\begin{aligned} c_\gamma\approx-\overline c_\gamma\approx0.022\ .\end{aligned}$$ With $c_\varphi\approx\overline c_\varphi\approx1.079$ (see Figures \[SpinEchoPuzzle20141121data\] and \[SpinEchoPuzzle20141121reverseddata\]) we find, using (\[Fpapprox\]), that $$\begin{aligned} \frac{\mathcal F_p(\overline C_1)}{\mathcal F_p(C_1)}&\approx\frac{\exp[-T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)]}\frac{1+\cos(\overline c_\varphi-\overline c_\gamma)}{1+\cos(c_\varphi-c_\gamma)}{\nonumber}\\ &\approx1.0295\ ,\end{aligned}$$ consistent with (\[Rs1\]). This observation underpins the necessity to measure the spin-echo curves for realistic field configurations over a sufficiently large $s$-range. In a follow-up experiment it will be necessary to make fits to $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ and extract the imaginary parts of the geometric phases from these. We now show that, [e. g.]{}, the heights of the maxima of the spin-echo curves in Figure \[example1SpinechoANDrev\] can be used for this purpose.
With (\[45e\])–(\[cos2\]) we get, for $|s-1|\lesssim0.3$, the approximate expressions $$\begin{aligned} &\mathcal F_p(C_s)\approx\left.\frac12\exp\big\{2\,\mathrm{Im}\,\big[\varphi_{{{9{;}\hspace{-0.05em}11}}}(z_a)-\gamma_{{{9{;}\hspace{-0.05em}11}}}(z_a)\big]\big\}\right|_{C_s}{\nonumber}\\ &\times\big\{1+\cos\big[c_\varphi-c_\gamma+(m_\varphi-m_\gamma)(s-1)\big]\big\}\ ,\label{46c}\\ &\mathcal F_p(\overline C_s)\approx\left.\frac12\exp\big\{2\,\mathrm{Im}\,\big[\varphi_{{{11\hspace{-0.05em}{;}9}}}(z_a)-\gamma_{{{11\hspace{-0.05em}{;}9}}}(z_a)\big]\big\}\right|_{\overline C_s}{\nonumber}\\ &\times\big\{1+\cos\big[c_\varphi-\overline c_\gamma+(m_\varphi+m_\gamma)(s-1)\big]\big\}\ .\label{46d}\end{aligned}$$ With $m_\varphi>m_\gamma>0$, see (\[45e\]), we find that, near $s=1$, $\mathcal F_p(\overline C_s)$ should oscillate with higher frequency than $\mathcal F_p(C_s)$. We see from Figure \[example1SpinechoANDrev\] that this is indeed the case. At the maxima of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ that are the nearest to $s=1$ the cosines in (\[46c\]) and (\[46d\]) are equal to $1$ and the ratio of the fluxes is determined by the effective lifetimes of the states. We find these maxima for $\mathcal F_p(C_s)$ ($\mathcal F_p(\overline C_s)$) for $s=0.917$ ($s=0.945$) and $s=1.342$ ($s=1.253$). 
For the ratios of the fluxes we get $$\begin{aligned} \frac{\exp[-T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)]}\approx\frac{\mathcal F_p(\overline C_{0.945})}{\mathcal F_p(C_{0.917})}=1.060\ ,\label{R1}\\ \frac{\exp[-T\,\Gamma_{{{11\hspace{-0.05em}{;}9}},\mathrm{eff}}(\overline{C}_1)]}{\exp[-T\,\Gamma_{{{9{;}\hspace{-0.05em}11}},\mathrm{eff}}(C_1)]}\approx\frac{\mathcal F_p(\overline C_{1.253})}{\mathcal F_p(C_{1.342})}=1.051\ .\label{R1b}\end{aligned}$$ As argued above, small uncertainties and asymmetries in a realistic experimental setup can lead to shifts and distortions of the spin-echo curves as compared to ideally symmetric field configurations. To extract the changes in atomic lifetimes at a desired confidence level, it is therefore in general not sufficient to measure the flux of atoms for a single value of $s$. We rather have to determine the spin-echo curves within a range of $s$ that includes the maxima of $\mathcal F_p(C_s)$ around $s=1$ and then invoke the same procedure for $\mathcal F_p(\overline C_s)$. Between $s=0.8$ and $s=1.4$ the two maxima of $\mathcal F_p(C_s)$ have approximately the same values, and the same holds for the maxima of $\mathcal F_p(\overline C_s)$. Therefore, both the maxima for $s>1$ and those for $s<1$ serve to determine the geometry-induced relative changes in atomic lifetimes within the range $0.8\lesssim s\lesssim 1.4$. We can regard the difference between (\[R1\]) and (\[R1b\]) as a rough measure of the uncertainty of our prediction for the geometric lifetime effects given the imperfections of a realistic field configuration. For other field configurations the quantities entering in $\mathcal F_p$ have to be investigated analogously to determine whether the lifetime modifications can be extracted from the maxima of the spin-echo curves.
![The ratios $R_1=\mathcal F_p(\overline C_{0.945})/\mathcal F_p(C_{0.917})$ and $R_2=\mathcal F_p(\overline C_{1.253})/\mathcal F_p(C_{1.342})$ as well as the flux $\mathcal F_p(C_{0.917})$ as a function of max$\,\mathcal E_1$. The electric field in Figure \[example1fields\] corresponds to max$\,\mathcal E_1=6\,$V/cm. At max$\,\mathcal E_1=0$ we find $R_1\approx1$. There, the corresponding maxima $\mathcal F_p(\overline C_{0.945})$ and $\mathcal F_p(C_{0.917})$ of the spin-echo curves, separated by $\Delta s\approx0.028$ and relatively close to $s=1$, are practically of the same height. This is not the case for $R_2$ due to the fact that the maxima are found at $s$-values which differ significantly more, $\Delta s\approx0.089$; cf. Figure \[example1zeroE1SpinechoANDrev\]. We see here that comparison of fluxes at different values of $s$ that are separated by large $\Delta s$ can increasingly mask the effect of the geometric phases on the decay rates. At which values of $s$ we can find the best estimate of the geometry-induced lifetime modification depends on the position of the maxima relative to $s=1$.[]{data-label="RampUpE1"}](LifeTimeModification_RampUpE1.pdf){width="0.99\linewidth"} We now study the dependence of the geometric lifetime effects on the applied electric field component $\mathcal{E}_{1}$. In the range where $\mathcal{E}_{1,\mathrm{max}}\le7.2\,$V/cm we find that the maxima of the spin-echo curves, both for $\mathcal{F}_{p}(C_{s})$ and $\mathcal{F}_{p}(\overline C_{s})$, remain essentially at the same values of $s$ as extracted from Figure \[example1SpinechoANDrev\]. We therefore show, in Figure \[RampUpE1\], the ratios $$\begin{aligned} R_1=\frac{\mathcal F_p(\overline C_{0.945})}{\mathcal F_p(C_{0.917})}\end{aligned}$$ and $$\begin{aligned} R_2=\frac{\mathcal F_p(\overline C_{1.253})}{\mathcal F_p(C_{1.342})}\end{aligned}$$ at the same values of $s$ as in (\[R1\]) and (\[R1b\]), respectively. 
We also show the absolute value of the spin-echo signal $\mathcal F_p(C_{0.917})$ as a function of the magnitude of $\mathcal E_1$. The field $\mathcal E_1$ is scaled such that its maximum ranges between $0$ and $7.2\,$V/cm. The ratio $R_1$ increases further for electric fields larger than $\mathcal E_{1,\mathrm{max}}=6\,$V/cm, but at the expense of the count rate, which is proportional to $\mathcal F_p(C_{0.917})$. We chose $\mathcal E_{1,\mathrm{max}}=6\,$V/cm, see Figure \[example1fields\], as a reasonable compromise between the observable relative effect on the lifetimes and experimental feasibility. The measurement of $\mathcal F_p$ can be considered as the measurement of a random variable $\xi$ taking on two values. We set $\xi=1$ if an atom is detected at $z=z_{a}$ and $\xi=0$ if no atom is detected. In the latter case the atom may have decayed before arriving at $z_{a}$ or it may be in a state orthogonal to the analysing state ${|p)}$ at $z_{a}$; cf. (\[palpha\]). Suppose now that we start with one atom at $z=0$. Then the probability to get $\xi=1$ is given by $\mathcal F_p$ and the probability to get $\xi=0$ by $1-\mathcal F_p$. Thus, we have for the expectation value and the variance of $\xi$ $$\begin{aligned} E_{1}(\xi) &= \mathcal F_p\,,{\nonumber}\\ \mathrm{Var}_{1}(\xi) &= E_{1}(\xi^{2})-[E_{1}(\xi)]^{2}{\nonumber}\\ &=\mathcal F_p-\mathcal F_p^{2}=\mathcal F_p(1-\mathcal F_p)\, . \end{aligned}$$ Next we suppose that we start with $N$ atoms. Then we get for the average $\bar\xi$ the following expectation value and variance: $$\begin{aligned} E_{N}(\bar\xi) &= \mathcal F_p\,,{\nonumber}\\ \mathrm{Var}_{N}(\bar\xi) &= \frac1N\mathrm{Var}_{1}(\xi)= \frac1N\mathcal F_p(1-\mathcal F_p)\, .
\end{aligned}$$ If we want to measure $\mathcal F_p$ with a relative accuracy $\delta$ we should achieve $$\begin{aligned} [\mathrm{Var}_{N}(\bar\xi)]^{1/2} <\delta E_{N}(\bar\xi)\, , \end{aligned}$$ that is, $$\begin{aligned} \frac1N\mathcal F_p(1-\mathcal F_p)<\delta^{2}\mathcal F_p^{2}\, . \end{aligned}$$ This requires the number of atoms to obey $$\begin{aligned} N>\frac{1}{\delta^{2}}\left(\frac{1}{\mathcal F_p}-1\right)\, . \label{Nmindelta}\end{aligned}$$ We consider, as a representative value of $\mathcal F_p$, half the maximum value $\mathcal F_p(C_{0.917})$ at $\mathrm{max}\,\mathcal E_{1}=6\,$V/cm. From Figure \[RampUpE1\] we then find $\mathcal F_p(C_{0.917})/2\approx2.7\times10^{-3}$. For a $0.5\%$ measurement of this value of $\mathcal F_p$, condition (\[Nmindelta\]) requires us to work with $$\begin{aligned} N>1.5\times10^{7}\, \end{aligned}$$ atoms to obtain one data point on the spin-echo curve. To measure the complete spin-echo curves we will demand $100$ data points for each of $C_{s}$ and $\overline C_{s}$. Hence, the total number of atoms needed is $N>3\times10^{9}$. With the corresponding accuracy of $0.5\%$ per data point of $\mathcal F_{p}$ on the spin-echo curves[^2] it should be possible to obtain an accuracy of $10\%$ for our geometric lifetime effect, which is of the order of $5$ to $6\%$. Conclusions ----------- In this article we calculate the lifetime modification of metastable states of hydrogen due to geometric phases. A geometry-induced modification of atomic decay rates has not been observed experimentally thus far. In addition to imaginary *dynamic* phases, which emerge in an effective description of decaying atomic states travelling adiabatically through electromagnetic fields, the hydrogen state vectors acquire imaginary *geometric* phases in suitable chiral electromagnetic field configurations.
We use the time evolution of a superposition of metastable states propagating in a field configuration which is based on realistic experimental conditions to compute the flux of atoms arriving at the detector of a longitudinal atomic-beam spin-echo apparatus. We analyse the relevant quantities entering the description of the propagating atomic wave packet, in particular the dynamic and geometric phases, and propose a realistic scheme to observe the change of lifetimes experimentally. We ensure adiabatic evolution in spatial regions where geometric phases for the hydrogen state vectors emerge. We vary the field configuration to obtain spin-echo curves which are conveniently accessible in experiment. We show in detail how to extract the geometry-induced change of lifetime from the maxima of the spin-echo curves and estimate the necessary number of metastable atoms to be $4\times10^{9}$ for a statistically significant measurement. We find that the lifetime is modified at the level of $5\%$ due to geometric phases. We estimate that this effect is large enough to be observed under realistic experimental conditions. Appendix {#appendix .unnumbered} ======== Conditions for adiabatic evolution of the states {#AppendixAdiabat} ------------------------------------------------ Employing the field configuration from Figure \[example1fields\] with $s=1$, we find that the adiabaticity conditions (B.16) and (B.22) from [@BeGaNa07_II] for the field variations are satisfied. We get $$\begin{aligned} \frac{1}{\mathcal E_0}\underset{t\in [0,T]}{\mathrm{max}}\left|\frac{\partial{\mathbfcal{E}}}{\partial t}\right|&<\frac{1}{T}\longleftrightarrow 0.69\lesssim 1\,,\\ \frac{1}{\mathcal B_0}\underset{t\in[0,T]}{\mathrm{max}}\left|\frac{\partial{\mathbfcal{B}}}{\partial t}\right|&<\frac{1}{T}\longleftrightarrow 0.1\lesssim 1\,, \end{aligned}$$ with $\mathcal E_0=477.3\,$V/cm, $\mathcal B_0=43.65\,$mT, $T=(z_a-z_0)/v_z$. 
Wherever geometric phases emerge along the $z$-axis the energy separation $\Delta E$ between the involved states is large enough for adiabatic evolution: $$\begin{aligned} \label{adiabE} \Delta z\gtrsim\frac{h\,v_z}{\Delta E}\longleftrightarrow 90\,\mathrm{mm}\gg 19\,\mathrm{mm}\ ; \end{aligned}$$ see (27) from [@BeGaMaNaTr08_I]. For $\Delta z$ we have $90\,$mm from the fields of Figure \[example1fields\]. Of course, for $s>1$ the adiabaticity condition (\[adiabE\]) is satisfied as well, whereas $s$ may not be taken much smaller than $s=1$. Field configuration {#AppendixField} ------------------- We employ a field configuration as depicted in Figure \[example1fields\] which is actually available in the laboratory. The magnetic field components are fits to measured data. The electric field component $\mathcal E_1(z)$ is calculated via a finite-elements method and is experimentally realisable with an appropriate set of capacitor plates. It is straightforward to adjust the analysis presented in this work for slightly different experimental realisations of $\mathcal E_1(z)$. The remaining field components are chosen to be zero, the electric field is given in units of V/cm, the magnetic field components in units of $\mu$T. For the calculation of $\mathcal F_p(C_s)$ and $\mathcal F_p(\overline C_s)$ with $C_1$ illustrated in Figure \[example1fields\], we vary $s$ in the $z$-intervals $[0.32330919,0.66]$ and $[0,0.33669081]$, respectively. We define $$\begin{aligned} \begin{split} z_m&=0.33380917\ ,\\ z_1&=0.31676777\ ,\\ z_2&=0.57813638\ . \end{split}\end{aligned}$$ Using $c_0= -15625$, $c_1 = 0.009$, $c_2 = 0.0105$, $c_3 = 0.07$, $c_4 = 0.08$, $c_5 = 0.16$, $c_6 = 0.17$, and employing the syntax ‘A ? 
B : C’ for ‘B to be true if A is, and C to be true if A is not’ and use logical ‘AND’ and ‘OR’, the fields are given as $$\begin{aligned} \mathcal{E}_1(z)\,=\,& 3\,\big\{\left(z + c_1 > z_m - c_5 \;\mathrm{AND}\; z + c_1 < z_m - c_3\right) \;\mathrm{OR}\; \left(z + c_1 > z_m + c_4 \;\mathrm{AND}\; z + c_1 < z_m + 0.17\right) \;\,?\;\, 1\;\, :\;\,{\nonumber}\\ &\exp \left[c_0\,\left(z + c_1 - (z_m - c_5)\right)\,\left(z + c_1 - (z_m - c_5)\right)\right] + \exp \left[c_0\,\left(z + c_1 - (z_m - c_3)\right)\,\left(z + c_1 - (z_m - c_3)\right)\right]{\nonumber}\\ &+\exp \left[c_0\,\left(z + c_1 - (z_m + c_4)\right)\,\left(z + c_1 - (z_m + c_4)\right)\right] + \exp \left[c_0\,\left(z + c_1 - (z_m + c_6)\right)\,\left(z + c_1 - (z_m + c_6)\right)\right]{\nonumber}\\ &+ \left(z_a + c_1 - z > z_m - c_5 \;\mathrm{AND}\; z_a + c_1 - z < z_m - c_3\right) \;\mathrm{OR}\; \left(z_a + c_1 - z > z_m + c_4 \;\mathrm{AND}\; z_a + c_1 - z < z_m + c_6\right) \;\,?\;\, 1 \;\, :\;\,{\nonumber}\\ &\exp \left[c_0\,\left(z_a + c_1 - z - (z_m - c_5)\right)\,\left(z_a + c_1 - z - (z_m - c_5)\right)\right] + \exp \left[c_0\,\left(z_a + c_1 - z - (z_m - c_3)\right)\,\left(z_a + c_1 - z - (z_m - c_3)\right)\right]{\nonumber}\\ &+\exp \left[c_0\,\left(z_a + c_1 - z - (z_m + c_4)\right)\,\left(z_a + c_1 - z - (z_m + c_4)\right)\right] + \exp \left[c_0\,\left(z_a + c_1 - z - (z_m + c_6)\right)\,\left(z_a + c_1 - z - (z_m + c_6)\right)\right]\big\}\\ \mathcal{B}_1(z)\,=\,& z + c_2 < z_m\;\,?\;\,\big\{z + c_2 < z_1\;\,?\;\,-153.283\,\exp [-2251.75\,(z + c_2 - 0.221739)^2]\, \sin [3.95282\,(z + c_2 - 0.222097)]{\nonumber}\\ &-50.294\,\exp [-2174.46\,(z + c_2 - 0.457696)^2]\, \sin [11.8633\,(z + c_2 - 0.193543)] \;\, :\;\,{\nonumber}\\ &\big[z + c_2 < 0.36223008\;\,?\;\, 0 \;\, :\;\, \{z + c_2 < z_2\;\,?\;\,-153.283\,\exp [-2251.75\,(z + c_2 - 0.221739)^2]\,\sin [3.95282\,(z + c_2 - 0.222097)]{\nonumber}\\ &-50.294\,\exp [-2174.46\,(z + c_2 - 0.457696)^2]\,\sin [11.8633\,(z + c_2 - 0.193543)] \;\, 
:\;\, 0\}\big]\big\} \;\, :\;\,{\nonumber}\\ &\big\{z + c_2 < z_1\;\,?\;\,153.283\,\exp [-2251.75\,(z + c_2 - 0.221739)^2]\,\sin [3.95282\,(z + c_2 - 0.222097)]{\nonumber}\\ &50.294\,\exp [-2174.46\,(z + c_2 - 0.457696)^2]\,\sin [11.8633\,(z + c_2 - 0.193543)] \;\, :\;\,{\nonumber}\\ &\big[z + c_2 < 0.36223008\;\,?\;\, 0 \;\, :\;\, \{z + c_2 < z_2\;\,?\;\,153.283\,\exp [-2251.75\,(z + c_2 - 0.221739)^2]\,\sin [3.95282\,(z + c_2 - 0.222097)]{\nonumber}\\ &50.294\,\exp [-2174.46\,(z + c_2 - 0.457696)^2]\,\sin [11.8633\,(z + c_2 - 0.193543)] \;\, :\;\, 0\}\big]\big\}\\ \mathcal{B}_2(z)\,=\,& z + c_2 < z_1\;\,?\;\,\big\{32.34\, \exp [-490.685\,(z + c_2 - 0.222096)^2]\,(\cos^2[29.8529\,(z + c_2 - 0.222682)] - 0.894862){\nonumber}\\ &-34.967\, \exp [-515.945\,(z + c_2 - 0.458747)^2]\,(\cos^2[29.0659\,(z + c_2 - 0.459074)] - 0.901612)\big\} \;\, :\;\, \big\{z + c_2 < 0.36223007941\;\,?\;\, 0 \;\, :\;\,{\nonumber}\\ &\big(z + c_2 < z_2\;\,?\;\,32.34\,\exp [-490.685\,(z + c_2 - 0.222096)^2]\,(\cos^2[29.8529\,(z + c_2 - 0.222682)] - 0.894862){\nonumber}\\ &-34.967\,\exp [-515.945\,(z + c_2 - 0.458747)^2]\,(\cos^2[29.0659\,(z + c_2 - 0.459074)] - 0.901612) \;\, :\;\, 0\big)\big\}\\ \mathcal{B}_3(1;z)\,=\,&1.4\times10^{-3} - 7.476112\,\exp [-535.705\,(z + c_2 - 0.451987)^2] + 7.482594\,\exp [-566.72\,(z + c_2 - 0.215842)^2]\end{aligned}$$  \ [88]{} M. V. Berry, *Quantal phase factors accompanying adiabatic changes*, Proc. R. Soc. Lond. A **392**, 45 (1984). B. Simon, *[Holonomy, the quantum adiabatic theorem, and Berry’s phase]{}*, Phys. Rev. Lett. **51**, 2167 (1983). J. C. Garrison and E. M. Wright, *Complex geometrical phases for dissipative systems*, Phys. Lett. A **128**, 177 (1988). Ch. Miniatura, C. Sire, J. Baudon and J. Bellissard, *Geometrical Phase Factor for a Non-Hermitian Hamiltonian*, Europhys. Lett. **13**, 199 (1990), Correction: Europhys. Lett. **14**, 91 (1991). S. Massar, *Applications of the complex geometric phase for metastable systems*, Phys. Rev. 
A **54**, 4770 (1996). M. V. Berry, *Physics of Nonhermitian Degeneracies*, Czech. J. Phys. **54**, 1039 (2004). T. Bergmann, T. Gasenzer and O. Nachtmann, *[Metastable states, the adiabatic theorem and parity violating geometric phases I]{}*, Eur. Phys. J. D **45**, 197 (2007). T. Bergmann, T. Gasenzer and O. Nachtmann, *[Metastable states, the adiabatic theorem and parity violating geometric phases II]{}*, Eur. Phys. J. D **45**, 211 (2007). T. Bergmann, M. DeKieviet, T. Gasenzer, O. Nachtmann and M.-I. Trappe, *[Parity Violation in Hydrogen and Longitudinal Atomic Beam Spin Echo I]{}*, Eur. Phys. J. D **54**, 551 (2009).\ M. DeKieviet, T. Gasenzer, O. Nachtmann and M.-I. Trappe, *Longitudinal atomic beam spin echo experiments: a possible way to study parity violation in hydrogen*, Hyperfine Interact. **200**, 35 (2011). T. Gasenzer, O. Nachtmann and M.-I. Trappe, *[Metastable states of hydrogen: their geometric phases and flux densities]{}*, Eur. Phys. J. D **66**, 113 (2012). M. DeKieviet, D. Dubbers, C. Schmidt, D. Scholz and U. Spinola, *${}^3$[He]{} spin echo: A new atomic beam technique for probing phenomena in the [neV]{} range*, Phys. Rev. Lett. **75**, 1919 (1995). T. Bergmann, *[Theorie des longitudinalen Atomstrahl-Spinechos und paritätsverletzende Berry-Phasen in Atomen]{}*, PhD thesis, Ruprecht-Karls-Universität, Heidelberg (2006). M.-I. Trappe, *[Parity-Violating and Parity-Conserving Berry Phases for Hydrogen and Helium in an Atom Interferometer]{}*, PhD thesis, Ruprecht-Karls-Universität, Heidelberg (2012). [^1]: $^*$martin.trappe@quantumlah.org\ $^\dagger$augenstein@physi.uni-heidelberg.de\ $^\ddag$maarten.dekieviet@physik.uni-heidelberg.de\ $^\S$t.gasenzer@uni-heidelberg.de\ $^\P$O.Nachtmann@thphys.uni-heidelberg.de [^2]: The theoretical error of $\mathcal F_{p}$ is estimated to be of the same order; see [@BeGaMaNaTr08_I].
--- abstract: | Using the quasi-Maxwell form of the vacuum Einstein equations and demanding the presence of a cylindrically symmetric radial gravomagnetic field, we find the solution to the Einstein equations which represents the gravity field of a line gravomagnetic monopole. We show that this is the generalization of Levi-Civita’s cylindrically symmetric static spacetime, in the same way that the NUT metric is the empty space generalization of the Schwarzschild metric. Some of the features of this metric as well as its relation to other metrics are discussed. author: - | M. Nouri-Zonoz[^1]\ Institute of Astronomy, Madingley Road, Cambridge CB3 0HA title: '**Cylindrical analogue of NUT space: spacetime of a line gravomagnetic monopole** ' --- Introduction ============= Despite their different mathematical structures, the analogy between gravitation and electromagnetism has proven to be fruitful. This analogy is particularly useful in the case of gravitation when we try to relate our 3-dimensional physical intuition and experience on the one hand with the 4-dimensional spacetime events on the other hand. It is clear that achieving this goal requires some sort of splitting of spacetime into space and time components. In this letter we will use this analogy, and in particular the quasi-Maxwell form of the vacuum Einstein equations, to find a solution which corresponds to the presence of a line gravomagnetic monopole.[^2] In what follows we will use the $1+3$ or threading formulation (splitting) of spacetimes in general relativity introduced by Landau and Lifshitz \[6\]. In this formulation the fundamental geometrical objects used for splitting spacetime are timelike worldlines (threads), and so the formalism is best suited to treating stationary gravitational fields, which are characterized by the existence of a timelike Killing vector.
The 1+3 splitting (Threading) ------------------------------ Suppose that $\cal M$ is the 4-dimensional manifold of a stationary spacetime with metric $g_{ab}$ and $p\in \cal M$; then one can show that there is a 3-dimensional manifold $\Sigma_3$ defined invariantly by the smooth map \[1,3\] $$\Psi (p):{\cal M} \rightarrow \Sigma_3$$ where $\Psi (p)$ denotes the orbit of the timelike Killing vector $\mbox{\boldmath $\xi$}_t$ passing through $p$. The 3-space $\Sigma_3$ along with the 3-dimensional metric $\gamma_{\alpha\beta}$ (defined on $\Sigma_3 $) is called the factor space $( {\cal M} , g)\over {G_1}$, where $G_1$ is the 1-dimensional group of transformations generated by $\mbox{\boldmath $\xi$}_t$. One can use $\gamma_{\alpha\beta}$ to define differential operators on $\Sigma_3$ in the same way that $g_{ab}$ defines differential operators on $\cal M$. For example, the covariant derivative of a 3-vector $\bf A$ is defined as follows $$A^\alpha_{;\beta}=\partial_\beta A^\alpha + \lambda ^\alpha_{\gamma\beta} A^\gamma$$ $$\hspace{3in}\alpha,\beta=1,2,3$$ $$A_{\alpha ;\beta}=\partial_\beta A_\alpha - \lambda _{\alpha\beta}^\gamma A_\gamma$$ where $\lambda ^\alpha_{\gamma\beta}$ is the 3-dimensional Christoffel symbol constructed from the components of $\gamma_{\alpha\beta}$ in the following way $${\lambda_{\mu\nu}^\sigma}={1\over 2}\gamma^{\sigma\eta}(\partial_ {\nu}\gamma_{\eta\mu}+ \partial_{\mu}\gamma_{\eta\nu}-\partial_{\eta}\gamma_{\mu\nu})$$ It has been shown that the metric of a stationary spacetime can be written in the following form \[5\] $$ds^2=e^{-2\nu}(dx^0-A_\alpha dx^\alpha)^2-{dl}^2 \eqno (1)$$ where $$A_{\alpha}={-g_{0\alpha}\over g_{00}}\;\;\;\;\;\;\;\;\;\; ,\;\;\;\;\;\;\;\;\; e^{-2\nu}\equiv g_{00}$$ and $${dl}^2=\gamma_{\alpha\beta}dx^{\alpha}dx^{\beta}=(-g_{\alpha\beta}+ {g_{0\alpha}g_{0\beta}\over g_{00}})dx^{\alpha}dx^{\beta}\eqno (2)$$ is the spatial distance written in terms of the 3-dimensional metric $\gamma_{\alpha\beta}$ of $\Sigma_3$.
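The split (1)-(2) is straightforward to implement symbolically. A minimal sketch with `sympy` (the function name and the toy 3-metric are ours, chosen only for illustration): it assembles a stationary 4-metric from given $\nu$, $A_\alpha$ and $\gamma_{\alpha\beta}$ and then recovers them from the components $g_{ab}$:

```python
import sympy as sp

def thread_split(g):
    """1+3 threading split of a stationary 4-metric g (signature +,-,-,-):
    returns e^{-2 nu} = g_00, A_alpha = -g_{0 alpha}/g_00 and the 3-metric
    gamma_{alpha beta} = -g_{alpha beta} + g_{0 alpha} g_{0 beta}/g_00."""
    A = [-g[0, a]/g[0, 0] for a in range(1, 4)]
    gamma = sp.Matrix(3, 3, lambda i, j:
                      -g[i + 1, j + 1] + g[0, i + 1]*g[0, j + 1]/g[0, 0])
    return g[0, 0], A, gamma

# Round-trip check: assemble g from given nu, A_alpha, gamma via (1)-(2) ...
x1 = sp.symbols('x1')
nu = sp.Function('nu')(x1)
Avec = [sp.Function('A1')(x1), 0, 0]
gam = sp.diag(1, 1, x1**2)          # toy 3-metric, for illustration only

g = sp.zeros(4, 4)
g[0, 0] = sp.exp(-2*nu)
for a in range(3):
    g[0, a + 1] = g[a + 1, 0] = -sp.exp(-2*nu)*Avec[a]
    for b in range(3):
        g[a + 1, b + 1] = sp.exp(-2*nu)*Avec[a]*Avec[b] - gam[a, b]

# ... and recover them with the split.
e2nu, Ar, gr = thread_split(g)
assert sp.simplify(e2nu - sp.exp(-2*nu)) == 0
assert all(sp.simplify(Ar[a] - Avec[a]) == 0 for a in range(3))
assert sp.simplify(gr - gam) == sp.zeros(3, 3)
print("threading split recovered nu, A_alpha and gamma")
```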
Using this formulation for a stationary spacetime one can write the vacuum Einstein equations in quasi-Maxwell form as follows \[8\] $${\rm div} \ {\bf B}_g = 0 \eqno (3)$$ $${\rm Curl} \ {\bf E}_g = 0 \eqno (4)$$ $${\rm div} \ {\bf E}_g = -c^{-2} \left( \ {\textstyle {1 \over 2}} e^{2 \nu} B_g^2 + E_g^2 \right) \eqno (5a)$$ $${\rm Curl} \ (e^{\nu} {\bf B}_g) = - 2c^{-3} {\bf E}_g \times e^{\nu} {\bf B}_g \eqno (5b)$$ $$P^{\alpha \beta} = E_g^{\alpha; \beta} + e^{2 \nu} (B^\alpha _g B^\beta _g - B^2_g \gamma ^{\alpha \beta}) + E^\alpha _g E^\beta _g \eqno (6)$$ where the gravitational fields are $${\bf E}_g = -c^2 {\bf \nabla}\nu \eqno(7)$$ $${\bf B}_g = c^2 \ {\rm Curl} \ {\bf A}. \eqno (8)$$ and $P^{\alpha \beta}$ is the 3-dimensional Ricci tensor constructed from the metric $\gamma^{\alpha \beta}$. Note that all operations in these equations are defined in the 3-dimensional space with metric $\gamma _{\alpha \beta}$. Non-relativistic considerations ------------------------------- Before deriving the spacetime representing a line gravomagnetic monopole, a simple consideration of line magnetic monopoles would be useful. Suppose that a line magnetic monopole is stretched along the $z$-axis in cylindrical coordinates. Using Gauss’s law and cylindrical symmetry, the $\bf B$ field produced by this monopole is given by $${\bf B}={2Q{\bf\hat\rho}\over \rho}$$ where $Q$=const. is the monopole strength per unit length. This field can be produced either by the potential ${\bf A}_1={2Qz\over \rho}{\bf{\hat\phi}}$ or by ${\bf A}_2=-{2Q\phi{\hat z}}$. The two potentials are related through a gauge transformation $${\bf A}_1={\bf A}_2+{\bf \nabla}\chi$$ with $\chi=2Qz\phi$. Later we will find the gravitational analogue of this gauge transformation.
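These flat-space statements are easy to verify symbolically; `sympy`'s cylindrical `CoordSys3D` accounts for the Lamé factors. (The overall sign of the radial field depends on orientation conventions, so the check below only confirms that both potentials give the same purely radial field and differ by $\nabla\chi$ with $\chi=2Qz\phi$.)

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

Q = sp.symbols('Q', positive=True)
C = CoordSys3D('C', transformation='cylindrical',
               variable_names=('rho', 'phi', 'z'))
rho, phi, z = C.rho, C.phi, C.z
# unit vectors: C.i = rho-hat, C.j = phi-hat, C.k = z-hat

A1 = (2*Q*z/rho) * C.j      # A_1 = (2Qz/rho) phi-hat
A2 = -2*Q*phi * C.k         # A_2 = -2Q phi z-hat

# Both potentials give the same, purely radial, field ~ Q/rho ...
B1, B2 = curl(A1), curl(A2)
assert sp.simplify((B1 - B2).dot(C.i)) == 0
assert sp.simplify(B1.dot(C.j)) == 0 and sp.simplify(B1.dot(C.k)) == 0

# ... and differ by the gauge transformation grad(chi) with chi = 2 Q z phi.
chi = 2*Q*z*phi
diff = A1 - A2 - gradient(chi)
assert all(sp.simplify(diff.dot(e)) == 0 for e in (C.i, C.j, C.k))
print("curl A1 = curl A2 and A1 - A2 = grad(2 Q z phi)")
```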
Generalized Cylindrical solution -------------------------------- Levi-Civita’s static cylindrically symmetric solutions of the vacuum Einstein equations have the following form \[7\] $$ds^2=\rho^{2m}d{t^2}-\rho^{-2m}[\rho^{2m^2}(d{\rho ^2}+d{z^2})+\rho^{2} d{\phi^2}] \eqno (9)$$where $m$ is a constant.[^3] To find the stationary cylindrically symmetric solution for empty space (with a radial gravomagnetic field), using equations (1) and (9), we take the following cylindrically symmetric form for the spatial metric $${dl}^2=\gamma_{\alpha\beta}dx^{\alpha}dx^{\beta}=e^{2\lambda(\rho)} (d{\rho^2}+dz^2)+{\rho^2}e^{2\nu(\rho)}d\phi^2 \eqno (10)$$ For cylindrical symmetry with a radial ${\bf B}_g$, Gauss’s law gives $${\bf B}_g ={{Le^{-2\lambda-\nu}}\over\rho}{\hat{\bf\rho}} \eqno(11)$$ where $L$=const. is the gravomagnetic monopole strength per unit length. Now substituting (7) and (11) into (5a) and (5b) we get the following equation for $\nu(\rho)$ from (5a) $${\rho}^2\nu^{\prime\prime}+\rho\nu^\prime-{{e^{-4\nu}L^2}\over 2 } =0\eqno(12a)$$ while (5b) is satisfied identically.
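The 3-dimensional Christoffel symbols of the spatial metric (10), needed below, can be generated symbolically from the definition given earlier; a short sketch (the helper name is ours):

```python
import sympy as sp

rho, z, phi = sp.symbols('rho z phi')
lam = sp.Function('lam')(rho)
nu = sp.Function('nu')(rho)
xs = (rho, z, phi)

# Spatial metric (10): dl^2 = e^{2 lam}(drho^2 + dz^2) + rho^2 e^{2 nu} dphi^2
gamma = sp.diag(sp.exp(2*lam), sp.exp(2*lam), rho**2*sp.exp(2*nu))
ginv = gamma.inv()

def lam_sym(s, mu, nu_):
    """3-dimensional Christoffel symbol lambda^s_{mu nu} of gamma."""
    return sp.simplify(sum(
        ginv[s, e]*(sp.diff(gamma[e, mu], xs[nu_])
                    + sp.diff(gamma[e, nu_], xs[mu])
                    - sp.diff(gamma[mu, nu_], xs[e]))/2
        for e in range(3)))

# e.g. lambda^rho_{phi phi} = -rho e^{2 nu - 2 lam} (1 + rho nu')
print(lam_sym(0, 2, 2))
```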
To construct $P^{\alpha \beta}$ from equation (6) we need the 3-dimensional Christoffel symbols of the metric (10) which are[^4]\ \ $${\lambda_{\mu\sigma}^\sigma}={1\over 2\gamma}({\partial_\mu}\gamma)\; \; \; \; \; {\lambda_{\rho\rho}^\rho}={1\over 2 }\gamma^{\rho\rho} (\partial_\rho{\gamma_{\rho\rho}})\; \; \; \; \; {\lambda_{zz}^\rho}={-1\over 2 }\gamma^{\rho\rho} (\partial_\rho{\gamma_{zz}})$$ $${\lambda_{z\sigma}^\tau}{\lambda_{z\tau}^\sigma}={-1\over 2 }\gamma^{\rho\rho}\gamma^{zz}(\partial_\rho{\gamma_{zz}})^2 \; \; \; \; \; \; \; \; \; {\lambda_{\phi\sigma}^\tau}{\lambda_{\phi\tau}^\sigma}= {-1\over 2 }\gamma^{\rho\rho}\gamma^{\phi\phi}(\partial_\rho {\gamma_{\phi\phi}})^2$$ $${\lambda_{\phi\phi}^\rho}={-1\over 2 }\gamma^{\rho\rho} (\partial_\rho{\gamma_{\phi\phi}}) \; \; \; \; \; \; \; \; \; \; \; {\lambda_{\phi\tau}^\sigma}={1\over 2 }\gamma^{\sigma\eta} (\partial_\rho{\gamma_{\eta\tau}})$$ Using the above results and equations $(7)$ and $(11)$, we find that the surviving components of the field equation are $$\lambda^{\prime\prime}-{{\lambda^\prime}\over \rho}+2{\nu^\prime}^2+ 2{{\nu^\prime}\over \rho}=0 \eqno(12b)$$and $${\rho}^2\lambda^{\prime\prime}+\rho\lambda^\prime-{e^{-4\nu}L^2\over 2} =0\eqno(12c)$$ where the prime denotes ${\partial \over \partial \rho}$.\ Now we proceed to solve equations (12) for $\nu(\rho)$ and $\lambda(\rho)$. 
Equation $(12a)$ can be written in the following form $$\rho {d\over d\rho}(\rho \nu^\prime)={e^{-4\nu}L^2\over 2}$$ in terms of the variable $u={\rm ln}\rho$ this becomes $${d\over du}{({{d\nu}\over du})^2}={L^2}{e^{-4\nu}}{d\nu\over du}$$ which on integration gives the following solution $${g_{00}}\equiv {e^{-2\nu}}={2m\over L}{1\over {{\rm cosh} [2m {\rm ln}({\rho / c})]}} ={4m \over L}{1 \over ({\rho / c})^{2m}+ ({\rho / c})^{-2m}} \eqno(13)$$ Substituting this into $(12c)$ one finds $${e^{2\lambda}}=a{\rho^{2b}}{\rm cosh}[2m {\rm ln}({\rho / c})] =a{\rho^{2b}}{{({\rho / c})^{2m}+({\rho / c})^{-2m}}\over 2}\eqno(14)$$ where $m$, $a$, $b$ and $c$ are constants. Substituting $(13)$ and $(14)$ in $(12b)$ one finds that $b=m^2$. So writing the metric in form (1), we have $$\displaylines{ds^2={2m\over L}{1\over {{\rm cosh} [2m {\rm ln}({\rho/ c})]}} {(dt-A_\alpha dx^\alpha)}^2\hfill\cr\hfill-{L\over 2m} {\rm cosh}[2m {\rm ln}({\rho / c})] [{2ma\over L}{\rho^{2m^2}}(d{\rho^2}+dz^2)+{\rho^2}d\phi^2]\hfill\cr}$$ To complete the metric we need to choose a potential which produces the field (11) through (8). One can see that the potential expression ${\bf A}_1\equiv A_{\phi}=Lz$ $(L=\rm const.)$ will do the job . So by substituting this potential expression, the full metric becomes $$\displaylines{ds^2={2m\over L}{1\over {{\rm cosh} (2m {\rm ln}({\rho/ c}))}} {(dt-Lzd\phi)}^2\hfill\cr\hfill-{L\over 2m} {\rm cosh}(2m {\rm ln}({\rho / c})) [{2ma\over L}{\rho^{2m^2}}(d{\rho^2}+dz^2)+{\rho^2}d\phi^2]\hfill (15a)\cr}$$ Regaining the static cylindrically symmetric metric as $L\to 0$ requires that $$c=c_0L^{-1\over 2m}$$ $$a={L\over 2m}$$ The behaviour of $g_{00}$ as a function of $\rho$ is shown in figure 1. This spacetime is one of the members of the Papapetrou class \[9\] and has two event horizons, one at $\rho=0$ and another at $\rho=\infty$. 
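That (13) and (14), with $b=m^2$, indeed solve equations (12a)-(12c) can be confirmed symbolically; a short `sympy` verification:

```python
import sympy as sp

rho, m, L, a, c = sp.symbols('rho m L a c', positive=True)

# Solution (13): e^{-2 nu} = (2m/L) / cosh(2m ln(rho/c))
nu = -sp.log((2*m/L) / sp.cosh(2*m*sp.log(rho/c))) / 2
# Solution (14) with b = m^2: e^{2 lam} = a rho^{2 m^2} cosh(2m ln(rho/c))
lam = sp.log(a * rho**(2*m**2) * sp.cosh(2*m*sp.log(rho/c))) / 2

d = sp.diff
# (12a): rho^2 nu'' + rho nu' - (L^2/2) e^{-4 nu} = 0
eq12a = rho**2*d(nu, rho, 2) + rho*d(nu, rho) - (L**2/2)*sp.exp(-4*nu)
# (12b): lam'' - lam'/rho + 2 nu'^2 + 2 nu'/rho = 0
eq12b = d(lam, rho, 2) - d(lam, rho)/rho + 2*d(nu, rho)**2 + 2*d(nu, rho)/rho
# (12c): rho^2 lam'' + rho lam' - (L^2/2) e^{-4 nu} = 0
eq12c = rho**2*d(lam, rho, 2) + rho*d(lam, rho) - (L**2/2)*sp.exp(-4*nu)

assert sp.simplify(eq12a) == 0
assert sp.simplify(eq12b) == 0
assert sp.simplify(eq12c) == 0
print("(13)-(14) solve (12a)-(12c) with b = m^2")
```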
Symmetries ========== If we shift the $z$ coordinate in (15a) by a constant $c$ we have $$z\to {z^\prime}=z+c$$ and $$ds^2 \to ds^{\prime 2}= e^{-2\nu(\rho)}{[dt-(Lz+b)d\phi]}^2- e^{2\lambda(\rho)}(d{\rho^2}+dz^2)-{\rho^2}e^{2\nu(\rho)}d\phi^2$$ where $b=Lc$. But now one can shift the zero point of the time coordinate in the following way $$t\to {t^\prime}=t-b \phi$$ so that $$ds^{\prime 2}=e^{-2\nu(\rho)}{(dt^\prime -Lz d\phi)}^2-e^{2\lambda(\rho)} (d{\rho^2}+dz^2)-{\rho^2}e^{2\nu(\rho)}d\phi^2$$ has the same form as $(15a)$ in terms of the new coordinate $t^\prime$. This shows that $(15a)$ is translationally invariant along the $z$-direction. In other words, although the metric is not, from the mathematical point of view, cylindrically symmetric (i.e. not all the components are $z$-independent), it is so physically. This is also clear if one represents the spacetime (15a) by its gravoelectric and gravomagnetic fields, which are both explicitly cylindrically symmetric.\ One can equally well choose as the gravomagnetic potential the expression ${\bf A}_2 \equiv A_z=- L\phi$ which leads to the same gravomagnetic field ${\bf B}_g$. But the metric becomes $$\displaylines{ds^2={2m\over L}{1\over {{\rm cosh} (2m {\rm ln}({\rho / c}))}} {(dt+L\phi dz)}^2\hfill\cr\hfill-{L\over 2m} {\rm cosh}(2m {\rm ln}({\rho / c})) [{2ma\over L}{\rho^{2m^2}}(d{\rho^2}+dz^2)+{\rho^2}d\phi^2]\hfill (15b)\cr}$$ This metric is explicitly cylindrically symmetric but it suffers from multivaluedness.
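As a cross-check of this invariance, one can verify symbolically that $\xi^\alpha=(L\phi,0,1,0)$, the generator combining the $z$-translation with the compensating time shift, has vanishing Lie derivative of the metric (15a). The radial functions are kept generic below, since only the $z$- and $\phi$-dependence matters:

```python
import sympy as sp

t, rho, z, phi = sp.symbols('t rho z phi')
L = sp.symbols('L', positive=True)
g00 = sp.Function('g00')(rho)   # e^{-2 nu(rho)}; radial profile left generic
h = sp.Function('h')(rho)       # e^{2 lambda(rho)}
x = (t, rho, z, phi)

# Metric (15a): ds^2 = e^{-2nu}(dt - L z dphi)^2
#               - e^{2lambda}(drho^2 + dz^2) - rho^2 e^{2nu} dphi^2
g = sp.zeros(4, 4)
g[0, 0] = g00
g[0, 3] = g[3, 0] = -L*z*g00
g[3, 3] = L**2*z**2*g00 - rho**2/g00
g[1, 1] = g[2, 2] = -h

xi = [L*phi, 0, 1, 0]           # candidate Killing vector (L phi, 0, 1, 0)

def lie_g(mu, nu):
    """Lie derivative (L_xi g)_{mu nu} = xi^a d_a g_{mu nu}
    + g_{a nu} d_mu xi^a + g_{mu a} d_nu xi^a."""
    expr = sum(xi[a]*sp.diff(g[mu, nu], x[a]) for a in range(4))
    expr += sum(g[a, nu]*sp.diff(xi[a], x[mu])
                + g[mu, a]*sp.diff(xi[a], x[nu]) for a in range(4))
    return sp.simplify(expr)

assert all(lie_g(mu, nu) == 0 for mu in range(4) for nu in range(4))
print("xi = (L phi, 0, 1, 0) is a Killing vector of (15a)")
```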
If we apply the same reasoning as the one we used for the metric $(15a)$ above, we can associate this multivaluedness with different choices of the zero point of the time coordinate through the following transformation $$t\to {t^\prime}=t+bz\ \ \ ,\ \ \ \ \ \ \ b=2\pi nL\ \ \ \ \ n=1,2,3,...$$ As in the case of the magnetic monopole (section 1.2) the two different choices for the vector potential $\bf A$ correspond to the following gauge transformation between them $${\bf A}_1={\bf A}_2+{\bf \nabla}\chi$$ where $\chi= Lz\phi$. Killing vectors --------------- It is already clear that $\mbox{\boldmath $\xi$}_t=(1,0,0,0)$ and $\mbox{\boldmath $\xi$}_\phi=(0,0,0,1)$ are Killing vectors of $(15a)$; however, one can see that the form $(15b)$ can be reached alternatively by the transformation $t \to t^\prime = t - Lz\phi$. Now, as is obvious from $(15b)$, $\xi_{z}^{\prime\beta}=(0,0,1,0)$ is a Killing vector in the transformed coordinates and upon converting to the previous coordinates we find that $$\xi^\alpha ={\partial x^\alpha \over \partial x^{\prime\beta}} \xi^{\prime\beta}=({\partial t\over \partial z^\prime} ,0, {\partial z\over \partial z^\prime} ,0)=(L\phi,0,1,0)$$ is also a Killing vector of $(15)$.[^5] This is a multivalued Killing vector whose interpretation might require the consideration of not just the space itself but also the covering space with the many values of $\phi$ unrolled. Relation to other metrics ========================== To find the connection between this metric and some other solutions, we use the Ernst formulation of axially symmetric spacetimes.
In this formulation the vacuum field equations, in Weyl’s coordinates $(\rho,z)$,[^6] can be written in the following form \[2,5\] $$(\varepsilon+ \varepsilon^*)(\varepsilon_{,\rho\rho}+\rho^{-1}\varepsilon _{,\rho}+\varepsilon_{,zz})=2({\varepsilon_{,\rho}}^2+{\varepsilon_{,z}}^2) \eqno (16)$$where $\varepsilon$ is a complex potential defined as follows $$\varepsilon=e^{2U}+i\omega \;\;\;\;\;\;\;\; ,\;\;\;\;\;\;\;\;\; e^{2U}=g_{00}\eqno (17)$$ Every solution of $(16)$ gives a stationary axially symmetric metric. One can easily see that the metric $(15a)$ is a solution of (16) with $$e^{2U}=g_{00}={2m\over L}{1\over {{\rm cosh} (2m {\rm ln}({\rho / c}))}}$$ and $$\omega={2m\over L}{\rm tanh}(2m {\rm ln}({\rho / c}))$$ Now the following theorems lead us to some other solutions \[5\]: $(I)$. Given a stationary axisymmetric vacuum solution $(\varepsilon=e^{2U} +i\omega)$, the substitution $$U^\prime =-U+{1\over 2}{\rm ln}\rho$$ $$\omega ^\prime =i\omega$$ yields another vacuum solution $({U^\prime},\omega^\prime)$. $(II)$. The substitution $${U^\prime}=2U$$ $${\omega^\prime} =i\omega$$ yields a solution $({U^\prime},\omega^\prime)$ of the Einstein-Maxwell equations. $(III)$. For a stationary axisymmetric vacuum solution $(U,\omega)$ (in Weyl’s coordinates) one obtains a corresponding cylindrically symmetric Einstein-Maxwell field $(U^\prime , \omega^\prime)$ by the substitution $$t\to iz\;\;\;\;\;\; z\to it \;\;\;\;\;\;2U\to U^\prime \;\; \;\;\;\; \omega\to \omega^\prime$$ For example, applying (I) followed by (III) to (15a), one obtains the Radhakrishna metric \[10\], which is a time dependent cylindrically symmetric solution of the Einstein-Maxwell equations. Gravitational duality rotation ------------------------------ One can also obtain metric (15) by applying the gravitational duality rotation (sometimes called Ehlers’ transformation)[^7] to metric (9).
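One can confirm symbolically that this pair $(e^{2U},\omega)$ satisfies the Ernst equation (16); a short `sympy` check (the potential is rewritten in exponentials so that `simplify` cancels the hyperbolic functions reliably):

```python
import sympy as sp

rho, z, m, L, c = sp.symbols('rho z m L c', positive=True)

# Ernst potential of (15a): eps = e^{2U} + i*omega with
# e^{2U} = (2m/L) sech(2m ln(rho/c)), omega = (2m/L) tanh(2m ln(rho/c))
v = 2*m*sp.log(rho/c)
eps = (2*m/L)*(sp.sech(v) + sp.I*sp.tanh(v))
eps = eps.rewrite(sp.exp)

# Ernst equation (16):
# (eps + eps*)(eps_rr + eps_r/rho + eps_zz) = 2(eps_r^2 + eps_z^2)
lhs = (eps + sp.conjugate(eps))*(sp.diff(eps, rho, 2)
                                 + sp.diff(eps, rho)/rho
                                 + sp.diff(eps, z, 2))
rhs = 2*(sp.diff(eps, rho)**2 + sp.diff(eps, z)**2)
assert sp.simplify(lhs - rhs) == 0
print("Ernst equation (16) is satisfied")
```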
This transformation, in its original formulation, states that if $$g_{\mu\nu}=e^{2U}{(dx^0)}^2-e^{-2U}{dl}^2$$ is the metric of a static exterior spacetime, then $$\bar g_{\mu\nu}=(a{\rm cosh}(2U))^{-1}({dx^0}-{A_\alpha}dx^{\alpha})^{2}- a{\rm cosh}(2U){dl}^2$$ \[with $a=\rm const. > 0$, $U=U({x^\alpha})$ and $A_{\beta}= A_{\beta}(x^{\alpha})$\] would be the metric of a stationary exterior spacetime provided that ${A_\alpha}$ satisfies $$-a {\sqrt\gamma}\epsilon_{\alpha\beta\eta}U^{,\eta}=A_{[\alpha,\beta]} \eqno(18)$$ where $\epsilon_{\alpha\beta\gamma}$ is the flat space alternating symbol. From equations (9) and (15a) we see that in this case $$a={L\over 2m}$$ $${A_\alpha} \equiv {A_\phi}=Lz$$ $$U=m{\rm ln}\rho \eqno(19)$$ and $${dl^2}=\rho^{2m^2}(d{\rho ^2}+d{z^2})+\rho^{2}d{\phi^2}$$\ It can easily be seen that equations (19) satisfy (18). In the same way the NUT metric has been shown to be the gravitational dual of the Schwarzschild metric \[3\]. ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ================ I would like to thank my supervisor, Professor D. Lynden-Bell, without whose help this work would not have been done. I am also grateful to Dr. H. Ardavan for useful discussions and comments. Ehlers, J., Colloques internationaux C.N.R.S. No. 91 (Les theories relativistes de la gravitation), 275, 1962. Ernst, F. J., Phys. Rev., 167, 1175, 1968; 168, 1415, 1968. Geroch, R., J. Math. Phys., 12, 918, 1972. Kasner, E., Amer. J. Math., 43, 217, 1921. D. Kramer, H. Stephani, E. Herlt and M. MacCallum, Exact Solutions of Einstein’s Field Equations, ed. E. Schmutzer, CUP, 1980. L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields, 4th edn., Pergamon Press, Oxford, 1975. Levi-Civita, T., Rend. Acc. Lincei, 26, 307, 1917. D. Lynden-Bell and M. Nouri-Zonoz, gr-qc/9612049. Papapetrou, A., Annalen der Physik, 12, 309, 1953. Radhakrishna, L., Proc. Nat. Inst. India, A29, 588, 1968. [^1]: Supported by a grant from the Ministry of Culture and Higher Education of Iran.
[^2]: For a discussion on NUT space and gravomagnetic monopoles see \[8\]. [^3]: For $m=0,1$ this is a flat metric. Also note that (9) is identical to the Kasner solution \[4\]. [^4]: The summation convention is being employed here. [^5]: I am grateful to D. Lynden-Bell and J. Katz for pointing this out to me. [^6]: In these coordinates the metric of an axially symmetric stationary spacetime can be written in the following form $$ds^2=e^{2U}(dt+Ad\phi)^2-e^{-2U}[e^{2K}(d\rho^2+dz^2)+\rho^2 d\phi^2].$$ [^7]: This transformation was first introduced by Ehlers \[1\] and later developed by Geroch \[3\].
Introduction ============ Quantum superposition of macroscopically distinct states is one of the most characteristic features of quantum behavior at the macroscopic level. In the “mesoscopic” regime, when the states involved in the superposition correspond to a collective motion of a number of particles that is larger than one but not quite macroscopic, the superposition of states has been demonstrated for photons in a high-quality microwave cavity [@b1], and for the center-of-mass motion of large molecules [@b2]. Among the most basic dynamic manifestations of quantum superposition of states are quantum coherent oscillations between the two basis states of a two-state system. However, while the macroscopic quantum phenomena brought about by the incoherent quantum tunneling are by now commonly found in a variety of systems ranging from mesoscopic tunnel junctions in the regimes of flux [@b3; @b4] and charge [@b5; @b6] dynamics, to molecular [@b7; @b8] and nano-magnets [@b9], the situation with experimental observation of macroscopic quantum coherent (MQC) oscillations remains much more uncertain. Claims of the observation of MQC oscillations in molecular magnets [@b12] remain highly controversial [@b13; @b14]. The remarkable experimental demonstration [@b10] of the MQC oscillations in the charge-dynamics regime of a small Josephson junction is open to the criticism that the two charge states of the observed quantum superposition differ in charge only by the charge of one Cooper pair, and the oscillations between these two states cannot be interpreted as macroscopic. Although this criticism is not fully justified, since the charge dynamics of a small Josephson junction is just another representation of its flux dynamics, which is the paradigm of the “macroscopic” quantum dynamics [@b11], it does not allow one to consider the question of MQC oscillations to be completely settled.
Recently, macroscopic quantum dynamics has attracted renewed attention as the possible basis for the development of scalable quantum logic circuits for quantum computation. In this context, a macroscopic quantum two-state system plays the role of a qubit, an elementary building block of a quantum computer. Several variants of qubits and quantum logic gates have been proposed [@b15; @b16; @b17; @b18] that are based on the macroscopic quantum dynamics of Josephson junctions. Many characteristics of the macroscopic qubits compare favorably with those of the microscopic qubits: they are insensitive to disorder at the microscopic level and offer much larger freedom in the design and fabrication of complex systems of qubits. The price of these advantages is the problem of environment-induced decoherence, which is typically much more serious for macroscopic than microscopic quantum systems. Suppression of decoherence to an acceptable level requires thorough isolation of the qubit from its environment, a condition that typically limits the ability to control the qubit dynamics. This trade-off between external control and decoherence in a qubit has a fundamental aspect related to measurement. Even in the case of a perfect set-up, the measurement necessary to observe the MQC oscillations perturbs the system by projecting its state on the eigenstates of the measured observable, and therefore presents an unavoidable source of decoherence. The intensity of this measurement-induced decoherence increases with increasing coupling strength between the detector and the oscillations, and more efficient measurement leads to stronger decoherence. First approaches to the problem of measurement of the MQC oscillations in an individual two-state system [@b11; @b19; @b20; @b21], suggested for oscillations of magnetic flux in SQUIDs, considered only the conventional limit of strong or “projective” quantum measurements, in which the detector-oscillation coupling is strong.
In this case, the measurement leads to rapid localization of the measured observable (flux) in one of its eigenstates, and suppresses the oscillation. This means that the time evolution of oscillations can be studied with strong measurements only if the detector can be switched on and off on a time scale shorter than the oscillation period, and only in the “ensemble” of measurements, i.e. when the experiment is repeated many times with the same initial conditions. The measurement cycle consists then of preparation of the initial state of the system followed by its free evolution and the subsequent measurement. The information about the dynamics of the oscillation is contained in the probability distribution of the measurement outcomes. Since the oscillation frequency is limited from below by several factors including the decoherence rate and temperature, the need to switch the detector on and off rapidly in this approach presents at the very least a serious technical challenge. The goal of this work is to study quantitatively a new approach to measurement of the MQC oscillations that is based on weak quantum measurements [@b22; @b23; @b24], in which the dynamic interaction between the detector and the measured system is weak and does not establish perfect correlation between their states. Such a weak measurement provides only limited information about the system but, in contrast to strong measurements, perturbs the system only slightly and can be performed continuously. In this work, the process of continuous weak measurement of the MQC oscillations in an individual two-state system is considered quantitatively. Recent results [@b25] for continuous measurements of electron oscillations in coupled quantum dots by a quantum point-contact are reformulated within a generic detector model and applied explicitly to the MQC oscillations of flux measured by a dc SQUID and oscillations of charge measured with a Cooper-pair electrometer.
It is shown that the signal-to-noise ratio of the measurement, defined as the ratio of the amplitude of the oscillation line in the output spectrum of the detector to the background noise, is fundamentally limited by the trade-off between the acquisition of information and dephasing due to detector backaction on the oscillations. This limitation is the least restrictive for a symmetric detector, for which the signal-to-noise ratio can be expressed as $(\hbar/\epsilon)^2$, where $\epsilon$ is the detector energy sensitivity. Since the energy sensitivity $\epsilon$ is limited for regular (non-QND) quantum measurements by $\hbar/2$, the signal-to-noise ratio of the continuous weak measurement of the quantum coherent oscillations is limited by 4. This limit reflects the fundamental tendency of quantum measurement to localize the system in one of the eigenstates of the measured observable. As a spin-off, the established relation between the signal-to-noise ratio and energy sensitivity is used to demonstrate that the quantum point contact is a quantum-limited detector with energy sensitivity $\epsilon=\hbar/2$. Continuous measurement of the MQC oscillations with a linear detector ==================================================================== We consider the MQC oscillations in an individual two-state system, with the two basis states separated in energy by $\varepsilon$ and coupled by the tunneling amplitude $-\Delta/2$. The basis states are chosen to coincide with the eigenstates of an oscillating variable $x$, for instance, magnetic flux in a SQUID loop. In this basis, $x=x_0 \sigma_z/2$, where $x_0$ is the difference between the values of $x$ in the two states of the system, and $\sigma_z$ is the Pauli matrix. In the simplest measurement scheme considered in this work, the detector measures directly the oscillating variable $x$, and therefore is coupled to $x$. This means that the Hamiltonian describing the measurement set-up (Fig.
1) is: $$H=-\frac{1}{2}(\varepsilon \sigma_z +\Delta \sigma_x +\sigma_z f) + H_0\, , \label{1}$$ where $H_0$ is the Hamiltonian of the detector, and $f$ is the detector operator that couples it to the oscillations. For convenience, the amplitude $x_0$ of the oscillations and the coupling strength are included in $f$. We assume that the characteristic response time of the detector is short, i.e., much shorter than the period of the measured oscillations, and that the detector operates in the linear regime. These two assumptions mean that the detector output $o(t)$ can be written as $$o(t)=q(t) + \frac{\lambda}{2} \sigma_z(t) \, , \label{2}$$ where $q(t)$ is the noise part of the output, and $\lambda$ is a response coefficient. The linearity assumption implies that the coupling $-\sigma_z f/2$ of the detector to the oscillations is sufficiently weak so that the detector response can be described in the linear-response approximation: $$\lambda = i \int_0^{\infty} d\tau e^{i\omega \tau} \langle [q(\tau),f] \rangle_0 \, . \label{3}$$ The average $\langle \ldots \rangle_0$ in eq. (\[3\]) is taken over the stationary density matrix of the detector. Since the response coefficient $\lambda$ is independent of the frequency $\omega$, eq. (\[3\]) can be written in terms of the $q$–$f$ correlator as $$\langle q(t+\tau)f(t) \rangle_0 = 2\pi S_{qf} \delta (\tau-0)\, , \;\;\; S_{qf}\equiv \frac{a-i\lambda}{4\pi} \, , \label{4}$$ where the infinitesimal shift in the argument of the $\delta$-function represents the small but finite response time of the detector, and is needed to resolve the ambiguity in eq. (\[3\]). The parameter $a$ in eq. (\[4\]) is introduced to represent the real part of the $q$–$f$ correlator that is not determined by the response coefficient $\lambda$.
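As a quick numerical aside (not part of the original argument), the two-state part of the Hamiltonian (\[1\]) can be checked directly: with the detector decoupled ($f=0$), the level splitting, and hence the frequency of the coherent oscillations, is $\Omega=(\Delta^2+\varepsilon^2)^{1/2}$. The parameter values below are arbitrary illustrations.

```python
import numpy as np

# Two-state part of Hamiltonian (1) with the detector decoupled (f = 0),
# in units with hbar = 1; eps and Delta are arbitrary illustrative values.
eps, Delta = 0.7, 1.3
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
H = -0.5 * (eps * sigma_z + Delta * sigma_x)

# Level splitting = oscillation frequency Omega = sqrt(Delta^2 + eps^2)
E = np.linalg.eigvalsh(H)          # eigenvalues in ascending order
Omega = E[1] - E[0]
assert np.isclose(Omega, np.hypot(Delta, eps))
```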
The role of the detector in a quantum measurement is to convert the quantum input signal, in our case the oscillations $x(t)=x_0 \sigma_z(t)/2$, into an output signal that is already classical and can be dealt with (e.g., monitored or recorded) without “fundamental” problems. The condition of classical behavior of the detector output requires the output spectral density to be much larger than the spectral density of the zero-point fluctuations in the relevant range of low frequencies of the input signal. For a detector with a short internal time scale this means that the noise $q(t)$ is $\delta$-correlated on the time scale of the input signal: $$\langle q(t+\tau)q(t) \rangle_0 = 2\pi S_q \delta(\tau)\, . \label{5}$$ Here $S_q$ is the constant low-frequency part of the spectral density $S_q(\omega)$ of the detector output noise. For a quantum-limited detector at small temperature $T\rightarrow 0$, eqs. (\[4\]) and (\[5\]) impose a constraint on the spectral density $S_f$ of the coupling operator $f$. Indeed, eqs. (\[4\]) and (\[5\]) can be written explicitly in the basis of the energy eigenstates $|k\rangle$ of the detector and give the following expression for the spectral density $S_q$: $$S_q=\int d \varepsilon_k d \varepsilon_{k'} \nu (\varepsilon_k) \nu (\varepsilon_{k'}) \rho_k \langle k| q|k'\rangle \langle k'|q| k\rangle \delta(\varepsilon_k - \varepsilon_{k'} -\omega) \, . \label{6}$$ The expression for the correlation amplitude $S_{qf}$ is similar, with the matrix element $\langle k'|q|k \rangle$ replaced by the matrix element of $f$. In these expressions, $\varepsilon_k$ is the energy of the state $|k\rangle$, $\rho_k$ is the probability to be in this state, and $\nu$ is the state energy density. Since $S_q$ is independent of the frequency $\omega$, eq. (\[6\]) is satisfied when the matrix elements of $q$ and the density of states are constant, and eq.
(\[6\]) can then be written as $$S_q= |\langle q\rangle |^2 \nu^2 \, .$$ It should be noted that the constant matrix elements $\langle q \rangle$ and $\langle f\rangle$ are off-diagonal in the $k$-basis and can be imaginary. Following the same steps for $S_f$, we express this spectral density in terms of $S_{q}$ and $S_{qf}$: $$S_f= |\langle f\rangle |^2 \nu^2 = |S_{qf}|^2/S_q \, . \label{8}$$ Equations (\[8\]) and (\[4\]) relate the backaction noise of the detector determined by the spectral density $S_f$ to its response coefficient and the output noise. Conceptually, such a relation resembles the fluctuation-dissipation theorem that links the response of a system to its equilibrium fluctuations, but it does not have the status of a “theorem”. It is obvious from the derivation above that eq. (\[8\]) is not necessarily valid for an arbitrary system playing the role of detector in a quantum measurement. Nevertheless, it holds for several of the “standard” detectors: the quantum point contact, the resistively shunted dc SQUID, and the Cooper-pair electrometer considered later in this work. Making use of the $q$- and $f$-correlators, we can calculate the spectral density of the detector output $o(t)$ in the process of continuous measurement of the MQC oscillations. From eq. (\[2\]), the correlation function of $o(t)$ is: $$K_o(\tau)= 2\pi S_q \delta (\tau) + \frac{\lambda^2}{4} \mbox{Tr} \{ \rho \sigma_z \sigma_z(\tau) \} \, , \label{10}$$ where $\rho$ is the stationary density matrix of the two-state system established as a result of the interaction with the detector.
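The factorized structure behind eq. (\[8\]) can be illustrated directly: with constant off-diagonal matrix elements $\langle q\rangle$, $\langle f\rangle$ and a constant density of states $\nu$ (the numbers below are arbitrary, assumed values), the relation $S_f=|S_{qf}|^2/S_q$ holds identically.

```python
import numpy as np

# Illustrative (assumed) constant off-diagonal matrix elements and
# density of states; the specific values are arbitrary.
q_me = 0.8 - 0.3j      # <k|q|k'> between detector energy eigenstates
f_me = 0.2 + 0.5j      # <k|f|k'> between the same states
nu = 2.0               # energy density of states

S_q = abs(q_me) ** 2 * nu ** 2
S_f = abs(f_me) ** 2 * nu ** 2
S_qf = q_me * np.conj(f_me) * nu ** 2   # q-f correlation amplitude

# Relation (8): backaction noise fixed by output noise and correlator
assert np.isclose(S_f, abs(S_qf) ** 2 / S_q)
```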
Averaging the Heisenberg equation of motion of the operator $\sigma_z(\tau)$ over the $\delta$-correlated backaction noise $f$ of the detector, we get the set of equations for the time evolution of the matrix elements $\sigma_{ij}$ of $\sigma_z (\tau)$: $$\dot{\sigma}_{11}= \Delta \,\mbox{Im} \sigma_{12}\, , \;\;\;\; {\dot\sigma}_{12}= (i\varepsilon -\Gamma ) \sigma_{12} - i\Delta \, \sigma_{11} \, , \label{9}$$ and $\sigma_{22}=-\sigma_{11}$, with the rate $$\Gamma=\pi S_f \label{12}$$ describing the backaction dephasing of the oscillations by the detector. The density matrix $\rho$ of the two-state system satisfies the same set of equations (\[9\]), except for the normalization, $\rho_{11}+\rho_{22}=1$, and its stationary value is $\rho=1/2$. Solving eqs. (\[9\]) with the initial condition $\sigma_z (0)= \sigma_z$ and averaging $\sigma_z\sigma_z (\tau)$ over $\rho=1/2$ we find the spectral density $S_o (\omega ) = (1/2\pi) \int_{-\infty}^\infty K_o(\tau) e^{i \omega \tau } d\tau$. Under the conditions of “resonance”, $\varepsilon=0$, when the oscillation amplitude is maximum, we get: $$S_o(\omega ) = S_q +\frac{\Gamma \lambda^2}{4\pi} \frac{ \Delta^2 } {(\omega^2-\Delta^2)^2+\Gamma^2\omega^2} \, . \label{11}$$ When $\varepsilon \neq 0$, it is convenient to calculate the spectrum numerically from eq. (\[9\]). The spectrum in this case is plotted in Fig. 2 for several values of $\varepsilon$ and the dephasing rate $\Gamma$. For weak dephasing, $\Gamma \ll \Delta$, the spectrum consists of a zero-frequency Lorentzian that vanishes at $\varepsilon=0$ and grows with increasing $|\varepsilon |$, and a peak at the oscillation frequency $\Omega= (\Delta^2+ \varepsilon^2 )^{1/2}$. The peak at zero frequency reflects the incoherent transitions with a small rate of order $\Gamma$ between the states of the two-state system. The high-frequency peak of the MQC oscillations also has the width $\Gamma$. 
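A short numerical sketch (with illustrative, assumed parameter values) confirms the line shape (\[11\]): at $\varepsilon=0$, eqs. (\[9\]) reduce to the damped-oscillator equation $\ddot{C}+\Gamma\dot{C}+\Delta^2 C=0$ for the correlator $C(\tau)=\mbox{Tr} \{ \rho \sigma_z \sigma_z(\tau) \}$ with $C(0)=1$, $\dot{C}(0)=0$, and the one-sided cosine transform of $C(\tau)$ reproduces the oscillation part of eq. (\[11\]).

```python
import numpy as np

# At resonance (eps = 0), eqs. (9) give C'' + Gamma*C' + Delta^2*C = 0
# for C(tau) = Tr{rho sigma_z sigma_z(tau)}, C(0) = 1, C'(0) = 0.
Delta, Gamma, lam = 1.0, 0.05, 1.0     # illustrative parameters

tau = np.linspace(0.0, 600.0, 600001)
dt = tau[1] - tau[0]
w = np.sqrt(Delta**2 - Gamma**2 / 4)   # underdamped oscillation frequency
C = np.exp(-Gamma * tau / 2) * (np.cos(w * tau)
                                + (Gamma / (2 * w)) * np.sin(w * tau))

def S_num(omega):
    """(lam^2/4pi) * one-sided cosine transform of C (trapezoid rule)."""
    f = C * np.cos(omega * tau)
    return (lam**2 / (4 * np.pi)) * dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def S_eq11(omega):
    """Oscillation part of eq. (11), S_o(omega) - S_q."""
    return (Gamma * lam**2 / (4 * np.pi)) * Delta**2 / (
        (omega**2 - Delta**2) ** 2 + Gamma**2 * omega**2)

for omega in (0.5, 0.9, 1.0, 1.2):
    assert np.isclose(S_num(omega), S_eq11(omega), rtol=1e-3)
```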
While this width can be small for sufficiently weak coupling between the detector and the measured system, the height of the oscillation peak cannot be arbitrarily large in comparison to the background noise spectral density $S_q$. At $\varepsilon=0$, when the amplitude of the oscillations is maximum, the peak height is $S_{max}=\lambda^2/4\pi \Gamma$. Even in this case, the ratio of the peak height to the background is limited: $$\frac{S_{max}}{S_q} = \frac{\lambda^2}{4\pi^2S_fS_q} = \frac{4 \lambda^2}{\lambda^2+a^2} \leq 4 \, . \label{15}$$ This limitation is universal, i.e., independent of the coupling strength between the detector and oscillations, and reflects quantitatively the interplay between measurement of the MQC oscillations and their backaction dephasing. The fact that the height of the spectral line of the oscillations cannot be much larger than the noise background means that, in the time domain, the oscillations are drowned in the shot noise. When the backaction dephasing rate $\Gamma$ increases, the oscillation line broadens towards the lower frequencies, and eventually turns into the growing spectral peak at zero frequency associated with the incoherent jumps between the two basis states of the two-state system. At large $\Gamma$, when the coherent oscillations are suppressed, the rate of incoherent tunneling decreases with increasing $\Gamma$. For instance, at $\Gamma \gg \Omega$, the tunneling rate is $\gamma=\Delta^2/ 2\Gamma$, and the spectral density of the detector response at low frequencies $\omega \! \sim \! \gamma$ has the standard Lorentzian form, $S_o(\omega ) -S_q=2 \gamma \lambda^2/[4\pi (4\gamma^2+ \omega^2)]$. Suppression of the tunneling rate $\gamma$ with increasing dephasing rate $\Gamma$ is an example of the generic “Quantum Zeno Effect” in which quantum measurement suppresses the decay rate of a metastable state.
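The chain of relations leading to the bound (\[15\]) — $S_{qf}$ from eq. (\[4\]), $S_f$ from eq. (\[8\]), $\Gamma$ from eq. (\[12\]), and the peak height $S_{max}=\lambda^2/4\pi\Gamma$ — can be checked numerically for arbitrary (illustrative) detector parameters:

```python
import numpy as np

# Illustrative detector parameters (arbitrary units)
lam, a, S_q = 1.7, 0.9, 0.35

S_qf = (a - 1j * lam) / (4 * np.pi)          # eq. (4)
S_f = abs(S_qf) ** 2 / S_q                   # eq. (8)
Gamma = np.pi * S_f                          # eq. (12)
S_max = lam**2 / (4 * np.pi * Gamma)         # peak height of eq. (11)

ratio = S_max / S_q
assert np.isclose(ratio, 4 * lam**2 / (lam**2 + a**2))   # eq. (15)
assert ratio <= 4
```

The signal-to-noise ratio is independent of $S_q$ itself, since stronger output noise comes with proportionally stronger backaction dephasing.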
In the context of the search for the macroscopic quantum coherent oscillations, the Lorentzian spectral density has been observed and used to measure the rate of incoherent quantum tunneling of flux in SQUIDs [@b26]. The maximum signal-to-noise ratio $S_{max}/S_q$ (\[15\]) is attained if the fundamental backaction of the detector is the only mechanism of dephasing of the coherent oscillations. We now discuss briefly the effect of a weak additional dephasing and energy relaxation on the spectral density of the oscillations. The effect of such a weak relaxation is noticeable if the backaction dephasing is also weak, $\Gamma \ll \Delta$. Energy relaxation arises typically due to interaction with some external system (“reservoir”) that is in equilibrium at temperature $T$. The interaction term in the Hamiltonian can be written similarly to the interaction with the detector (\[1\]) as $$H_c = -\sigma_z f_r \, , \label{16}$$ where $f_r$ is the reservoir force acting on the system. Under the assumption of a frequency-independent relaxation rate, the standard free equilibrium correlator of this force is (see, e.g., [@b27]): $$\langle f_r(t) f_r(t+\tau) \rangle = \alpha \int \frac{d \omega }{\pi} \frac{ \omega e^{i\omega \tau } }{1-e^{- \omega /T} } \, , \label{cor}$$ where the parameter $\alpha$ characterizes the relaxation strength. Comparison of this correlator with the $\delta$-correlated backaction noise of the detector shows that the detector (\[1\]) is acting effectively as a reservoir with a temperature that is much larger than the energies of the two-state system. Energy relaxation makes the stationary average value of $\sigma_z$ non-vanishing, and the output correlation function should now be calculated as $$K_o(\tau)= K_q (\tau) + \frac{\lambda^2}{8} [\langle \sigma_z \sigma_z(\tau) +\sigma_z(\tau) \sigma_z\rangle - 2 \langle \sigma_z \rangle^2 ] \, .
\label{cor2}$$ For weak coupling, it is convenient to find the time evolution of $\sigma_z(\tau)$ in the basis of eigenstates of the two-state Hamiltonian. In this basis, the Hamiltonian including the interaction with the reservoir (and omitting temporarily the detector) is: $$H = -\frac{1}{2} \Omega \sigma_z - \frac{1}{\Omega }(\varepsilon \sigma_z - \Delta \sigma_x)f_r +H_r\, . \label{17}$$ Here $H_r$ is the Hamiltonian of the reservoir. The Heisenberg equations of motion that follow from the Hamiltonian (\[17\]) are: $$\dot{f}_r=i[H_r,f_r] \, , \;\;\;\; \dot{H}_r= \frac{i}{\Omega } (\Delta \sigma_x-\varepsilon \sigma_z )[f_r,H_r] \, ,$$ $$\dot{\sigma}_z=2f_r\Delta \sigma_y \, , \;\;\;\; \dot{\sigma}_\pm= \mp i (\Omega +2\varepsilon f_r/\Omega)\sigma_\pm \mp i \Delta \sigma_z/\Omega \, ,$$ where $\sigma_\pm\equiv (\sigma_x\pm i\sigma_y)/2$. Integration of the first two of these equations gives, to first order in the coupling (\[16\]) to the two-state system: $$f_r(t)=f_r^{(0)}(t)-i\Delta \int^t \! d\tau \int^\tau \! d\tau' [f_r^{(0)}(\tau),f_r^{(0)}(\tau')] \sigma_y(\tau') \, ,$$ where $f_r^{(0)}$ is the free part of the fluctuating reservoir force in the absence of coupling. Solving the second pair of the Heisenberg equations up to the second order in the coupling, making the rotating-wave approximation, and tracing out the reservoir degrees of freedom with the help of the correlator (\[cor\]), we get a set of equations for the evolution of the matrix elements $s_{ij}$ of the operator of the oscillating variable (given by $\sigma_z(\tau)$ in the original “position” basis) in the eigenstate basis: $$\dot{s}_{jj}(\tau) = \Gamma_e [\frac{\varepsilon}{\Omega} -\coth \{ \frac{\Omega}{2T} \} s_{jj}] + (-1)^j \frac{\Gamma \Delta^2}{2\Omega^2} (s_{11}-s_{22}) \, ,$$ $${\dot s}_{12}(\tau) =(i\varepsilon -\Gamma_0 ) s_{12} \, . \label{21}$$ The initial conditions for these equations are: $s_{11}=-s_{22}=\varepsilon/\Omega$, and $s_{12}=-\Delta/\Omega$.
The characteristic energy-relaxation rate in eq. (\[21\]) is $\Gamma_e =2\alpha \Delta^2/\Omega$, and the total dephasing rate is $$\Gamma_0= \frac{1}{\Omega^2} [\alpha (\Delta^2 \Omega \coth \{ \frac{\Omega}{2T} \} +4\varepsilon^2 T ) + \Gamma (\varepsilon^2+ \frac{\Delta^2}{2})] \, .$$ In eqs. (\[21\]), we also added the detector dephasing terms from eq. (\[9\]) “rotated” from the position basis into the eigenstate basis. The density matrix $r$ of the two-state system in the basis of eigenstates satisfies similar equations, and the stationary values of its matrix elements are $r_{12}=0$ and $r_{11}=(\Gamma_t+ \Gamma_e)/2 \Gamma_t$, where $$\Gamma_t\equiv \Gamma_e \coth (\Omega/2T)+\Gamma \Delta^2/\Omega^2 \, .$$ Using these relations, the definition (\[cor2\]), and the evolution equations (\[21\]), we find the spectral density: $$\begin{aligned} \lefteqn{ S_o(\omega )=S_q + \frac{\lambda^2 }{4\pi \Omega^2} \times } \nonumber \\ & & \left( [1-(\frac{\Gamma_e}{\Gamma_t})^2] \frac{2 \varepsilon^2 \Gamma_t }{\omega^2+ \Gamma_t^2} + \sum_{\pm} \frac{\Delta^2 \Gamma_0} {(\omega \pm\Omega )^2+\Gamma_0^2 } \right) \, . \label{23} \end{aligned}$$ As in the case without energy relaxation, the spectral density consists of a zero-frequency Lorentzian of width $\Gamma_t$ and peaks at $\pm \Omega$ of width $\Gamma_0$ due to the coherent oscillations. For weak relaxation, the incoherent slow transitions giving rise to the low-frequency noise are the transitions between the two energy eigenstates of the system. The height of the oscillation peak is suppressed in the presence of additional energy relaxation, which adds to the total dephasing rate $\Gamma_0$, and the relative magnitude of the peak, $S_{max}/S_q$, is smaller than its value without the relaxation.
Relation to energy sensitivity ============================== The detector characteristics for the measurement of the quantum coherent oscillations in a two-state system considered in the previous Section are related to another detector characteristic that is used for measurements of harmonic signals – see, e.g., examples in [@b28; @b29; @b30; @b31], and is sometimes loosely referred to as “energy sensitivity”. It is defined by considering the detector measuring a harmonic oscillator with a frequency $\omega_0$ and a small relaxation rate $\gamma \ll \omega_0$. In this Section, we establish the quantitative relation between the signal-to-noise ratio $S_{max}/S_q$ for measurements of the two-state systems discussed above and the energy sensitivity used in the literature for measurements of harmonic signals. The Hamiltonian of the damped harmonic oscillator attached to a detector is obtained by replacing the two-state part of eqs. (\[1\]) and (\[16\]) with the corresponding oscillator terms: $$H = \frac{M}{2}(\dot{x}^2+\omega_0^2 x^2) -x(f+ f_r)+H_0+H_r \, , \label{ho}$$ where $M$ is the mass of the oscillator and $x$ is the oscillating coordinate. Due to the linearity of the system, the Heisenberg equation of motion for $x$ that follows from the Hamiltonian (\[ho\]) (see, e.g., [@b27]) coincides with the classical equation of motion of the damped oscillator: $$\ddot{x}+\gamma \dot{x} +\omega_0^2 x = \frac{1}{M}(f_r(t) + f(t)) \, . \label{31}$$ As in the previous Section, the random forces $f_r(t)$ and $f(t)$ are produced, respectively, by the reservoir responsible for the energy relaxation of the oscillator and by the detector. (Although the operators $f_r$ and $f$ in eqs. (\[ho\]) and (\[31\]), as well as the transfer coefficient $\lambda$ in eq. (\[32\]) below, differ from the corresponding quantities used in Section 2 by a normalization factor, this distinction is not made explicit.
This should not lead to any confusion, since the normalization factor drops out of all expressions that are compared between the two Sections.) In general, the right-hand side of eq. (\[31\]) should also contain an external perturbation that creates a “signal” component of the oscillations $x(t)$. However, for the discussion of the detector sensitivity, it is appropriate to treat the equilibrium fluctuations of the oscillator driven by the reservoir noise $f_r(t)$ (e.g., the zero-point oscillations at vanishing temperature $T$) as part of the signal. This allows us not to include additional signal terms in eq. (\[31\]). The sensitivity of the detector is characterized by the detector noise contribution to the spectral density of the output $S_o$ reduced to the detector input. Similarly to eq. (\[2\]), the output of the detector measuring the harmonic oscillator is: $$o(t)=q(t) + \lambda x(t) \, . \label{32}$$ This equation implies that the detector contribution to the output noise comes from two sources: the direct output noise $q(t)$, and the effect of the backaction noise $f(t)$ on the oscillator coordinate $x(t)$. Introducing the dynamic response function $G(t)$ of the oscillator: $$x(t) = \int_0^{+\infty} d\tau G(\tau) (f_r(t-\tau)+f(t-\tau))\, , \label{33}$$ with $G(\tau) = 0$ for $\tau<0$, we see from eq. (\[31\]) that $$G(\omega )= \int d\tau e^{-i\omega \tau} G(\tau)=\frac{1}{M} \frac{1}{\omega_0^2-\omega^2+i\gamma\omega} \, . \label{34}$$ In terms of the response function, $x(\omega)=G(\omega ) (f_r(\omega)+f(\omega))$. Since the detector noise $f(t)$ is uncorrelated with the reservoir force $f_r(t)$, or in general, any other signal component of $x(t)$, we see from eqs.
(\[32\]), (\[33\]), and (\[34\]) that the spectral density of the detector output consists of two additive components: the signal, i.e., the equilibrium spectral density of the harmonic oscillations transformed to the output, and the detector noise $S_N (\omega)$: $$S_o(\omega)=\lambda^2 |G(\omega )|^2 S_r (\omega) + S_N (\omega) \, ,$$ $$S_N(\omega )= S_q+\lambda^2 |G(\omega )|^2 S_f+2 \lambda \mbox{Re} G(\omega ) \bar{S}_{qf} \, . \label{35}$$ Here $\bar{S}_{qf}$ is the symmetrized correlator of the detector output and input noises, $\bar{S}_{qf}=(S_{qf}+S_{fq})/2=a/4\pi$ (with $a$ defined in eq. (\[4\])), and $S_r (\omega)$ is the spectral density of the reservoir force $f_r$: $$S_r (\omega)=(\gamma \omega M/ 2\pi) \coth (\hbar \omega/2T). \label{36}$$ The noise properties of the detector are better characterized if its contribution to the output noise is reduced to the input, i.e., instead of $S_N(\omega )$ (\[35\]) we consider the quantity $F(\omega) \equiv S_N(\omega )/\lambda^2 |G(\omega )|^2$. For weak damping, $\gamma \ll \omega_0$, when $$|G(\omega )|^2 \simeq \frac{1}{4\omega_0^2 M^2 ((\omega-\omega_0)^2+ \gamma^2/4)} \, ,$$ we get $$F(\omega)= S_f+S_q (2\omega_0 M/\lambda)^2 ((\omega-\omega_0)^2+ \gamma^2/4) - \bar{S}_{qf} (4\omega_0M/\lambda) (\omega-\omega_0) \, .$$ The three terms in this equation scale differently with the strength of the detector-oscillator coupling, since $\lambda$ and $\bar{S}_{qf}$ are proportional to the first power, while $S_f$ is proportional to the second power of the coupling strength. $F(\omega)$ can be minimized with respect to the coupling strength and also with respect to the small detuning $\omega-\omega_0$ between the signal frequency and the oscillator frequency.
The minimum is reached when $$\omega-\omega_0 = \frac{\lambda\bar{S}_{qf}}{2\omega_0 M S_q} \, ,$$ and $$\lambda = \frac{\gamma \omega_0 M S_q}{S_0} \, , \;\;\;\; S_0\equiv (S_q S_f -\bar{S}_{qf}^2)^{1/2} \, ,$$ and is equal to $$F_{min} = 2\gamma \omega_0 M\frac{S_0}{\lambda } \, . \label{37}$$ It is convenient to normalize the reduced noise $F$ in such a way that it can be directly compared to the equilibrium fluctuations in the oscillator driven by $S_r$. Since at this stage we can already neglect the small difference between the oscillator frequency and the signal frequency, both the minimum noise $F_{min}$ (\[37\]) and the equilibrium spectral density $S_r$ (\[36\]) are proportional to the oscillator parameters $\gamma \omega_0 M$. This means that with appropriate normalization we can define $F_{min}$ directly in terms of the number of quanta added to the signal at frequency $\omega$ by the detector. This is achieved by introducing the energy sensitivity as $$\epsilon \equiv \frac{\pi}{\gamma \omega_0 M} F_{min}= \frac{2\pi }{\lambda } (S_q S_f -\bar{S}_{qf}^2)^{1/2} \, . \label{38}$$ Equations (\[8\]) and (\[4\]) of the previous Section show that for a quantum-limited detector $$\epsilon =\hbar/2 \, . \label{39}$$ (Note that in this Chapter, $\hbar$ is shown only in some of the final results.) Equation (\[39\]) agrees with the conclusion of a general theory of quantum linear amplifiers, according to which a phase-insensitive linear amplifier adds at least half a quantum of noise to the amplified signal – see, e.g., [@b32]. This result is related to eq. (\[39\]) since the detector in the measurement process plays the role of an amplifier transforming a weak quantum input signal into the classical output. Comparing eq. (\[38\]) for the energy sensitivity with eq. (\[15\]) for the signal-to-noise ratio of the measurement of the quantum coherent oscillations, we finally see that these two quantities are closely related.
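The optimization leading to eqs. (\[37\]) and (\[38\]) can be reproduced by brute force: with $\lambda$ and $\bar{S}_{qf}$ scaling as the first power, and $S_f$ as the second power, of a coupling constant $c$ (while $S_q$ is coupling-independent), a grid search over the detuning and $c$ recovers the analytic minimum, from which $c$ drops out. All parameter values below are illustrative assumptions.

```python
import numpy as np

# Oscillator parameters and detector parameters at unit coupling (all
# values are illustrative assumptions).
gamma, w0, M = 0.01, 1.0, 1.0
lam1, Sq, Sf1, Sqf1 = 0.6, 0.25, 0.15, 0.05

def F(delta, c):
    # Reduced noise F(omega); lam and S_qf scale as c, S_f as c^2,
    # while the output noise S_q is coupling-independent.
    lam, Sf, Sqf = c * lam1, c**2 * Sf1, c * Sqf1
    return (Sf + Sq * (2 * w0 * M / lam) ** 2 * (delta**2 + gamma**2 / 4)
            - Sqf * (4 * w0 * M / lam) * delta)

# Brute-force grid search over detuning delta = omega - omega_0 and c
deltas = np.linspace(-0.05, 0.05, 1001)[:, None]
cs = np.linspace(0.01, 2.0, 1000)[None, :]
F_min_num = F(deltas, cs).min()

S0 = np.sqrt(Sq * Sf1 - Sqf1**2)             # S_0 per unit coupling
F_min = 2 * gamma * w0 * M * S0 / lam1       # eq. (37): c drops out
assert np.isclose(F_min_num, F_min, rtol=1e-3)

eps_sens = np.pi * F_min / (gamma * w0 * M)  # eq. (38)
assert np.isclose(eps_sens, 2 * np.pi * S0 / lam1)
```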
When the output and input noises of the detector are uncorrelated, $\bar{S}_{qf}=0$, the relation is simple: $$\frac{S_{max}}{S_q}= (\hbar/\epsilon)^2 \, . \label{40}$$ As will be clear from the examples of specific detectors considered below, the situation with $\bar{S}_{qf}=0$ can be reasonably referred to as a “symmetric detector”. As follows from eq. (\[40\]), the largest signal-to-noise ratio of 4 is obtained for such a symmetric detector in the quantum-limited regime with $\epsilon =\hbar/2$. When the input-output correlation is non-vanishing, the signal-to-noise ratio is smaller than the value given by eq. (\[40\]), while the energy sensitivity $\epsilon$ for the measurement of the harmonic signal can still be made equal to $\hbar/2$ by optimizing the detuning between the signal and the oscillator. Another difference between the measurement of the harmonic oscillator defining $\epsilon$ and the measurement of the two-state system is that the minimum noise (\[37\]) in the oscillator measurement is reached only for the optimum detector-oscillator coupling, while the maximum signal-to-noise ratio (\[15\]) for the quantum oscillation measurement is independent of the coupling strength to the detector as long as the coupling is weak. Energy sensitivity of a quantum point contact ============================================= One of the applications of the results obtained in the previous Section is the demonstration that a quantum point contact that is frequently used as a detector of electric charge or voltage [@b33; @b33b; @b34; @b35] can reach the quantum-limited regime with the ultimate energy sensitivity (\[39\]). The mechanism of operation of a quantum point contact as a detector utilizes modifications of the electron transmission properties of the contact by the measured voltage [@b33].
When the contact is biased with a large voltage $V$, changes in the electron transmission probability lead to changes in the current $I$ flowing through the contact, which serves as the measurement output. Fluctuations of the electric potential in the contact region due to the current flow produce the backaction dephasing of the measured object by the point contact [@b34; @b35]. This dephasing was calculated for symmetric contacts within different approaches in [@b36; @b37; @b38; @b39; @b40]. It is known [@b41; @b25] that in the case of measurement of a two-state system, when the coupling to the system is symmetric, the quantum point contact is an ideal detector of the quantum coherent oscillations. Such a detector causes the minimum dephasing of the oscillations that is consistent with the information acquisition by the measurement. For asymmetric coupling, the dephasing by the point contact is larger than the fundamental minimum [@b25; @b35]. To calculate the energy sensitivity of the point contact detector, we start with the standard Hamiltonian of a single-mode point contact. Including a weak additional scattering potential $U(x)$ for the point-contact electrons, which is the input signal of the measurement, we can write the Hamiltonian as $$H= \sum_{ik} \varepsilon_k a^{\dagger}_{ik}a_{ik} + U\, , \;\;\; U=\sum_{ij}U_{ij} \sum_{kp} a^{\dagger}_{ik}a_{jp}\, . \label{41}$$ The operators $a_{ik}$ in this Hamiltonian represent point-contact electrons in the two scattering states $i=1,2$ (incident from the two contact electrodes) with momentum $k$, and $U_{ij}= \int dx \, \psi_i^*(x) U(x) \psi_j(x)$ are the matrix elements of the potential $U(x)$ in the basis of the scattering states. Here $\psi_i(x)$ is the wavefunction of the scattering state, and $x$ is the coordinate along the point contact. Several assumptions are made about the contact.
The bias energy $eV$ is assumed to be much larger than the temperature $T$, but much smaller than both the Fermi energy in the point contact and the inverse traversal time of the contact. This allows us to linearize the energy spectrum of the point-contact electrons, $\varepsilon_k= v_F k$, where $v_F$ is the Fermi velocity, and to neglect the momentum dependence of the matrix elements $U_{ij}$. The potential $U(x)$ is also assumed to be sufficiently weak and can be treated as a perturbation. In this regime, the point contact operates as a linear detector, and the current response to the perturbation $U$ can be calculated in the linear-response approximation. The last assumption is that the frequencies of the input signal are much smaller than $eV$, which allows us to treat $U$ as a static perturbation. At frequencies much lower than both $eV$ and the inverse traversal time of the contact, the current is constant throughout the contact, and the contact response can be calculated at any point $x$. We choose the origin of the coordinate $x$ in such a way that the unperturbed scattering potential is effectively symmetric, i.e., the reflection amplitudes for both scattering states are the same, and then take $x$ to lie in the asymptotic region of the scattering states. In this case, the standard expression for the current in terms of the electron operators $\Psi(x)$: $$I = \frac{-ie\hbar}{2m} (\Psi^{\dagger} \frac{\partial \Psi} {\partial x} - \frac{\partial \Psi^{\dagger} }{\partial x} \Psi)\, , \;\;\; \Psi(x)= \sum_{ik} \psi_{ik}(x)a_{ik} \, ,$$ gives for the current operator at $x$: $$\begin{aligned} I= \frac{e v_F}{L} \sum_{kp} [D(a^{\dagger}_{1k}a_{1p} - a^{\dagger}_{2k}a_{2p}) + \nonumber \\ i(DR)^{1/2}e^{-i(k-p)|x|} (a^{\dagger}_{1k} a_{2p} -a^{\dagger}_{2k} a_{1p}) ] \, .
\label{42} \end{aligned}$$ Here $D$ and $R=1-D$ are the transmission and reflection probabilities of the point contact, $L$ is a normalization length, and the variation of the momentum near the Fermi points (i.e., the difference between $k$ and $p$) was neglected everywhere except in the phase factor in the second term. The reason for keeping this factor will become clear later. In the linear-response regime, the current response of the point contact is driven by the part of the perturbation $U$ causing transitions between the two scattering states $\psi_{1,2}$. As shown in the Appendix, the real part of the transition matrix element $U_{12}$ is related to the change $\delta D$ of the transmission probability of the contact: $$U_{12}= \frac{v_F}{L} \frac{\delta D+iu}{2(DR)^{1/2}} \, , \;\;\;\; U_{21} = U_{12}^*\, . \label{43}$$ The imaginary part of $U_{12}$, expressed through a dimensionless parameter $u$ in eq. (\[43\]), does not affect the current $I$. Qualitatively, it characterizes the degree of asymmetry in the coupling of the measured system to the point contact; $u=0$ if the perturbation potential $U(x)$ is applied symmetrically with respect to the main scattering potential of the point contact. In the measurement process, the perturbation $U$ represents the coupling operator between the contact and the measured system, whereas the current $I$ is the measurement output. As follows from the discussion in the previous Section, the energy sensitivity of the contact is determined by the correlators of $U$ and $I$, and by the response coefficient $\lambda$. Using eqs. (\[41\]), (\[42\]), and (\[43\]) we can evaluate the correlators directly.
In the limit of large voltages, $eV\gg T$, when the contact noise properties are dominated by the shot noise, we get: $$\begin{aligned} \langle U(t)U(t+\tau) \rangle_0 =\frac{eV}{4\pi} \, \frac{(\delta D)^2+u^2}{DR} \, \delta(\tau) \, , \nonumber \\ \langle I(t+\tau)I(t) \rangle_0 =\frac{e^3VDR}{\pi}\, \delta(\tau) \, , \label{45} \\ \langle U(t)I(t+\tau) \rangle_0 = \frac{e^2V}{2\pi} (i\delta D+u) \, \delta(\tau-\eta)\, . \nonumber\end{aligned}$$ The time delay $\eta \equiv |x|/v_F$ in the last of eqs. (\[45\]) comes from the phase factor $e^{-i(k-p)|x|}$ kept in eq. (\[42\]), and is infinitesimally small for a small traversal time of the contact. It is nevertheless important for the correct calculation of the contact response $\lambda$. From the $U$–$I$ correlator and the standard expression for the linear response (\[3\]) we confirm that $\lambda$ is equal to the change of the current through the contact due to the change $\delta D$ of the transmission coefficient, $\lambda=e^2V \delta D/\pi$. The correlators (\[45\]) satisfy the general relations (\[8\]) and (\[4\]), and therefore the energy sensitivity of the quantum point contact reaches the fundamental quantum limit (\[39\]). It should be noted, however, that this conclusion is strictly valid only in the large-voltage limit $eV\gg T$. At finite temperature $T$, scattering of the point-contact electrons within the same direction of propagation (described by the terms $U_{11}$ and $U_{22}$ of the perturbation $U$) creates an additional contribution to the backaction noise and degrades the energy sensitivity. The magnitude of this effect depends on the magnitude of the “forward” scattering matrix elements $U_{11}$ and $U_{22}$ relative to the backscattering matrix element $U_{12}$, increasing with the intensity of the forward scattering but decreasing with the $T/eV$ ratio.
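Translating the correlators (\[45\]) into the spectral-density conventions of eqs. (\[4\]) and (\[5\]) ($\langle AB\rangle = 2\pi S_{AB}\delta(\tau)$), one can verify numerically that the $u$-dependent parts cancel in $S_qS_f-\bar{S}_{qf}^2$ and the energy sensitivity (\[38\]) equals $\hbar/2$ for any transparency. Units with $\hbar=e=1$; the contact parameters below are illustrative.

```python
import numpy as np

# Units with hbar = e = 1; illustrative contact parameters.
V, D, dD, u = 2.0, 0.3, 0.04, 0.07
R = 1 - D

# Spectral densities from the correlators (45), using the convention
# <A(t) B(t+tau)> = 2*pi*S_AB*delta(tau) of eqs. (4) and (5):
S_f = (V / (8 * np.pi**2)) * (dD**2 + u**2) / (D * R)   # backaction (U)
S_q = V * D * R / (2 * np.pi**2)                        # output (I)
S_bar = V * u / (4 * np.pi**2)                          # symmetrized U-I part
lam = V * dD / np.pi                                    # response coefficient

eps_sens = (2 * np.pi / lam) * np.sqrt(S_q * S_f - S_bar**2)  # eq. (38)
assert np.isclose(eps_sens, 0.5)   # quantum limit, eps = hbar/2
```

The asymmetry parameter $u$ drops out of the combination under the square root, which is why the quantum limit is reached independently of $u$, while the signal-to-noise ratio (\[40\]) for the two-state measurement is degraded by $u\neq 0$.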
Flux and charge MQC oscillations ================================= This Section provides specific examples of measurements of the macroscopic quantum coherent oscillations of magnetic flux and electric charge. It is shown that the typical detectors for the flux and charge measurements, the dc SQUID and the Cooper-pair electrometer, satisfy the general equations of Sections 2 and 3, and should be capable of reaching the fundamental limit of the signal-to-noise ratio for the continuous weak measurement of the MQC oscillations. Flux oscillations measured with a dc SQUID ------------------------------------------ A typical set-up of a measurement of the MQC oscillations of flux with a dc SQUID consists of a two-state flux system (an rf SQUID with half of a magnetic flux quantum $\Phi_0=\pi\hbar/e$ induced in it by an external magnetic field) coupled inductively to a dc SQUID biased with an external current $I_0$ and shunted by a resistor $R$ (Fig. 3). When the inductance of the dc SQUID loop is small, the difference between the two Josephson phases $\varphi_{1,2}$ across the two junctions of the SQUID is directly linked to the flux $\Phi$ induced in the dc SQUID by the flux oscillations: $$\varphi_{1}- \varphi_{2} =2\pi \Phi/\Phi_0 \equiv \Theta\, .$$ In this case, the SQUID is equivalent to a single Josephson junction, with the supercurrent in this junction modulated by the flux $\Phi$. The total amplitude of Cooper-pair tunneling in the SQUID is equal to the sum of the two individual amplitudes of tunneling in the two SQUID junctions, and can be written as $$E_J/2=I^{(+)}(\Theta)/4e\, ,\;\;\;\; I^{(+)}(\Theta) = I_1e^{i\Theta/2} + I_2e^{-i\Theta/2} \, , \label{51}$$ where $I_{1,2}$ are the critical currents of the two junctions. The coherent sum of the two tunneling amplitudes in (\[51\]) leads to modulation of the total supercurrent of the SQUID.
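The flux modulation of the tunneling amplitude in eq. (\[51\]) is simple two-path interference, $|I^{(+)}(\Theta)|^2=I_1^2+I_2^2+2I_1I_2\cos\Theta$; a short check with arbitrary illustrative critical currents:

```python
import numpy as np

# Interference of the two tunneling amplitudes in eq. (51); the
# critical currents I1, I2 are arbitrary illustrative values.
I1, I2 = 1.0, 0.8
Theta = np.linspace(0, 2 * np.pi, 1001)
Iplus = I1 * np.exp(1j * Theta / 2) + I2 * np.exp(-1j * Theta / 2)

# |I^(+)|^2 = I1^2 + I2^2 + 2*I1*I2*cos(Theta): flux modulates E_J
assert np.allclose(abs(Iplus) ** 2,
                   I1**2 + I2**2 + 2 * I1 * I2 * np.cos(Theta))
```

The modulation is maximal between $\Theta=0$, where the amplitudes add, and $\Theta=\pi$, where they interfere destructively (fully so for $I_1=I_2$).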
We consider the simplest regime of the dc SQUID dynamics when the bias current $I_0$ and associated average voltage $V_0=RI_0$ across the dc SQUID are sufficiently large, and the dc component of the Josephson current through it is small in comparison to $I_0$. In this regime, the Cooper-pair tunneling through the SQUID is adequately described by perturbation theory in the tunneling amplitude (\[51\]) and can be qualitatively interpreted as incoherent jumps of individual Cooper pairs. The resistor $R$ provides the dissipation mechanism that transforms reversible dissipationless Cooper-pair oscillations between the two electrodes of the SQUID into incoherent tunneling. Using the known results for the incoherent Cooper-pair tunneling [@b42], we can calculate the rate of this tunneling and find all the detector characteristics of the dc SQUID. The SQUID is coupled to the oscillations by the operator of the current $I_-$ circulating in the SQUID loop multiplied by the change $\delta \Phi$ of the flux in the loop induced by the flux oscillations. Changes in the flux through the dc SQUID change the total current $I_+$ through both SQUID junctions and create deviations $V$ of the voltage across the SQUID from $V_0$, $V=-RI_+$, that serve as the measurement output. As follows from the general discussion in Sec.  2, SQUID parameters important for measurement are the coefficient $\lambda$ of the transformation of the oscillating flux into the voltage $V$, the spectral density $S_I$ of the circulating current $I_-$ that is responsible for backaction dephasing by the SQUID, the spectral density of the output voltage $S_V$, and the correlator $S_{VI}$ between $V$ and $I_-$. These parameters can be found quantitatively starting from the tunneling part $H_T$ of the SQUID Hamiltonian that can be written as $$H_T= -\frac{E_J}{2} e^{i(2eV_0t+\varphi(t))} +h.c.
\, , \label{52}$$ with $\varphi(t)$ being the random Josephson phase across the dc SQUID accumulated due to equilibrium voltage fluctuations produced by the resistor $R$. It is characterized by the correlator $$\langle \varphi (t) \varphi \rangle = \rho \int \frac{d \omega }{\omega} g(\omega) \frac{ e^{i\omega t } }{1-e^{- \omega /T} } \, , \label{55}$$ where $\rho=R/R_Q$ is the resistance $R$ in units of the quantum resistance $R_Q=\pi \hbar/4e^2$, the average $\langle \ldots \rangle$ is taken over equilibrium density matrix of the resistor $R$, and $g(\omega)$ describes the cut-off of the dissipation provided by $R$ at some large frequency $\omega_c$ associated with either finite inductance of the SQUID or finite capacitance of its junctions, while $g(\omega)=1$ at $\omega \ll \omega_c$. The operators of the two currents $I_\pm$ that determine the SQUID parameters are: $$I_\pm= \frac{-i}{2}[I^{(\pm )} (\Theta) e^{i(2eV_0t+\varphi (t))} - h.c.] \, , \label{53}$$ $$I^{(-)} \equiv (I_1e^{i\Theta/2}-I_2e^{-i\Theta/2})/2\, .$$ In the regime of the incoherent Cooper-pair tunneling the average dc current $\langle I_+ \rangle$ can be found treating the tunneling $H_T$ as perturbation: $$\langle I_+ \rangle = -i\int_0^{\infty} dt \langle [I_+,H_T(t)] \rangle = \pi |I^{(+)}(\Theta)|^2 \tau/e \, , \label{54}$$ where $$\tau\equiv \frac{1}{4\pi} \mbox{Re} \int_0^{\infty} dt e^{i2eV_0t} \langle [e^{i\varphi(t)},e^{-i\varphi} ] \rangle \, . \label{56}$$ For example, for vanishing temperature $T$, and small bias voltages, $2eV_0 \ll \omega_c$, the time $\tau$ defined in (\[56\]) can be found from eqs. (\[56\]) and (\[55\]) to be (see, e.g., [@b43]): $$\tau = (1/4 \omega_c\Gamma (\rho)) (2eV_0/\omega_c )^{ \rho-1} \, . \label{56a}$$ When the resistance $R$ is small, $R\ll R_Q$, $\tau$ becomes independent of $\omega_c$, $\tau=eR/2\pi V$. 
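The small-resistance limit quoted above can be verified numerically. This is a sketch in units $\hbar=1$, reading the bias voltage in the quoted limit as $V_0$, and using the standard-library gamma function:

```python
import math

# Check of the small-resistance limit of eq. (56a), units hbar = 1:
# tau = (2eV0/omega_c)^(rho-1) / (4 omega_c Gamma(rho)), rho = R/R_Q,
# R_Q = pi/(4 e^2).  For rho -> 0 it should approach eR/(2 pi V0).
e, V0, omega_c = 1.0, 1.0, 1e3      # arbitrary values with 2eV0 << omega_c
R_Q = math.pi / (4 * e**2)          # quantum resistance (hbar = 1)

def tau(rho):
    return (2 * e * V0 / omega_c)**(rho - 1) / (4 * omega_c * math.gamma(rho))

rho = 1e-4                           # R << R_Q
limit = e * (rho * R_Q) / (2 * math.pi * V0)   # eR / (2 pi V0)
assert abs(tau(rho) / limit - 1) < 1e-2
```

The limit follows from $\Gamma(\rho)\to 1/\rho$ for $\rho\to 0$, so the cut-off $\omega_c$ indeed drops out of $\tau$.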
In this case, all the SQUID characteristics, including the average current $\langle I_+ \rangle$ (\[54\]) can be obtained by direct time averaging of the classical Josephson oscillations in the SQUID. The noise spectral densities of the two currents, $I_\pm$, are obtained by directly taking the average over equilibrium density matrix of the resistor $R$. They vary with frequency on the scale of the Josephson frequency $2eV_0$, and are constant at $\omega \ll 2eV_0$. In this frequency range, $$S_I =\frac{1}{2\pi} \int d t \langle I_-(t)I_- \rangle = \frac{1}{8\pi}|I^{(-)}(\Theta)|^2 \times$$ $$\int d t e^{i2eV_0t} \langle [e^{i\varphi(t)},e^{-i\varphi} ]_+ \rangle \, . \label{57}$$ Fluctuation-dissipation theorem relates the anticommutator $[\ldots ]_+$ in this equation to the commutator in eq. (\[56\]) and gives: $$S_I = |I^{(-)}(\Theta)|^2 \tau' \, , \;\;\; S_V = R^2 |I^{(+)}(\Theta)|^2 \tau'\, , \label{58}$$ where $\tau' \equiv \tau \coth (eV_0/T)$. The correlation function $S_{VI}$ is found similarly: $$S_{VI} = R [I^{(+)}(\Theta)]^*I^{(-)} (\Theta) \tau'\, . \label{59}$$ Comparison of the spectral density $S_V$ (\[58\]) and the average current (\[54\]) shows that $S_{I_+}=S_V/R^2= (e \langle I_+ \rangle /\pi)\coth (eV_0/T)$, i.e. the noise of the current $I_+$ can indeed be interpreted as resulting from uncorrelated transitions of individual Cooper pairs. In particular, at $T\ll eV_0$, the noise is the shot noise of Cooper pairs. Finally, eq. (\[54\]) gives the response coefficient of the SQUID $$\lambda \equiv \partial V/\partial \Phi = 2\pi R (\partial |I^{(+)}(\Theta)|^2/ \partial \Theta) \tau \, . \label{60}$$ (Note that as in eq. (\[58\]) for the backaction noise $S_I$ and also in eq. (\[59\]) for the correlator $S_{VI}$, the factor $\delta \Phi$ is omitted from the definition of $\lambda$.) For temperatures negligible on the scale of $eV_0$, $\tau'$ in the noise spectral densities is equal to $\tau$ and eqs.  
(\[58\]) through (\[60\]) show that the spectral densities $S_V$, $S_I$, and $S_{VI}$ satisfy the general relation (\[8\]), and since $\partial |I^{(+)}(\Theta)|^2/ \partial \Theta = - 2 \mbox{Im} \{ [I^{(+)}(\Theta)]^* I^{(-)} (\Theta)\}$, the correlator $S_{VI}$ is also related to the response coefficient $\lambda$ by the expression identical to eq. (\[4\]). Moreover, since $$[I^{(+)}(\Theta)]^*I^{(-)} (\Theta)=\frac{1}{2}(I_1^2-I_2^2)+ i I_1I_2 \sin \Theta \, ,$$ we see that the real part of the correlator $S_{VI}$, which increases backaction dephasing produced by the SQUID, is indeed associated with the SQUID asymmetry. When $I_1=I_2$, the real part vanishes and the SQUID as detector reaches quantum-limited optimum for measurement of the quantum coherent oscillations. For such a symmetric SQUID the signal-to-noise ratio of the oscillation measurement is given by eq. (\[40\]) (with the intensity of background output noise given by $S_V$). If the temperature $T$ of the symmetric SQUID is negligible, eqs.  (\[58\]) through (\[60\]), and (\[38\]) show that in this regime the SQUID is the quantum-limited detector with the energy sensitivity $\epsilon=\hbar/2$, and the signal-to-noise ratio for measurement of the quantum flux oscillations is $S_{max}/S_V=4$. When $T$ becomes non-vanishing, both the output and backaction noise increase, $\tau'>\tau$, and eq. (\[40\]) describes the gradual suppression of the signal-to-noise ratio with increasing temperature. Charge oscillations measured with a Cooper-pair electrometer ------------------------------------------------------------ Coherent oscillations of charge take place in Josephson junctions which are sufficiently small for the charging energy $E_C$ of an individual Cooper pair, $E_C=(2e)^2/2C$ to be larger than temperature $T$ and Josephson coupling energy $E_J$. The supercurrent flow through the junction is “discretized” in this regime into the transfer of individual Cooper pairs by strong Coulomb repulsion. 
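Before turning to the charge case, the algebraic identities invoked above for the dc SQUID can be checked directly; this is a numerical sketch with arbitrary parameter values:

```python
import cmath, math

# Identities behind the dc-SQUID quantum limit of the previous subsection.
I1, I2, Theta, R, taup = 1.0, 0.6, 1.1, 2.0, 0.3   # arbitrary values

Iplus = I1 * cmath.exp(1j * Theta / 2) + I2 * cmath.exp(-1j * Theta / 2)  # eq. (51)
Iminus = (I1 * cmath.exp(1j * Theta / 2) - I2 * cmath.exp(-1j * Theta / 2)) / 2

# [I^(+)]* I^(-) = (I1^2 - I2^2)/2 + i I1 I2 sin(Theta)
prod = Iplus.conjugate() * Iminus
assert abs(prod - ((I1**2 - I2**2) / 2 + 1j * I1 * I2 * math.sin(Theta))) < 1e-12

# d|I^(+)|^2/dTheta = -2 Im{[I^(+)]* I^(-)}  (finite-difference check)
def absIp2(t):
    return abs(I1 * cmath.exp(1j * t / 2) + I2 * cmath.exp(-1j * t / 2))**2
h = 1e-6
deriv = (absIp2(Theta + h) - absIp2(Theta - h)) / (2 * h)
assert abs(deriv + 2 * prod.imag) < 1e-6

# With S_I = |I^(-)|^2 tau', S_V = R^2 |I^(+)|^2 tau' and
# S_VI = R [I^(+)]* I^(-) tau' (eqs. (58), (59)), the product relation
# S_V S_I = |S_VI|^2 holds for any junction asymmetry; for I1 = I2 the
# real part of S_VI (the extra backaction dephasing) vanishes.
S_I = abs(Iminus)**2 * taup
S_V = R**2 * abs(Iplus)**2 * taup
S_VI = R * prod * taup
assert abs(S_V * S_I - abs(S_VI)**2) < 1e-12
```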
Quantitatively, if the charging energy is smaller than the superconducting energy gap $\Delta$ so that the dissipative quasiparticle tunneling is suppressed, the junction dynamics is governed by the simple Hamiltonian: $$H=E_C (n-q)^2-\frac{E_J}{2}(|n\rangle\langle n+1|+|n+1\rangle \langle n| ) \, , \label{70}$$ where $n$ is the number of Cooper pairs charging the junction, and, here and below, $q$ is the charge (in units of 2$e$) injected into the junction from the external circuit. Eigenstates of the Hamiltonian (\[70\]) form energy bands as functions of the injected charge $q$, which can be varied continuously. Variations of the injected charge $q$ within these bands lead to the possibility of controlling the tunneling of individual Cooper pairs [@b44]. The best way of injecting the charge $q$ in a junction is provided by the “Cooper-pair box” system [@b45; @b46] in which the junction is attached to an external bias voltage $V_g$ through a capacitor. If $q$ is fixed at half of a Cooper-pair charge, $q\simeq 1/2$, and the tunneling amplitude $E_J/2$ is much less than $E_C$, the two states of the Hamiltonian (\[70\]): $n=0$ and $n=1$ are nearly degenerate and separated by large energy gaps from all other states. In this regime, the junction dynamics is equivalent to that of a regular quantum two-state system with the two basis states that correspond to a Cooper pair being on the left or on the right electrode of the junction. Coherent superposition of charge states in such a two-state system is observed indirectly by measuring either the width of the transition region between the two charge states [@b47] or the energy gap between the eigenstates [@b48] as functions of the induced charge $q$. Quantum coherent oscillations between the two charge states were also observed directly in the time-dependent measurement [@b10].
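The near-degeneracy at $q = 1/2$ can be checked directly by diagonalizing the Hamiltonian (\[70\]) in a truncated charge basis. This minimal sketch (arbitrary units with $E_J\ll E_C$) verifies that the two lowest levels are split by approximately $E_J$ and are well separated from all other states:

```python
import numpy as np

# Diagonalize the charging Hamiltonian (70) in the Cooper-pair number
# basis n = -N..N; at q = 1/2 the two lowest states should be split by
# ~E_J (for E_J << E_C) and far below the rest of the spectrum.
E_C, E_J, q, N = 1.0, 0.1, 0.5, 5          # arbitrary units, E_J << E_C
n = np.arange(-N, N + 1)
H = np.diag(E_C * (n - q)**2) \
    - (E_J / 2) * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1))
eps = np.linalg.eigvalsh(H)                 # eigenvalues in ascending order
split = eps[1] - eps[0]
assert abs(split - E_J) < 0.02 * E_J        # two-state splitting ~ E_J
assert eps[2] - eps[1] > 10 * E_J           # gap to all other states
```

The splitting equals $E_J$ to first order in degenerate perturbation theory, with corrections of order $E_J^3/E_C^2$.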
The experiment [@b10] was effectively based on the strong measurement of charge oscillations, when each measurement suppresses the oscillations, and they are observed as oscillations of probability in an ensemble of measurements. Continuous weak measurement of the quantum-coherent charge oscillations similar to the measurement of the flux oscillations with a dc SQUID discussed above would provide a less intrusive way of studying these oscillations. One of the detectors appropriate for such a measurement is a Cooper-pair electrometer [@b31; @b49]: two small Josephson junctions with Josephson coupling energies $E_{1,2}$ and capacitances $C_{1,2}$ connected in series and shunted with a resistor $R$ (Fig. 4). As in the case of the dc SQUID, we consider the dynamics of Cooper-pair transfer through the electrometer in the regime of incoherent tunneling. The main difference from the SQUID case is that now the amplitude of Cooper pair tunneling is modulated through the modulation of energy of the intermediate state in the process of the two-step transfer of Cooper pairs in the two junctions. At small bias voltages $V_0\ll E_C/e$, where from now on $E_C=2e^2/(C_1+C_2)$ is the charging energy of the central electrode of the electrometer, the intermediate state is virtual, and the average current $\langle I\rangle$ through the electrometer is determined by the same eq. (\[54\]) with $I^{(+)}(\Theta)$ replaced with the amplitude $I(q)$ of Cooper-pair transfer through both junctions (defined below). The charge $q$ injected into the central electrode controls the energy of the intermediate states in the process of the Cooper-pair transfer and modulates the tunneling amplitude. At small Josephson coupling energies $E_{1,2}\ll E_C$, and away from the resonance points $q=\pm 1/2$, Cooper-pair tunneling can be treated as a perturbation.
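Under these perturbative assumptions, the amplitudes $I(q)$ and $U(q)$ quoted in eqs. (\[71\]) and (\[74\]) below are tied together. A short numerical sketch (units $e=\hbar=1$, arbitrary parameter values away from $q=\pm 1/2$) shows that $\partial I/\partial q = 4e^2 U(q)$ and that the resulting spectral densities obey the product relation of a quantum-limited detector:

```python
import math

# Perturbative Cooper-pair electrometer amplitudes of eqs. (71) and (74),
# units e = hbar = 1; E1, E2 << E_C and q away from +-1/2 (arbitrary values).
e, E1, E2, E_C = 1.0, 0.05, 0.04, 1.0

def I_amp(q):     # eq. (71): current amplitude
    return (e * E1 * E2 / E_C) * (1 / (1 - 2 * q) + 1 / (1 + 2 * q))

def U_amp(q):     # eq. (74): potential amplitude
    return (E1 * E2 / (2 * e * E_C)) * (1 / (1 - 2 * q)**2 - 1 / (1 + 2 * q)**2)

q, taup = 0.3, 0.2
# The output and backaction amplitudes are linked: dI/dq = 4 e^2 U(q),
# which underlies the quantum-limited relation between response (72)
# and the correlator (75) at T = 0.
h = 1e-6
dIdq = (I_amp(q + h) - I_amp(q - h)) / (2 * h)
assert abs(dIdq - 4 * e**2 * U_amp(q)) < 1e-6

# With S_I = I^2 tau', S_V = U^2 tau', S_VI = -i U I tau' (eqs. 73, 75):
# the product relation S_I S_V = |S_VI|^2 holds, and S_VI is purely
# imaginary (a "symmetric" detector).
S_I, S_V = I_amp(q)**2 * taup, U_amp(q)**2 * taup
S_VI = -1j * U_amp(q) * I_amp(q) * taup
assert abs(S_I * S_V - abs(S_VI)**2) < 1e-15 and S_VI.real == 0.0
```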
Then the instantaneous value of the current $I$ for a fixed value of the Josephson phase $\varphi$ across the electrometer (see, e.g., the description of the two-junction system in [@b50]) is: $$I= I(q)\sin (2eV_0t+\varphi(t))\, , \label{71}$$ $$I(q)\equiv \frac{eE_1 E_2 }{ E_C} (\frac{1}{1-2q}+\frac{1}{1+2q})\, .$$ The two terms in the second equation in (\[71\]) correspond to the two intermediate states with different charges $n=\pm 1$ on the central electrode of the electrometer in the Cooper-pair transfer process. Averaging eq. (\[71\]) over the equilibrium quantum fluctuations of $\varphi$ we get the expression for the average value of the current $I$ that is equivalent to eq. (\[54\]). Since the tunneling current $I$ through the electrometer is the measurement output, this expression determines the response coefficient $\lambda$ of the electrometer: $$\lambda \equiv (\partial \langle I \rangle/ \partial q)= \pi (\partial [I(q)] ^2/ \partial q) \tau/e \, , \label{72}$$ where $\tau$ is given by eqs. (\[56\]) and (\[56a\]). Similarly, the output noise of the current $I$ can be obtained as $$S_I = [I(q)]^2 \tau' \, . \label{73}$$ The backaction noise of the Cooper-pair electrometer is created by fluctuations of the charge on its central electrode in the process of the Cooper-pair tunneling. These fluctuations lead to fluctuations of the electric potential of this electrode. In the same regime as for eq. (\[71\]), the magnitude of these fluctuations is determined by the magnitude of the instantaneous value of the potential at a fixed Josephson phase difference $\varphi$ across the electrometer: $$V= U(q)\cos (2eV_0t+\varphi(t))\, , \label{74}$$ where $$U(q)\equiv \frac{E_1E_2 }{2e E_C} (\frac{1}{(1-2q)^2}-\frac{1}{(1+2q)^2})\, .$$ As before, averaging over the fluctuations of $\varphi(t)$ we get the spectral density of the backaction noise and its correlation with the output noise: $$S_V = [U(q)]^2 \tau'\, , \;\;\; S_{VI} = -i U(q) I(q) \tau'\, .
\label{75}$$ Equations (\[72\]), (\[73\]), and (\[75\]) show that at vanishing temperature, when $\tau'= \tau$, the noise characteristics of the Cooper-pair electrometer satisfy the general relations (\[8\]) and (\[4\]) of a quantum-limited detector. Moreover, the electrometer is a “symmetric” detector in the sense that the input-output correlator $S_{VI}$ is purely imaginary. This means that both its energy sensitivity $\epsilon$ and the signal-to-noise ratio (\[40\]) for measurement of the two-state system reach fundamental limits. In summary, the examples of specific detectors considered in this Section show explicitly that many standard detectors should be capable of reaching the fundamental limits of sensitivity for measurements of electric charge and magnetic flux. In this regime, they are characterized by the fundamental signal-to-noise ratio of 4 for continuous weak measurement of the macroscopic quantum coherent oscillations. The limitation on the signal-to-noise ratio of such a measurement has the same origin as the quantum limitation on the operation of an ideal linear phase-insensitive amplifier that adds a minimum of half-a-quantum of noise to the amplified signal. The author would like to acknowledge discussions with M.H. Devoret, J.R. Friedman, A.N. Korotkov, K.K. Likharev, J.E. Lukens, Yu.V. Nazarov, R.J. Schoelkopf, G. Schön, and A.B. Zorin. This work was supported in part by AFOSR. [**Appendix**]{} In this Appendix, we derive eq. (\[43\]) that relates the matrix elements of the perturbation of the scattering potential to the transmission properties of a point contact. Relation (\[43\]) can be established by considering the stationary states of an electron confined to move on the interval $x\in [-L/2,L/2]$ with the main scattering potential located at the center of the interval, $x\simeq 0$.
The scattering matrix $S$ for the symmetric scattering potential can be written as $$S=e^{i\Theta} \left( \begin{array}{cc} i\sin \nu \, , & \cos \nu \\ \cos \nu \, , & i\sin \nu \end{array} \right) \, ,$$ where $\Theta$ is the phase of the transmission amplitude, and $\nu$ parametrizes transmission probability $D$: $$D = \cos^2 \nu\, . \label{a1}$$ Diagonalizing the scattering matrix $S$, we find the two phase shifts $\varphi_{1,2}$ associated with it: $\varphi_1= \Theta +\nu$, $\varphi_2= \Theta +\pi -\nu$. The variations $\delta \varphi_j$ of the two phase shifts due to perturbation of the scattering potential lead to changes $\delta \varepsilon_j$ in energies of the two stationary states, $$\delta \varepsilon_j = v_F \delta \varphi_j/L \label{a2}$$ For symmetric main scattering potential, the two stationary states $\chi_j(x)$ are given by the even and odd combinations of the scattering states $\psi_j(x)$. In the basis of the states $\chi_j$, the perturbation matrix $U$ introduced in eq. (\[41\]) is: $$U=\left( \begin{array}{cc} (U_{11}+U_{22})/2+\mbox{Re} U_{12} \, , & (U_{11}-U_{22})/2-i\mbox{Im} U_{12}\\ (U_{11}-U_{22})/2+i\mbox{Im} U_{12}\, , & (U_{11}+U_{22})/2-\mbox{Re} U_{12} \end{array} \right) \, . \label{a3}$$ The diagonal elements of this matrix give the first-order corrections to the energies of the stationary states. Comparing expressions for the energy corrections given by eq. (\[a3\]) to expressions for the phase shifts combined with eq. (\[a2\]), we see that $$\mbox{Re} U_{12} = v_F \delta \nu /L \, .$$ This equation, together with the relation (\[a1\]) between $\nu$ and transmission probability $D$, gives eq. (\[43\]) of the main text. Since the matrix (\[a3\]) should be symmetric for the perturbation of the scattering potential that is symmetric with respect to the main part of the potential, we also see that $\mbox{Im} U_{12} =0$ in the symmetric case. 
This means that the nonvanishing imaginary part of $U_{12}$ can be viewed as a measure of the asymmetry of coupling between the point contact and a source of the perturbation. M. Brune, E. Hagley, J. Dreyer, X. Maître, A. Maali, C. Wunderlich, J.M. Raimond, and S. Haroche, Phys. Rev. Lett. [**77**]{}, 4887 (1996). M. Arndt, O. Nairz, J. Vos-Andreae, C. Keller, G. van der Zouw, and A. Zeilinger, Nature [**401**]{}, 680 (1999). R. Rouse, S. Han, and J.E. Lukens, Phys. Rev. Lett. [**75**]{}, 1614 (1995). P. Silvestrini, V.G. Palmieri, B. Ruggiero, and M. Russo, Phys. Rev. Lett. [**79**]{}, 3046 (1997). L.J. Geerligs, D.V. Averin, and J.E. Mooij, Phys. Rev. Lett. [**65**]{}, 3037 (1990). D.V. Averin, A.N. Korotkov, A.J. Manninen, and J.P. Pekola, Phys. Rev. Lett. [**78**]{}, 4821 (1997). J.R. Friedman, M.P. Sarachik, J. Tejada, and R. Ziolo, Phys. Rev. Lett. [**76**]{}, 3830 (1996). L. Thomas, F. Lionti, R. Ballou, D. Gatteschi, R. Sessoli, and B. Barbara, Nature [**383**]{}, 145 (1996). W. Wernsdorfer, E. Bonet Orozco, K. Hasselbach, A. Benoit, D. Mailly, O. Kubo, H. Nakano, and B. Barbara, Phys. Rev. Lett. [**79**]{}, 4014 (1997). Y. Nakamura, Yu.A. Pashkin, and J.S. Tsai, Nature [**398**]{}, 786 (1999). A.J. Leggett and A. Garg, Phys. Rev. Lett. [**54**]{}, 857 (1985). D.D. Awschalom, J.F. Smyth, G. Grinstein, D.P. DiVincenzo, and D. Loss, Phys. Rev. Lett. [**68**]{}, 3092 (1992); [**71**]{}, 4279 (E) (1993). N.V. Prokof’ev and P.C.E. Stamp, J. Phys.: Condens. Matter [**5**]{}, L633 (1993). A. Garg, Phys. Rev. Lett. [**74**]{}, 1458 (1995); Czech. J. Phys. [**46**]{}, Suppl. 4, 1854 (1996). D.V. Averin, Solid State Commun. [**105**]{}, 659 (1998). Yu. Makhlin, G. Schön, and A. Shnirman, Nature [**398**]{}, 305 (1999). L.B. Ioffe, V.B. Geshkenbein, M.V. Feigel’man, A.L. Fauchère, and G. Blatter, Nature [**398**]{}, 679 (1999). J.E. Mooij, T.P. Orlando, L. Levitov, Lin Tian, Caspar H. van der Wal, and Seth Lloyd, Science [**285**]{}, 1036 (1999). L.E. Ballentine, Phys.
Rev. Lett. [**59**]{}, 1493 (1987). A. Peres, Phys. Rev. Lett. [**61**]{}, 2019 (1988). C.D. Tesche, Phys. Rev. Lett. [**64**]{}, 2358 (1990). Y. Aharonov, D.Z. Albert, and L. Vaidman, Phys. Rev. Lett. [**60**]{}, 1351 (1988). V.B. Braginsky and F.Ya. Khalili, [*Quantum measurement*]{}, (Cambridge, 1992). M.B. Mensky, Phys. Usp. [**41**]{}, 923 (1998). A.N. Korotkov and D.V. Averin, cond-mat/0002203. S. Han, J. Lapointe, and J.E. Lukens, Phys. Rev. Lett. [**66**]{}, 810 (1991). U. Weiss, [*Quantum dissipative systems*]{}, (World Scientific, 1993). J. Clarke, C.D. Tesche, and R.P. Giffard, J. Low Temp. Phys. [**37**]{}, 405 (1979). V.V. Danilov, K.K. Likharev, and O.V. Snigirev, in: [*SQUID’80*]{}, ed. by H.-D. Hahlbohm and H. Lübbig, (Berlin, W. de Gruyter, 1980), p. 473. V.V. Danilov, K.K. Likharev, and A.B. Zorin, IEEE Trans. Magn. [**19**]{}, 572 (1983). A.B. Zorin, Phys. Rev. Lett. [**76**]{}, 4408 (1996). C.M. Caves, Phys. Rev. D [**26**]{}, 1817 (1982). M. Field, C.G. Smith, M. Pepper, D.A. Ritchie, J.E.F. Frost, G.A.C. Jones, and D.G. Hasko, Phys. Rev. Lett. [**70**]{}, 1311 (1993). M. Kataoka, C.J.B. Ford, G. Faini, D. Mailly, M.Y. Simmons, D.R. Mace, C.-T. Liang, and D. A. Ritchie, Phys. Rev. Lett. [**83**]{}, 160 (1999). E. Buks, R. Schuster, M. Heiblum, D. Mahalu, and V. Umansky, Nature [**391**]{}, 871 (1998). D. Sprinzak, E. Buks, M. Heiblum, and H. Shtrikman, cond-mat/9907162. S.A. Gurvitz, Phys. Rev. B [**56**]{}, 15215 (1997). Y. Levinson, Europhys. Lett. [**39**]{}, 299 (1997). I.L. Aleiner, N.S. Wingreen, and Y. Meir, Phys. Rev. Lett. [**79**]{}, 3740 (1997). G. Hackenbroich, B. Rosenow, and H. A. Weidenmüller, Phys. Rev. Lett. [**81**]{}, 5896 (1998). M. Büttiker and A. M. Martin, Phys. Rev. B [**61**]{}, 2737 (2000). A.N. Korotkov, Phys. Rev. B [**60**]{}, 5737 (1999). D.V. Averin, Yu.V. Nazarov, and A.A. Odintsov, Physica B [**165&166**]{}, 945 (1990). G.-L. Ingold and Yu.V. Nazarov, in: [ *“Single Charge Tunneling”*]{}, ed. by H.
Grabert and M.H. Devoret (Plenum, New York, 1992), p. 21. D.V. Averin, A.B. Zorin, and K.K. Likharev, Zh.  Eksp. Teor. Fiz. [**88**]{}, 692 (1985) \[Sov. Phys. JETP [**61**]{}, 407\]. M. Büttiker, Phys. Rev. B [**36**]{}, 3548 (1987). P. Lafarge, H. Pothier, E.R. Williams, D. Esteve, C. Urbina, and M.H. Devoret, Z. Phys. B [**85**]{}, 327 (1991). V. Bouchiat, D. Vion, P. Joyez, D. Esteve, and M.H. Devoret, J. of Supercond. [**12**]{}, 789 (1999). D.J. Flees, S. Han, and J.E. Lukens, J. of Supercond. [**12**]{}, 813 (1999). A.B. Zorin, S.V. Lotkhov, Yu.A. Pashkin, H. Zangerle, V.A. Krupenin, T. Weimann, H. Schere, and J. Niemeyer, J. of Supercond. [**12**]{}, 747 (1999). D.V. Averin and K.K. Likharev, in: [*“Mesoscopic Phenomena in Solids”*]{}, ed. by B.L. Altshuler et al. (Elsevier, Amsterdam, 1991), p. 173.
--- abstract: 'Ultrafast laser-induced magnetic switching in rare earth-transition metal ferrimagnetic alloys has recently been reported to occur by ultrafast heating alone. Using atomistic simulations and a ferrimagnetic Landau-Lifshitz-Bloch formalism, we demonstrate that for switching to occur it is necessary that angular momentum is transferred from the longitudinal to transverse magnetization components in the transition metal. This dynamical path leads to the transfer of the angular momentum to the rare earth metal and magnetization switching with subsequent ultrafast precession caused by the inter-sublattice exchange field on the atomic scale.' author: - 'U. Atxitia$^{1}$' - 'T. Ostler$^{1}$' - 'J. Barker$^{1}$' - 'R. F. L. Evans$^{1}$' - 'R. W. Chantrell$^{1}$' - 'O. Chubykalo-Fesenko$^{2}$' title: Ultrafast dynamical path for the switching of a ferrimagnet after femtosecond heating --- The behavior of magnetization dynamics triggered by an ultrafast laser stimulus is a topic of intense research interest in both fundamental and applied magnetism [@Siegmann]. A range of studies using ultrafast laser pulses have shown very different timescales of demagnetization for different materials; from 100 fs in Ni [@BeaurepairePRL1996] to 100 ps in Gd [@WeistrukPRL2011]. Any potential applications utilizing such a mechanism would require, not only ultrafast demagnetization, but also controlled magnetization switching. Magnetization reversal induced by an ultrafast laser pulse has been reported in the ferrimagnet GdFeCo, together with a rich variety of phenomena [@StanciuPRL2007; @Hansteen; @VahaplarPRL2009; @Radu2011; @Ostler2012]. Several hypotheses have been put forward to explain the observed magnetization switching: crossing of the angular momentum compensation point [@StanciuPRL2007], the Inverse Faraday Effect [@Hansteen], and its combination with ultrafast heating [@VahaplarPRL2009]. 
It has been shown that the rare earth (RE) responds more slowly to the laser pulse than the transition metal (TM) [@Radu2011], even though the sublattices are strongly exchange coupled. Intriguingly, Radu *et al.* [@Radu2011] show experimentally and theoretically the existence of a transient ferromagnetic-like state, whereby the two sublattices align against their exchange interaction, existing for a few hundred femtoseconds. Recently [@Ostler2012], the atomistic model outlined in [@Radu2011; @Ostler2012] predicted the phenomenon of magnetization reversal induced by heat alone, in the absence of any external field, a prediction verified experimentally. This remarkable result opens many interesting possibilities in terms of ultrafast magnetization reversal and potential areas of practical exploitation; however, a complete theoretical understanding of this effect is currently missing. In magnets consisting of more than one magnetic species, excitation of the spins on a time scale comparable with that of the inter-sublattice exchange takes the sublattices out of equilibrium with each other. It is in this regime that the thermally driven switching of ferrimagnetic GdFeCo occurs. A recent study by Mentink *et al.* [@Mentalnik2012] proposed an explanation of the process using a phenomenological model of the magnetization dynamics, which assumes the additive character of two relaxation mechanisms: one governed by the inter-sublattice exchange and another by the relativistic contribution (coupling to external degrees of freedom). The model is based on the physically plausible argument that the switching is driven by angular momentum transfer in the exchange-dominated regime. However, the assumption of a linear path to reversal allows the angular momentum transfer to occur through longitudinal components only, since the perpendicular components are neglected. Additionally, the dynamical equation in Ref.
[@Mentalnik2012] was derived from the Onsager principle, generally valid for small deviations from equilibrium only. Thus far, a complete explanation of the heat-driven, ultrafast reversal process remains elusive. In this Letter we demonstrate that the switching of magnetization in a ferrimagnet after femtosecond heating is due to the transfer of angular momentum from the longitudinal to the transverse magnetization components in the TM and consequent transfer of the angular momentum through perpendicular components to the RE. We present a general formalism, leading to a macroscopic dynamical equation for a ferrimagnet. This is in the form of a Landau-Lifshitz-Bloch (LLB) equation, in which, unlike the phenomenological model of Ref. , the two relaxation mechanisms are not additive. Our theory gives the non-equilibrium conditions necessary for this angular momentum transfer to happen and thus to produce the precessional rather than linear reversal as suggested in Ref. . These predictions are supported by calculations using an atomistic model based on the Heisenberg exchange Hamiltonian with Langevin dynamics. In the absence of any external stimulus, the energetics of the atomistic spin model are described purely by exchange interactions, given by the spin Hamiltonian: $$\mathcal{H}= -\sum_{j<i} J_{ij} \mathbf{S}_{i} \cdot \mathbf{S}_{j} \label{eq:fullham}$$ where $J_{ij}$ is the exchange integral between spins $i$ and $j$ ($i, j$ are lattice sites), $j$ runs over first nearest neighbors only, and $\mathbf{S}_{i}$ is the normalized magnetic moment, $\left|\mathbf{S}_{i}\right|=1$. We model the magnetization dynamics of the system using the Landau-Lifshitz-Gilbert (LLG) equation with Langevin dynamics, as detailed in Ref. . The system consists of $\mathcal{N} \times\mathcal{N} \times \mathcal{N}$ cells in an fcc lattice, which we populate with a random distribution of TM and RE ions in the desired concentrations $q$ and $x=1-q$.
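The random-alloy construction described above can be sketched in a few lines. This is a toy version under stated simplifications: a simple-cubic lattice instead of fcc, placeholder exchange constants rather than the GdFeCo parameters of the paper, and a fixed collinear spin configuration instead of full Langevin dynamics; it simply evaluates the Heisenberg energy of eq. (\[eq:fullham\]) for a random TM/RE occupation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random-alloy setup for the Hamiltonian (eq. 1): sites are TM (True)
# with probability q and RE (False) otherwise. J values are placeholders;
# J_TR < 0 gives antiferromagnetic inter-sublattice coupling.
N, q = 8, 0.75
J_TT, J_RR, J_TR = 1.0, 0.2, -0.5
is_TM = rng.random((N, N, N)) < q                       # random species map
# Collinear ground-state-like configuration: TM up, RE down.
S = np.where(is_TM[..., None], [0.0, 0.0, 1.0], [0.0, 0.0, -1.0])

def J(a, b):                                            # exchange by species pair
    return J_TT if (a and b) else (J_RR if not (a or b) else J_TR)

E = 0.0
for axis in range(3):                                   # nearest-neighbour bonds
    nb_spin = np.roll(S, 1, axis=axis)                  # periodic boundaries
    nb_type = np.roll(is_TM, 1, axis=axis)
    dots = np.sum(S * nb_spin, axis=-1)
    for idx in np.ndindex(N, N, N):
        E -= J(is_TM[idx], nb_type[idx]) * dots[idx]    # -J_ij S_i . S_j
```

With antiparallel TM/RE moments and $J_{TR}<0$, every bond lowers the energy, so $E<0$ for this configuration, as expected of the ferrimagnetic ground state.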
To simulate the effect of an ultrafast heat pulse we use a step-like temperature pulse of duration 500 fs with a value of $T=T_{\text{max}}$. The model predicts the switching of the GdFeCo compound under ultrafast heating alone, as demonstrated in Ref. . Atomistic models have proven to be a powerful tool in predicting heat-induced switching, but fail to provide a simple picture of its physical origin. However, the macroscopic LLB equation has been demonstrated to be an adequate approach, allowing a simple description of ultrafast magnetization phenomena [@Atxitia2010; @Sultan], but up to now it existed only for a single-species ferromagnet [@Garanin]. Recently [@LLBferri], we have derived the LLB equation for a two-species system which describes the average magnetization dynamics in each sublattice $\mathbf{m}_\nu=\left\langle\mathbf{S}^{\nu}_i\right\rangle$, where $\nu$ stands for the TM or RE sublattice in this case and $i$ for spins in the sublattice $\nu$. Importantly, unlike the approach used in Ref. [@Mentalnik2012], the derivation does not use the Onsager principle and is thus valid far from equilibrium. ![Numerical integration of the switching behavior for the non-stochastic LLB with a small angle (15 degrees) between sublattices (solid lines). Without the angle (dashed lines) switching does not occur, as predicted. The time $t=0$ corresponds to the end of the square-shaped laser pulse with $T_{\text{max}}=1500$ K.
For the integration at temperatures above $T_C$ we use the paramagnetic version of the ferrimagnetic LLB equation with MFA [@LLBferri].[]{data-label="fig:LLB-switching"}](magicalreversalLLB.pdf){width="8.0cm"} In the absence of an applied field and anisotropy, the LLB equation for the TM is written as: $$\label{eq:LLBT} \frac{1}{|\gamma_{{{\text{\tiny{T}}}}}|}\frac{\mathrm{d} \mathbf{m}_{{{\text{\tiny{T}}}}}}{\mathrm{d} t}{=} {-}\mathbf{m}_{{{\text{\tiny{T}}}}}{\times} \Big[ \mathbf{H}^{{{\text{\tiny{EX}}}}}_{{{\text{\tiny{T}}}}}{+} \frac{\alpha_{{{\text{\tiny{T}}}}}^{\perp}}{m^2_{{{\text{\tiny{T}}}}}}\mathbf{m}_{{{\text{\tiny{T}}}}} {\times} \mathbf{H}^{{{\text{\tiny{EX}}}}}_{{{\text{\tiny{T}}}}} \Big] {+}\alpha_{{{\text{\tiny{T}}}}}^{\|} H^{\|}_{{{\text{\tiny{T}}}}} \mathbf{m}_{{{\text{\tiny{T}}}}}\textbf{,}$$ with a complementary equation for the RE. The exchange field from the RE is calculated via the mean-field approximation (MFA) of the impurity model presented in [@Ostler2011] as $\mathbf{H}^{{{\text{\tiny{EX}}}}}_{{{\text{\tiny{T}}}}}=(J_{0,{{\text{\tiny{TR}}}}}/\mu_{{{\text{\tiny{T}}}}})\mathbf{m}_{{{\text{\tiny{R}}}}}$, where $J_{0,{{\text{\tiny{TR}}}}}=xzJ_{{{\text{\tiny{TR}}}}}$, $x$ is the impurity content, $z$ the number of nearest TM neighbors in the ordered lattice and $J_{{{\text{\tiny{TR}}}}}<0$ the inter-sublattice exchange parameter. The TM magnetic moment is denoted $\mu_{T}$, $\gamma_{{{\text{\tiny{T}}}}}$ is the gyromagnetic ratio for the TM lattice, $\alpha_{{{\text{\tiny{T}}}}}^{\|}(T)$ and $\alpha_{{{\text{\tiny{T}}}}}^{\bot}(T)$ are temperature-dependent TM longitudinal and transverse damping parameters, linearly proportional to the intrinsic coupling to the bath parameter $\lambda_{{{\text{\tiny{T}}}}}$ [@LLBferri]. The longitudinal effective field in Eq. 
reads $$H^{\|}_{{{\text{\tiny{T}}}}}= \frac{\Gamma_{{{\text{\tiny{TT}}}}}}{2} \left(1-\frac{m_{{{\text{\tiny{T}}}}}^{2}} {m_{e,{{\text{\tiny{T}}}}}^{2}}\right)-\frac{\Gamma_{{{\text{\tiny{TR}}}}}} {2} \left(1-\frac{\tau_{{{\text{\tiny{R}}}}}^{2}} {\tau_{e,{{\text{\tiny{R}}}}}^{2}}\right) \textbf{,} \label{eq:longitudinalrates}$$ where $\tau_{{{\text{\tiny{R}}}}}=|(\mathbf{m}_{{{\text{\tiny{T}}}}}\cdot\mathbf{m}_{{{\text{\tiny{R}}}}})|/m_{{{\text{\tiny{T}}}}}$ is the absolute value of the projection of the RE magnetization onto the TM magnetization and $\tau_{e,{{\text{\tiny{R}}}}}$ is its equilibrium value. The rate parameters in Eq. read $$\begin{aligned} \Gamma_{{{\text{\tiny{TT}}}}}=\frac{1}{\widetilde{\chi}_{{{\text{\tiny{T}}}},\|}} \left(1+\frac{|J_{0,{{\text{\tiny{TR}}}}}|}{\mu_{{{\text{\tiny{T}}}}}}\widetilde{\chi}_{{{\text{\tiny{R}}}},\|} \right), \ \Gamma_{{{\text{\tiny{TR}}}}}=\frac{|J_{0,{{\text{\tiny{TR}}}}}|}{\mu_{{{\text{\tiny{T}}}}}} \frac{\tau_{e,{{\text{\tiny{R}}}}}}{m_{e,{{\text{\tiny{T}}}}}}. \label{Rates}\end{aligned}$$ They are temperature-dependent via the equilibrium magnetizations and partial longitudinal susceptibilities $\widetilde{\chi}_{{{\text{\tiny{T}}}},\|}=\left( \partial m_{{{\text{\tiny{T}}}}}/ \partial H \right)_{H \rightarrow 0}$, $\widetilde{\chi}_{{{\text{\tiny{R}}}},\|}=\left( \partial m_{{{\text{\tiny{R}}}}}/ \partial H \right)_{H \rightarrow 0}$, evaluated in the MFA in the presence of inter-sublattice and intra-sublattice exchange [@LLBferri]. In Eq. the first term in the r.h.s. describes the precession of the TM magnetization, $\mathbf{m}_{{{\text{\tiny{T}}}}}$, around the exchange field produced by the RE sublattice. Although this term conserves the magnetization modulus, $m_{{{\text{\tiny{T}}}}}$, it allows transfer of angular momentum between lattices. The second term in Eq. describes the relaxation of $\mathbf{m}_{{{\text{\tiny{T}}}}}$ towards the antiparallel alignment between both sublattice magnetizations. 
Finally, the third term in Eq.  defines the longitudinal relaxation, comprising the difference between the relaxation arising from deviations of the TM magnetization from equilibrium and that of the RE. In the ferrimagnetic LLB equation all three terms act on the timescale set by the exchange interactions, in contrast to the ferromagnetic LLB case, where the longitudinal and transverse motions have very different timescales [@Chubykalo; @AtxitiaQ]. Fig. \[fig:LLB-switching\] shows the direct numerical integration of Eq. . With initial antiparallel alignment of the RE and TM, $\mathbf{m}_{{{\text{\tiny{T}}}}}\| \ \mathbf{m}_{{{\text{\tiny{R}}}}}$, when the temperature is raised both sublattice magnetizations are reduced, followed by a linear magnetization recovery path to the expected ground state \[see dashed lines in Fig. \[fig:LLB-switching\]\]; this does *not* produce switching. In this case no torque is exerted from one sub-lattice on the other, as $\mathbf{m}_{{{\text{\tiny{T}}}}} \times \mathbf{m}_{{{\text{\tiny{R}}}}}=0$. However, this torque, which allows transfer of angular momentum between sublattices, is always present in the full atomistic approach with stochastic fields because of the high temperatures reached during the reversal process. We can include this torque in Eq.  by canting the two sub-lattice magnetizations by a small angle once the heat pulse is gone, or alternatively by integrating the stochastic LLB equation [@Evans2012]. The solid lines in Fig. \[fig:LLB-switching\] show the integration of Eq.  including this angle, and reversal is obtained. This small angle generates a mutual precessional motion due to the exchange field exerted by the opposite sub-lattice and the transverse relaxation directed towards the opposite sub-lattice. This mutual motion leads to the switching, as illustrated in Fig. \[fig:LLB-switching\] and shown schematically in Fig. \[schematics\](a).
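The sign structure of the longitudinal field $H^{\|}_{{{\text{\tiny{T}}}}}$ defined above can be made concrete with a toy evaluation. The rate values below are purely illustrative (not GdFeCo parameters); they show that the field is positive when $\Gamma_{{{\text{\tiny{TT}}}}}\gg\Gamma_{{{\text{\tiny{TR}}}}}$ but can change sign once $\Gamma_{{{\text{\tiny{TT}}}}}<\Gamma_{{{\text{\tiny{TR}}}}}$:

```python
def h_parallel(gamma_tt, gamma_tr, m_ratio, tau_ratio):
    # longitudinal field: (G_TT/2)*(1 - m^2/m_e^2) - (G_TR/2)*(1 - tau^2/tau_e^2)
    return (0.5 * gamma_tt * (1.0 - m_ratio ** 2)
            - 0.5 * gamma_tr * (1.0 - tau_ratio ** 2))

# low temperature: Gamma_TT >> Gamma_TR, the field pulls the TM to its own equilibrium
low_T = h_parallel(gamma_tt=10.0, gamma_tr=1.0, m_ratio=0.5, tau_ratio=0.5)
# close to T_C: Gamma_TT < Gamma_TR, so the sign of the field can flip
near_Tc = h_parallel(gamma_tt=1.0, gamma_tr=2.0, m_ratio=0.5, tau_ratio=0.5)
print(low_T, near_Tc)  # positive, negative
```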
Though the longitudinal magnetization process contributes to the timescale of reversal it does not drive the switching process. Unlike the statement in Ref. [@Mentalnik2012], the longitudinal relaxation itself cannot change the direction of $\mathbf{m_{{{\text{\tiny{T}}}}}}$, due to the multiplication of the longitudinal relaxation term in Eq.  by $\mathbf{m_{{{\text{\tiny{T}}}}}}$. In order to understand the switching mechanism we therefore need to consider both longitudinal and transverse relaxation. Now we demonstrate that at high temperatures the longitudinal relaxation becomes unstable. This happens because close to $T_C$ the sign of $H^{\|}_{{{\text{\tiny{T}}}}}$ can change. In Fig. \[param\] we present the temperature dependence of relaxation rates evaluated for the parameters of GdFeCo [@Ostler2011] in the MFA. One can see that close to $T_C$: $\Gamma_{{{\text{\tiny{TT}}}}} < \Gamma_{{{\text{\tiny{TR}}}}}$. ![Longitudinal relaxation rates as a function of temperature in the LLB equation, evaluated for GdFeCo parameters. The dashed line shows the TM-RE relaxation rate and the solid line is that of the TM-TM interaction. At low temperatures $\Gamma_{{{\text{\tiny{TT}}}}} \gg \Gamma_{{{\text{\tiny{TR}}}}} $, due to the small value of the susceptibility $\widetilde{\chi}_{{{\text{\tiny{T}}}},\|}$, therefore the relaxation of the TM magnetization is always to its own equilibrium. However, at temperatures close to $T_C$, $\Gamma_{{{\text{\tiny{TT}}}}} < \Gamma_{{{\text{\tiny{TR}}}}}$, thus the TM prefers to relax towards the RE magnetization in this regime.[]{data-label="param"}](papermatrixcloseTc.pdf){width="8cm"} Firstly we reduce the LLB equation to a dynamical system, based on information from atomistic modeling. 
We can assume that slightly before the reversal the initial transverse moments of the sublattices are small (but not zero), and that the modulus of the TM sublattice magnetization is much smaller than that of the RE ($m_{{{\text{\tiny{T}}}}}^{z}\ll m_{{{\text{\tiny{R}}}}}^{0}$), owing to the faster relaxation time of the TM. In this approximation the longitudinal field is positive, $H^{\|}_{{{\text{\tiny{T}}}}}>0$: before the heat pulse is removed (where $m_{e,{{\text{\tiny{T}}}}} = 0$, so $m_{{{\text{\tiny{T}}}}} > m_{e,{{\text{\tiny{T}}}}}$) one has $H^{\|}_{{{\text{\tiny{T}}}}}\simeq \Gamma_{{{\text{\tiny{TR}}}}}\frac{m_{{{\text{\tiny{R}}}}}^{0}}{m_{{{\text{\tiny{R}}}}}^{e}}$, while after the heat pulse is gone and the system cools down, $H^{\|}_{{{\text{\tiny{T}}}}}\simeq [\Gamma_{{{\text{\tiny{TT}}}}}-\Gamma_{{{\text{\tiny{TR}}}}}]/2>0$ with $m_{{{\text{\tiny{T}}}}} \ll m_{e,{{\text{\tiny{T}}}}}(T)$. The LLB equation for the TM is reduced to the following system of equations: $$\begin{aligned} \frac{\mathrm{d} m_{{{\text{\tiny{T}}}}}^2}{\mathrm{d} t}&=&2|\gamma_{{{\text{\tiny{T}}}}}|\alpha_{{{\text{\tiny{T}}}}}^{\|} H^{\|}_{{{\text{\tiny{T}}}}}m_{{{\text{\tiny{T}}}}}^2, \nonumber \\ \frac{\mathrm{d} \rho}{\mathrm{d} t}&=&-2 \Big[ \alpha_{{{\text{\tiny{T}}}}}^{\bot} \Omega_{{{\text{\tiny{T}}}}} \sqrt{1-\rho/m_{{{\text{\tiny{T}}}}}^2} -|\gamma_{{{\text{\tiny{T}}}}}|\alpha_{{{\text{\tiny{T}}}}}^{\|} H^{\|}_{{{\text{\tiny{T}}}}} \Big] \rho \label{eq:rozsystem}\end{aligned}$$ where $\rho=(m_{{{\text{\tiny{T}}}}}^{t})^2= (m_{{{\text{\tiny{T}}}}}^{x})^2+(m_{{{\text{\tiny{T}}}}}^{y})^2$ is the squared transverse component of the TM magnetization and $\Omega_{{{\text{\tiny{T}}}}}=m_{{{\text{\tiny{R}}}}}^{0} |\gamma_{{{\text{\tiny{T}}}}}| |J_{0,{{\text{\tiny{TR}}}}}| /\mu_{{{\text{\tiny{T}}}}}$ is the precessional frequency of the antiferromagnetic exchange mode. The trajectory $\rho=0$ corresponds to a linear dynamical mode.
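The growth or decay of a small transverse seed can be checked by integrating this reduced system directly. Below is a minimal forward-Euler sketch; the combined rates $|\gamma_{{{\text{\tiny{T}}}}}|\alpha_{{{\text{\tiny{T}}}}}^{\|}H^{\|}_{{{\text{\tiny{T}}}}}$ and $\alpha_{{{\text{\tiny{T}}}}}^{\bot}\Omega_{{{\text{\tiny{T}}}}}$ are treated as constants with purely illustrative values (not GdFeCo parameters):

```python
import math

def integrate(m2, rho, a_par_h, a_perp_omega, dt=1e-3, t_end=4.0):
    # forward-Euler integration of the reduced system above;
    # a_par_h = |gamma|*alpha_par*H_par, a_perp_omega = alpha_perp*Omega,
    # both treated as constants over the short integration window
    for _ in range(int(t_end / dt)):
        dm2 = 2.0 * a_par_h * m2
        drho = -2.0 * (a_perp_omega * math.sqrt(max(1.0 - rho / m2, 0.0))
                       - a_par_h) * rho
        m2 += dt * dm2
        rho += dt * drho
    return m2, rho

seed = 1e-6  # small transverse perturbation of the rho = 0 trajectory
# longitudinal term dominates -> the seed grows (unstable linear mode)
_, rho_unstable = integrate(0.04, seed, a_par_h=0.3, a_perp_omega=0.05)
# transverse damping dominates -> the seed decays (stable linear mode)
_, rho_stable = integrate(0.04, seed, a_par_h=0.01, a_perp_omega=0.05)
print(rho_unstable > seed, rho_stable < seed)
```

Depending on which rate dominates, the same tiny seed is either amplified exponentially or damped away, which is the instability of the linear mode discussed next.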
The standard analysis of the dynamical system shows that for $H^{\|}_{{{\text{\tiny{T}}}}}>0$ and $m_{{{\text{\tiny{T}}}}}^z<\alpha_{{{\text{\tiny{T}}}}}^{\bot} \Omega_{{{\text{\tiny{T}}}}}/(|\gamma_{{{\text{\tiny{T}}}}}|H^{\|}_{{{\text{\tiny{T}}}}})$ this trajectory becomes unstable. Before the end of the pulse it is equivalent to $m_{{{\text{\tiny{T}}}}}>(\alpha_{{{\text{\tiny{T}}}}}^{\perp}/\alpha_{{{\text{\tiny{T}}}}}^{\|})m_{e,{{\text{\tiny{T}}}}}$, which is easily satisfied, taking into account that $\alpha_{{{\text{\tiny{T}}}}}^{\perp}>\alpha_{{{\text{\tiny{T}}}}}^{\|}$, see Ref. [@LLBferri]. The physical interpretation is that in this case very small perturbations from $\rho=0$ will not be damped but will lead to the development of a perpendicular magnetization component, as is indeed observed in the atomistic simulations \[Fig. \[schematics\](b)\], in which we use the atomistic model and apply heat pulses of different temperatures to drive the system into different states. The atomistic simulations clearly confirm the development of the perpendicular component. ![ (a) Precession of sublattice magnetizations around the exchange field of each other in the macroscopic (LLB) description. After the action of an ultrafast laser pulse the large amplitude of the TM precession causes it to cross $m_z = 0$, and for sufficiently large angular momentum transfer, the angle between sublattices becomes small. After cooling, the dominance of the TM sublattice forces the RE to realign along the opposite direction, completing the switching process. (b) Trajectories of the parallel and transverse magnetization components for the TM calculated via atomistic simulations of the Heisenberg model (\[eq:fullham\]) at different maximum pulse temperatures $T_{\text{max}}=1000, 1200, 1250, 1300,1350$ and $1400$ K. After the pulse the heat is removed; this moment is indicated by small circles.
[]{data-label="schematics"}](figure){width="9cm"} However, the dynamical system alone does not describe the reversal, due to the assumption of a static RE magnetization. In the same approximation, the LLB equation for the RE reads: $$\frac{\mathrm{d} m_{{{\text{\tiny{R}}}}}^{x(y)}}{\mathrm{d} t}= \pm\Omega_{{{\text{\tiny{R}}}}}m_{{{\text{\tiny{T}}}}}^{y(x)}-\frac{\alpha_{{{\text{\tiny{R}}}}}^{\perp}}{m_{{{\text{\tiny{R}}}}}^{0}} \Omega_{{{\text{\tiny{R}}}}} m_{{{\text{\tiny{T}}}}}^{x(y)}-|\gamma_{{{\text{\tiny{R}}}}}| \alpha_{{{\text{\tiny{R}}}}}^{\|}H^{\|}_{{{\text{\tiny{R}}}}}m_{{{\text{\tiny{R}}}}}^{x(y)} \label{eq:REtransverse}$$ where the upper sign corresponds to the equation for $m_{{{\text{\tiny{R}}}}}^{x}$ and the lower sign to the $m_{{{\text{\tiny{R}}}}}^{y}$ one, $\Omega_{{{\text{\tiny{R}}}}}=zqm_{{{\text{\tiny{R}}}}}^{0}|\gamma_{{{\text{\tiny{R}}}}}| |J_{{{\text{\tiny{TR}}}}}| /\mu_{{{\text{\tiny{R}}}}}$ and $H^{\|}_{{{\text{\tiny{R}}}}}$ is the RE longitudinal field. Equation  shows that the perpendicular motion of the TM triggers the corresponding precessional motion of the RE via the angular momentum transfer (the first two terms of Eq. , *i.e.* via perpendicular components) with the same frequency $\Omega_{{{\text{\tiny{T}}}}}$, but different amplitude, see Fig. \[schematics\](a). During this dynamical process, in some time interval the RE and TM magnetizations both have the same sign of the $z$-component, forming the transient ferromagnetic-like state seen experimentally [@Radu2011]. Note that the subsequent precession has a frequency which is proportional to the exchange field and thus is extremely fast. The motion of the TM around the RE direction and vice versa occurs during and after the ferromagnetic-like state until the system has relaxed to equilibrium. An outstanding question is whether the magnetization precession, a central part of the process, can be observed experimentally on a macroscopic sample.
We should recall that in non-equilibrium at high temperatures the correlation between atomic sites is weak, thus we cannot expect the precession to occur with the same phase in the whole sample; an effect which would make the precession macroscopically unobservable. To demonstrate the effect we present in Fig. \[size\] the results of atomistic switching simulations in GdFeCo for different system sizes ($T_{\text{max}}=2000$ K). In Fig. \[size\] we observe that for small system sizes transverse oscillations with the frequency of an exchange mode are visible, consistent with the prediction of our analytical model. However, in large system sizes of the order of $($20 nm$)^3$ it is averaged out, consistent with the excitation of localized exchange modes with random phase. Note that the same effect happens for very high temperatures where the observed magnetization trajectory appears close to linear; although we stress again the importance of a small perpendicular component to initiate the magnetization reversal, which will occur on a local level as demonstrated by Fig. \[size\]. ![Atomistic modeling of the system size dependence of the transverse magnetization components of the TM under ultrafast switching, showing cancelation of the localized transverse magnetization components arising from exchange precession for larger system sizes. The time $t=0$ corresponds to the end of the laser pulse.[]{data-label="size"}](plot.pdf){width="8.0cm"} In conclusion, the LLB equation for a ferrimagnet describes the mutual relaxation of sublattices which occurs simultaneously under internal damping and inter-sublattice exchange. This model allows us to present a simple picture of the magnetization reversal of GdFeCo in response to an ultrafast heat pulse alone. The physical origin of this effect is revealed within the LLB equation as a dynamical reversal path resulting from the instability of the linear motion. 
To trigger the reversal path a small perpendicular component is necessary. In practice this will arise from random fluctuations of the magnetization at elevated temperatures. The perpendicular component grows in time, resulting in ultrafast magnetization precession in the inter-sublattice exchange field, also observed in atomistic simulations for small system sizes. The switching is initiated by the TM, which arrives at zero magnetization faster than the RE and responds dynamically to its exchange field. Thus, the non-equivalence of the two sub-lattices is an essential part of the process. Switching into the transient ferromagnetic state occurs due to the large-amplitude precessional motion of the TM in the exchange field from the RE and the slow dynamics of the RE. This work was supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreements NMP3-SL-2008-214469 (UltraMagnetron) N 214810 (FANTOMAS), NNP3-SL-2012-281043 (FEMTOSPIN) and the Spanish Ministry of Science and Innovation under the grant FIS2010-20979-C02-02. J. Stöhr and H. C. Siegmann, *Magnetism: from Fundamentals to Nanoscale Dynamics* (Springer, Berlin, 2006). E. Beaurepaire, J.-C. Merle, A. Daunois, and J.-Y. Bigot, Phys. Rev. Lett. **76**, 4250 (1996). M. Wietstruk, A. Melnikov, C. Stamm, T. Kachel, N. Pontius, M. Sultan, C. Gahl, M. Weinelt, H. A. Dürr, and U. Bovensiepen, Phys. Rev. Lett. **106**, 127401 (2011). C. D. Stanciu, F. Hansteen, A. V. Kimel, A. Kirilyuk, A. Tsukamoto, A. Itoh, Th. Rasing, Phys. Rev. Lett. **99**, 047601 (2007). F. Hansteen, A. Kimel, A. Kirilyuk and Th. Rasing, Phys. Rev. Lett. **95**, 047402 (2005). K. Vahaplar, A. M. Kalashnikova, A. V. Kimel, D. Hinzke, U. Nowak, R. Chantrell, A. Tsukamoto, A. Itoh, A. Kirilyuk and Th. Rasing, Phys. Rev. Lett. **103**, 117201 (2009). I. Radu, K. Vahaplar, C. Stamm, T. Kachel, N. Pontius, H. A. Dürr, T. A. Ostler, J. Barker, R. F. L. Evans, R. W. Chantrell, A. Tsukamoto, A. Itoh, A. Kirilyuk, Th.
Rasing and A. V. Kimel, Nature **472**, 205 (2011). T. A. Ostler, J. Barker, R. F. L. Evans, R. W. Chantrell, U. Atxitia, O. Chubykalo-Fesenko, S. El Moussaoui, L. Le Guyader, E. Mengotti, L. J. Heyderman, F. Nolting, A. Tsukamoto, A. Itoh, D. Afanasiev, B. A. Ivanov, A. M. Kalashnikova, K. Vahaplar, J. Mentink, A. Kirilyuk, Th. Rasing, and A. V. Kimel, Nature Commun. **3**, 666 (2012). T. A. Ostler, R. F. L. Evans, R. W. Chantrell, U. Atxitia, O. Chubykalo-Fesenko, I. Radu, R. Abduran, F. Radu, A. Tsukamoto, A. Itoh, A. Kirilyuk, Th. Rasing and A. Kimel, Phys. Rev. B **84**, 024407 (2011). J. H. Mentink, J. Hellsvik, D. V. Afanasiev, B. A. Ivanov, A. Kirilyuk, A. V. Kimel, O. Eriksson, M. I. Katsnelson, and Th. Rasing, Phys. Rev. Lett. **108**, 057202 (2012). U. Atxitia, O. Chubykalo-Fesenko, J. Walowski, A. Mann and M. Münzenberg, Phys. Rev. B **81**, 174401 (2010). M. Sultan, U. Atxitia, A. Melnikov, O. Chubykalo-Fesenko and U. Bovensiepen, Phys. Rev. B **85**, 184407 (2012). D. Garanin, Phys. Rev. B **55**, 3050 (1997). U. Atxitia, P. Nieves and O. Chubykalo-Fesenko, Phys. Rev. B **86**, 104414 (2012). O. Chubykalo-Fesenko, U. Nowak, R. W. Chantrell and D. Garanin, Phys. Rev. B **74**, 094436 (2006). R. F. L. Evans, D. Hinzke, U. Atxitia, U. Nowak, R. W. Chantrell and O. Chubykalo-Fesenko, Phys. Rev. B **85**, 014433 (2012). U. Atxitia and O. Chubykalo-Fesenko, Phys. Rev. B **84**, 144414 (2011).
--- abstract: 'We provide a model to study memory effects in quantum Gaussian channels with additive classical noise over an arbitrary number of uses. The correlation among different uses is introduced by contiguous two-mode interactions. Numerical results for a few modes are presented. They confirm the possibility of enhancing the classical information rate with the aid of entangled inputs, and show a likely asymptotic behavior that should lead to the full capacity of the channel.' author: - Giovanna Ruggeri - Stefano Mancini title: Quantum Gaussian Channels with Additive Correlated Classical Noise --- Introduction ============ A memoryless quantum communication channel makes the fundamental assumption that the noise between consecutive uses of the channel is independent. In many real-world applications this assumption may be justified, but for many others the noise may be strongly correlated between uses of the channel. Such a possibility has recently been put forward in channels with continuous alphabet, specifically lossy bosonic channels [@GM05]. The main motivation for investigating memory effects in such channels has been the possibility of enhancing their classical capacity by means of entangled inputs [@Rug05]. Lossy bosonic channels belong to the class of Gaussian channels, which play a prominent role because they may be simple enough to be analytically tractable, thus providing a fertile testing ground for the general theory of quantum capacities [@Hol99]. Another example of a Gaussian channel is provided by a channel that adds thermal noise to the input. For that channel it has recently been proved that entangled inputs enhance the 2-shot classical capacity in the presence of correlated noise [@Cerf04]. Here we present an extension of the model used in Ref.[@Cerf04], which can be employed for an arbitrary number of channel uses.
The memory effect is modeled by assuming that the noise affecting subsequent uses of the channel follows a Gaussian distribution with correlation introduced by contiguous two-mode interactions. We show that if the memory is non-zero and the input energy constraint is satisfied, the channel transmission rate can be improved by entangled input states instead of product states. Moreover, we notice the appearance of an asymptotic value of the transmission rate after many uses of the channel that should lead to the full capacity of the channel. The paper is organized as follows. In Sects. II and III, we review the basic notions about bosonic channels and Gaussian channels with additive classical noise. In Sect. IV we describe the action of the memory. Finally, in Sect. V we analyze the transmission rate of this channel and discuss our results. Bosonic channels ================ A bosonic channel is any completely positive and trace preserving map acting on the state of a mode of the electromagnetic field. On multiple uses of that channel one has to account for many such modes. Let us consider $n$ of them, associated with $n$ pairs of annihilation and creation operators $a_{j}$ and $a_{j}^{\dagger}$ (satisfying the canonical commutation relation), or, equivalently, with quadrature components $q_{j}\equiv(a_j+a_j^{\dag})/\sqrt{2}$ and $p_{j}\equiv -i(a_j-a_j^{\dag})/\sqrt{2}$. Ordering these in a column vector $${\bf r}=\left[ q_{1},\cdots, q_{n},p_{1},\cdots,p_{n}\right] ^{T}, \label{er}$$ we may define the mean vector ${\bf m}$ and the covariance matrix $\mathcal{V}$ of an $n$-mode state $\rho$ as $${\bf m}=\mbox{Tr}\left(\rho{\bf r}\right), \label{em}$$ $$\mathcal{V}=\mbox{Tr}\left[\left( {\bf r}-{\bf m}\right) \rho \left( {\bf r}-{\bf m}\right)^{T}\right]-\frac{1}{2}\mathcal{J}, \label{calV}$$ where $\mathcal{J}=i\left( \begin{array}{cc} 0 & \mathcal{I} \\ -\mathcal{I} & 0 \end{array} \right) $ is the symplectic matrix, with $\mathcal{I}$ the $n\times n$ identity matrix.
If the state $\rho$ is Gaussian, it is completely characterized by the mean vector ${\bf m}$ and the covariance matrix $\mathcal{V}$. Moreover, its von Neumann entropy is given by [@Hol99] $$S\left( \rho\right) =\sum_{j=1}^{n}g\left( \left| \lambda _{j}\right| -\frac{1}{2}\right) , \label{Srho}$$ where $\pm \lambda _{j}$ are the symplectic eigenvalues of the covariance matrix $\mathcal{V} $, that is, the solutions of the equation $$\mbox{det}\left[ \mathcal{V}-\lambda \mathcal{J}\right] =0, \label{char}$$ and $$g\left( x\right) =\left\{ \begin{array}{lr} \left( x+1\right) \log _{2}\left( x+1\right) -x\log _{2}x & x>0 \\ 0 & x=0 \end{array} \right. \label{gx}$$ is the entropy of a thermal state with a mean photon number $x$. Let us now consider the classical use of a bosonic channel described by a completely positive and trace preserving map $T$: $$\rho\mapsto T[\rho]. \label{mapT}$$ Quantum states then carry the values of a random classical variable. Quite generally, we can think of mapping phase-space points into quantum states; hence for each mode we consider two real random values (or one complex). We then label an input state over $n$ modes (uses) as $\rho_{\mbox{\boldmath$\alpha$}}$ with $\mbox{\boldmath$\alpha$}\in \mathbb{C}^n$. The coding theorem for quantum channels asserts that the classical capacity of the bosonic channel $T$ is given by $$\begin{aligned} C\left( T\right) &=&\lim_{n\to\infty}\frac{1}{n}\max \left[ S\left( \int P({d^{2n}\mbox{\boldmath$\alpha$}})T\left[ \rho _{\mbox{\boldmath$\alpha$}}\right] \right)\right. \nonumber\\ &&\left.\quad\qquad -\int P({d^{2n}\mbox{\boldmath$\alpha$}})S\left( T\left[ \rho _{\mbox{\boldmath$\alpha$}}\right] \right) \right] , \label{Cap}\end{aligned}$$ where $S$ denotes the von Neumann entropy and $P({d^{2n}\mbox{\boldmath$\alpha$}})$ is a probability measure.
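For a single mode these formulas are easy to make explicit: with $\mathcal{V}=\left(\begin{array}{cc} a & c\\ c& b\end{array}\right)$, Eq.(\[char\]) reads $ab-c^2-\lambda^2=0$, so the symplectic eigenvalue is $\sqrt{\det\mathcal{V}}$. A short numerical sketch (stdlib Python) checking that a thermal state with $N$ mean photons, $\mathcal{V}=(N+1/2)\,\mathcal{I}$, has entropy $g(N)$:

```python
import math

def g(x):
    # Eq. (gx): entropy of a thermal state with mean photon number x
    if x <= 0:
        return 0.0
    return (x + 1.0) * math.log2(x + 1.0) - x * math.log2(x)

def symplectic_eigenvalue(a, b, c):
    # single mode: det[V - lam*J] = a*b - c**2 - lam**2 = 0
    return math.sqrt(a * b - c * c)

N = 1.0
lam = symplectic_eigenvalue(N + 0.5, N + 0.5, 0.0)
entropy = g(abs(lam) - 0.5)
print(entropy)  # g(1) = 2 bits
```

The vacuum ($N=0$) gives symplectic eigenvalue $1/2$ and entropy $g(0)=0$, as it should.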
In Eq.(\[Cap\]), the maximum is taken over all probability measures $\left\{ P({d^{2n}\mbox{\boldmath$\alpha$}})\right\} $ and collections of density operators $\left\{ \rho _{\mbox{\boldmath$\alpha$}}\right\} $ satisfying the energy constraint $$\frac{1}{n}\int P(d^{2n}\mbox{\boldmath$\alpha$})\mbox{Tr}\left( \rho _{\mbox{\boldmath$\alpha$}} \sum_{j=1}^{n} a_{j}^{\dagger}a_{j}\right) \leq \overline{n}, \label{bound}$$ with $\overline{n}$ the maximum mean photon number per mode at the input of the channel. Gaussian channels with additive classical noise =============================================== Consider now a specific bosonic channel $T$ acting as follows $$\begin{aligned} T\left[\rho_{\mbox{\boldmath$\alpha$}}\right]=\int d^{2n} \mbox{\boldmath$\beta$}\ Q( \mbox{\boldmath$\beta$})\ D( \mbox{\boldmath$\beta$})\rho_{\mbox{\boldmath$\alpha$}}^{in} D^\dagger( \mbox{\boldmath$\beta$}), \label{outexp}\end{aligned}$$ where $D(\mbox{\boldmath$\beta$})\equiv D(\beta_1)D(\beta_2)\ldots D(\beta_n)$ with $D(\beta_j)$ denoting the displacement operator on the $j$th mode, $\beta_j\in\mathbb{C}$. This is known as a *Gaussian channel with additive classical noise* when the kernel is Gaussian, e.g. $$Q(\mbox{\boldmath$\beta$})=\prod_{k=1}^{n}\frac{1}{\pi N} \, e^{-\frac{|\beta_k|^2}{N}}. \label{Qbe}$$ In such a case the channel randomly displaces each input state according to a Gaussian distribution, which results in a thermal state ($N$ is the variance of the added noise on the quadrature components $q_j$ and $p_j$, or, equivalently, the number of thermal photons added per mode by the channel).
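As a sanity check of the constraint (\[bound\]): for a coherent state $\mbox{Tr}(\rho_{\alpha}a^{\dag}a)=|\alpha|^2$, and averaging $|\alpha|^2$ over a Gaussian distribution of variance $\overline{n}$ (as used for the encoding below) gives exactly $\overline{n}$ per mode. A seeded Monte-Carlo sketch (single mode, illustrative only):

```python
import random

random.seed(0)
nbar = 2.0                   # allowed mean photon number per mode
sigma = (nbar / 2.0) ** 0.5  # each quadrature of alpha has variance nbar/2
samples = 100_000
# alpha = x + i*y, so |alpha|^2 = x^2 + y^2 and E|alpha|^2 = nbar
mean_photons = sum(random.gauss(0.0, sigma) ** 2 + random.gauss(0.0, sigma) ** 2
                   for _ in range(samples)) / samples
print(mean_photons)  # close to nbar
```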
Since for Gaussian channels it is conjectured that Gaussian inputs allow one to achieve the capacity [@Hol99], let us consider encoding $\mbox{\boldmath$\alpha$}$ into coherent input states $$\rho_{\mbox{\boldmath$\alpha$}}^{in}= D(\mbox{\boldmath$\alpha$})|{\bf{0}}\rangle \langle{\bf{0}}|D^{\dag}(\mbox{\boldmath$\alpha$}), \label{rhoin}$$ where $D(\mbox{\boldmath$\alpha$})\equiv D(\alpha_1)D(\alpha_2)\ldots D(\alpha_n)$ and $|{\bf 0}\rangle\equiv|0\rangle|0\rangle\ldots|0\rangle$ are the vacuum states of the $n$ modes. Moreover, suppose that the state (\[rhoin\]) is drawn with probability $$P(\mbox{\boldmath$\alpha$})=\prod_{k=1}^{n}\frac{1}{\pi \overline{n}} \, e^{-\frac{|\alpha_k|^2}{\overline{n}}}, \label{Pal}$$ then the output states and their average read $$\begin{aligned} \rho_{\mbox{\boldmath$\alpha$}}^{out}&=&T\left[\rho_{\mbox{\boldmath$\alpha$}}^{in}\right],\\ \overline{\rho}^{out}&=&\int d^{2n} \mbox{\boldmath$\alpha$}\ P( \mbox{\boldmath$\alpha$}) \rho_{\mbox{\boldmath$\alpha$}}^{out}. \label{outave}\end{aligned}$$ The Gaussian map effected by the Gaussian channel (\[outexp\]), in the case of a Gaussian input state, can be characterized solely by the covariance matrices, that is $$\mathcal{V}^{in} \mapsto \mathcal{V}^{in} + {\rm diag}(N,N,\ldots,N), \label{mapV}$$ where $\mathcal{V}^{in} $ is the input covariance matrix. For the states of Eq.(\[rhoin\]), it is $$\mathcal{V}^{in} = \frac{1}{2}{\rm diag}(1,1,\ldots,1). \label{calVind}$$ We could also consider the possibility of entangled input states.
These can be accounted for by the transformation $\rho_{\mbox{\boldmath$\alpha$}}^{in}\rightarrow \Sigma(\mathcal{R})\rho_{\mbox{\boldmath$\alpha$}}^{in}\Sigma^{\dag}(\mathcal{R})$ that uses the multimode squeezing operator [@Lo93] $$\Sigma\left( \mathcal{R} \right) =\exp \left[ \frac{1}{2}\sum_{jk=1,j \neq k}^{n} \left( \mathcal{R}_{jk}a_{j}^{\dagger}a_{k}^{\dagger}-\mathcal{R}_{jk}^{\ast }a_{j}a_{k}\right) \right], \label{SigR}$$ with $\mathcal{R}$ a symmetric $n\times n$ matrix. Thus, the input covariance matrix reads $$\mathcal{V}^{in}= \frac{1}{2}\left( \begin{array}{cc} \exp \left( \mathcal{R}\right) & 0 \\ 0 & \exp \left( -\mathcal{R}\right) \end{array} \right) . \label{calVin}$$ Let $\overline{n}_r$ be the average number per mode of squeezed photons added at the input, i.e. $$\overline{n}_r=\frac{1}{n}\sum_{j=1}^{n}\langle{\bf 0}|\Sigma(\mathcal{R}) a_j^{\dag}a_j\Sigma^{\dag}(\mathcal{R})|{\bf 0}\rangle.$$ Then, due to the bound (\[bound\]), the effective number of photons to be input on each mode is $(\overline{n}-\overline{n}_r)$. The memory model ================ Now, let us consider memory effects in the Gaussian channel with additive classical noise. The output state (\[outexp\]) can be seen as the convolution of the input Gaussian state (\[rhoin\]) with a Gaussian thermal state having an average number of thermal photons $N$ per mode. To introduce correlations in the latter we use arguments from multimode squeezing. It is physically reasonable that only contiguous uses of the channel would be strongly correlated; thus we introduce the following symmetric matrix $$\mathcal{S}=-s\left( \begin{array}{ccccc} 0 & 1 & \cdots & \cdots & 0 \\ 1 & 0 & 1 & \cdots & \cdots \\ \cdots & 1 & 0 & 1 & \cdots \\ \cdots & \cdots & 1 & 0 & 1 \\ 0 & \cdots & \cdots & 1 & 0 \end{array} \right), \label{calS}$$ where $s$ is the parameter measuring the degree of memory.
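Since $\mathcal{S}$ is $-s$ times the $0$–$1$ tridiagonal matrix, its exponential (needed for the noise covariance below) has an explicit spectral form: the tridiagonal matrix has eigenvalues $2\cos[k\pi/(n+1)]$ with eigenvector components $\sqrt{2/(n+1)}\,\sin[jk\pi/(n+1)]$. A stdlib-only Python sketch (the helper name is ours), checked for $n=2$ against the closed form $\exp(\mathcal{S})=\cosh s\,\mathcal{I}-\sinh s\,\mathcal{X}$, with $\mathcal{X}$ the $2\times 2$ flip matrix:

```python
import math

def exp_tridiag(n, c):
    """exp(c*T) for the n-by-n tridiagonal matrix T with ones on the
    off-diagonals (so exp(S) above is exp_tridiag(n, -s)), built from its
    known eigenvalues 2*cos(k*pi/(n+1)) and eigenvectors with components
    sqrt(2/(n+1))*sin(j*k*pi/(n+1))."""
    norm = math.sqrt(2.0 / (n + 1))
    vecs = [[norm * math.sin(j * k * math.pi / (n + 1))
             for j in range(1, n + 1)] for k in range(1, n + 1)]
    vals = [2.0 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]
    return [[sum(vecs[k][i] * math.exp(c * vals[k]) * vecs[k][j]
                 for k in range(n))
             for j in range(n)] for i in range(n)]

# n = 2 check: exp(-s*T) = [[cosh s, -sinh s], [-sinh s, cosh s]]
s = 0.3
E = exp_tridiag(2, -s)
print(E[0][0], E[0][1])
```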
Then, we take the added noise as characterized by the covariance matrix $$\mathcal{V}^{N}=\mathcal{V}_{1}^{N}+\epsilon \mathcal{V} _{2}^{N}, \label{calVN}$$ where $\mathcal{V}_1^N$ is the diagonal matrix $$[\mathcal{V}_1^N]_{jj}= N-\epsilon[\mathcal{V}_2^N]_{jj},$$ and $$\mathcal{V}^{N}_2= \frac{1}{2}\left( \begin{array}{cc} \exp \left( \mathcal{S}\right) & 0 \\ 0 & \exp \left( -\mathcal{S}\right) \end{array} \right) .\label{calVN2}$$ In order to realize a physical transformation, the channel must have $\mathcal{V}^N\ge 0$. This is clearly satisfied if $\mathcal{V}_1^N\ge 0$ and $\mathcal{V}_2^N\ge 0$. The first condition determines the range of allowed values for the memory parameter $s$, while the second condition is always satisfied. The presence of the parameter $\epsilon$ in Eq.(\[calVN\]) only serves as a mathematical trick to guarantee the positivity of $\mathcal{V}^N$ even in the case of $N< 1/2$. It is $$\begin{aligned} \epsilon=1&\quad& N\ge\frac{1}{2}\label{ep1}\\ 0\le\epsilon\le 2N &\quad& N<\frac{1}{2}\label{ep2}\end{aligned}$$ Notice that the noise correlations so introduced are classical, and for $s\to 0$ in Eq.(\[calVN\]) we recover the memoryless case. For the entangled inputs we consider in Eq.(\[SigR\]) the symmetric matrix $$\mathcal{R}=-r\left( \begin{array}{ccccc} 0 & 1 & \cdots & \cdots & 0 \\ 1 & 0 & 1 & \cdots & \cdots \\ \cdots & 1 & 0 & 1 & \cdots \\ \cdots & \cdots & 1 & 0 & 1 \\ 0 & \cdots & \cdots & 1 & 0 \end{array} \right), \label{calR}$$ with $r$ the input entanglement parameter. At the output of the channel, we get states with the covariance matrix $$\mathcal{V}^{out}=\mathcal{V}^{in}+\mathcal{V}^{N}. \label{calVout}$$ In turn, the covariance matrix associated with the mixture of the output states (\[outave\]) is $$\overline{\mathcal{V}}^{out}=\mathcal{V}^{out}+\mathcal{K}. \label{calVoutave}$$ Here $$\mathcal{K}=\mathcal{K}_{1}+\theta \mathcal{K}_{2}.
\label{calK}$$ with $\mathcal{K}_1$ a diagonal matrix $$[\mathcal{K}_1]_{jj}= (\overline{n}-\overline{n}_r)-\theta[\mathcal{K}_2]_{jj}, \label{calK1}$$ and $\mathcal{K}_{2}$ being of the same form as $\mathcal{V} _{2}^{N}$ with $s\rightarrow -y$. The coefficient $y$ accounts for possible classical input correlations. Also in this case we must have $\mathcal{K}\ge 0$. This is clearly satisfied if $\mathcal{K}_1\ge 0$ and $\mathcal{K}_2\ge 0$. The first condition determines the range of allowed values for the parameter $y$, while the second condition is always satisfied. As with the parameter $\epsilon$ in Eq.(\[calVN\]), we have introduced in Eq.(\[calK\]) a parameter $\theta$ that only serves to guarantee the positivity of $\mathcal{K}$ when $(\overline{n}-\overline{n}_r)< 1/2$. It is $$\begin{aligned} \theta=1&\quad& (\overline{n}-\overline{n}_r)\ge\frac{1}{2}\label{th1}\\ 0\le\theta\le 2(\overline{n}-\overline{n}_r) &\quad& (\overline{n}-\overline{n}_r)<\frac{1}{2}. \label{th2}\end{aligned}$$ According to Eq.(\[Cap\]), the capacity is given by the asymptotic behavior ($n\to\infty$) of the maximum of the transmission rate $$R = \frac{1}{n}\left[S\left(\overline{\rho}^{out}\right)-S\left(\rho^{out}\right)\right]. \label{Rate1}$$ Since $\overline{\rho}^{out}$ and ${\rho}^{out}$ are both Gaussian states, following Eq.(\[Srho\]) we have $$R(r,y)=\frac{1 }{n}\sum_{j=1}^{n}\left[ g\left( \left| \overline{\lambda _{j}}^{out}\right| - \frac{1}{2}\right) -g\left( \left| \lambda _{j}^{out}\right| -\frac{1}{2} \right) \right] .\label{Rate2}$$ Thus, in order to evaluate the transmission rate, we need to compute the $n$ symplectic eigenvalues $\overline{ \lambda _{j}}^{out}$, $\lambda _{j}^{out}$ of $\overline{\mathcal{V}}^{out}$, $\mathcal{V}^{out}$ respectively. Results and Conclusions ======================= The transmission rate of Eq.(\[Rate2\]) has been evaluated numerically, restricting ourselves to cases where $\epsilon=\theta=1$.
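For $n=2$ uses (the case of Ref.[@Cerf04]) the symplectic spectra can be written in closed form: with $\epsilon=\theta=1$, every $2\times 2$ block of $\mathcal{V}^{out}$ and $\overline{\mathcal{V}}^{out}$ has the form $\alpha\mathcal{I}+\beta\mathcal{X}$, with $\mathcal{X}$ the flip matrix, so both symplectic eigenvalues equal $\sqrt{\alpha^2-\beta^2}$. The following self-contained Python sketch (our own parametrization; $\overline{n}_r$ is obtained from $\mathcal{V}^{in}$ as $\mbox{Tr}\,\mathcal{V}^{in}/(2n)-1/2$, and the positivity conditions above are assumed to hold) evaluates $R(r,y)$:

```python
import math

def g(x):
    return 0.0 if x <= 0 else (x + 1.0) * math.log2(x + 1.0) - x * math.log2(x)

def rate_two_uses(nbar, N, s, r, y):
    """Transmission rate R(r, y) for n = 2 channel uses, eps = theta = 1."""
    # squeezed photons per mode, computed from V_in = (1/2)diag(e^R, e^-R)
    nbar_r = 0.5 * (math.cosh(r) - 1.0)
    # V_out blocks are alpha*I -+ beta*X (opposite signs in q and p blocks)
    alpha = 0.5 * math.cosh(r) + N
    beta = 0.5 * (math.sinh(r) + math.sinh(s))
    lam_out = math.sqrt(alpha ** 2 - beta ** 2)   # doubly degenerate
    # average output: add K, whose blocks are (nbar - nbar_r)*I +- (sinh y / 2)*X
    alpha_bar = alpha + nbar - nbar_r
    beta_bar = beta - 0.5 * math.sinh(y)
    lam_bar = math.sqrt(alpha_bar ** 2 - beta_bar ** 2)
    # both symplectic eigenvalues coincide, so the 1/n sum collapses
    return g(lam_bar - 0.5) - g(lam_out - 0.5)

# memoryless point: reproduces the single-mode rate g(N + nbar) - g(N)
r0 = rate_two_uses(nbar=1.0, N=1.0, s=0.0, r=0.0, y=0.0)
# correlated noise raises the rate already for product coherent inputs
r_mem = rate_two_uses(nbar=1.0, N=1.0, s=0.3, r=0.0, y=0.0)
print(r0, r_mem)
```

The check at $s=r=y=0$ recovers $g(N+\overline{n})-g(N)$, and a non-zero $s$ already increases the rate for product coherent inputs, in line with the discussion of the numerical results below.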
From Fig.\[fig1\], we notice that when $s>0$, the rate $R$ optimized over $y$ increases with the degree of entanglement $r$ and attains a maximum at some value $r^{\ast}>0$, so that the maximum is achieved by entangled input states and is greater than that at $s=r=0$. Moreover, maximizing $R$ with respect to both $r$ and $y$ allows us to find the behavior of the transmission rate versus the number of channel uses $n$ (see Fig.\[fig2\]). Here we have considered $s=0, 0.1, 0.2$ (white, grey and black bars, respectively). Unfortunately, the evaluation of $R$ with larger $n$ requires substantial numerical resources. Nonetheless, from Fig.\[fig2\], we can see that the transmission rate increases with the number of channel uses $n$, achieving an asymptotic value depending on $s$. This can also be understood by considering that the rate increases with noise correlations. Initially, the rate grows rapidly until $n$ reaches the relevant number of modes that effectively interact. In fact, each mode effectively interacts with a limited number of other modes, which would be determined (besides the form of $\mathcal{S}$) by the value of $s$ (the smaller $s$, the smaller the number of modes that effectively interact). Thus the number of uses after which $R$ becomes almost constant can be thought of as the range of memory effects. In conclusion, we have provided a model to describe memory effects in a Gaussian channel with additive classical noise for an arbitrary number of uses. The model reduces to that of Ref.[@Cerf04] for only two uses. From our numerical results we conjecture the existence of an asymptotic value of the transmission rate after many uses of the channel that should lead to the full capacity of the channel. Hence, the 2-shot capacity would not suffice to characterize the channel performance, unless the memory is very weak. Quite generally, the minimum number of uses to consider would depend on the strength of the memory, i.e. the correlation length of the noise.
Of course, the results found are conditioned on the validity of the conjecture that Gaussian inputs allow one to achieve the capacity of a Gaussian channel [@Hol99]; however, the presented model for memory effects is always valid and could be extended to include attenuation/amplification besides additive noise. Acknowledgements {#acknowledgements .unnumbered} ================ The work of S.M. has been supported by the European Commission under the Integrated Projects “QAP” and “SCALA”. [000]{} V. Giovannetti and S. Mancini (2005), [*Bosonic memory channels*]{}, Phys. Rev. A Vol. 71, N. 062304, pp. 1-6. G. Ruggeri, G. Soliani, V. Giovannetti and S. Mancini (2005), [*Information transmission through lossy bosonic memory channels*]{}, Europhys. Lett. Vol. 70, pp. 719-725. A. S. Holevo, M. Sohma and O. Hirota (1999), [*Capacity of quantum Gaussian channels*]{}, Phys. Rev. A Vol. 59, pp. 1820-1828;\ A. S. Holevo, R. F. Werner (2001), [*Evaluating capacities of bosonic Gaussian channels*]{}, Phys. Rev. A Vol. 63, N. 032312, pp. 1-14. N. Cerf, J. Clavareau, C. Macchiavello and J. Roland (2005), [*Quantum entanglement enhances the capacity of bosonic channels with memory*]{}, Phys. Rev. A Vol. 72, N. 042330, pp. 1-4;\ N. Cerf, J. Clavareau, J. Roland and C. Macchiavello (2005), [*Information transmission via entangled quantum states in Gaussian channels with memory*]{}, quant-ph/0508197. C. F. Lo and R. Sollie (1993), [*Generalized multimode squeezed states*]{}, Phys. Rev. A Vol. 47, pp. 733-735.
--- abstract: | We present a method to obtain upper bounds on covering numbers. As applications of this method, we reprove and generalize results of Rogers on economically covering Euclidean $n$-space with translates of a convex body, or more generally, any measurable set. We obtain a bound for the density of covering the $n$-sphere by rotated copies of a spherically convex set (or, any measurable set). Using the same method, we sharpen an estimate by Artstein–Avidan and Slomka on covering a bounded set by translates of another. The main novelty of our method is that it is not probabilistic. The key idea, which makes our proofs rather simple and uniform through different settings, is an algorithmic result of Lovász and Stein. address: ' Márton Naszódi, Dept. of Geometry, Lorand Eötvös University, Pázmány Péter Sétány 1/C Budapest, Hungary 1117 ' author: - Márton Naszódi bibliography: - 'biblio.bib' title: On some covering problems in geometry --- Introduction ============ Given two sets $K$ and $L$ in ${{\mathbb R}^n}$ (resp. ${{\mathbb S}^n}$), we want to cover $K$ by as few translates (resp. rotated copies) of $L$ as possible. Upper bounds for these kinds of covering problems are often obtained by probabilistic methods, that is, by taking randomly chosen copies of $L$. We present a method that relies on an algorithmic result of Lovász and Stein, and yields proofs that are simple, non-probabilistic and quite uniform through different geometric settings. For two Borel measurable sets $K$ and $L$ in ${{\mathbb R}^n}$, let $N(K,L)$ denote the *translative covering number* of $K$ by $L$, ie. the minimum number of translates of $L$ that cover $K$. \[defn:fraccovcvx\] Let $K$ and $L$ be bounded Borel measurable sets in ${{\mathbb R}^n}$. A *fractional covering* of $K$ by translates of $L$ is a Borel measure $\mu$ on ${{\mathbb R}^n}$ with $\mu(x-L)\geq 1$ for all $x\in K$. 
The *fractional covering number* of $K$ by translates of $L$ is $$N^\ast(K,L)=\inf\left\{\mu({{\mathbb R}^n}){\; : \; }\mu \mbox{ is a fractional covering of } K \mbox{ by translates of } L\right\}.$$ Clearly, in Definition \[defn:fraccovcvx\] we may assume that a fractional cover $\mu$ is supported on $\operatorname{cl}(K-L)$. According to Theorem 1.7 of [@AS], we have $$\label{eq:simpleupperbound} \max\left\{\frac{\operatorname{vol}(K)}{\operatorname{vol}(L)},1\right\}\leq N^\ast(K,L)\leq\frac{\operatorname{vol}(K-L)}{\operatorname{vol}(L)}.$$ Here, the upper bound is easy to see: the Lebesgue measure restricted to $K-L$ and scaled as $\mu=\operatorname{vol}/\operatorname{vol}(L)$ is a fractional cover of $K$ by translates of $L$. For two sets $K, T\subset{{\mathbb R}^n}$, we denote their *Minkowski difference* by $K\sim T=\{x\in {{\mathbb R}^n}{\; : \; }T+x \subseteq K\}$. \[thm:cvxIG\] Let $K, L$ and $T$ be bounded Borel measurable sets in ${{\mathbb R}^n}$ and let $\Lambda\subset{{\mathbb R}^n}$ be a finite set with $K\subseteq \Lambda+T$. Then $$\label{eq:cvxIG} N(K,L)\leq (1+\ln( \max_{x\in K-L} \operatorname{card}((x+(L\sim T))\cap \Lambda ) ) ) \cdot N^\ast(K-T,L\sim T).$$ If $\Lambda\subset K$, then we have $$\label{eq:cvxIGspec} N(K,L)\leq (1+\ln( \max_{x\in K-L} \operatorname{card}((x+(L\sim T))\cap \Lambda ) ) ) \cdot N^\ast(K,L\sim T).$$ For a set $K\subset{{\mathbb R}^n}$ and $\delta>0$, we denote the *$\delta$-inner parallel body* of $K$ by $K_{-\delta}:=K\sim {\mathbf B}(o,\delta)=\{x\in K{\; : \; }{\mathbf B}(x,\delta)\subseteq K\}$, where ${\mathbf B}(x,\delta)$ denotes the Euclidean ball of radius $\delta$ centered at $x$. As an application of Theorem \[thm:cvxIG\], we will obtain \[thm:Renbyanything\] Let $K\subseteq{{\mathbb R}^n}$ be a bounded measurable set. 
Then there is a covering of ${{\mathbb R}^n}$ by translated copies of $K$ of density at most $$\inf_{\delta>0}\left[ \frac{\operatorname{vol}(K)}{\operatorname{vol}(K_{-\delta})} \left( 1+\ln\frac{\operatorname{vol}\left(K_{-\delta/2}\right)}{\operatorname{vol}\left({\mathbf B}\left(o,\frac{\delta}{2} \right)\right) }\right)\right].$$ The $\delta$-inner parallel body could be defined with respect to a norm that is distinct from the Euclidean. As is easily seen from the proof, the theorem would still hold. Now, we turn to coverings on the sphere. We denote the Haar probability measure on ${{\mathbb S}^n}\subset{{\mathbb R}^{n+1}}$ by $\sigma$, the closed spherical cap of spherical radius ${\varphi}$ centered at $u\in{{\mathbb S}^n}$ by $C(u,{\varphi})$, and its measure by $\Omega({\varphi})=\sigma(C(u,{\varphi}))$. For a set $K\subset{{\mathbb S}^n}$ and $\delta>0$, we denote the *$\delta$–inner parallel body* of $K$ by $K_{-\delta}=\{u\in K{\; : \; }C(u,\delta)\subseteq K\}$. A set $K\subset{{\mathbb S}^n}$ is called *spherically convex*, if it is contained in an open hemisphere and for any two of its points, it contains the shorter great circular arc connecting them. The *spherical circumradius* of a subset of an open hemisphere of ${{\mathbb S}^n}$ is the spherical radius of the smallest spherical cap (the *circum-cap*) that contains the set. \[thm:spherebyanything\] Let $K\subseteq{{\mathbb S}^n}$ be a measurable set. Then there is a covering of ${{\mathbb S}^n}$ by rotated copies of $K$ of density at most $$\inf_{\delta>0}\left[ \frac{\sigma(K)}{\sigma(K_{-\delta})} \left( 1+\ln\frac{\sigma\left(K_{-\delta/2}\right)}{\Omega\left(\frac{\delta}{2}\right) }\right)\right].$$ \[cor:spherebyconvex\] Let $K\subseteq{{\mathbb S}^n}$ be a spherically convex set of spherical circumradius $\rho$. 
Then there is a covering of ${{\mathbb S}^n}$ by rotated copies of $K$ of density at most $$\inf_{\kappa>0{\; : \; }K_{-\left(\kappa\rho\right)}\neq\emptyset}\left[ \frac{\sigma(K)}{\sigma(K)-\Omega(\rho)\left(1-(1-\kappa)^n\right)} \left( 2n+n\ln\frac{1}{\kappa\rho} \right)\right].$$ We prove the Euclidean results in Section \[sec:Renbound\], and the spherical results in Section \[sec:Sphbound\]. History {#sec:history} ======= An important point in the theory of coverings in geometry is the following theorem of Rogers [@Ro57]. For a definition of the covering density, cf. [@RoBook64]. \[thm:Rogers\] Let $K$ be a bounded convex set in ${{\mathbb R}^n}$ with non-empty interior. Then the covering density of $K$ is at most $$\theta(K)\leq n\ln n+ n\ln\ln n + 5n.$$ Earlier, exponential upper bounds for the covering density were obtained by Rogers, Bambah and Roth, and for the special case of the Euclidean ball by Davenport and Watson (cf. [@Ro57] for references). The current best bound is due to G. Fejes Tóth [@FTG09], who replaced the last term $5n$ by $n+o(n)$. We will obtain Theorem \[thm:Rogers\] as a corollary to our more general Theorem \[thm:Renbyanything\]. Another classical example of a geometric covering problem is the following. Estimate the minimum number of spherical caps of radius ${\varphi}$ needed to cover the sphere ${{\mathbb S}^n}$ in ${{\mathbb R}^{n+1}}$. \[thm:spherebycaps\] Let $0<{\varphi}<\frac{\pi}{2}$. Then there is a covering of ${{\mathbb S}^n}$ by spherical caps of radius ${\varphi}$ with density at most $n\ln n+n\ln\ln n+5n$. This estimate was proved in [@BW03] improving an earlier result of Rogers [@Ro63]. The current best bound is better when ${\varphi}<\frac{\pi}{3}$: Dumer [@Du07] gave a covering in this case of density at most $\frac{n\ln n}{2}$. We will obtain Theorem \[thm:spherebycaps\] as a corollary to our more general Theorem \[thm:spherebyanything\]. 
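The engine behind the proofs below is the greedy covering algorithm of Lovász and Stein recalled in the Preliminaries (Lemma \[lem:Lovasz\]). As a concrete illustration (a sketch in Python, not part of the paper's arguments; the toy hypergraph is invented for this purpose), the following code runs the greedy rule and checks the resulting cover against the bound $\tau\leq(1+\ln\max_{H\in{\mathcal H}}\operatorname{card}H)\,\tau^\ast$:

```python
# Illustrative sketch (not from the paper): the greedy covering algorithm
# behind the Lovasz-Stein bound, run on an invented toy hypergraph.
from math import log

def greedy_cover(points, sets):
    """Repeatedly pick the set covering the largest number of uncovered points."""
    uncovered = set(points)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the given sets do not cover all points")
        chosen.append(best)
        uncovered -= best
    return chosen

# Toy instance: cover {0,...,11} by the residue classes modulo 2, 3 and 4.
points = range(12)
sets = [set(range(i, 12, step)) for step in (2, 3, 4) for i in range(step)]

cover = greedy_cover(points, sets)

# Fractional relaxation: weight 1 on each of the two residue classes mod 2
# is a fractional cover of total weight 2, hence tau* <= 2, and the greedy
# cover satisfies tau <= (1 + ln max|H|) * tau*.
bound = (1 + log(max(len(s) for s in sets))) * 2
```

On this instance the greedy rule picks the two residue classes modulo 2, a cover of size 2, comfortably within the bound of roughly $5.6$.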
The fractional version of $N(K,\operatorname{int}K)$ (see Definition \[defn:fraccovgen\]) first appeared in [@Na09] and, in general for $N(K,L)$, in [@AR11] and [@AS]. A result very similar to our Theorem \[thm:cvxIG\] appeared as Theorem 1.6 in the paper [@AS] by Artstein-Avidan and Slomka. The main differences are the following. Quantitatively, our result is somewhat stronger by having $\max \operatorname{card}(\dots)$ in the logarithm as opposed to $\operatorname{card}\Lambda$. This allows us to obtain Theorems \[thm:Rogers\] and \[thm:Renbyanything\] as corollaries to Theorem \[thm:cvxIG\]. Furthermore, we have no minor term of order $\sqrt{\ln (\operatorname{card}\Lambda)(N^\ast+1)}$. The method of the proof in [@AS] consists of two parts. One is to reduce the problem to a finite covering problem by replacing $K$ by a sufficiently dense finite set (a $\delta$-net). Next, a probabilistic argument is used to solve the finite covering problem. A similar route is followed in [@FuKa08] where a variant of Theorem \[thm:Rogers\] (previously obtained in [@ErRo61]) is proved (using Lovász’s Local Lemma) according to which a low-density covering of ${{\mathbb R}^n}$ by translates of $K$ exists with the additional requirement that no point is covered too many times. An even earlier appearance of this method in the context of the illumination problem can be found in [@Sch88]. A major contribution of [@AS] is that they used this method to bridge the gap between $N$ and $N^\ast$, that is, they noticed that the method works with any measure, not just the volume. We also use the first part of the method (taking a $\delta$-net), but then replace the second (probabilistic) part by a simple application of a non-probabilistic result, Lemma \[lem:Lovasz\]. Preliminaries {#sec:prelim} ============= We start by introducing some combinatorial notions. \[defn:fraccovgen\] Let $Y$ be a set, ${\mathcal F}$ a family of subsets of $Y$ and $X\subseteq Y$. 
A *covering* of $X$ by ${\mathcal F}$ is a subset of ${\mathcal F}$ whose union contains $X$. The *covering number* $\tau(X,{\mathcal F})$ of $X$ by ${\mathcal F}$ is the minimum cardinality of its coverings by ${\mathcal F}$. A *fractional covering* of $X$ by ${\mathcal F}$ is a measure $\mu$ on ${\mathcal F}$ with $$\mu(\{F\in{\mathcal F}{\; : \; }x\in F\})\geq 1\;\;\;\mbox{ for all } x\in X.$$ The *fractional covering number* of $X$ by ${\mathcal F}$ is $$\tau^\ast(X,{\mathcal F})=\inf\left\{\mu({\mathcal F}){\; : \; }\mu \mbox{ is a fractional covering of } X \mbox{ by } {\mathcal F}\right\}.$$ When a group $G$ acts on $Y$ and ${\mathcal F}$ is the set $\{g(A){\; : \; }g\in G\}$ for some fixed subset $A$ of $Y$, we will identify $F\in{\mathcal F}$ with $\{g\in G{\; : \; }g(A)=F\}\subseteq G$ and thus, we will call a measure $\mu$ on $G$ a fractional covering of $X$ by $G$ if $$\mu(\{g\in G {\; : \; }x\in g(A)\})\geq 1\;\;\;\mbox{ for all } x\in X.$$ For more on (fractional) coverings, cf. [@Fu88] in the abstract (combinatorial) setting and [@Ma02] in the geometric setting. The gap between $\tau$ and $\tau^\ast$ is bounded in the case of finite set families (hypergraphs) by the following result of Lovász [@Lo75] and Stein [@St74]. \[lem:Lovasz\] For any finite $\Lambda$ and ${\mathcal H}\subseteq 2^\Lambda$ we have $$\label{eq:LovaszIG} \tau(\Lambda,{\mathcal H}) < (1+\ln(\max_{H\in {\mathcal H}}\operatorname{card}H))\tau^\ast(\Lambda,{\mathcal H}).$$ Furthermore, the greedy algorithm (always picking the set that covers the largest number of uncovered points) yields a covering of cardinality less than the right hand side in . The following straightforward corollary to Lemma \[lem:Lovasz\] is a key element of our proofs. \[obs:IG\] Let $Y$ be a set, ${\mathcal F}$ a family of subsets of $Y$, and $X\subseteq Y$. Let $\Lambda$ be a finite subset of $Y$ and $\Lambda\subseteq U\subseteq Y$. 
Assume that for another family ${\mathcal F}^\prime$ of subsets of $Y$ we have $\tau(X,{\mathcal F})\leq\tau(\Lambda,{\mathcal F}^\prime)$. Then $$\label{eq:IG} \tau(X,{\mathcal F})\leq \tau(\Lambda,{\mathcal F}^\prime)\leq (1+\ln( \max_{F^\prime\in{\mathcal F}^\prime} \operatorname{card}\{\Lambda\cap F^\prime \} ) ) \cdot \tau^\ast(U, {\mathcal F}^\prime).$$ We will rely on the following estimates of $\Omega$ by Böröczky and Wintsche [@BW03]. \[lem:BWcapsize\] Let $0<{\varphi}<\pi/2$. $$\begin{aligned} \Omega({\varphi})&>& \frac{\sin^n{\varphi}}{\sqrt{2\pi(n+1)}}, \label{eq:BWnagy} \\ \Omega({\varphi})&<& \frac{\sin^n{\varphi}}{\sqrt{2\pi n}\cos{\varphi}} ,\;\;\;\;\mbox{ if } {\varphi}\leq \arccos \frac{1}{\sqrt{n+1}}, \label{eq:BWkicsi} \\ \Omega(t{\varphi})&<& t^n\Omega({\varphi}),\;\;\;\;\mbox{ if } 1<t<\frac{\pi}{2{\varphi}}. \label{eq:BWtszer}\end{aligned}$$ The following is known as Jordan’s inequality: $$\label{eq:jordan} \frac{2x}{\pi}\leq\sin x\;\;\mbox{ for }\;\;x\in[0,\pi/2]$$ Proof of the covering results in Rn {#sec:Renbound} =================================== We present these proofs in the order of their difficulty. In this way, ideas and technicalities are –perhaps– easier to separate. The proof is simply a substitution into . We take $Y={{\mathbb R}^n}$, $X=K$, ${\mathcal F}=\{L+x{\; : \; }x\in K-L\}$, ${\mathcal F}^\prime=\{L\sim T+x{\; : \; }x\in K-L \}$. One can take $U=K-T$ as any member of $\Lambda$ not in $K-T$ could be dropped from $\Lambda$ and $\Lambda$ would still have the property that $\Lambda +T \supseteq K$. That proves . To prove , we notice that in the case when $\Lambda\subset K$, one can take $U=K$. Let $C$ denote the cube $C=[-a,a]^n$, where $a>0$ is large. Our goal is to cover $C$ by translates of $K$ economically. Fix $\delta>0$, and let $\Lambda\subset{{\mathbb R}^n}$ be a finite set such that $\Lambda+{\mathbf B}(o,\delta/2)$ is a saturated (ie. maximal) packing of ${\mathbf B}(o,\delta/2)$ in $C+{\mathbf B}(o,\delta/2)$. 
Thus, by the maximality, we have that $\Lambda$ is a $\delta$-net of $C$ with respect to the Euclidean distance, ie. $\Lambda+{\mathbf B}(o,\delta)\supseteq C$. By considering volume, for any $x\in {{\mathbb R}^n}$ we have $$\label{eq:lambdasmallRR} \operatorname{card}\big({\Lambda\cap (x+K_{-\delta})}\big)\leq \frac{\operatorname{vol}\left(K_{-\delta} +{\mathbf B}(o,\delta/2)\right)}{\operatorname{vol}\left({\mathbf B}(o,\delta/2)\right)}\leq \frac{\operatorname{vol}\left(K_{-\delta/2}\right)}{\operatorname{vol}\left({\mathbf B}(o,\delta/2)\right)}.$$ Let $\varepsilon>0$ be fixed. Clearly, if $a$ is sufficiently large then $$\label{eq:nstarobviousRR} N^\ast(C+{\mathbf B}(o,\delta/2),K_{-\delta})\leq \frac{\operatorname{vol}\left(C+{\mathbf B}(o,\delta/2)-K_{-\delta}\right)}{\operatorname{vol}K_{-\delta}}\leq (1+{\varepsilon})\frac{\operatorname{vol}C}{\operatorname{vol}K_{-\delta}}.$$ By , and we have $$N(C,K)\leq (1+{\varepsilon}) \left(1+\ln \frac{\operatorname{vol}\left(K_{-\delta/2}\right)}{\operatorname{vol}\left({\mathbf B}(o,\delta/2)\right)} \right) \frac{\operatorname{vol}C}{\operatorname{vol}K_{-\delta}}.$$ Finally, $$\label{eq:thetasymmRR} \theta(K)\leq N(C,K)\operatorname{vol}(K)/\operatorname{vol}(C)$$ yields the promised bound. Let $C$ denote the cube $C=[-a,a]^n$, where $a>0$ is large. Our goal is to cover $C$ by translates of $K$ economically. First, consider the case when $K=-K$. Let $\delta>0$ be fixed (to be chosen later) and let $\Lambda\subset{{\mathbb R}^n}$ be a finite set such that $\Lambda+\frac{\delta}{2}K$ is a saturated (ie. maximal) packing of $\frac{\delta}{2}K$ in $C-\frac{\delta}{2}K$. Thus, by the maximality, we have that $\Lambda$ is a $\delta$-net of $C$ with respect to $K$, ie. $\Lambda+\delta K\supseteq C$. 
By considering volume, for any $x\in {{\mathbb R}^n}$ we have $$\label{eq:lambdasmall} \operatorname{card}\big({\Lambda\cap (x+(1-\delta)K)}\big)\leq \frac{\operatorname{vol}\left((1-\delta)K +\frac{\delta}{2}K\right)}{\operatorname{vol}\left(\frac{\delta}{2}K\right)}\leq \left(\frac{2}{\delta}\right)^n.$$ Let $\varepsilon>0$ be fixed. Clearly, if $a$ is sufficiently large then $$\label{eq:nstarobvious} N^\ast(C-\delta K,(1-\delta)K)\leq (1+{\varepsilon})\frac{\operatorname{vol}C}{(1-\delta)^n\operatorname{vol}K}.$$ By , and we have $$N(C,K)\leq \frac{1+{\varepsilon}}{(1-\delta)^n} \left(1+n\ln \left(\frac{2}{\delta}\right)\right) \frac{\operatorname{vol}C}{\operatorname{vol}K}.$$ On the other hand, $$\label{eq:thetasymm} \theta(K)\leq N(C,K)\operatorname{vol}(K)/\operatorname{vol}(C)\leq \frac{1+{\varepsilon}}{(1-\delta)^n} \left(1+n\ln \left(\frac{2}{\delta}\right)\right).$$ We choose $\delta=\frac{1}{2n\ln n}$, and the following standard computation $$\begin{aligned} \label{eq:lnnesszam} (1+\varepsilon)^{-1}\theta(K)\leq \left(1+n\ln (4n\ln n)\right)\exp(1/\ln n) \\ \leq \left(1+n\ln (4n\ln n)\right) (1+2/\ln n) \leq \left(n\ln n+n\ln\ln n+ 5n\right),\nonumber\end{aligned}$$ yields the desired bound (as $\varepsilon$ can be taken arbitrarily close to 0). Next, consider the general case, that is when $K$ is not necessarily symmetric about the origin. We need to make the following modifications. Milman and Pajor (cf. Corollary 3 of [@MiPa00]) showed that, if the centroid (that is, the center of mass) of $K$ is the origin, then $\operatorname{vol}(K\cap-K)\geq \frac{\operatorname{vol}K}{2^n}$. (Note that the existence of a translate of $K$ for which this inequality holds was proved by Stein [@St56] using a probabilistic argument.) We define $\Lambda$ as a saturated packing of translates of $\frac{\delta}{2}(K\cap -K)$ in $C-\frac{\delta}{2}(K\cap -K)$. Thus, we have $C\subseteq \Lambda+\delta (K\cap-K)\subseteq \Lambda+\delta K$. 
Instead of , we now have $$\operatorname{card}\big({\Lambda\cap (x+(1-\delta)K)}\big)\leq \left(\frac{4}{\delta}\right)^n$$ for any $x\in{{\mathbb R}^n}$. Rolling this change through the proof, at the end in place of , we obtain $$\theta(K)\leq \frac{1+{\varepsilon}}{(1-\delta)^n} \left(1+n\ln \left(\frac{4}{\delta}\right)\right),$$ which, however, is still less than $(1+{\varepsilon})\left(n\ln n+n\ln\ln n+ 5n\right)$ with the same choice of $\delta=\frac{1}{2n\ln n}$. Proof of the spherical results {#sec:Sphbound} ============================== Let $\Lambda$ be the set of centers of a saturated (ie. maximal) packing of caps of radius $\delta/2$. Clearly, $\Lambda$ is a $\delta$-net of ${{\mathbb S}^n}$, and thus, if we cover $\Lambda$ by rotated copies of $K_{-\delta}$, then the same rotations yield a covering of ${{\mathbb S}^n}$ by copies of $K$. Let $\bar\sigma$ denote the probability Haar measure on ${SO(n+1)}$. Let $H\subset{{\mathbb S}^n}$ be a measurable set, and denote the family of rotated copies of $H$ by ${\mathcal F}(H)=\{AH{\; : \; }A\in{SO(n+1)}\}$. Recall that for any fixed $u\in{{\mathbb S}^n}$ we have $$\begin{aligned} \bar\sigma(\{A\in{SO(n+1)}{\; : \; }u\in AH\}) =\\ \bar\sigma(\{A\in{SO(n+1)}{\; : \; }u\in A^{-1}H\})= \nonumber \\ \bar\sigma(\{A\in{SO(n+1)}{\; : \; }Au\in H\}) = \sigma(H) \nonumber.\end{aligned}$$ It follows that the measure $\frac{\bar\sigma}{\sigma(H)}$ on ${SO(n+1)}$ is a fractional cover of ${{\mathbb S}^n}$ by ${\mathcal F}(H)$ and thus, $\tau^{\ast}({{\mathbb S}^n},{\mathcal F}(H))\leq\frac{1}{\sigma(H)}$. 
Thus by , we obtain the following for the density of a covering by rotated images of $K$: $$\mbox{ density } \leq \sigma(K)\tau({{\mathbb S}^n},{\mathcal F}(K))\leq\sigma(K)\tau(\Lambda, {\mathcal F}(K_{-\delta}))$$ $$\leq (1+\ln( \max_{A\in{SO(n+1)}} \operatorname{card}\{\Lambda\cap AK_{-\delta} \} ) ) \cdot \frac{\sigma(K)}{\sigma(K_{-\delta})}$$ $$\leq \frac{\sigma(K)}{\sigma(K_{-\delta})} \left( 1+\ln\frac{\sigma\left(K_{-\delta/2}\right)}{\Omega\left(\frac{\delta}{2}\right) }\right).$$ Since it holds for any $\delta>0$, the theorem follows. We will apply Theorem \[thm:spherebyanything\] with $K$ being a cap of spherical radius ${\varphi}$. We set $\delta=\eta{\varphi}$, where $\eta$ will be specified later. By Theorem \[thm:spherebyanything\] and , we obtain for the density of a covering of ${{\mathbb S}^n}$ by caps of radius ${\varphi}$: $$\mbox{ density } \leq \left(1+n\ln\left(\frac{2}{\eta}\right) \right) \cdot \left(\frac{1}{1-\eta}\right)^n.$$ We choose $\eta=\frac{1}{2n\ln n}$, and the same computation as in yields the desired bound. We set $\delta=\kappa\rho$. First, observe that the measure of the belt-like region ${K\setminus K_{-\delta}}$ at the boundary of $K$ is at most as large as the measure of the belt-like region $C(v,\rho)\setminus C(c,\rho-\delta)$ at the boundary of the circum-cap $C(v,\rho)$ of $K$. Next, combine $\ln\frac{\sigma\left(K_{-\delta/2}\right)}{\Omega\left(\frac{\delta}{2}\right)} \leq \ln\frac{1}{\Omega\left(\frac{\delta}{2}\right)}$ with and to obtain the statement. Acknowledgement {#acknowledgement .unnumbered} =============== The author is grateful for the conversations with Károly Bezdek, Gábor Fejes Tóth and János Pach.
--- abstract: 'To probe the role of the intrinsic structure of the projectile on sub-barrier fusion, a measurement of fusion cross sections has been carried out in the $^{9}$Be + $^{197}$Au system in the energy range E$_{c.m.}$/V$_B$ $\approx$ 0.82 to 1.16 using the off-beam gamma counting method. The measured fusion excitation function has been analyzed in the framework of the coupled-channel approach using the CCFULL code. It is observed that the coupled-channel calculations, including couplings to the inelastic state of the target and the first two states of the rotational band built on the ground state of the projectile, provide a very good description of the sub-barrier fusion data. At above-barrier energies, the fusion cross section is found to be suppressed by $\approx$ 39(2)% as compared to the coupled-channel prediction. A comparison of the reduced excitation function of $^{9}$Be + $^{197}$Au with other $x$ + $^{197}$Au systems shows a larger enhancement for $^9$Be in the sub-barrier region amongst weakly and tightly bound projectiles with Z=2-5, which indicates the prominent role of the projectile deformation in addition to the weak binding.' author: - 'Malika Kaushik$^{1}$' - 'G. Gupta$^{2}$' - 'Swati Thakur$^{1}$' - 'H. Krishnamoorthy$^{3,4}$' - 'Pushpendra P. Singh$^{1}$' - 'V.V. Parkar$^{5}$' - 'V. Nanal$^{2}$' - 'A. Shrivastava$^{4,5}$' - 'R.G. Pillay$^{1}$' - 'K. Mahata$^{4,5}$' - 'K. Ramachandran$^{5}$' - 'S. Pal$^{6}$' - 'C.S. Palshetkar$^{2}$' - 'S.K. Pandit$^{5}$' title: 'Fusion of Borromean nucleus $^{9}$Be with $^{197}$Au target at near barrier energies' --- \[sec:level1\]Introduction ========================== Nuclear reactions involving weakly bound stable ($^{6,7}$Li, $^{9}$Be) and unstable ($^{6,8}$He, $^{7,10,11}$Be) projectiles have been extensively investigated in recent years due to their importance in understanding the effect of coupling to the continuum and the many-body quantum tunneling phenomenon [@lf1; @nk; @lf2; @bb]. 
In particular, efforts have been made to understand the role of the low break-up threshold of projectiles, arising from extended shapes and $\alpha$ + x cluster structures, on the fusion cross sections. During the projectile-target interaction, a weakly bound projectile may break up into its constituent $\alpha$-cluster(s) before reaching the fusion barrier. Hence, both complete fusion (CF) - where the entire projectile fuses with the target nucleus - and break-up fusion or incomplete fusion (ICF) - where a part of the projectile fuses with the target nucleus - are observed. The projectile break-up results in a reduced incoming flux [@nti; @jt], and therefore the cross section of CF is expected to be suppressed as compared to that for a tightly bound projectile. The observed sub-barrier fusion enhancement can be explained within the framework of coupled-channel calculations by including couplings to the inelastic states and to direct reaction channels such as neutron transfer and break-up. Although the phenomenon of fusion suppression is widely accepted and attributed to the weak binding of nuclei, its origin is not yet fully understood. In reactions involving $^{6,7}$Li and $^{9}$Be projectiles with heavy mass targets, the complete fusion has been reported to be suppressed by $\approx$ 30$\%$ as compared to the standard coupled-channel calculations [@lf1; @nk; @bb; @rr; @jla; @md3; @sg; @tp; @hgi; @bw]. A systematic study of the break-up effects on the complete fusion cross sections at energies above the Coulomb barrier is reported by Wang *et al.* [@bw]. In that report, it has been shown that, for a given projectile, the suppression effect is independent of the target. Generally, a strong correlation is observed between the suppression factor and the lowest break-up threshold energy [@bw; @amplb]. 
Hinde *et al.* [@hinde2010] reported that, in spite of widely different $\alpha$-break-up threshold energies, $^{9,10,11}$Be show a significant suppression of the complete fusion. Recently, Cook *et al.* [@kjc] concluded that cluster transfer, rather than actual break-up prior to reaching the fusion barrier, is responsible for the fusion suppression. While the CF suppression with weakly bound nuclei has been studied in many systems, comparative studies of the fusion cross-section enhancement at sub-barrier energies, probing the effect of the projectile structure, have been sparse. Lemasson *et al.* [@al] reported that complete fusion cross sections at sub-barrier energies for the halo nucleus $^{8}$He were significantly enhanced as compared to $^{4}$He, mainly due to the coupling to the neutron transfer channel. Further, the complete fusion cross sections of $^{8}$He and $^{6}$He were found to be similar, which was attributed to the role of higher-order processes with neutron-pair transfer preceding fusion. As mentioned earlier, the weakly bound stable $^{6}$Li, $^{7}$Li, $^{9}$Be nuclei are dominantly clusters of alpha-deuteron, alpha-triton, and alpha-alpha-neutron, respectively. In particular, $^{9}$Be exhibits a Borromean structure ($\alpha$+$\alpha$+n) with a large deformation in the ground state. Moreover, the ground state is the only bound state of the system, and all excited states are particle unbound. Hence, reactions with $^9$Be at near barrier energies are important for a systematic study of weakly bound stable and unstable projectiles. In the present work, the fusion cross sections in the $^{9}$Be + $^{197}$Au system have been measured at near barrier energies and analyzed in the framework of the coupled-channel approach using the theoretical model code CCFULL. 
The choice of the $^{9}$Be + $^{197}$Au system is primarily driven by the fact that fusion studies with different weakly bound projectiles, namely, $^{6}$He [@yup], $^{8}$He [@al] and $^{6,7}$Li [@cs], on a $^{197}$Au target have been reported earlier. A comparative study of these systems, together with the $^{11}$B + $^{197}$Au data [@asb], enables an assessment of the impact of weak binding on sub-barrier fusion. This paper is organized as follows: experimental details are given in Section II; results and discussions of the experimental data, employing statistical model calculations and coupled-channel calculations, are described in Section III. In Section IV, a systematic comparison of weakly bound projectiles on $^{197}$Au is presented, and a summary of the present work is given in Section V. \[sec:level1\]Experimental details ================================== The experiment was performed at the Pelletron Linac facility at TIFR, Mumbai, India. Self-supporting $^{197}$Au target foils of thickness $\sim$ 1.3 - 1.7 mg/cm$^{2}$ were prepared using the rolling technique. The $^{9}$Be beam in the energy range E$_{lab}$ $\approx$ 30 - 47 MeV bombarded the $^{197}$Au target with a typical beam current of 8 - 15 pnA. Aluminium catcher foils of appropriate thickness ($\sim$ 1.5 mg/cm$^{2}$) were mounted behind the target foils to stop the recoiling reaction residues. For the effective utilization of beam time, some of the irradiations were performed using a stack of two target-catcher foil assemblies. The incident energy and the energy spread at half target thickness were calculated using SRIM [@tr]. In order to correct for beam fluctuations during the irradiation, the beam current was recorded at regular intervals of 30 or 60 seconds using a CAMAC (Computer Automated Measurement And Control) scaler. The gamma-rays from the irradiated samples were counted off-line using two efficiency-calibrated HPGe detectors. 
The energy calibration and efficiency measurement of the HPGe detectors were carried out using a standard precalibrated $^{152}$Eu $\gamma$-ray source. Both HPGe detectors were shielded with $\sim$5 cm thick lead rings to reduce the ambient background. Data were recorded using a digital data acquisition system (DAQ) employing a CAEN digitizer (14 bit ADC, 100 MHz sampling rate), and the off-line data analysis was performed using the LAMPS software [@lam]. The typical energy resolution obtained is about 2.4 keV at 1408 keV. The off-line counting was performed either at a distance of 10 cm from the face of the detector or in close geometry (in which the sample was mounted on the face of the detector), depending on the activity of the irradiated sample. For the two lowest energies, the target and catcher foils were counted separately to improve the sensitivity. 

  Channel   ER           T$_{1/2}$   E$_{\gamma}$ (keV)   I$_{\gamma}$ ($\%$)
  --------- ------------ ----------- -------------------- ---------------------
  2n        $^{204}$Bi   11.22 h     374.7                82
                                     984                  59
                                     899.15               99
  3n        $^{203}$Bi   11.76 h     820.2                30
                                     825.2                14.8
                                     896.9                13.2
  4n        $^{202}$Bi   1.71 h      422.13               83.7
                                     657.49               60.6
                                     960.67               99.3
  5n        $^{201}$Bi   1.72 h      629.1                26
                                     936.2                12.2
                                     1014.1               11.6
  --------- ------------ ----------- -------------------- ---------------------

  : Evaporation residues (ER) from complete fusion in $^9$Be + $^{197}$Au reaction together with half-life (T$_{1/2}$), and energy (E[$_\gamma$]{}) and absolute intensity (I$_{\gamma}$) of prominent gamma-rays [@nndc]. \[table1\] 

Table \[table1\] gives a summary of the expected ERs and their characteristic gamma-rays. Fig. \[fig1\] shows off-line $\gamma$-ray spectra obtained for the $^{9}$Be + $^{197}$Au reaction at E$_{lab}$ = 36.6 and 44.7 MeV. The ERs $^{203,202}$Bi have been identified by their characteristic $\gamma$-rays, which are marked in the figure. 
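The half-life confirmation mentioned above rests on the exponential decay law $N(t)=N_0\,e^{-\lambda t}$ with $\lambda=\ln 2/T_{1/2}$. A minimal sketch of this check (using the $^{203}$Bi half-life from Table \[table1\]; the counting intervals are hypothetical, not those of the experiment):

```python
# Decay-law check used for residue identification: the photopeak yield of
# a 203Bi line (T1/2 = 11.76 h, Table 1) should fall by exp(-lambda*dt)
# between two countings separated by dt. The interval here is hypothetical.
from math import log, exp

T_HALF_203BI_H = 11.76                 # half-life of 203Bi in hours
lam = log(2.0) / T_HALF_203BI_H        # decay constant, 1/h

def decay_factor(dt_hours):
    """Fraction of the activity remaining after dt_hours."""
    return exp(-lam * dt_hours)

# Two countings started one half-life apart should see the yield halve.
ratio = decay_factor(T_HALF_203BI_H)
```

A measured yield ratio departing from this prediction would signal a misassigned gamma line.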
The identification of the gamma-rays was confirmed by half-life measurements and by verifying the relative yields of multiple gamma-rays of a given residue. From the observed photopeak yield N$_\gamma$, the ER cross section ($\sigma_{x}$) can be calculated as, $$\sigma_{x} = \frac{N_{\gamma} \lambda_{x} t_{irr}}{I_{\gamma} \epsilon_{\gamma} ({e^{-\lambda_{x}t_{1}}-e^{-\lambda_{x}t_{2}}}) N_{P} N_{T} ({1-e^{-\lambda_{x}t_{irr}}}) }$$ where N$_{P}$ is the number of incident particles, N$_{T}$ is the number of target particles per unit area, $\lambda_{x}$ is the decay constant, t$_{irr}$ is the duration of irradiation, t$_{1}$ (t$_{2}$) is the time from the end of irradiation to the start (end) of counting, and I$_{\gamma}$ and $\epsilon_{\gamma}$ are the absolute intensity and the photopeak efficiency of the characteristic $\gamma$-ray, respectively. This equation takes into account the decay during the irradiation (t$_{irr}$) and assumes a uniform beam current. As mentioned earlier, the beam current was recorded in smaller intervals to take care of the fluctuations, and the decay corrections [@ntz] were applied to each interval for the computation of $\sigma_{x}$. From the recorded beam charge $Q$, N$_{P}=Q/q_{eq}$ is calculated, where $q_{eq}$ is the equilibrium charge state. The value of q$_{eq}$ is found to be +4 from the theoretical calculation [@gs] and from the prediction of the code CHARGE of LISE++ [@lsc], over the range of energies and target assembly thicknesses studied in the present work. 
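The activation formula above can be transcribed directly into code. The sketch below is illustrative only: the input values in the example call are hypothetical placeholders, not the measured quantities of this work.

```python
# Direct transcription of the activation formula for the ER cross section.
# All numbers passed in the example call are hypothetical placeholders.
from math import log, exp

def er_cross_section(n_gamma, t_half, t_irr, t1, t2,
                     i_gamma, eff_gamma, n_p, n_t):
    """ER cross section from an off-beam photopeak yield.

    n_gamma   : background-subtracted photopeak counts
    t_half    : residue half-life (same time unit as t_irr, t1, t2)
    t_irr     : irradiation time
    t1, t2    : start/end of counting, measured from the end of irradiation
    i_gamma   : absolute gamma intensity (fraction)
    eff_gamma : photopeak efficiency (fraction)
    n_p       : number of incident beam particles
    n_t       : target atoms per unit area (atoms/cm^2 gives sigma in cm^2)
    """
    lam = log(2.0) / t_half
    counting = exp(-lam * t1) - exp(-lam * t2)   # decay over counting window
    growth = 1.0 - exp(-lam * t_irr)             # activity built up in-beam
    return (n_gamma * lam * t_irr) / (
        i_gamma * eff_gamma * counting * n_p * n_t * growth)

# Hypothetical example: a 203Bi-like residue (T1/2 = 11.76 h), times in hours.
sigma = er_cross_section(n_gamma=1.0e4, t_half=11.76, t_irr=2.0,
                         t1=1.0, t2=3.0, i_gamma=0.30, eff_gamma=0.01,
                         n_p=1.0e14, n_t=4.0e18)
```

As a sanity check, the result scales inversely with the photopeak efficiency and the number of beam and target particles, as the formula requires.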
\[sec:level1\]Results and Discussions ===================================== 

  E$_{lab}$ (MeV)   E$_{c.m.}$ (MeV)   $^{202}$Bi (mb)   $^{203}$Bi (mb)     R      $\sigma^{Corr}_{CF}$ (mb)
  ----------------- ------------------ ----------------- ------------------- ------ ---------------------------
  33                31.6               -                 0.020 $\pm$ 0.004   0.96   0.021 $\pm$ 0.004
  34.5              33                 -                 0.20 $\pm$ 0.01     0.98   0.20 $\pm$ 0.01
  35.6              34                 0.10 $\pm$ 0.03   1.1 $\pm$ 0.1       0.99   1.2 $\pm$ 0.1
  36.6              35                 0.70 $\pm$ 0.04   4.5 $\pm$ 0.4       0.99   5.3 $\pm$ 0.4
  37.6              36                 3.7 $\pm$ 0.2     12 $\pm$ 1          0.99   16 $\pm$ 1
  38.6              36.9               12.7 $\pm$ 0.1    21 $\pm$ 1          0.99   34 $\pm$ 1
  39.6              37.9               31.4 $\pm$ 0.2    32 $\pm$ 2          0.99   64 $\pm$ 2
  39.9              38.2               54.5 $\pm$ 0.3    32 $\pm$ 3          0.99   87 $\pm$ 3
  40.6              38.8               69 $\pm$ 1        40 $\pm$ 2          0.99   111 $\pm$ 2
  42.7              40.8               155 $\pm$ 8       35 $\pm$ 3          0.99   192 $\pm$ 9
  44.7              42.7               281 $\pm$ 3       32 $\pm$ 5          0.98   320 $\pm$ 6
  46.7              44.7               328 $\pm$ 6       24 $\pm$ 3          0.96   367 $\pm$ 7
  ----------------- ------------------ ----------------- ------------------- ------ ---------------------------

\[csdata\] 

In the present work, the fusion cross sections are measured down to 18$\%$ below the Coulomb barrier. The measured cross sections of $^{203}$Bi (3n) and $^{202}$Bi (4n) are listed in Table \[csdata\]. The errors shown in the cross sections are statistical. It may be noted that the contributions of the evaporation residues $^{204}$Bi (2n) and $^{201}$Bi (5n), and of fission, are expected to be small and could not be unambiguously measured at the present level of sensitivity. The measured ER cross sections are compared with PACE2 [@ag] calculations. PACE2 is a statistical model code which employs Monte Carlo simulations to calculate the decay of the compound nucleus using the Hauser-Feshbach approach. In the present calculation, the Ignatyuk level density prescription [@igk] was used with an asymptotic level density parameter a=A/K, where K is varied between 8 and 10. At all energies, the angular momentum (${\ell}$) distribution (and hence $\sigma_{CF}$) obtained from the CCFULL calculations (inclusive of couplings, as described later) has been used as input to PACE2. 
A comparison of experimentally measured and theoretically calculated excitation functions of the evaporation residues $^{202}$Bi (4n) and $^{203}$Bi (3n) in the $^{9}$Be + $^{197}$Au system is presented in Fig. \[csplot\]. It is observed that $a$ = A/9 MeV$^{-1}$ gives the best agreement with the experimental data. The statistical model calculations show that neutron evaporation channels are dominant, and exhaust $\approx$99% of the CF cross section over most of the experimentally measured energies, consistent with other systems in this mass range [@st; @ast]. As mentioned earlier, $\sigma_{5n}$, $\sigma_{2n}$ and fission could not be measured. Therefore, the contribution of the missing channels has been deduced [@vv] using PACE2. The ratio of the xn channels (3n, 4n) to the complete fusion cross section, R = $\sigma_{3n+4n}^{PACE}$/$\sigma_{fus}$, is determined using the PACE2 calculations at different energies, and the experimental fusion cross section is derived as $\sigma^{Corr}_{CF}$ = $\sigma_{3n+4n}^{exp}$/R. The values of R and corrected cross sections ($\sigma^{Corr}_{CF}$) are given in Table \[csdata\]. At higher energies, the correction mainly arises from the missing $\sigma_{5n}$ and fission, while at lower energies it is due to the missing $\sigma_{2n}$. It is important to note that the maximum correction is $\sim4\%$ at the extreme energy points and hence has no significant effect on the conclusions drawn in the present work. At sub-barrier energies, the fusion cross sections are calculated with the CCFULL code modified specifically for the $^{9}$Be projectile [@kh; @md]. The CCFULL calculations, without incorporating couplings to any inelastic excitation, provide a simple one-dimensional barrier penetration model (1DBPM) for easy reference. The coupled-channel calculations performed using CCFULL are presented in Fig. \[fig:ccfull\].
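The missing-channel correction just described amounts to a simple rescaling; a sketch with my own function names, using one table row only as an illustration:

```python
def channel_ratio(sigma_xn_pace, sigma_fus_pace):
    """R = sigma(3n+4n) / sigma(fusion), both from PACE2, at one energy."""
    return sigma_xn_pace / sigma_fus_pace

def corrected_cf(sigma_xn_exp, r):
    """sigma_CF^Corr = sigma_xn^exp / R."""
    return sigma_xn_exp / r

# Example with the E_lab = 38.6 MeV row of Table [csdata]:
# sigma_3n + sigma_4n = 21 + 12.7 = 33.7 mb and R = 0.99
# give sigma_CF^Corr of about 34 mb, as listed.
```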
The potential parameters used in the calculations, namely, V$_{0}$ = 51.94 MeV, r$_{0}$ = 1.17 fm, a$_{0}$ = 0.63 fm, are taken from the Woods-Saxon parametrization of the Akyuz-Winther (AW) potential [@rab]. The calculations include the coupling of both projectile and target excited states. For $^{9}$Be, the ground state spin $\frac{3}{2}^{-}$ with the deformation parameter $\beta_{2}$ = 1.3 [@hjv] and the first two excited states in the K = $\frac{3}{2}^{-}$ (band head) ground-state rotational band [@hn] are taken into consideration. In the case of $^{197}$Au, the inelastic excitation to the first excited state at E$_{x}$ = 0.077 MeV is included as a vibrational state with $\beta$ = 0.1 [@xh]. It is evident from Fig. \[fig:ccfull\] that the CCFULL output shows good agreement with the data at sub-barrier energies, but over-predicts the data at near- and above-barrier energies. It should be mentioned that for the $^9$Be projectile, different $\beta_{2}$ values have been used in CCFULL calculations: for example, $\beta_{2}$ = 0.92 in Ref. [@md], the best-fit value $\beta_{2}$ = 1.1 of Ref. [@hjv], and $\beta_{2}$ = 1.3 in Refs. [@csbe; @vivekbe]. In the present case, $\beta_{2}$ = 1.3 was found to describe the data well with the above-mentioned AW potential. The suppression factor S = $\sigma^{Corr}_{CF}$/$\sigma^{CC}_{CF}$ for energies above the barrier is shown in the inset of Fig. \[fig:ccfull\](a). The mean value S = 0.61(2) implies $\approx39\pm2$% CF suppression for the $^{9}$Be + $^{197}$Au system due to the involvement of the weakly bound projectile. The CCFULL output scaled with S = 0.61 is also shown in the same figure (solid line), and matches reasonably well with the measured excitation function. The observed suppression factor is consistent with the 25-40$\%$ fusion suppression observed with $^{9}$Be on different heavy targets [@lf1; @lf2; @md2; @yd1].
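The above-barrier suppression factor is a point-by-point ratio followed by an average. A sketch of one plausible way to form S and its uncertainty (the text does not spell out its averaging procedure, so the unweighted mean below is an assumption, not the paper's method):

```python
def suppression_factor(sigma_corr, sigma_cc):
    """S_i = sigma_CF^Corr / sigma_CF^CC at each above-barrier energy;
    returns the unweighted mean and the standard error of the mean."""
    s = [a / b for a, b in zip(sigma_corr, sigma_cc)]
    n = len(s)
    mean = sum(s) / n
    var = sum((x - mean) ** 2 for x in s) / (n - 1) if n > 1 else 0.0
    return mean, (var / n) ** 0.5
```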
\[sec:level1\]Comparison of weakly bound projectiles: $\rm x$ + $\rm ^{197}Au$ systems
======================================================================================

To understand the role of projectile structure on fusion involving weakly bound nuclei, where break-up is a dominant channel, a systematic comparison of fusion excitation functions has been carried out for different $x$ + $^{197}$Au combinations. The values of break-up threshold energy for various weakly bound projectiles, calculated using the latest mass tables, are tabulated in Table \[sep\_energy\].

  Nuclei     Channel                   E$_{BU}$ (MeV)
  ---------- ------------------------- ----------------
  $^{9}$Be   $\alpha$ + $\alpha$ + n   1.575
             $^{8}$Be + n              1.667
  $^{7}$Li   $\alpha$ + t              2.467
  $^{6}$Li   $\alpha$ + d              1.473
  $^{8}$He   $^{6}$He + 2n             2.125
             $^{7}$He + n              2.535
  $^{6}$He   $\alpha$ + 2n             0.975

  : List of dominant break-up channels together with the corresponding break-up threshold energy ($E_{BU}$) for the weakly bound projectiles considered in the present study. \[sep\_energy\]

For comparison of different projectile-target systems, appropriate scaling of cross sections is essential. Scaling methodologies have been extensively discussed in Ref. [@lfc2015]. Canto *et al.* [@lf] introduced the reduced fusion cross section and the reduced energy variables, defined as $$E_{red} = \frac{E_{c.m.}-V_{B}}{\hbar\omega} ~~~{\rm and} ~~~ \sigma_{red} = \frac{2E_{c.m.}}{\hbar\omega R_{B}^{2}}\, \sigma_{F}$$ It is evident that these reduced variables depend on the radius and height of the fusion barrier, and the barrier curvature ($\hbar\omega$), thereby taking into consideration static as well as dynamic effects. The available data for weakly bound projectiles on $^{197}$Au have been analyzed using the above scaling method.
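The reduction of Canto *et al.* maps each measured point onto dimensionless variables; a minimal sketch (function names my own; mb are converted to fm$^2$ via 1 mb = 0.1 fm$^2$ so that $\sigma_{red}$ is dimensionless; the Universal Fusion Function defined later in the text is included for comparison):

```python
import math

def reduce_point(e_cm, sigma_f_mb, v_b, r_b, hbar_omega):
    """(E_c.m., sigma_F) -> (E_red, sigma_red), with V_B and hbar*omega
    in MeV, R_B in fm, sigma_F in mb (1 mb = 0.1 fm^2)."""
    e_red = (e_cm - v_b) / hbar_omega
    sigma_red = 2.0 * e_cm * (0.1 * sigma_f_mb) / (hbar_omega * r_b ** 2)
    return e_red, sigma_red

def uff(x):
    """Universal Fusion Function F0(x) = ln[1 + exp(2*pi*x)]."""
    return math.log1p(math.exp(2.0 * math.pi * x))

# Barrier parameters for 9Be + 197Au from Table [redpar]:
# V_B = 38.4 MeV, R_B = 11.14 fm, hbar*omega = 4.72 MeV.
```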
In addition, data with projectiles having higher break-up threshold energy, namely, $^4$He (E$_{BU}$ = 20.578 MeV, for n + $^3$He) and $^{11}$B (E$_{BU}$ = 8.664 MeV, for $\alpha$ + $^{7}$Li), are also analyzed in the same framework. It should be mentioned that for both $^6$He and $^4$He, data points with large error bars are not considered in the present analysis, and only a subset of the reported data is used [@al]. The barrier parameters used for obtaining the reduced variables are listed in Table \[redpar\]. These parameters are obtained from the CCFULL calculations for $^{6,7}$Li [@cs], $^9$Be (present data), and $^{11}$B [@asb]. Those for $^{4,6,8}$He are derived from the fusion cross section data and scaled data presented in Ref. [@al].

  System                  V$_{B}$ (MeV)   R$_{B}$ (fm)   $\hbar\omega$ (MeV)
  ----------------------- --------------- -------------- ---------------------
  $^{11}$B + $^{197}$Au   46.87           11.42          4.65
  $^{9}$Be + $^{197}$Au   38.4            11.14          4.72
  $^{6}$Li + $^{197}$Au   28.92           11.13          5.13
  $^{7}$Li + $^{197}$Au   29.28           11.00          4.86
  $^{8}$He + $^{197}$Au   18.65           11.60          3.47
  $^{6}$He + $^{197}$Au   19.13           11.12          4.24
  $^{4}$He + $^{197}$Au   19.77           10.79          5.39

  : Barrier parameters V$_{B}$ (MeV), R$_{B}$ (fm), and $\hbar\omega$ (MeV) used to obtain $\sigma_{red}$ and $E_{red}$. \[redpar\]

Fig. \[ff\_red\](a) shows the reduced fusion excitation functions of different weakly bound projectiles, namely, $^{6,8}$He [@al; @yup], $^{6,7}$Li [@cs] and $^{9}$Be (present work), on the $^{197}$Au target. The same data are also shown on a linear scale in Fig. \[ff\_red\](b) for better visualization of the data at above-barrier energies. A comparison of the reduced excitation function of weakly bound $^{9}$Be with those of $^4$He and $^{11}$B (projectiles having higher break-up threshold energy) is shown in Figs. \[ff\_red\](c) and (d) on a logarithmic and linear scale, respectively.
The Universal Fusion Function (UFF), which represents Wong’s formula in the absence of coupling effects and is given by $$F_{0}(x) = \ln[1+\exp(2\pi x)]$$ is also plotted in the same figure for reference. All weakly bound projectiles exhibit the expected common feature, namely, enhancement below the barrier and suppression above the barrier (w.r.t. the UFF). However, it should be noted that the reduced variables are sensitive to R$_{B}$ and $\hbar\omega$ and hence are model dependent. It is interesting to see that $^9$Be shows the highest sub-barrier fusion enhancement in this group of Z = 2-5 projectiles, which includes the halo nuclei $^{6,8}$He. Even amongst the stable weakly bound nuclei, $\sigma_{red}$ of $^{9}$Be is higher by a factor of $\sim$2 compared to $^{6}$Li, which has a similar E$_{BU}$. Although couplings to transfer and break-up channels are not included in the coupled-channel calculations for $^9$Be, the CCFULL calculations including couplings to rotational states can explain the present data at sub-barrier energies (see Fig. \[fig:ccfull\]). Thus the observed large sub-barrier enhancement points to the important role of deformation, even for weakly bound nuclei. The fusion suppression factor measured in the present work for the $^{9}$Be + $^{197}$Au system (39$\pm2\%$) is similar to that for $^{6}$Li + $^{197}$Au [@cs] (35$\pm2\%$). Although data on fusion suppression for $^{6,8}$He + $^{197}$Au are not reported, it can be seen from Fig. \[ff\_red\](b) that at above-barrier energies the CF suppression in the case of $^{8}$He is less than for $^{9}$Be and $^6$Li, consistent with the break-up threshold energies.

\[sec:level1\]Summary and Conclusions
=====================================

The fusion excitation function in the ${^9}$Be + $^{197}$Au reaction has been measured in the energy range 0.82 $\leq$ E$_{c.m.}$/V$_{B}$ $\leq$ 1.16. The cross sections of the evaporation residues $^{203}$Bi (3n) and $^{202}$Bi (4n) have been obtained by off-line gamma counting.
The measured fusion excitation function is analyzed using the theoretical model code CCFULL. The coupled-channel calculations, including couplings to inelastic excitations of the projectile and target nuclei, provide an excellent description of the experimental fusion excitation function at sub-barrier energies but over-predict the same at above-barrier energies. The experimental fusion cross section above the barrier is found to be suppressed by $\approx$39(2)% as compared to the coupled-channel calculations. In order to investigate the role of the intrinsic structure of the projectile on the sub-barrier fusion cross section, a systematic comparison of the reduced excitation functions of different systems $x$ + $^{197}$Au is presented. It is observed that $\sigma_{red}$($^{9}$Be) shows the largest enhancement in the sub-barrier region amongst all Z = 2-5 projectiles, namely, $^{4,6,8}$He, $^{6,7}$Li, and $^{11}$B, which indicates the prominent role of the projectile deformation in addition to the weak binding.

\[sec:level1\]Acknowledgments
=============================

The authors would like to thank the PLF staff for providing the steady and smooth beam during the experiments, and the target lab personnel for their help in the target preparation.

L. F. Canto *et al.*, Phys. Rep. [**424**]{}, 1 (2006).
N. Keeley *et al.*, Prog. Part. Nucl. Phys. [**59**]{}, 579 (2007).
B. B. Back *et al.*, Rev. Mod. Phys. [**86**]{}, 317 (2014).
L. F. Canto *et al.*, Phys. Rep. [**596**]{}, 1 (2015), and references therein.
N. Takigawa *et al.*, Phys. Rev. C [**47**]{}, R2470 (1993).
J. Takahashi *et al.*, Phys. Rev. Lett. [**78**]{}, 30 (1997).
R. Rafiei *et al.*, Phys. Rev. C [**81**]{}, 024601 (2010).
Jin Lei and Antonio M. Moro, Phys. Rev. Lett. [**122**]{}, 042503 (2019).
M. Dasgupta *et al.*, Phys. Rev. Lett. [**82**]{}, 1395 (1999).
C. Signorini *et al.*, Eur. Phys. J. A [**10**]{}, 249 (2001).
V. Tripathi *et al.*, Phys. Rev. Lett. [**88**]{}, 172701 (2002).
K. Hagino *et al.*, Phys. Rev. C [**61**]{}, 037602 (2000).
Bing Wang *et al.*, Phys. Rev. C [**90**]{}, 034612 (2014).
A. Mukherjee *et al.*, Phys. Lett. B [**636**]{}, 91 (2006).
D. J. Hinde *et al.*, Phys. Rev. C [**81**]{}, 064611 (2010).
K. J. Cook *et al.*, Phys. Rev. Lett. [**122**]{}, 102501 (2019).
A. Lemasson *et al.*, Phys. Rev. Lett. [**103**]{}, 232701 (2009).
Yu. E. Penionzhkevich *et al.*, Eur. Phys. J. A [**31**]{}, 185 (2007).
C. S. Palshetkar *et al.*, Phys. Rev. C [**89**]{}, 024607 (2014).
A. Shrivastava *et al.*, Phys. Rev. C [**96**]{}, 034620 (2017).
http://www.srim.org
http://www.tifr.res.in/$\sim$pell/lamp.html
https://www.nndc.bnl.gov/
N. T. Zhang *et al.*, Phys. Rev. C [**90**]{}, 024621 (2014).
G. Schiwietz *et al.*, Nucl. Instr. and Meth. B [**175**]{}, 125 (2001).
http://lise.nscl.msu.edu
A. Gavron, Phys. Rev. C [**21**]{}, 230 (1980).
A. V. Ignatyuk *et al.*, Sov. J. Nucl. Phys. [**21**]{}, 255 (1975).
S. Thakur *et al.*, EPJ Web of Conferences [**17**]{}, 16017 (2011).
A. Shrivastava *et al.*, Phys. Lett. B [**718**]{}, 931 (2013).
V. V. Parkar *et al.*, Phys. Rev. C [**98**]{}, 014601 (2018).
K. Hagino *et al.*, Comput. Phys. Commun. [**123**]{}, 143 (1999).
M. Dasgupta *et al.*, Phys. Rev. C [**70**]{}, 024606 (2004).
R. A. Broglia and A. Winther, Elastic and Inelastic Reactions, Heavy Ion Reactions, Lecture Notes Vol. I (Benjamin Cummings, Redwood City, CA, 1981), p. 114.
H. J. Votava *et al.*, Nucl. Phys. A [**204**]{}, 529 (1973).
H. Nguyen Ngoc *et al.*, Nucl. Phys. [**42**]{}, 62 (1963).
Xiaolong Huang and Chunmei Zhou, Nucl. Data Sheets [**104**]{}, 283 (2005).
C. S. Palshetkar *et al.*, Phys. Rev. C [**82**]{}, 044608 (2010).
V. V. Parkar *et al.*, Phys. Rev. C [**82**]{}, 054601 (2010).
M. Dasgupta *et al.*, Phys. Rev. C [**81**]{}, 024608 (2010).
Y. D. Fang *et al.*, Phys. Rev. C [**91**]{}, 014608 (2015).
L. F. Canto *et al.*, Phys. Rev. C [**92**]{}, 014626 (2015).
L. F. Canto *et al.*, J. Phys. G: Nucl. Part. Phys. [**36**]{}, 015109 (2009); Nucl. Phys. A [**821**]{}, 51 (2009).
M. S. Basunia *et al.*, Phys. Rev. C [**75**]{}, 015802 (2007).
---
author:
- David Merritt
title: 'Martin Schwarzschild’s Contributions to Galaxy Dynamics'
---

Introduction
============

The astronomical community’s debt to Martin Schwarzschild derives from much more than his published work, as many of us who were his students, collaborators and friends can testify. Nor did Schwarzschild’s contributions to galaxy dynamics constitute more than a small portion of his scientific output. Nevertheless it would be hard to think of another single figure whose work so influenced the development of many of the fields discussed at this meeting. Those of us who came of scientific age after Schwarzschild’s retirement in 1979 tend to identify his contributions to galaxy dynamics with the remarkable series of papers on elliptical galaxies that began appearing at about the same time. But Schwarzschild’s interest in the structure and dynamics of stellar systems was lifelong; for instance, as early as 1951, he published the first of two papers with L. Spitzer concerning the influence of interstellar clouds on stellar velocities. A number of other papers from this decade dealt with the relation between the chemical composition and kinematics of stars in the Milky Way and other galaxies. The following review focuses on three areas of galaxy dynamics where Schwarzschild’s contributions were particularly fundamental: the masses of stellar systems; the structure of galactic nuclei; and the dynamics of elliptical galaxies.

Masses of Stellar Systems
=========================

The study of the distribution of mass in external galaxies was still in its infancy when Schwarzschild published his 1954 paper, “Mass Distribution and Mass-Luminosity Ratio in Galaxies.” Here Schwarzschild re-analyzed the kinematical data in three galaxies – M31, M33 and NGC 3115 – for which earlier workers had found significantly different distributions of light and mass.
In each galaxy, he showed that the data were in fact consistent with a constant ratio of mass to light, albeit with rather different values in the three systems. In the case of NGC 3115, for instance, Schwarzschild noted that a high central velocity dispersion recently measured by Minkowski implied a large deviation between circular and rotational velocities near the center of this galaxy, thus allowing $M/L$ to remain approximately constant in spite of a low central $v_c$. But this paper also contained at least three quite novel approaches to what we would now call the “dark matter problem.” First, Schwarzschild estimated the mass of M32 by assuming that its gravitational pull was responsible for the observed asymmetry in rotation velocity and morphology of its larger companion M31. He concluded that the mass-to-light ratio of M32 was of order $200$, in approximate agreement with his value for NGC 3115. Second, Schwarzschild presented a new and elegant method for evaluating the virial theorem, the strip-count formula. He showed that the potential energy of a spherical system could be expressed simply in terms of $S(q)$, the observed number of objects in a strip of unit width that passes a distance $q$ from the projected center. [^1] He applied his technique to the Coma cluster using Zwicky’s galaxy counts and obtained the “bewilderingly high value” of 800 for its mass-to-light ratio. Finally, this paper contained what was probably the first suggestion that white dwarfs, remnants of an earlier generation of star formation, might constitute a significant fraction of the masses of galaxies. In “Note on the Mass of M92” (1955), Schwarzschild and S. Bernstein used the strip-count formula to obtain one of the first accurate measurements of the mass-to-light ratio of a globular cluster.
[^2]

Structure of Galactic Nuclei
============================

Schwarzschild’s pivotal role in the development and deployment of the balloon-borne telescopes Stratoscope I and II is well known. [^3] After its two initial flights, Stratoscope II, a 36-inch telescope, was reconfigured for high-definition photography and used to obtain images of galactic nuclei unblurred by the atmosphere. In “An Upper Limit to the Angular Diameter of the Nucleus of NGC 4151” (1968, 1973), Schwarzschild, R. Danielson and B. D. Savage reported that the nucleus of NGC 4151 had still not been resolved and accordingly that only an upper limit could be placed on its diameter, which they estimated at $0.08''$. They were thus able to show that the non-thermal continuum, which provides most of the nuclear light in this Seyfert galaxy, originated in a region much smaller than that associated with the emission lines. The eighth, and final, flight of Stratoscope II was used to obtain high-resolution photographs of M31 and M32. The results for M32, while intriguing, were never published; the observations were made shortly before sunrise while the telescope was gradually descending and the resultant temperature differentials caused a substantial degradation in the quality of the images. But the data seemed to show no evidence for a distinct nucleus at a resolution of $\sim0.5''$, consistent with what we now know about the luminosity profile of this galaxy. Observations taken during the same night of M31 were more successful; in “The Nucleus of M31” (1974), E. S. Light, R. E. Danielson and Schwarzschild presented $0.2''$ resolution photographs that clearly resolved the nucleus, showing it to have a core radius of only $0.48''$. More striking was the observed asymmetry of the nucleus, which was revealed to have a low intensity extension on one side of the bright peak. Light et al.
raised the possibility that the offset was a result of non-uniform obscuration by dust, and noted that, in the absence of dust, “the observed asymmetry is an intrinsic property of the nucleus which will probably require a dynamic explanation.” The latter picture is now accepted by most astronomers due to the absence of color variations.

Elliptical Galaxy Dynamics
==========================

Starting in 1976, when he was 64 years old, Schwarzschild wrote or co-authored a remarkable series of 21 papers on the dynamics of elliptical galaxies. The first of these, a collaboration with M. Ruiz, dates from the “early days” of the field when it was still universally assumed that elliptical galaxies and bulges were rotationally-supported, axisymmetric systems. “An Approximate Dynamical Model for Spheroidal Stellar Systems” (1976) presented a novel approach to the problem of elliptical galaxy modelling. Ruiz and Schwarzschild wrote $f(E,L_z)=f_0e^{-E/\sigma^2}g(L_z)$, and assumed in addition that the density generated by $f$ was constant on spheroids of fixed eccentricity. The two assumptions are mildly inconsistent, as the authors fully realized, but together they permit an extremely elegant derivation of the function $g(L_z)$: one first matches the density profile on the rotation axis, which is independent of $g$, then uses the observed density in the equatorial plane to determine $g(L_z)$. Ruiz (1976) applied the model to the central region of M31, treating the nucleus and bulge as distinct components. The bulge in Ruiz’s model of M31 was tipped out of the disk plane in order to reproduce the observed twist in the isophotes at about $10'$ from the center of this galaxy. Stark (1977) recognized that a coplanar and triaxial bulge could reproduce the twist in M31 equally well. At about the same time, a number of workers began publishing integrated spectra which showed that these objects were rotating much more slowly than expected for centrifugally flattened oblate spheroids.
Schwarzschild contributed to the emerging view of early-type galaxies as triaxial ellipsoids in two papers with T. B. Williams, “A Photometric Determination of Twists in Three Early-Type Galaxies,” I & II (1979). These studies revealed significant twists in the inner isophotes of three elliptical galaxies, which the authors cautiously interpreted as evidence that “many elliptical galaxies may have a more complicated basic structure than that of axially symmetric configurations.” Schwarzschild’s most famous paper from this period is undoubtedly “A Numerical Model for a Triaxial Stellar System in Dynamical Equilibrium” (1979), in which he constructed the first completely self-consistent model of a triaxial galaxy. The approach was at the same time beautifully straightforward and quite novel. Schwarzschild’s insight was to treat individual, time-averaged orbits as building blocks for a galaxy – thus replacing the cumbersome self-consistency equations by a matrix equation that could be solved using standard numerical techniques. In the process, he discovered the four families of regular orbits in triaxial potentials, the boxes and the three types of tubes. His demonstration that most orbits in a non-axisymmetric potential could be regular – i.e. that they respected three effective integrals of the motion – was quite unexpected at the time. Schwarzschild went on, in two subsequent studies, to develop a more complete understanding of these major orbit families. “On the Nonexistence of Three-Dimensional Tube Orbits Around the Intermediate Axis in a Triaxial Galaxy Model” (1979), with G. Heiligman, linked the existence of the tube orbits to the stability of the $1:1$ resonant orbits in the principal planes. The primary motivation for this work was the apparent absence of intermediate-axis tube orbits in the self-consistent triaxial model. The authors showed that the $1:1$ orbit in the $X-Z$ plane [^4] (i.e.
the plane perpendicular to the intermediate axis) was generally unstable to vertical perturbations, a circumstance which they noted was “quite plausibly destructive for the existence of $Y$-tube orbits.” A second study with M. Vietri, “Analysis of Box Orbits in a Triaxial Galaxy” (1983) developed the picture of box orbits as perturbations of the stable, long-axis orbit. The key to the analysis was a careful treatment of the second-order terms: these terms were retained in the development of the transverse motion but omitted from the axial motion, thus allowing the equations for the different orders to be solved independently. A remarkable paper from the following year, “Stellar Orbits in Angle Variables” (1984) with S. J. Ratcliff and K. M. Chang, showed how a complete description of a two-dimensional orbit could be obtained in terms of its action-angle variables. This problem currently goes under the name of “torus construction” but it is actually quite old, with antecedents in work of Einstein and Born on semi-classical quantization. Here again, the approach was beautifully direct. The authors asked simply: How must the Cartesian coordinates depend on the angles if the angles are to increase linearly with time? The result was a set of differential equations for $x$ and $y$ as functions of the angles. These equations are nonlinear, and Ratcliff et al. developed an iterative technique for solving them which worked well whenever the initial guess was sufficiently close to the true solution. The slow observed rotation of elliptical galaxies was one of the factors that prompted Schwarzschild to construct his first triaxial model. Real elliptical galaxies probably do have rotating figures, and in 1982 Schwarzschild began investigating the effects of slow figure rotation on the triaxial self-consistency problem. “Retrograde Closed Orbits in a Rotating Triaxial Potential” (1982), with J. Heisler and D. 
Merritt, reported the existence of the “anomalous” orbits, $1:1$ resonant orbits that are tipped out of the $Y-Z$ plane by Coriolis forces. The anomalous orbits give rise to two families of $X$-tubes that circulate in opposite directions about the long axis of a rotating triaxial figure. In “A Model for Elliptical Radio Galaxies with Dust Lanes” (1982), T. S. van Albada, C. G. Kotanyi and Schwarzschild suggested that the dust lanes of Centaurus A and M84 consisted of matter moving along these anomalous orbits. [^5] Schwarzschild made one attempt at achieving self-consistency in a triaxial model with rapid figure rotation; this initial attempt failed, as Schwarzschild reported at one of the Princeton “Tuesday lunches,” and the work was never published. However a subsequent effort, using a more slowly rotating figure, was successful. In “Triaxial Equilibrium Models for Elliptical Galaxies with Slow Figure Rotation” (1982), Schwarzschild chose a value for the rotation period that was long enough (of order $10^9$ years after scaling) that all four of the major orbit families existed out to the truncation radius of the model. He noted that the two branches of $X$-tubes must be equally populated if such a model is to be eight-fold symmetric, which means that a rotating model will lack any streaming around the long axis. This was another example of how the use of orbits as building blocks could lead to insights about a galaxy’s kinematics that would have been difficult to obtain from the Jeans or Boltzmann equations. In his 1979 self-consistency study, Schwarzschild had found that box orbits alone could not reproduce the mass distribution of his triaxial model, since they tended to place too much mass along the major axis. His solution was to incorporate $X$-tube orbits which avoid the long axis. 
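Schwarzschild's orbit-superposition idea — a matrix equation relating time-averaged orbital densities to the cell masses of the model — can be caricatured in a few lines. The sketch below is my own toy reduction (two cells, two orbits, an exact solve plus a positivity check); the actual 1979 calculation used many cells, a large orbit library, and linear-programming-style numerical techniques:

```python
def orbit_weights(B, m):
    """Solve B w = m for non-negative orbit weights w, where B[i][j] is
    the fraction of orbit j's time-averaged mass that falls in cell i and
    m[i] is the target mass of cell i.  Toy 2x2 version via Cramer's rule."""
    (a, b), (c, d) = B
    det = a * d - b * c
    if det == 0.0:
        raise ValueError("degenerate orbit library")
    w = [(d * m[0] - b * m[1]) / det,
         (a * m[1] - c * m[0]) / det]
    if min(w) < 0.0:
        # mirrors the physical requirement that orbits carry positive mass
        raise ValueError("no self-consistent solution from these orbits")
    return w

# Two orbits that respectively overfill and underfill the first cell can be
# mixed to reproduce the target cell masses exactly:
# orbit_weights([[0.7, 0.2], [0.3, 0.8]], [0.45, 0.55]) -> [0.5, 0.5]
```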
Schwarzschild noted that solutions incorporating the other major orbit family, the $Z$-tubes, were also likely to exist and that the question of the uniqueness of solutions “is thus left unanswered by the present investigation.” He returned to the uniqueness question in a 1986 paper, “Dynamical Models for Galactic Bars: Truncated Perfect Elliptic Disk.” Schwarzschild considered a strongly truncated, planar mass model that supported only one family of orbits, the boxes, and showed numerically that a self-consistent solution existed and that it was unique. Beyond the truncation radius in this two-dimensional model, tube orbits exist in addition to box orbits, and one might expect to find a certain degree of non-uniqueness in solutions that draw on both orbit families. This was shown to be the case in a study with P. T. de Zeeuw and C. Hunter that appeared the following year, “Nonuniqueness of Self-Consistent Equilibrium Solutions for the Perfect Elliptic Disk” (1987). A further step toward demonstrating non-uniqueness in the three-dimensional problem was taken by Hunter, de Zeeuw, C. Park and Schwarzschild in “Prolate Galaxy Models with Thin-Tube Orbits” (1990). The authors showed that a variety of self-consistent solutions for axisymmetric prolate models could be found by varying the relative occupation numbers of orbits from the two families of thin long-axis tubes. In 1980, R. H. Miller asked Schwarzschild whether he could test the stability of the nonrotating triaxial model. Schwarzschild agreed, and assigned one of his students the task of re-integrating the orbits to provide initial conditions for the $N$-body code. In the process it was discovered that many of the orbits generated different masses in the grid of cells than they had in the original integrations. 
The discrepancy was eventually traced to the installation of a new computer at the Princeton Computer Center: the differences in the round-off algorithms of the two machines were sufficient to trigger the exponential instability of those orbits that were stochastic, leading to significantly modified trajectories after many orbital periods. Schwarzschild followed up this hint in the following year in a study with J. Goodman, “Semistochastic Orbits in a Triaxial Potential” (1981). Goodman and Schwarzschild tested the stability of box orbits by looking for exponential divergence of nearby trajectories. They noted that a large fraction of the box orbits were in fact chaotic, but that the chaos produced only modest changes in the shapes of the orbits over 50 oscillations. They coined the term “semi-stochasticity” to describe this phenomenon. The chaos was tentatively linked to the linear instability of the short- and intermediate-axis orbits. Schwarzschild’s self-consistent triaxial models from 1979 and 1982 were based on the Hubble density profile, which has a large, constant-density core. It became increasingly clear throughout the 1980’s that the luminosity profiles of many galaxies might increase more steeply at small radii; indeed, Schwarzschild’s own Stratoscope observations of M31 and NGC 4151 had revealed pointlike nuclei in these galaxies. The behavior of box orbits is very sensitive to the central density of a triaxial model, and in 1989 Schwarzschild began to look in detail at the orbits in triaxial models with small or nonexistent cores. His two studies with J. Miralda-Escudé and J. F. Lees – “On the Orbit Structure of the Logarithmic Potential” (1989) and “The Orbital Structure of Galactic Halos” (1992) – revealed that the planar motion in centrally concentrated models is dominated by resonances, which generate families of orbits not seen in models with large cores. 
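The exponential-divergence test that Goodman and Schwarzschild applied can be sketched numerically. Everything below is illustrative: the potential is a cored 2-D logarithmic one chosen for brevity, not the Hubble-profile triaxial model of their paper, and the parameter values are arbitrary:

```python
import math

def accel(x, y, q=0.8, rc2=0.01):
    # Forces from Phi = 0.5 * ln(x^2 + y^2/q^2 + rc^2)
    s = x * x + y * y / (q * q) + rc2
    return -x / s, -y / (q * q * s)

def divergence_rate(x0, y0, vx0, vy0, eps=1e-8, dt=1e-3, steps=20000):
    """Integrate two orbits started eps apart with a leapfrog scheme and
    return ln(final separation / eps) / time -- a crude finite-time
    Lyapunov estimate; clearly positive values signal stochasticity."""
    orbits = [[x0, y0, vx0, vy0], [x0 + eps, y0, vx0, vy0]]
    for _ in range(steps):
        for o in orbits:
            ax, ay = accel(o[0], o[1])
            o[2] += 0.5 * dt * ax
            o[3] += 0.5 * dt * ay
            o[0] += dt * o[2]
            o[1] += dt * o[3]
            ax, ay = accel(o[0], o[1])
            o[2] += 0.5 * dt * ax
            o[3] += 0.5 * dt * ay
    dx = orbits[0][0] - orbits[1][0]
    dy = orbits[0][1] - orbits[1][1]
    sep = max(math.hypot(dx, dy), 1e-300)  # guard against exact coincidence
    return math.log(sep / eps) / (steps * dt)
```

The "semi-stochastic" behavior described in the text corresponds to a positive rate whose growth nevertheless changes the orbit's shape only modestly over the integration span.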
Schwarzschild, who was fiercely opposed to opaque terminology, gave these resonant orbits names that evoked their shapes like “banana,” “fish” and “pretzel;” these names have remained in widespread use. He also began to look in these papers at the behavior of orbits in potentials with central point masses representing black holes. While Miller & Smith’s (1981) $N$-body study did not find any strong evidence for instability in Schwarzschild’s triaxial model, a number of examples of dynamical instabilities in other models of hot stellar systems began to be discussed at about this time. In “Orbital Contributions to the Stability of Triaxial Galaxies” (1989), de Zeeuw and Schwarzschild used an adiabatic deformation technique to evaluate the stability to small perturbations of Statler’s (1987) triaxial models based on the perfect ellipsoid. They found that the response of individual box orbits to barlike perturbations was often destabilizing, in the sense that the response density tended to reinforce the original perturbation; a similar mechanism drives the radial-orbit instability in spherical models. In “The Ring Instability in Radially Cold Oblate Models” (1991), the same authors investigated axisymmetric instabilities in oblate models constructed from thin tube orbits. They found that such models were unstable to radial clumping when sufficiently flat. These stability studies provided yet a further demonstration of the usefulness of an orbit-based approach to galaxy dynamics. In one of his last papers, “Self-Consistent Models for Galactic Halos,” Schwarzschild revisited the triaxial self-consistency problem, this time using models based on the singular isothermal mass distribution. Such models are scale-free, which allowed Schwarzschild to construct orbit libraries by scaling the orbits computed at a single energy; the increase in efficiency enabled him to compute orbit libraries for six different choices of the model axis ratios. 
Schwarzschild found that most of the box orbits in these models were significantly stochastic, a rather different situation than he had been led to expect by his earlier work in two dimensions. He showed that the omission of the stochastic orbits could sometimes preclude a self-consistent solution, implying restrictions on the allowed shapes of isothermal halos. This study demonstrated clearly the importance of chaos in the phase space of realistic triaxial models and opened the door to a wealth of later studies of this fascinating topic. Conclusion ========== It is sometimes said that a scientist’s career is over by the age of 35. One may safely assume that Martin Schwarzschild would have disagreed with this statement; in any case, all of the work cited here was published after that particular milestone had been passed. Without the contributions which Schwarzschild made in the late stages of his career, the field of galaxy dynamics would be an incomparably less rich and exciting one than it is today. I am indebted to the following people who provided details about Martin Schwarzschild’s research or unpublished work, or made helpful comments on the manuscript: C. Hunter, R. Miller, F. Schweizer, J. Sellwood, T. Statler, P. Teuben, S. Tremaine, T. van Albada, P. Vandervoort, T. Williams, and P. T. de Zeeuw. Danielson, R., Savage, B. D. & Schwarzschild, M. 1968, “An Upper Limit to the Angular Diameter of the Nucleus of NGC 4151,” , 154, L117-120 de Zeeuw, T. & Schwarzschild, M. 1989, “Orbital Contributions to the Stability of Triaxial Galaxies,” , 345, 84-100 de Zeeuw, P. T., Hunter, C. & Schwarzschild, M. 1987, “Non-Uniqueness of Self-Consistent Equilibrium Solutions for the Perfect Elliptic Disk,” , 317, 607-636 de Zeeuw, T. & Schwarzschild, M. 1991, “The Ring Instability in Radially Cold Oblate Galaxy Models,” , 369, 57-78 Goodman, J. & Schwarzschild, M. 1981, “Semistochastic Orbits in a Triaxial Potential,” , 245, 1087-1093 Heiligman, G. & Schwarzschild, M. 
1979, “On the Nonexistence of Three-Dimensional Tube Orbits Around the Intermediate Axis in a Triaxial Galaxy Model,” , 233, 872-876 Heisler, J., Merritt, D. & Schwarzschild, M. 1982, “Retrograde Closed Orbits in a Rotating Triaxial Potential,” , 258, 490-498 Hunter, C., de Zeeuw, P. T., Park, C. & Schwarzschild, M. 1990, “Prolate Galaxy Models with Thin-Tube Orbits,” , 363, 367-390 Lees, J. F. & Schwarzschild, M. 1992, “The Orbital Structure of Galactic Halos,” , 384, 491-501 Light, E. S., Danielson, R. E. & Schwarzschild, M. 1974, “The Nucleus of M31,” , 194, 257-263 Light, E. S., Danielson, R. E. & Schwarzschild, M. 1974, “The Nucleus of M32,” unpublished manuscript Miller, R. H. & Smith, B. F. 1982, “On the Stability of Schwarzschild’s Triaxial Galaxy Model,” , 257, 103-109 Miralda-Escudé, J. & Schwarzschild, M. 1989, “On the Orbit Structure of the Logarithmic Potential,” , 339, 752-762 Plummer, H. C. 1911, , 61, 460-470 Ratcliff, S. J., Chang, K. M. & Schwarzschild, M. 1984, “Stellar Orbits in Angle Variables,” , 279, 610-620 Ruiz, M. T. 1976, “A Dynamical Model for the Central Region of M31,” , 207, 382-393 Ruiz, M. T. & Schwarzschild, M. 1976, “An Approximate Dynamical Model for Spheroidal Stellar Systems,” , 207, 376-381 Schwarzschild, M. 1954, “Mass Distribution and Mass-Luminosity Ratio in Galaxies,” , 59, 273-284 Schwarzschild, M. 1973, “An Upper Limit to the Angular Diameter of the Nucleus of NGC 4151,” , 182, 357-361 Schwarzschild, M. 1979, “A Numerical Model for a Triaxial Stellar System in Dynamical Equilibrium,” , 232, 236-247 Schwarzschild, M. 1982, “Triaxial Equilibrium Models for Elliptical Galaxies with Slow Figure Rotation,” , 263, 599-610 Schwarzschild, M. 1986, “Dynamical Models for Galactic Bars: Truncated Perfect Elliptic Disk,” , 311, 511-517 Schwarzschild, M. 1993, “Self-Consistent Models for Galactic Halos,” , 409, 563-577 Schwarzschild, M. & Bernstein, S. 1955, “Note on the Mass of M92,” , 122, 200-202 Schwarzschild, M. 
& Schwarzschild, B. 1959, “Balloon Astronomy,” Sci. Am., 200, 52-59 Spitzer, L. & Schwarzschild, M. 1951, “The Possible Influence of Interstellar Clouds on Stellar Velocities,” , 114, 385-397 Spitzer, L. & Schwarzschild, M. 1953, “The Possible Influence of Interstellar Clouds on Stellar Velocities. II,” , 118, 106-112 Stark, A. A. 1977, “Triaxial Models for the Bulge of M31,” , 213, 368-373 Statler, T. S. 1987, “Self-Consistent Models of Perfect Triaxial Galaxies,” , 321, 113-152 van Albada, T. S., Kotanyi, C. G. & Schwarzschild, M. 1982, “A Model for Elliptical Radio Galaxies with Dust Lanes,” , 198, 303-310 Vietri, M. & Schwarzschild, M. 1983, “Analysis of Box Orbits in a Triaxial Galaxy,” , 269, 487-499 Williams, T. B. & Schwarzschild, M. 1979, “A Photometric Determination of Twists in Early-Type Galaxies,” , 227, 56-63 Williams, T. B. & Schwarzschild, M. 1979, “A Photometric Determination of Twists in Early-Type Galaxies. II,” , 41, 209-213 [^1]: Strip counts had long been used to infer the density profiles of star clusters (e.g. Plummer 1911). Schwarzschild was apparently the first to notice that the potential energy could be computed directly from $S(q)$ without first converting it into a density profile. [^2]: Those familiar with Schwarzschild’s legendary tact will be struck by the introduction to this paper, which contains a withering (but accurate) critique of a rival formula for evaluating the virial theorem. [^3]: A wonderfully clear account of the observation of convection cells in the Sun with Stratoscope I was written by Schwarzschild and his wife, Barbara, for [*Scientific American*]{} (1959). [^4]: Here and below, Schwarzschild’s convention is followed in which the $X$ and $Z$ axes are identified with the long and short axes of the triaxial figure. [^5]: Subsequent observations of Centaurus A revealed that the sense of rotation of the stellar body of this galaxy is probably opposite to that of the van Albada et al. 
model, implying that the outer dust ring has not yet reached a steady state. However a triaxial figure is probably still required to support the inner ring.
--- abstract: | The subgradient projector is of considerable importance in convex optimization because it plays the key role in Polyak’s seminal work — and the many papers it spawned — on subgradient projection algorithms for solving convex feasibility problems. In this paper, we offer a systematic study of the subgradient projector. Fundamental properties such as continuity, nonexpansiveness, and monotonicity are investigated. We also discuss the Yamagishi–Yamada operator. Numerous examples illustrate our results. author: - 'Heinz H. Bauschke[^1],  Caifang Wang[^2], Xianfu Wang[^3],  and Jia Xu[^4]' date: 'March 27, 2014' --- [[**2010 Mathematics Subject Classification:**]{} [Primary 90C25; Secondary 47H04, 47H05, 47H09. ]{} ]{} [**Keywords:**]{} Convex function, firmly nonexpansive mapping, [Fréchet]{} differentiability, [Gâteaux]{} differentiability, monotone operator, nonexpansive mapping, subgradient projector, Yamagishi–Yamada operator. Introduction ============ Throughout this paper, we assume that $$X\text{ is a real Hilbert space}$$ with inner product ${\left\langle{\cdot},{\cdot} \right\rangle}$ and induced norm $\|\cdot\|$. We also assume that $$f\colon X\to{\ensuremath{\mathbb R}}\text{ is convex and continuous, and that } C = {\big\{{x\in X}~\big |~{f(x)\leq 0}\big\}}\neq\varnothing.$$ (When $X$ is finite-dimensional, we do not need to explicitly impose continuity on $f$.) Unless stated otherwise, we assume that $s\colon X\to X$ is a *selection* of $\partial f$, i.e., $$(\forall x\in X)\quad s(x)\in\partial f(x),$$ and that $G\colon X\to X$ is the *associated subgradient projector* defined by $$(\forall x\in X)\quad Gx = \begin{cases} \displaystyle x - \frac{f(x)}{\|s(x)\|^2}s(x), &\text{if $f(x)>0$;}\\ x, &\text{otherwise.} \end{cases}$$ Observe this is well defined because $C\neq\varnothing$ and thus $0\notin\partial f(X\smallsetminus C)$. When we need to exhibit the underlying function $f$ or subgradient selection $s$, we shall write $s_f$, $C_f$ and $G_f=G_{f,s}$ instead of $s$, $C$ and $G$, respectively. 
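To make the definition of $G$ concrete, here is a minimal numerical sketch (our own illustration, not part of the paper; it assumes NumPy, and the helper name `subgradient_projector` is ours). For $f=\|\cdot\|-1$ one has $f\leq d_C$ with equality outside the closed unit ball $C$, so $G$ coincides with the metric projection $P_C$:

```python
import numpy as np

def subgradient_projector(f, s):
    """Subgradient projector G of a convex f with subgradient selection s:
    Gx = x - f(x)/||s(x)||^2 * s(x) if f(x) > 0, and Gx = x otherwise."""
    def G(x):
        fx = f(x)
        if fx <= 0:                     # x already lies in C = {f <= 0}
            return x
        sx = s(x)
        return x - (fx / np.dot(sx, sx)) * sx
    return G

# f(x) = ||x|| - 1, so C is the closed unit ball and G = P_C here
f = lambda x: np.linalg.norm(x) - 1.0
s = lambda x: x / np.linalg.norm(x)     # gradient of the norm away from 0
G = subgradient_projector(f, s)

print(G(np.array([3.0, 4.0])))          # ≈ [0.6 0.8], the projection onto C
```

Points already in $C$ are fixed, in accordance with the definition.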
The subgradient projector is the key ingredient in Polyak’s seminal work [@Poljak] on subgradient projection algorithms[^5], which have since found many applications; see, e.g., [@bb96], [@MOR], [@Cegielski], [@CL], [@CS], [@CenZen], [@Comb93], [@Comb97], [@CombLuo], [@Polyakbook], [@PolyakHaifa], [@SY], [@YO1], [@YO2], [@YSY], and the references therein. [ *The aim of this paper is to provide a systematic study of the subgradient projector. We review known properties, present basic calculus rules, obtain characterizations of strong-to-strong and strong-to-weak continuity, analyze nonexpansiveness, monotonicity, and the decreasing property, and discuss the relationship to the Yamagishi–Yamada operator. Numerous examples illustrate our results.* ]{} The paper is organized as follows. Basic properties are reviewed in Section \[s:prelim\], and basic calculus rules are derived in Section \[s:calc\]. Section \[s:examples\] is a collection of examples. The relationship between strong-to-strong (resp. strong-to-weak) continuity of $G$ and [Fréchet]{} (resp. [Gâteaux]{}) differentiability of $f$ is clarified in Section \[s:contF\] (resp. Section \[s:contG\]). The case when $f$ arises from a quadratic form is investigated in Section \[s:Frank\]. Nonexpansiveness and the decreasing property are studied in Sections \[s:nonexp\] and \[s:decrease\], respectively. These properties are illustrated in Section \[s:pnorm\]. In the final Section \[s:YY\], we provide a sufficient condition for the Yamagishi–Yamada operator to be itself a subgradient projector. Notation and terminology are standard and follow largely [@BC2011], to which we refer the reader if needed. We do write ${\ensuremath{\operatorname{P}}}_f = ({\ensuremath{\operatorname{Id}}}+\partial f)^{-1}$ for the proximity operator (proximal mapping) of $f$. 
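As a toy illustration of such a subgradient projection algorithm (our own sketch, assuming NumPy; the data are made up and the iteration shown is the simplest variant $x_{n+1}=Gx_n$), iterating the subgradient projector of $f=\max_{i}({\left\langle{u_i},{\cdot} \right\rangle}-\beta_i)$ drives the iterates into the polyhedron $C=\bigcap_i{\big\{{x}~\big |~{{\left\langle{u_i},{x} \right\rangle}\leq\beta_i}\big\}}$:

```python
import numpy as np

# convex feasibility problem: find x with U @ x <= b (three halfspaces)
U = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 1.0])

def f(x):                          # f = max_i (<u_i, x> - b_i)
    return np.max(U @ x - b)

def s(x):                          # a subgradient: any row attaining the max
    return U[np.argmax(U @ x - b)]

def G(x):                          # subgradient projector of f
    fx = f(x)
    return x if fx <= 0 else x - (fx / np.dot(s(x), s(x))) * s(x)

x = np.array([10.0, -7.0])         # infeasible starting point
for _ in range(100):               # Polyak-style iteration x <- Gx
    x = G(x)
print(f(x) <= 1e-9)                # → True: x is (numerically) feasible
```

Each step is the projection onto the halfspace cut by the currently most violated constraint, which is exactly $P_Hx$ in the notation of the next section.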
Preliminary results {#s:prelim} =================== Let us record some basic results on subgradient projectors, which are essentially contained already in [@Poljak] and the proofs of which we provide for completeness. \[f:known\] Let $x\in X$, and set $$H = {\big\{{y\in X}~\big |~{{\left\langle{s(x)},{y-x} \right\rangle} + f(x) \leq 0}\big\}}.$$ Then the following hold: 1. \[f:known0\] $f^+(x)+{\left\langle{s(x)},{Gx-x} \right\rangle}=0$. 2. \[f:known1\] ${\ensuremath{\operatorname{Fix}}}G = C\subseteq H$. 3. \[f:known2\] $Gx=P_Hx$. 4. \[f:known3\] $(\forall c\in C)$ ${\left\langle{c-Gx},{x-Gx} \right\rangle}\leq 0$. 5. \[f:known3+\] $(\forall c\in C)$ $\|x-Gx\|^2 + \|Gx-c\|^2 \leq \|x-c\|^2$. 6. \[f:known4\] $f^+(x) = \|s(x)\|\|x-Gx\|$. 7. \[f:known4+\] If $x\notin C$, then $(\forall c\in C)$ $f^2(x)\|s(x)\|^{-2}+\|Gx-c\|^2 \leq \|x-c\|^2$. 8. \[f:known5\] $f^+(x)(x-Gx) = \|x-Gx\|^2s(x)$. 9. \[f:known6\] Suppose that $f$ is [Fréchet]{} differentiable at $x\in X\smallsetminus C$. Then $g = \ln\circ f\colon X\smallsetminus C \to{\ensuremath{\mathbb R}}$ is [Fréchet]{} differentiable at $x$ and $Gx = x - \nabla g(x)/\|\nabla g(x)\|^2$. 10. \[f:known7\] Suppose that $\min f(X)=0$, that $f$ is [Fréchet]{} differentiable on $X$ with $\nabla f$ being Lipschitz continuous with constant $L$, that $x\notin C$, and that there exists $\alpha>0$ such that $f(x)\geq \alpha d_C^2(x)$. Then $d^2_C(Gx) \leq (1-\alpha^2/L^2)d_C^2(x)$. 11. \[f:known8\] Suppose that $\min f(X)=0$, that $x\notin C$, and that there exists $\alpha>0$ such that $f(x)\geq \alpha d_C(x)$. Then $d^2_C(Gx) \leq (1-\alpha^2/\|s(x)\|^2)d_C^2(x)$. Let $z\in X$. \[f:known0\]: This follows directly from the definition of $G$. \[f:known1\]: The equality is clear from the definition of $G$. Assume that $z\in C$. Then ${\left\langle{s(x)},{z-x} \right\rangle} + f(x) \leq f(z) \leq 0$ and hence $z\in H$. \[f:known2\]: Assume first that $x\in C$. 
Then $x\in{\ensuremath{\operatorname{Fix}}}G\subseteq H$ by \[f:known1\] and hence $Gx=x= P_Hx$. Now assume that $x\notin C$. Then $0<f(x)=f^+(x)$ and $s(x)\neq 0$. Hence, $$P_Hx = x - \frac{\Big({\left\langle{s(x)},{x} \right\rangle}- \big( {\left\langle{s(x)},{x} \right\rangle}-f(x)\big)\Big)^+}{\|s(x)\|^2}s(x) = x - \frac{f^+(x)}{\|s(x)\|^2}s(x) = Gx.$$ \[f:known3\]: In view of \[f:known2\], we have $(\forall h\in H)$ ${\left\langle{h-Gx},{x-Gx} \right\rangle} \leq 0$. Now invoke \[f:known1\]. \[f:known3+\]: This is equivalent to \[f:known3\]. \[f:known4\]: Assume first that $x\in C$. Then $f(x)\leq 0$, i.e., $f^+(x)=0$, and $x=Gx$ by \[f:known1\]. Hence the identity is true. Now assume that $x\notin C$. Then $0<f(x)=f^+(x)$ and $x-Gx = f(x)/\|s(x)\|^2 s(x)$. Taking the norm, we learn that $\|x-Gx\| = f(x)/\|s(x)\|=f^+(x)/\|s(x)\|$. \[f:known4+\]: Combine \[f:known1\], \[f:known3+\], and \[f:known4\]. \[f:known5\]: This follows from \[f:known4\] and the definition of $G$. \[f:known6\]: The chain rule implies that $\nabla g(x) = (1/f(x))\nabla f(x)$. Hence $\|\nabla g(x)\|^2 = \|\nabla f(x)\|^2/f^2(x)$ and thus $x-\nabla g(x)/\|\nabla g(x)\|^2 = x - f(x)/\|\nabla f(x)\|^2 \nabla f(x) = Gx$. \[f:known7\]: Let $c\in C$. Then $\nabla f(c)=0$ and hence $\|\nabla f(x)\| = \|\nabla f(x)-\nabla f(c)\|\leq L\|x-c\|$. Hence $\|\nabla f(x)\|\leq Ld_C(x)$ and therefore, using \[f:known4+\], we obtain $$\|Gx-c\|^2 \leq \|x-c\|^2 - \frac{f^2(x)}{\|\nabla f(x)\|^2} \leq \|x-c\|^2 - \frac{\alpha^2d_C^4(x)}{L^2d_C^2(x)}.$$ Now take the minimum over $c\in C$. \[f:known8\]: Using \[f:known4+\], we have $$d^2_C(Gx) \leq \|Gx-P_Cx\|^2 \leq \|x-P_Cx\|^2 - \frac{f^2(x)}{\|s(x)\|^2} \leq d_C^2(x) - \frac{\alpha^2d_C^2(x)}{\|s(x)\|^2}.$$ The proof is complete. Calculus {#s:calc} ======== We now turn to basic calculus rules. When the proof is a straight-forward verification, we will omit it. 
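The Fejér-type inequality of Fact \[f:known\]\[f:known3+\] lends itself to a quick numerical sanity check before we proceed; the sketch below (our own illustration, assuming NumPy) tests it for $f=\|\cdot\|^2-1$, where $C$ is the closed unit ball and $c=0\in C$:

```python
import numpy as np

f = lambda x: np.dot(x, x) - 1.0      # C = closed unit ball
s = lambda x: 2.0 * x                 # gradient of f

def G(x):                             # subgradient projector of f
    fx = f(x)
    return x if fx <= 0 else x - (fx / np.dot(s(x), s(x))) * s(x)

rng = np.random.default_rng(0)
c = np.zeros(3)                       # c = 0 lies in C since f(0) = -1
for _ in range(1000):
    x = 3.0 * rng.normal(size=3)
    gx = G(x)
    # Fejér-type inequality: ||x - Gx||^2 + ||Gx - c||^2 <= ||x - c||^2
    assert np.dot(x - gx, x - gx) + np.dot(gx - c, gx - c) \
           <= np.dot(x - c, x - c) + 1e-12
print("Fejér-type inequality holds at 1000 random points")
```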
It is convenient to introduce the operator ${\ensuremath{\mathcal G}}\colon X{\ensuremath{\rightrightarrows}}X$, defined by $$(\forall x\in X)\quad {\ensuremath{\mathcal G}}x = {\ensuremath{\mathcal G}}_f x = {\big\{{G_s(x)}~\big |~{\text{$s$ is a selection of $\partial f$}}\big\}}.$$ When $f$ is [Gâteaux]{} differentiable outside $C$, then we will identify ${\ensuremath{\mathcal G}}$ with $G$. \[p:calc\] Let $\alpha>0$, let $A\colon X\to X$ be continuous and linear such that $AA^* = A^*A = {\ensuremath{\operatorname{Id}}}$, and let $z\in X$. Furthermore, let $(f_i)_{i\in I}$ be a finite family of convex continuous functions on $X$ such that $\bigcap_{i\in I} C_{f_i}\neq\varnothing$. Then the following hold: 1. \[p:calc1\] Suppose that $g = \alpha f$. Then $C_{g} = C_f$ and ${\ensuremath{\mathcal G}}_g = {\ensuremath{\mathcal G}}_f$. 2. \[p:calc2\] Suppose that $g = f\circ \alpha{\ensuremath{\operatorname{Id}}}$. Then $C_{g} = \alpha^{-1}C_f$ and ${\ensuremath{\mathcal G}}_g = \alpha^{-1}{\ensuremath{\mathcal G}}_f\circ \alpha{\ensuremath{\operatorname{Id}}}$. 3. \[p:calc3\] Suppose that $f\geq 0$ and that $g = f^\alpha$ is convex. Then $C_{g} = C_f$ and ${\ensuremath{\mathcal G}}_g = (1-\alpha^{-1}){\ensuremath{\operatorname{Id}}}+ \alpha^{-1}{\ensuremath{\mathcal G}}_f$. 4. \[p:calc4\] Suppose that $g = f \circ A$. Then $C_g = A^*C_f$ and ${\ensuremath{\mathcal G}}_g = A^*\circ {\ensuremath{\mathcal G}}_f \circ A$. 5. \[p:calc5\] Suppose that $g \colon x\mapsto f(x-z)$. Then $C_g = z+C_f$ and ${\ensuremath{\mathcal G}}_g\colon x\mapsto z+{\ensuremath{\mathcal G}}_f(x-z)$. 6. \[p:calc6\] Suppose that $g=\max_{i\in I} f_i$. Then $C_g = \bigcap_{i\in I} C_{f_i}$ and if $g(x)>0$ and $I(x) = {\big\{{i\in I}~\big |~{f_i(x)=g(x)}\big\}}$, then ${\ensuremath{\mathcal G}}_g(x) = {\big\{{x - g(x)\|x^*\|^{-2}x^*}~\big |~{x^*\in{\ensuremath{\operatorname{conv}}}\bigcup_{i\in I(x)}\partial f_i(x)}\big\}}$. 7. \[p:calc7\] Suppose that $g = f^+$. 
Then ${\ensuremath{\mathcal G}}_g = {\ensuremath{\mathcal G}}_f$. 8. \[p:calc8\] **(Moreau envelope)** Suppose that $\min f(X)=0$ and that $g = f\Box (1/2)\|\cdot\|^2$ is the Moreau envelope of $f$. Then $C_g = C_f$ and $$(\forall x\in X)\quad G_g(x) = \begin{cases} x - \displaystyle \frac{g(x)}{\|x-{\ensuremath{\operatorname{P}}}_fx\|^2}(x-{\ensuremath{\operatorname{P}}}_fx), &\text{if $f(x)>0$;}\\ x, &\text{if $f(x)=0$.} \end{cases}$$ Let $x\in X$. We shall only prove one inclusion for the subgradient projector as the remaining one is proved similarly. \[p:calc1\]: Since $g(x)\leq 0$ $\Leftrightarrow$ $f(x)\leq 0$, it follows that $C_g = C_f$. Suppose that $f(x)>0$. Since $\alpha s_f(x)\in \partial g(x)$, we obtain $G_fx= x - f(x)\|s_f(x)\|^{-2}s_f(x) =x -g(x)\|\alpha s_f(x)\|^{-2}(\alpha s_f(x))$. This implies ${\ensuremath{\mathcal G}}_f(x)\subseteq {\ensuremath{\mathcal G}}_g(x)$. \[p:calc2\]: Suppose that $g(x)>0$, i.e., $f(\alpha x)>0$. Then $\alpha^{-1}G_f(\alpha x) = \alpha^{-1}(\alpha x - f(\alpha x)\|s_f(\alpha x)\|^{-2}s_f(\alpha x)) = x - \alpha^{-1}f(\alpha x)\|s_f(\alpha x)\|^{-2}s_f(\alpha x) = x - f(\alpha x)\|\alpha s_f(\alpha x)\|^{-2}(\alpha s_f(\alpha x)) \in {\ensuremath{\mathcal G}}_g(x)$. Hence $\alpha^{-1}{\ensuremath{\mathcal G}}_f(\alpha x)\subseteq {\ensuremath{\mathcal G}}_g(x)$. \[p:calc3\]: Suppose that $g(x)>0$. Then $f(x)>0$ and $\alpha^{-1}(x-G_fx) = \alpha^{-1}f(x)/\|s_f(x)\|^2 s_f(x) = f^\alpha(x)\|\alpha f^{\alpha-1}(x)s_f(x)\|^{-2}\alpha f^{\alpha-1}(x)s_f(x)\in x-{\ensuremath{\mathcal G}}_g(x)$. \[p:calc4\]: We have $x\in C_g$ $\Leftrightarrow$ $f(Ax)\leq 0$ $\Leftrightarrow$ $Ax\in C_f$ $\Leftrightarrow$ $x\in A^*C_f$. Suppose that $g(x)>0$. Then $f(Ax)>0$, $A^*s_f(Ax)\in\partial g(x)$ and $A^*G_f(Ax) = A^*(Ax - f(Ax)\|s_f(Ax)\|^{-2}s_f(Ax)) = x - f(Ax)\|A^*s_f(Ax)\|^{-2}A^*s_f(Ax) \in{\ensuremath{\mathcal G}}_g(x)$. \[p:calc5\]: Suppose that $0<g(x)=f(x-z)$. 
Then $z+G_f(x-z) = z+ x-z - f(x-z)\|s_f(x-z)\|^{-2}s_f(x-z) \in {\ensuremath{\mathcal G}}_g(x)$. \[p:calc6\]: This follows from the well known formula for the subdifferential of a maximum; see, e.g., [@Penot Proposition 3.38]. \[p:calc7\]: This follows from \[p:calc6\] since $f^+ = \max\{0,f\}$. \[p:calc8\]: This is clear because $g\geq 0$, $\nabla g = {\ensuremath{\operatorname{Id}}}-{\ensuremath{\operatorname{P}}}_f$ (see, e.g., [@BC2011 Proposition 12.29]), and ${\ensuremath{\operatorname{argmin}}}g = {\ensuremath{\operatorname{argmin}}}f$ (see, e.g., [@BC2011 Corollary 17.5]). Examples {#s:examples} ======== In this section, we present several illustrative examples. Suppose that $f = \|\cdot\|^2$. Then $\nabla f = 2{\ensuremath{\operatorname{Id}}}$ and $G = {\ensuremath{\tfrac{1}{2}}}{\ensuremath{\operatorname{Id}}}$. \[ex:huber\] Suppose that $$(\forall x\in X)\quad f(x) = \begin{cases} {\ensuremath{\tfrac{1}{2}}}\|x\|^2, &\text{if $x\in{\ensuremath{\mathrm{ball}}}(0;1)$;}\\ \|x\|-{\ensuremath{\tfrac{1}{2}}}, &\text{otherwise.} \end{cases}$$ Then $G = {\ensuremath{\tfrac{1}{2}}}P_{{\ensuremath{\mathrm{ball}}}(0;1)}$ and $G$ is firmly nonexpansive. Let $x\in X$. Observe that $f = \|\cdot\|\Box(1/2)\|\cdot\|^2$ is the Moreau envelope of the norm. Hence it follows from Proposition \[p:calc\]\[p:calc8\] that $$Gx = x - \frac{f(x)}{\|x-{\ensuremath{\operatorname{P}}}_{\|\cdot\|}x\|^2}(x-{\ensuremath{\operatorname{P}}}_{\|\cdot\|}x)$$ provided that $x\neq 0$, and $Gx=0={\ensuremath{\tfrac{1}{2}}}P_{{\ensuremath{\mathrm{ball}}}(0;1)}x$ if $x=0$. Furthermore, ${\ensuremath{\operatorname{P}}}_{\|\cdot\|} = {\ensuremath{\operatorname{Id}}}- {\ensuremath{\operatorname{P}}}_{\|\cdot\|^*} = {\ensuremath{\operatorname{Id}}}- {\ensuremath{\operatorname{P}}}_{\iota_{{\ensuremath{\mathrm{ball}}}(0;1)}} = {\ensuremath{\operatorname{Id}}}-P_{{\ensuremath{\mathrm{ball}}}(0;1)}$. 
Thus, ${\ensuremath{\operatorname{Id}}}-{\ensuremath{\operatorname{P}}}_{\|\cdot\|} = P_{{\ensuremath{\mathrm{ball}}}(0;1)}$. Assume now $x\neq 0$. If $0<\|x\|\leq 1$, then $$Gx = x - \frac{{\ensuremath{\tfrac{1}{2}}}\|x\|^2}{\|P_{{\ensuremath{\mathrm{ball}}}(0;1)}x\|^2}P_{{\ensuremath{\mathrm{ball}}}(0;1)}(x) = x - \frac{\|x\|^2}{2\|x\|^2}x = {\ensuremath{\tfrac{1}{2}}}x = {\ensuremath{\tfrac{1}{2}}}P_{{\ensuremath{\mathrm{ball}}}(0;1)}x;$$ and if $1<\|x\|$, then $$Gx = x - \frac{\|x\|-{\ensuremath{\tfrac{1}{2}}}}{\|P_{{\ensuremath{\mathrm{ball}}}(0;1)}x\|^2}P_{{\ensuremath{\mathrm{ball}}}(0;1)}(x) = x - \frac{\|x\|-{\ensuremath{\tfrac{1}{2}}}}{\big\|x/\|x\|\big\|^2}\frac{x}{\|x\|} = {\ensuremath{\tfrac{1}{2}}}\frac{x}{\|x\|} = {\ensuremath{\tfrac{1}{2}}}P_{{\ensuremath{\mathrm{ball}}}(0;1)}x.$$ Now $P_{{\ensuremath{\mathrm{ball}}}(0;1)}$ is firmly nonexpansive, and hence so is ${\ensuremath{\operatorname{Id}}}-P_{{\ensuremath{\mathrm{ball}}}(0;1)}$. It follows that $2G-{\ensuremath{\operatorname{Id}}}= -({\ensuremath{\operatorname{Id}}}-P_{{\ensuremath{\mathrm{ball}}}(0;1)})$ is nonexpansive, and therefore that $G$ is firmly nonexpansive. \[p:maxdist\] Let $(C_i)_{i\in I}$ be a finite family of closed convex subsets of $X$ such that $C = \bigcap_{i\in I} C_i\neq\varnothing$ and $f=\max_{i\in I} d_{C_i}$. Let $x\in X\smallsetminus C$, set $I(x) = {\big\{{i\in I}~\big |~{f(x)=d_{C_i}(x)}\big\}}$, and set $Q(x) = {\ensuremath{\operatorname{conv}}}\{P_{C_i}x\}_{i\in I(x)}$. Then $${\ensuremath{\mathcal G}}(x) = \bigcup_{q(x)\in Q(x)} \left\{x - \frac{f^2(x)}{\|x-q(x)\|^2}\big(x-q(x)\big)\right\} \quad\text{and}\quad Q(x)\subseteq {\ensuremath{\operatorname{conv}}}\big(\{x\}\cup {\ensuremath{\mathcal G}}(x)\big).$$ If $I(x)=\{i\}$ is a singleton, then ${\ensuremath{\mathcal G}}(x)=\{P_{C_i}x\}$. This follows from Proposition \[p:calc\]\[p:calc6\] and the fact that $\nabla d_{C_i}(x) = (x-P_{C_i}x)/d_{C_i}(x)$ when $x\in X\smallsetminus C_i$. 
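Example \[ex:huber\] can also be confirmed numerically; in the sketch below (our own illustration, assuming NumPy; the function names are ours) the subgradient projector of the Huber-type function is compared pointwise with ${\ensuremath{\tfrac{1}{2}}}P_{{\ensuremath{\mathrm{ball}}}(0;1)}$:

```python
import numpy as np

def huber(x):                         # Moreau envelope of the norm
    n = np.linalg.norm(x)
    return 0.5 * n**2 if n <= 1 else n - 0.5

def grad_huber(x):
    n = np.linalg.norm(x)
    return x if n <= 1 else x / n

def G(x):                             # subgradient projector of huber
    fx = huber(x)
    if fx <= 0:                       # only x = 0, where Gx = x
        return x
    g = grad_huber(x)
    return x - (fx / np.dot(g, g)) * g

def half_proj_ball(x):                # (1/2) P_ball(0;1)
    n = np.linalg.norm(x)
    return 0.5 * (x if n <= 1 else x / n)

for x in (np.array([0.3, -0.4]), np.array([2.0, 1.0]), np.zeros(2)):
    assert np.allclose(G(x), half_proj_ball(x))
print("G agrees with (1/2) P_ball(0;1)")
```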
\[p:hals\] Let $(C_i)_{i\in I}$ be a finite family of nonempty closed convex subsets of $X$ such that $C = \bigcap_{i\in I}C_i\neq\varnothing$. Let $(\lambda_i)_{i\in I}$ be a family in $\left]0,1\right]$ such that $\sum_{i\in I}\lambda_i = 1$. Let $p\geq 1$ and suppose that $f = \sum_{i\in I} \lambda_id_{C_i}^p$. Set $(\forall x\in X)$ $I(x) = {\big\{{i\in I}~\big |~{x\notin C_i}\big\}}$. Then $$(\forall x\in X)\quad Gx = x - \frac{\sum_{i\in I(x)}\lambda_id_{C_i}^p(x)}{p\big\|\sum_{i\in I(x)}\lambda_id_{C_i}^{p-2}(x)(x-P_{C_i}x)\big\|^2} \sum_{i\in I(x)}\lambda_i d_{C_i}^{p-2}(x)(x-P_{C_i}x)$$ and if $p=2$, we rewrite this as $$(\forall x\in X)\quad Gx = \begin{cases} \displaystyle x - \frac{\sum_{i\in I}\lambda_i\|x-P_{C_i}x\|^2}{2\big\|\sum_{i\in I}\lambda_i(x-P_{C_i}x)\big\|^2}\Big(x-\sum_{i\in I}\lambda_iP_{C_i}x\Big), &\text{if $x\notin C$;}\\ x, &\text{otherwise.} \end{cases}$$ Let $x\in X$, and let $i\in I$. Then $\nabla d_{C_i}(x) = d_{C_i}^{-1}(x)(x-P_{C_i}x)$ if $x\notin C_i$ and $0\in\partial d_{C_i}(x)$ otherwise. Hence $$\nabla d_{C_i}^p(x) = pd_{C_i}^{p-2}(x)(x-P_{C_i}x)$$ if $x\notin C_i$, and $0\in\partial d_{C_i}^p(x)$ otherwise. The result follows. \[ex:dCp\] Let $p\geq 1$ and suppose that $f=d_C^p$. Then $G = (1-\tfrac{1}{p}){\ensuremath{\operatorname{Id}}}+ \tfrac{1}{p}P_C$. This follows from Proposition \[p:hals\] when $I$ is a singleton. \[ex:linear\] Suppose that $u\in X$ satisfies $\|u\|=1$, and let $\beta\in{\ensuremath{\mathbb R}}$. Then the following hold: 1. \[ex:linear1\] If $f\colon x\mapsto {\left\langle{u},{x} \right\rangle}-\beta$, then $C = {\big\{{x\in X}~\big |~{{\left\langle{u},{x} \right\rangle}\leq\beta}\big\}}$ and $G\colon x\mapsto x- ({\left\langle{u},{x} \right\rangle}-\beta)^+u$. 2. \[ex:linear2\] If $f\colon x\mapsto |{\left\langle{u},{x} \right\rangle}-\beta|$, then $C = {\big\{{x\in X}~\big |~{{\left\langle{u},{x} \right\rangle}=\beta}\big\}}$ and $G\colon x\mapsto x- ({\left\langle{u},{x} \right\rangle}-\beta)u$. 
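Both formulas in Example \[ex:linear\] can be checked numerically; the sketch below (our own illustration, assuming NumPy) verifies that, for an infeasible $x$, each projector lands on the boundary ${\big\{{y\in X}~\big |~{{\left\langle{u},{y} \right\rangle}=\beta}\big\}}$:

```python
import numpy as np

u = np.array([0.6, 0.8])                  # unit vector, ||u|| = 1
beta = 1.0

def G_halfspace(x):                       # f(x) = <u,x> - beta
    return x - max(np.dot(u, x) - beta, 0.0) * u

def G_hyperplane(x):                      # f(x) = |<u,x> - beta|
    return x - (np.dot(u, x) - beta) * u

x = np.array([5.0, 5.0])                  # <u,x> = 7 > beta, so x is infeasible
print(np.dot(u, G_halfspace(x)))          # ≈ beta: Gx lies on the boundary
print(np.dot(u, G_hyperplane(np.array([-3.0, 0.0]))))   # ≈ beta as well
```

For a feasible point, `G_halfspace` returns the point unchanged, while `G_hyperplane` still reflects it onto the hyperplane, matching the two formulas.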
\[ex:linear1\]: Note that $f^+ = d_C$ and hence $G=P_C$ by Proposition \[p:calc\]\[p:calc7\] and Example \[ex:dCp\]. \[ex:linear2\]: Here $f=d_C$ and hence $G=P_C$ by Example \[ex:dCp\]. Using Example \[ex:dCp\], we see that $G$ is linear and that $G=G^*$ provided that $f = d_C^p$, where $p\geq 1$ and $C$ is a subspace. The converse is true as well but this lies beyond the scope of this paper. We now give two examples in which $G$ is positively homogeneous but not necessarily linear. Suppose that $f$ is a norm on $X$, with duality mapping $J=\partial \tfrac{1}{2}f^2$. Then $C=\{0\}$ and $(\forall x\in X\smallsetminus\{0\})$ $Gx = x - f^2(x)\|Jx\|^{-2}Jx$. Let $K$ be a nonempty closed convex cone with polar cone $K^\ominus$, and suppose that $f \colon x\mapsto {\ensuremath{\tfrac{1}{2}}}{\left\langle{x},{P_Kx} \right\rangle}$. Then $G = {\ensuremath{\operatorname{Id}}}-\tfrac{1}{2}P_K = P_{K^\ominus}+\tfrac{1}{2}P_K$. Since $(\forall x\in X)$ $f(x) = {\ensuremath{\tfrac{1}{2}}}\|P_Kx\|^2 = {\ensuremath{\tfrac{1}{2}}}d^2_{K^\ominus}(x)$, it follows that $\nabla f(x) = x-P_{K^\ominus}x = P_Kx$. The formula then follows. A direct verification yields the following result which is well known when $p=2$. Let $Y$ be another real Hilbert space, let $A\colon X\to Y$ be continuous and linear, let $b\in Y$, let $\varepsilon\geq 0$, and let $p\geq 1$. Suppose that $(\forall x\in X)$ $f(x) = \|Ax-b\|^p-\varepsilon^p$ and that $C = {\big\{{x\in X}~\big |~{\|Ax-b\|\leq\varepsilon}\big\}}\neq\varnothing$. Then $$(\forall x\in X)\quad Gx = \begin{cases} \displaystyle x- \frac{\|Ax-b\|^p -\varepsilon^p}{p\|Ax-b\|^{p-2}\|A^*(Ax-b)\|^2}A^*(Ax-b), &\text{if $\|Ax-b\|>\varepsilon$;}\\ x, &\text{otherwise.} \end{cases}$$ Continuity of $G$ vs [Fréchet]{} differentiability of $f$ {#s:contF} ========================================================= We start with a technical result. 
\[l:awards\] Let $(x_n)_{\ensuremath{{n\in{\mathbb N}}}}$ be a sequence in $X$ converging weakly to $\bar{x}$ and such that $x_n-Gx_n\to 0$. Suppose that one of the following holds: 1. \[l:awards1\] $x_n\to\bar{x}$. 2. $f$ is bounded on every bounded subset of $X$. Then $\bar{x}\in C$. Because of either [@BC2011 Proposition 16.14] or [@BC2011 Proposition 16.17] there exists $\rho>0$ such that $\sigma := \sup\|\partial f({\ensuremath{\mathrm{ball}}}(\bar{x};\rho))\| < {\ensuremath{+\infty}}$. We thus can and do assume that $$(\forall{\ensuremath{{n\in{\mathbb N}}}})\quad \|s(x_n)\|\leq \sigma.$$ Since $f^+$ is weakly lower semicontinuous, we deduce from Fact \[f:known\]\[f:known4\] that $$f^+(\bar{x}) \leq\varliminf f^+(x_n) \leq \sigma\varliminf \|x_n-Gx_n\|= 0.$$ Hence $f(\bar{x})\leq 0$, i.e., $\bar{x}\in C$. Lemma \[l:awards\]\[l:awards1\] and Fact \[f:known\]\[f:known1\] imply that $G$ is *fixed-point closed* at $\bar{x}$ (see, e.g., also [@Cegielski Theorem 4.2.7] or [@BCW1]), i.e., if $x_n\to\bar{x}$ and $x_n-Gx_n\to 0$, then $\bar{x}=G\bar{x}$. \[p:couleen\] $G$ is continuous at every point in $C$. Let $\bar{x}\in C$, and let $(x_n)_{\ensuremath{{n\in{\mathbb N}}}}$ be a sequence in $X$ converging to $\bar{x}$. The result is clear if $(x_n)_{\ensuremath{{n\in{\mathbb N}}}}$ lies in $C$, so we can and do assume that $(x_n)_{\ensuremath{{n\in{\mathbb N}}}}$ lies in $X\smallsetminus C$. Then $(\forall{\ensuremath{{n\in{\mathbb N}}}})$ $f(x_n) \leq f(\bar{x})-{\left\langle{s(x_n)},{\bar{x}-x_n} \right\rangle} \leq {\left\langle{s(x_n)},{x_n-\bar{x}} \right\rangle} \leq \|s(x_n)\|\|\bar{x}-x_n\|$. Hence $0 < f(x_n)/\|s(x_n)\|\leq \|\bar{x}-x_n\|\to 0$. By Fact \[f:known\]\[f:known4\], $x_n-Gx_n\to 0$. Thus $\lim Gx_n = \lim x_n = \bar{x}=G\bar{x}$ using Fact \[f:known\]\[f:known1\]. The continuity of $G$ outside $C$ is more delicate. [(See, e.g., [@BV Proposition 6.1.4].)]{} \[f:nabla\] The following hold: 1. 
\[f:nablaF\] $f$ is [Fréchet]{} differentiable at $\bar{x}$ $\Leftrightarrow$ $s$ is (strong-to-strong) continuous at $\bar{x}$. 2. \[f:nablaG\] $f$ is [Gâteaux]{} differentiable at $\bar{x}$ $\Leftrightarrow$ $s$ is strong-to-weak continuous at $\bar{x}$. \[l:131029b\] Suppose that $\bar{x}\in X\smallsetminus C$, that $G$ is strong-to-weak continuous at $\bar{x}$, but $G$ is not strong-to-strong continuous at $\bar{x}$. Then $f$ is not [Gâteaux]{} differentiable at $\bar{x}$. There exists a sequence $(x_n)_{\ensuremath{{n\in{\mathbb N}}}}$ in $X\smallsetminus C$ such that $x_n\to \bar{x}$, $Gx_n{\ensuremath{\:{\rightharpoonup}\:}}G\bar{x}$ yet $Gx_n\not\to G\bar{x}$. It follows that $$x_n-Gx_n{\ensuremath{\:{\rightharpoonup}\:}}\bar{x}-G\bar{x} \quad\text{and}\quad x_n-Gx_n\not\to \bar{x}-G\bar{x}.$$ By Kadec–Klee, $\|x_n-Gx_n\| \not\to \|\bar{x}-G\bar{x}\|$. Since $\|\cdot\|$ is weakly lower semicontinuous, we assume (after passing to a subsequence and relabeling if necessary) that $$\|\bar{x}-G\bar{x}\| < \eta := \lim_{\ensuremath{{n\in{\mathbb N}}}}\|x_n-Gx_n\|.$$ Using Fact \[f:known\]\[f:known5\], it follows that $$\begin{aligned} s(x_n) &= f(x_n)\frac{x_n-Gx_n}{\|x_n-Gx_n\|^2} {\ensuremath{\:{\rightharpoonup}\:}}f(\bar{x})\frac{\bar{x}-G\bar{x}}{\eta^2} \neq f(\bar{x})\frac{\bar{x}-G\bar{x}}{\|\bar{x}-G\bar{x}\|^2} = s(\bar{x}).\end{aligned}$$ Thus, $s$ is not strong-to-weak continuous at $\bar{x}$. It follows now from Fact \[f:nabla\]\[f:nablaG\] that $f$ is not [Gâteaux]{} differentiable at $\bar{x}$. \[c:131029c\] Let $\bar{x}\in X\smallsetminus C$. Then the following are equivalent: 1. \[c:131029c1\] $f$ is [Fréchet]{} differentiable at $\bar{x}$. 2. \[c:131029c2\] $G$ is (strong-to-strong) continuous at $\bar{x}$. 3. \[c:131029c3\] $f$ is [Gâteaux]{} differentiable at $\bar{x}$ and $G$ is strong-to-weak continuous at $\bar{x}$. “\[c:131029c1\]$\Rightarrow$\[c:131029c2\]”: By Fact \[f:nabla\]\[f:nablaF\], $s$ is continuous at $\bar{x}$. 
It follows from the definition of $G$ that $G$ is continuous at $\bar{x}$ as well. “\[c:131029c1\]$\Leftarrow$\[c:131029c2\]”: In view of Fact \[f:known\]\[f:known5\], we have $s(x) = f(x)(x-Gx)/\|x-Gx\|^2$ for all $x$ sufficiently close to $\bar{x}$. Hence $s$ is continuous at $\bar{x}$ and therefore $f$ is [Fréchet]{} differentiable at $\bar{x}$ by Fact \[f:nabla\]\[f:nablaF\]. “\[c:131029c1\]$\Rightarrow$\[c:131029c3\]” and “\[c:131029c2\]$\Rightarrow$\[c:131029c3\]”: This is clear since \[c:131029c1\]$\Leftrightarrow$\[c:131029c2\] by the above. “\[c:131029c3\]$\Rightarrow$\[c:131029c2\]”: Suppose to the contrary that $G$ is not strong-to-strong continuous. Then, by Lemma \[l:131029b\], $f$ is not [Gâteaux]{} differentiable at $\bar{x}$, which is absurd. \[c:awards\] $G$ is continuous everywhere if and only if $f$ is [Fréchet]{} differentiable on $X\smallsetminus C$. Combine Proposition \[p:couleen\] with Theorem \[c:131029c\]. Suppose that $X={\ensuremath{\mathbb R}}$ and that $(\forall x\in{\ensuremath{\mathbb R}})$ $f(x)=\max\{-x,x,2x-1\}$. Then $C=\{0\}$ and $f$ is not differentiable at $1$; consequently, by Corollary \[c:awards\], $G$ is not continuous at $1$. It is unrealistic to expect that $G$ is weak-to-weak continuous even when $f$ is [Fréchet]{} differentiable; see [@BCW1 Example 3.2 and Remark 3.3.(ii)]. Continuity of $G$ vs [Gâteaux]{} differentiability of $f$ {#s:contG} ========================================================= In view of Fact \[f:nabla\] and Corollary \[c:awards\], it is now tempting to conjecture that $G$ is strong-to-weak continuous if and only if $f$ is [Gâteaux]{} differentiable on $X\smallsetminus C$. Perhaps somewhat surprisingly, this turns out to be wrong. The counterexample is based on an ingenious construction by Borwein and Fabian [@BF]. \[ex:BF\] [(See [@BF Proof of Theorem 4].)]{} Suppose that $X$ is infinite-dimensional. 
Then there exists a function $b\colon X\to{\ensuremath{\mathbb R}}$ such that the following hold: 1. \[ex:BF1\] $b$ is continuous, convex and $\min b(X) = b(0) = 0$. 2. \[ex:BF2\] $b$ is [Fréchet]{} differentiable on $X\smallsetminus\{0\}$. 3. \[ex:BF3\] $b$ is [Gâteaux]{} differentiable at $0$, and $\nabla b(0)=0$. 4. \[ex:BF4\] $b$ is not [Fréchet]{} differentiable at $0$. \[ex:131029f\] Let $b$ be as in Example \[ex:BF\]. Then there exists $y\in X$ such that $\nabla b(y)\neq 0$. Suppose that $$(\forall x\in X)\quad f(x) = b(x) - {\left\langle{\nabla b(y)},{x} \right\rangle} -\tfrac{1}{2}\big(b(y)-{\left\langle{\nabla b(y)},{y} \right\rangle}\big).$$ Then the following hold: 1. \[ex:131029f5\] $f$ is [Gâteaux]{} differentiable (but not [Fréchet]{} differentiable) at $0$, and $G$ is not strong-to-weak continuous at $0$. 2. \[ex:131029f6\] $f$ is [Fréchet]{} differentiable on $X\smallsetminus\{0\}$, and $G$ is continuous on $X\smallsetminus\{0\}$. By Example \[ex:BF\]\[ex:BF3\], $0\in{\ensuremath{\operatorname{ran}}}\nabla b$. If $\{0\} = {\ensuremath{\operatorname{ran}}}\nabla b$, then we would deduce that $b$ is constant and therefore [Fréchet]{} differentiable; in turn, this would contradict Example \[ex:BF\]\[ex:BF4\]. Hence $\{0\}\subsetneqq {\ensuremath{\operatorname{ran}}}\nabla b$ and there exists $y\in X$ such that $$\label{e:0308a} v = \nabla b(y)\neq 0.$$ Now set $$g\colon X\to{\ensuremath{\mathbb R}}\colon x\mapsto b(x)-{\left\langle{v},{x} \right\rangle}.$$ Then $$(\forall x\in X)\quad f(x) = g(x)-\tfrac{1}{2}g(y),$$ and $g(0)=b(0)-{\left\langle{v},{0} \right\rangle}=0$ by Example \[ex:BF\]\[ex:BF1\]. Example \[ex:BF\]\[ex:BF3\] and \[e:0308a\] yield $\nabla g(0)=\nabla b(0)-v=-v\neq 0$ while $\nabla g(y)=\nabla b(y)-v=0$. Hence $\min g(X) = g(y)<g(0) = 0$ and therefore $$f(y) = \min f(X) = \min g(X) - \tfrac{1}{2}g(y) = \tfrac{1}{2}g(y)< 0 < 0 - \tfrac{1}{2}g(y) = f(0).$$ Thus $y\in C$ while $0\notin C$. 
\[ex:131029f5\]: On the one hand, since $b$ is not [Fréchet]{} differentiable at $0$ (Example \[ex:BF\]\[ex:BF4\]), neither is $f$. On the other hand, since $b$ is [Gâteaux]{} differentiable at $0$ (Example \[ex:BF\]\[ex:BF3\]), so is $f$. Altogether, $f$ is [Gâteaux]{} differentiable, but not [Fréchet]{} differentiable, at $0$. Therefore, by Theorem \[c:131029c\], $G$ is not strong-to-weak continuous at $0$. \[ex:131029f6\]: Since $b$ is [Fréchet]{} differentiable on $X\smallsetminus\{0\}$ (Example \[ex:BF\]\[ex:BF2\]), so is $f$. Now apply Theorem \[c:131029c\]. $G$ as an “accelerated mapping” =============================== In this section, we consider the case when $f$ is a power of a quadratic form. \[s:Frank\] \[p:0308b\] Suppose that $f\colon x\mapsto \sqrt{{\left\langle{x},{Mx} \right\rangle}^p}$, where $p\geq 1$ and $M\colon X\to X$ is continuous, linear, self-adjoint, and positive. Then $G$ is continuous everywhere and $$(\forall x\in X)\quad Gx = \begin{cases} \displaystyle x - \frac{{\left\langle{x},{Mx} \right\rangle}}{p\|Mx\|^2}Mx, &\text{if $Mx\neq 0$;}\\ x, &\text{if $Mx=0$.} \end{cases}$$ Assume first that $p=1$. Recall that $M$ has a unique positive square root, i.e., there exists[^6] $B\colon X\to X$ such that $B$ is continuous, linear, self-adjoint, and positive, with $B^2=M$ and $\ker B = \ker M$. Hence $(\forall x\in X)$ $f(x)= \sqrt{{\left\langle{x},{Mx} \right\rangle}} = \|Bx\|$ so $f$ is indeed convex and continuous. If $x\in X\smallsetminus\ker M = X\smallsetminus \ker B$, then $f$ is [Fréchet]{} differentiable at $x$ with $\nabla f(x) = B^*Bx/\|Bx\| = Mx/\|Bx\|$; hence, $$Gx = x - \frac{\|Bx\|}{\|Mx\|^2/\|Bx\|^2}\frac{Mx}{\|Bx\|} = x - \frac{\|Bx\|^2}{\|Mx\|^2}Mx = x - \frac{{\left\langle{x},{Mx} \right\rangle}}{\|Mx\|^2}Mx$$ and $G$ is continuous everywhere by Corollary \[c:awards\]. If $p>1$, then the result follows from the above and Proposition \[p:calc\]\[p:calc3\]. \[ex:Frank\] Let $A\colon X\to X$ be linear, self-adjoint, and nonexpansive.
Suppose that $(\forall x\in X)$ $f(x) = \sqrt{{\left\langle{x},{x-Ax} \right\rangle}}$. Then $G$ is continuous everywhere and $$(\forall x\in X)\quad Gx= \begin{cases} \displaystyle x - \frac{{\left\langle{x},{x-Ax} \right\rangle}}{\|x-Ax\|^2}(x-Ax), &\text{if $Ax\neq x$;}\\ x, &\text{if $Ax=x$.} \end{cases}$$ Use Proposition \[p:0308b\] with $M={\ensuremath{\operatorname{Id}}}-A$ and $p=1$. Let $A\colon X\to X$ be linear, nonexpansive, and self-adjoint. In [@BDHP], the authors study the accelerated mapping[^7] of $A$, i.e., $$x \mapsto t_xAx+ (1-t_x)x, \quad \text{where } t_x = \begin{cases} \displaystyle \frac{{\left\langle{x},{x-Ax} \right\rangle}}{\|x-Ax\|^2}, &\text{if $x\neq Ax$;}\\ 1, &\text{otherwise.} \end{cases}$$ In view of Example \[ex:Frank\], the accelerated mapping of $A$ is precisely the subgradient projector $G$ of the function $x\mapsto \sqrt{{\left\langle{x},{x-Ax} \right\rangle}}$. Now suppose that $X = \ell^2({\ensuremath{\mathbb N}})$, let $(e_n)_{\ensuremath{{n\in{\mathbb N}}}}$ be the standard orthonormal basis of $X$, and suppose that $$A\colon X\to X\colon x\mapsto \sum_{\ensuremath{{n\in{\mathbb N}}}}\tfrac{n}{n+1}{\left\langle{e_n},{x} \right\rangle}e_n.$$ Then $G$ is continuous (Example \[ex:Frank\]); however, $G$ is neither linear nor uniformly continuous (see [@BDHP Remark following Lemma 3.8]). Nonexpansiveness ================ We now discuss when $G$ is (firmly) nonexpansive or monotone. \[s:nonexp\] \[p:nonexp\] Suppose that $f$ is [Gâteaux]{} differentiable on $X\smallsetminus C$ and that $G_f$ is firmly nonexpansive. Then $G_g$ is likewise firmly nonexpansive in each of the following situations: 1. \[p:nonexp1\] $\alpha >0$, and $g=f\circ \alpha{\ensuremath{\operatorname{Id}}}$ is convex. 2. \[p:nonexp2\] $f\geq 0$, $\alpha\geq 1$, and $g=f^\alpha$ is convex. 3. $A\colon X\to X$ is continuous and linear, $AA^*=A^*A={\ensuremath{\operatorname{Id}}}$, and $g=f\circ A$. 4. $z\in X$ and $g\colon x\mapsto f(x-z)$.
The analogous statement holds when $G_f$ is assumed to be nonexpansive. This follows from the corresponding items in Proposition \[p:calc\], which do preserve (firm) nonexpansiveness. On the real line, we obtain a simpler test. \[p:changeclock\] Suppose that $X={\ensuremath{\mathbb R}}$ and that $f$ is twice differentiable on $X\smallsetminus C$. Then $G$ is monotone. Moreover, $G$ is (firmly) nonexpansive if and only if $$(\forall x\in{\ensuremath{\mathbb R}})\quad f(x)f''(x) \leq \big(f'(x)\big)^2.$$ By Corollary \[c:awards\], $G$ is continuous. Let $x\in{\ensuremath{\mathbb R}}\smallsetminus C$. Then $G(x) = x-f(x)/f'(x)$ and hence $G'(x) = f(x)f''(x)/(f'(x))^2\geq 0$. It follows that $G$ is increasing on $X\smallsetminus C$ and hence on ${\ensuremath{\mathbb R}}$. Furthermore, $G$ is (firmly) nonexpansive if and only if $G'(x)\leq 1$, which gives the remaining characterization. Suppose that $X={\ensuremath{\mathbb R}}$, let $\alpha>0$, and suppose that $(\forall x\in{\ensuremath{\mathbb R}})$ $f(x)=x^n-\alpha$, where $n\in\{2,4,6,8,\ldots\}$. Then $G$ is firmly nonexpansive. If $x\in{\ensuremath{\mathbb R}}\smallsetminus C$, then $(f'(x))^2 -f(x)f''(x)= nx^{n-2}(\alpha n + x^n-\alpha)>0$ and we are done by Proposition \[p:changeclock\]. Suppose that $X={\ensuremath{\mathbb R}}$ and that $f\colon x\mapsto\exp(|x|)-1$. Then $(\forall x\in X)$ $G(x) = x-{\ensuremath{\operatorname{sgn}}}(x)(1-\exp(-|x|))$ and $G'(x)=1-\exp(-|x|)\in \left[0,1\right[$. It follows that $G$ is firmly nonexpansive[^8]. Suppose that $X={\ensuremath{\mathbb R}}$ and that $f \colon x\mapsto \exp(x^2)-1$. Then $G$ is not (firmly) nonexpansive. Indeed, we compute $(f'(x))^2-f(x)f''(x)= 4x^2\exp(x^2)+2\exp(x^2)-2\exp(2x^2)$, which is strictly negative when $|x|>1.2$. Now apply Proposition \[p:changeclock\]. Suppose that $X={\ensuremath{\mathbb R}}$ and that $f$ is twice differentiable, that $\min f(X)=0$, that $g = f\Box (1/2)|\cdot|^2$, and that $2ff'' \leq (2+f'')(f')^2$.
Then $G_g$ is firmly nonexpansive. We start by observing a couple of facts. First, $$g' = {\ensuremath{\operatorname{Id}}}-{\ensuremath{\operatorname{P}}}_f.$$ Write $y={\ensuremath{\operatorname{P}}}_f(x)$. Then $x=y+f'(y)$ and hence implicit differentiation gives $1= y'(x)+f''(y)y'(x)=y'(x)(1+f''(y(x)))$. Hence $y'=1/(1+f''(y(x)))$ and thus $$g''(x)=\big({\ensuremath{\operatorname{Id}}}-{\ensuremath{\operatorname{P}}}_f\big)'(x) = 1- \frac{1}{1+f''({\ensuremath{\operatorname{P}}}_f(x))} = \frac{f''\big({\ensuremath{\operatorname{P}}}_f(x)\big)}{1+f''\big({\ensuremath{\operatorname{P}}}_f(x)\big)}.$$ In view of Proposition \[p:changeclock\] and because $g(x)=f({\ensuremath{\operatorname{P}}}_f(x)) + (1/2)(x-{\ensuremath{\operatorname{P}}}_f(x))^2$, we must verify that $gg''\leq (g')^2$, i.e., $$\label{e:0317a} \frac{\big(f({\ensuremath{\operatorname{P}}}_f(x)) + {\ensuremath{\tfrac{1}{2}}}(x-{\ensuremath{\operatorname{P}}}_f(x))^2 \big)f''\big({\ensuremath{\operatorname{P}}}_f(x)\big)}{1+f''\big({\ensuremath{\operatorname{P}}}_f(x)\big)} \leq \big(x-{\ensuremath{\operatorname{P}}}_f(x)\big)^2.$$ Again writing $y={\ensuremath{\operatorname{P}}}_f(x)$ gives $x-{\ensuremath{\operatorname{P}}}_f(x)=f'(y)$ and so we see that \[e:0317a\] is equivalent to $$\label{e:0317b} \frac{\big(f(y) + {\ensuremath{\tfrac{1}{2}}}(f'(y))^2 \big)f''(y)}{1+f''(y)} \leq \big(f'(y)\big)^2.$$ However, \[e:0317b\] holds by our assumption on $f$. We conclude this section with a result on the range of ${\ensuremath{\operatorname{Id}}}-G$. We have ${\ensuremath{\operatorname{ran}}}({\ensuremath{\operatorname{Id}}}-G)\subseteq{\ensuremath{\operatorname{cone}}}{\ensuremath{\operatorname{ran}}}\partial f \subseteq({\ensuremath{\operatorname{rec}}}C)^\ominus$. Let $y^*\in\partial f(y)$, let $c\in C$, and let $x\in{\ensuremath{\operatorname{rec}}}C$. Then $(c+nx)_{\ensuremath{{n\in{\mathbb N}}}}$ lies in $C$.
Hence $(\forall n\geq 1)$ $0\geq f(c+nx)\geq f(y)+{\left\langle{y^*},{c+nx-y} \right\rangle}$ and thus $${\left\langle{y^*},{x} \right\rangle} \leq \frac{{\left\langle{y^*},{y-c} \right\rangle}-f(y)}{n}\to 0 \quad\text{as $n\to{\ensuremath{+\infty}}$.}$$ It follows that $y^*\in({\ensuremath{\operatorname{rec}}}C)^\ominus$. Therefore, ${\ensuremath{\operatorname{ran}}}({\ensuremath{\operatorname{Id}}}-G)\subseteq {\ensuremath{\operatorname{cone}}}{\ensuremath{\operatorname{ran}}}\partial f\subseteq ({\ensuremath{\operatorname{rec}}}C)^\ominus$. The decreasing property {#s:decrease} ======================= We say that $f$ has the *decreasing property* if $$(\forall x\in X) \quad \sup f({\ensuremath{\mathcal G}}x)\leq f(x).$$ To verify this, it suffices to consider points outside $C$. \[p:plumber\] If $(\forall x\in X)$ $Gx \in {\ensuremath{\operatorname{conv}}}(\{x\}\cup C)$, then $f$ has the decreasing property. Let $x\in X\smallsetminus C$. Then there exists $c\in C$ and $\lambda\in[0,1]$ such that $Gx = (1-\lambda)x+\lambda c$. It follows that $f(Gx) \leq (1-\lambda)f(x)+\lambda f(c) \leq (1-\lambda) f(x)\leq f(x)$. \[l:plumber\] Let $(x,y,z)\in{\ensuremath{\mathbb R}}^3$ be such that $x\neq z$ and $(z-y)(x-y)\leq 0$. Then $y\in{\ensuremath{\operatorname{conv}}}\{x,z\}$. Suppose first that $z<x$. If $y>x$, then $(z-y)(x-y)>0$ because it is the product of two strictly negative numbers. Similarly, if $y<z$, then $(z-y)(x-y)>0$. We deduce that $y\in[z,x]$. Analogously, when $x<z$, we obtain that $y\in[x,z]$. In either case, $y\in{\ensuremath{\operatorname{conv}}}\{x,z\}$. \[c:plumber\] Suppose that $X={\ensuremath{\mathbb R}}$. Then $f$ has the decreasing property. Let $x\in{\ensuremath{\mathbb R}}\smallsetminus C$. Then $x\neq P_Cx$ and, by Fact \[f:known\]\[f:known3\], $(P_Cx-Gx)(x-Gx)\leq 0$. Lemma \[l:plumber\] thus yields $Gx\in{\ensuremath{\operatorname{conv}}}\{x,P_Cx\}$. 
Hence $Gx\in {\ensuremath{\operatorname{conv}}}(\{x\}\cup C)$, and we are done by Proposition \[p:plumber\]. The next example shows that the decreasing property is not automatic. Suppose that $X={\ensuremath{\mathbb R}}^2$, that $C_1 ={\ensuremath{\mathbb R}}\times\{0\}$, that $C_2 = {\big\{{(\xi,\xi)\in X}~\big |~{\xi\in{\ensuremath{\mathbb R}}}\big\}}$, and that $f=\max\{d_{C_1},d_{C_2}\}$. Then $f$ does not have the decreasing property. Set $x=(2,1)$. Then, using Proposition \[p:maxdist\], we obtain that $Gx=(2,0)$ and $f(x)=1<\sqrt{2}=f(Gx)$. We now illustrate that the sufficient condition of Proposition \[p:plumber\] is not necessary: \[ex:ell1\] Suppose that $X={\ensuremath{\mathbb R}}^2$ and that $(\forall x=(x_1,x_2)\in{\ensuremath{\mathbb R}}^2)$ $f(x)=|x_1|+|x_2|$. Then $f$ has the decreasing property and $G^2x=(0,0)$, yet $Gx\notin {\ensuremath{\operatorname{conv}}}\{(0,0),x\}$ for almost every $x\in{\ensuremath{\mathbb R}}^2$. Furthermore, $G$ is not monotone. Observe that $C=\{(0,0)\}$. Let $I=\{1,2,3,4\}$ and consider the four halfspaces $(C_i)_{i\in I}$ with normal vectors $\pm(1,1)$ and $\pm(1,-1)$ with $(0,0)$ in their boundaries, and with the two boundary hyperplanes $H_1$ and $H_2$. Then $f= \sqrt{2}\max_{i\in I} d_{C_i} = \sqrt{2}\max\{d_{H_1},d_{H_2}\}$ by Example \[ex:linear\]\[ex:linear1\]. Proposition \[p:maxdist\] implies that $G$ is the projector onto the farther hyperplane on ${\ensuremath{\mathbb R}}^2\smallsetminus S$, where $S = ({\ensuremath{\mathbb R}}\times\{0\})\cup(\{0\}\times{\ensuremath{\mathbb R}})$. It is thus clear that $Gx\notin {\ensuremath{\operatorname{conv}}}\{(0,0),x\}$ and that $f(Gx)\leq f(x)$ for every $x\in{\ensuremath{\mathbb R}}^2\smallsetminus S$. When $x\in S$, one checks directly that $f(Gx)\leq f(x)$. Hence $f$ has the decreasing property. Finally, let $x=(-1,3)$ and $y=(1,3)$. Then $Gx=(1,1)$ and $Gy=(-1,1)$ and hence ${\left\langle{x-y},{Gx-Gy} \right\rangle}=-4<0$ so $G$ is not monotone.
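The computations in Example \[ex:ell1\] are easy to verify numerically. The following Python sketch implements the subgradient projector of $f(x)=|x_1|+|x_2|$ (the function name and the particular subgradient selected at nonsmooth points are our choices, not part of the text):

```python
def subgrad_proj_l1(x1, x2):
    """Subgradient projector G of f(x) = |x1| + |x2| on R^2.

    A minimal sketch: at points where a coordinate vanishes, one
    particular subgradient (the sign vector) is selected.
    """
    sgn = lambda t: (t > 0) - (t < 0)
    fx = abs(x1) + abs(x2)
    if fx <= 0:                       # x already lies in C = {(0, 0)}
        return (x1, x2)
    s1, s2 = sgn(x1), sgn(x2)         # a subgradient s(x) of f at x
    n2 = s1 * s1 + s2 * s2            # ||s(x)||^2
    return (x1 - fx * s1 / n2, x2 - fx * s2 / n2)

# G maps (-1, 3) to (1, 1); a second application reaches C = {(0, 0)}
gx = subgrad_proj_l1(-1.0, 3.0)
g2x = subgrad_proj_l1(*gx)

# the monotonicity test from the example: x = (-1, 3), y = (1, 3)
gy = subgrad_proj_l1(1.0, 3.0)
inner = (-1.0 - 1.0) * (gx[0] - gy[0]) + (3.0 - 3.0) * (gx[1] - gy[1])
print(gx, g2x, gy, inner)
```

Two applications of $G$ reach $C=\{(0,0)\}$ from $(-1,3)$, the value of $f$ decreases along the way, and the inner product is $-4<0$, reproducing the failure of monotonicity.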
Using the decreasing property, one obtains a sufficient condition for *infeasibility*: Suppose that $X={\ensuremath{\mathbb R}}$ and we find a point $x$ such that $f(Gx)>f(x)$. Then $C$ must be empty because of Corollary \[c:plumber\]. For instance, suppose that $f\colon x\mapsto x^2+1$. Then $$\label{e:chaotic} (\forall x\in{\ensuremath{\mathbb R}}\smallsetminus\{0\})\quad Gx = (x^2-1)/(2x).$$ Now set $x = 1/2$. Then $Gx=-3/4$ and $f(Gx)=25/16>5/4=f(x)$. Suppose that $X={\ensuremath{\mathbb R}}$ and that $f$ is differentiable on $X\smallsetminus C$. Then $$(\forall x\in{\ensuremath{\mathbb R}}\smallsetminus C)\quad Gx = x - \frac{f(x)}{\big(f'(x)\big)^2}f'(x) = x - \frac{f(x)}{f'(x)}$$ is the same as the Newton operator for finding a zero of $f$! It is known since the 19th century that the concrete instance \[e:chaotic\] exhibits chaotic behaviour; see, e.g., [@Milnor Problem 7-a on page 72]. The decreasing property is preserved in certain cases: \[p:decalc\] Suppose that $f$ has the decreasing property. Then the following hold: 1. \[p:decalc1\] If $\alpha>0$, then $\alpha f$ has the decreasing property. 2. \[p:decalc2\] If $\alpha\geq 1$, then $(f^+)^\alpha$ has the decreasing property. Let $x\in X\smallsetminus C$. \[p:decalc1\]: Then $(\alpha f){\ensuremath{\mathcal G}}_{\alpha f}(x)=(\alpha f){\ensuremath{\mathcal G}}_f(x) \leq \alpha f(x)=(\alpha f)(x)$ by Proposition \[p:calc\]\[p:calc1\]. \[p:decalc2\]: Set $g=(f^+)^\alpha$ and $\beta=1/\alpha$. Then $0<\beta\leq 1$ and ${\ensuremath{\mathcal G}}_g(x) = (1-\beta)x + \beta{\ensuremath{\mathcal G}}_f(x)$ by Proposition \[p:calc\]\[p:calc3\]. Hence $\sup g({\ensuremath{\mathcal G}}_gx) \leq (1-\beta)g(x)+\beta\sup g({\ensuremath{\mathcal G}}_fx)$. On the other hand, $\sup g({\ensuremath{\mathcal G}}_f(x)) \leq g(x)$ by the decreasing property of $f$ and the definition of $g$. Altogether, $\sup g ({\ensuremath{\mathcal G}}_gx)\leq g(x)$, i.e., $g$ has the decreasing property. The following result is complementary to the decreasing property.
\[p:striconv\] Suppose that $f$ is strictly convex at $x\in X$ and $f(x)>0$. Then $f(Gx)>0$. Recall that $f$ is strictly convex at $x$ if $(\forall y\in X\smallsetminus\{x\})$ $(\forall\lambda\in{\ensuremath{\left]0,1\right[}})$ $f((1-\lambda)x+\lambda y)<(1-\lambda)f(x)+\lambda f(y)$. Arguing as in [@BV proof of Proposition 5.3.4.(a)], we see that ${\ensuremath{\tfrac{1}{2}}}{\left\langle{s(x)},{Gx-x} \right\rangle} = {\left\langle{s(x)},{({\ensuremath{\tfrac{1}{2}}}x+{\ensuremath{\tfrac{1}{2}}}Gx)-x} \right\rangle} \leq f({\ensuremath{\tfrac{1}{2}}}x + {\ensuremath{\tfrac{1}{2}}}Gx)-f(x)< {\ensuremath{\tfrac{1}{2}}}f(x)+{\ensuremath{\tfrac{1}{2}}}f(Gx)-f(x) = {\ensuremath{\tfrac{1}{2}}}(f(Gx)-f(x))$. Therefore, $f(Gx)>f(x)+{\left\langle{s(x)},{Gx-x} \right\rangle} = 0$ using Fact \[f:known\]\[f:known0\]. \[r:striconv\] Suppose that $f$ is strictly convex. Then Proposition \[p:striconv\] shows that iterating $G$ starting at a point outside $C$ will never reach $C$ in finitely many steps. This is clearly illustrated by Example \[ex:dCp\], which shows that the function $d_C$, even though it is neither strictly convex nor differentiable everywhere, performs best because $G=P_C$ yields a solution after just one step. The subgradient projector of $(x_1,x_2)\mapsto |x_1|^p+|x_2|^p$ {#s:pnorm} ============================================== The following result complements Example \[ex:ell1\]. \[p:ellp\] Suppose that $X={\ensuremath{\mathbb R}}^2$ and that $f\colon (x_1,x_2)\mapsto |x_1|^p + |x_2|^p$, where $p>1$, and let $x=(x_1,x_2)\in{\ensuremath{\mathbb R}}^2\smallsetminus\{(0,0)\}$. Then $$\label{e:ellpG} Gx = \left( x_1 - \frac{\big(|x_1|^p+|x_2|^p\big)|x_1|^{p-1}{\ensuremath{\operatorname{sgn}}}(x_1)}{p\big(|x_1|^{2p-2}+|x_2|^{2p-2}\big)}, x_2 - \frac{\big(|x_1|^p+|x_2|^p\big)|x_2|^{p-1}{\ensuremath{\operatorname{sgn}}}(x_2)}{p\big(|x_1|^{2p-2}+|x_2|^{2p-2}\big)}\right)$$ and the following hold: 1. 
\[p:ellp1\] If $p\geq 2$, then $f(x)\geq f(Gx)\geq (1-2p^{-1})^pf(x)$. 2. \[p:ellp2\] If $1<p\leq 2$, then $f(x)\geq f(Gx)\geq 2^{-1}(1-p^{-1})^pf(x)$. 3. \[p:ellp3\] If $1<p<2$, then $G$ is not monotone. The formula \[e:ellpG\] follows by direct verification, and \[p:ellp1\]&\[p:ellp2\] hold when $x_1=0$ or $x_2=0$. We thus assume that $x_1\neq 0$ and $x_2\neq 0$. \[p:ellp1\]: Note that $$f(Gx)=|x_1|^p\big|1-c_1\big|^p + |x_2|^p\big|1-c_2\big|^p, \quad\text{where}\quad c_i = \frac{\big(|x_1|^p+|x_2|^p\big)|x_i|^{p-2}}{p\big(|x_1|^{2p-2}+|x_2|^{2p-2}\big)}.$$ If $i\in\{1,2\}$ and $m\in\{1,2\}$ is such that $|x_m|=\max\{|x_1|,|x_2|\}$, then $c_i \leq (2|x_m|^p|x_m|^{p-2})p^{-1}(|x_m|^{2p-2}+0)^{-1}= 2/p$. Hence $1\geq 1-c_i\geq 1-2p^{-1}\geq 0$ and the inequalities follow. \[p:ellp2\]: We assume that $|x_1|\leq|x_2|$, the other case is treated analogously. Set $t = |x_1/x_2|$, $$c_1 = t - \frac{t^{2p-1}+t^{p-1}}{p\big(1+t^{2p-2}\big)} \quad\text{and}\quad c_2 = 1- \frac{1+t^p}{p\big(1+t^{2p-2}\big)},$$ and check that $$\label{e:piday4} f(Gx) = |x_2|^p\big(|c_1|^p + |c_2|^p\big).$$ Since $p-2\leq 0$, we have $t^{p-2}\geq 1$ and hence $$\label{e:piday5} 1 \geq c_2 \geq 1 - \frac{1+t^p}{p\big(1+t^p\big)} = 1- \tfrac{1}{p} \geq 0.$$ Thus $c_2\geq 0$. We now claim that $$\label{e:piday1} |c_1|+c_2 \leq 1.$$ This will imply $\max\{|c_1|,|c_2|\}\leq 1$; hence $\max\{|c_1|^p,|c_2|^p\}\leq 1$, $$f(Gx) \leq |x_2|^p\big(|c_1|+|c_2|\big)\leq |x_2|^p\leq f(x),$$ and the decreasing property of $f$ follows. Observe that \[e:piday1\] is equivalent to \[e:piday2\] $$\begin{aligned} c_1+c_2 &\leq 1 \label{e:piday2a}\\ -c_1+c_2&\leq 1 \label{e:piday2b}\end{aligned}$$ and hence to \[e:piday3\] $$\begin{aligned} t &\leq \frac{(1+t^p)(1+t^{p-1})}{p(1+t^{2p-2})}\label{e:piday3a}\\ \frac{t^{p-1}(1+t^p)}{p(1+t^{2p-2})} &\leq t+ \frac{1+t^p}{p(1+t^{2p-2})}.
\label{e:piday3b}\end{aligned}$$ Now check that \[e:piday3b\] holds by using $t^{p-1}\leq 1$ and, for \[e:piday3a\], the convexity of $h\colon \xi\mapsto 1+\xi^p$, which implies $h(t)\geq h(1)+h'(1)(t-1)$, i.e., $pt \leq 1+t^p$. Furthermore, using \[e:piday4\] and \[e:piday5\], and the assumption that $|x_2|\geq|x_1|$, we obtain $$f(Gx) \geq c_2^p|x_2|^p \geq \big(1-\tfrac{1}{p}\big)^p|x_2|^p \geq \big(1-\tfrac{1}{p}\big)^p \frac{|x_1|^p+|x_2|^p}{2} = \frac{\big(1-\tfrac{1}{p}\big)^p}{2} f(x).$$ \[p:ellp3\]: Consider the points $y=(1,\xi)$ and $z=(-1,\xi)$, where $\xi>0$. Then $y-z=(2,0)$ and $$Gy = \left( 1 - \frac{1+\xi^p}{p(1+\xi^{2p-2})}, \xi - \frac{(1+\xi^p)\xi^{p-1}}{p(1+\xi^{2p-2})}\right)$$ and $$Gz = \left( -1 + \frac{1+\xi^p}{p(1+\xi^{2p-2})}, \xi - \frac{(1+\xi^p)\xi^{p-1}}{p(1+\xi^{2p-2})}\right).$$ It follows that $${\left\langle{Gy-Gz},{y-z} \right\rangle} = 4 \left( 1 - \frac{1+\xi^p}{p(1+\xi^{2p-2})}\right)<0 \quad \text{as $\xi\to{\ensuremath{+\infty}}$}$$ because $\lim_{\xi\to{\ensuremath{+\infty}}} (1+\xi^p)p^{-1}/(1+\xi^{2p-2}) = \lim_{\xi\to{\ensuremath{+\infty}}} (2p-2)^{-1}\xi^{2-p}={\ensuremath{+\infty}}$ using l’Hôpital’s rule. Therefore, $G$ is not monotone. The operator $G$ of Proposition \[p:ellp\] seems to defy an easy analysis. It would be interesting to obtain complete characterizations in terms of $p$ of the following, increasingly restrictive, properties: $G$ is monotone; ${\ensuremath{\operatorname{Id}}}-G$ is nonexpansive; $G$ is firmly nonexpansive. With the help of Maple it is possible to check the following statements: 1. If $p\in\{2,4,6\}$, then $G$ is firmly nonexpansive and hence monotone. 2. If $p\in\{8,10,12\}$, then $G$ is not firmly nonexpansive; however, ${\ensuremath{\operatorname{Id}}}-G$ is nonexpansive and $G$ is monotone[^9]. Suppose first that $p\in\{2,4,6\}$.
Then $G$ is firmly nonexpansive $\Leftrightarrow$ $N=2G-{\ensuremath{\operatorname{Id}}}$ is nonexpansive $\Leftrightarrow$ $(\forall x\in X)$ $Jx$ is nonexpansive, where $Jx$ is the Jacobian of $N$ at $x$ $\Leftrightarrow$ $(Jx)^*Jx\preceq {\ensuremath{\operatorname{Id}}}$ $\Leftrightarrow$ ${\ensuremath{\operatorname{Id}}}-(Jx)^*Jx$ is positive semidefinite. The last condition leads to checking three inequalities using the principal minor criterion for positive semidefiniteness. Dividing by appropriate powers of $x_1$ and $x_2$, this reduces to checking whether three polynomials in one variable are positive. Sturm’s Theorem (see, e.g., [@Prasolov Theorem 1.4.3]), which is implemented in Maple and Mathematica, combined with [@Prasolov Theorem 1.1.2], finally completes the verification. Now suppose that $p\in\{8,10,12\}$. The approach just outlined shows that $G$ is not firmly nonexpansive. Note the implications: $G$ is monotone $\Leftarrow$ $N={\ensuremath{\operatorname{Id}}}-G$ is nonexpansive $\Leftrightarrow$ $(\forall x\in X)$ $Jx$ is nonexpansive, where $Jx$ is the Jacobian of $N$ at $x$ $\Leftrightarrow$ $(Jx)^*Jx\preceq {\ensuremath{\operatorname{Id}}}$ $\Leftrightarrow$ ${\ensuremath{\operatorname{Id}}}-(Jx)^*Jx$ is positive semidefinite, which is again checked using Sturm’s Theorem. $G$ and the Yamagishi–Yamada operator {#s:YY} ===================================== In this last section we study the accelerated version of $G$ proposed by Yamagishi and Yamada in [@YY]. For fixed $L>0$ and $\rho>0$, we assume in addition that $f$ is [Fréchet]{} differentiable on $X$ with $\nabla f$ Lipschitz continuous with constant $L$, and that $\inf f(X)\geq -\rho$, and we set $$(\forall x\in X)\quad \theta(x) = \frac{\|\nabla f(x)\|^2}{2L}-\rho.$$ By [@YY Lemma 1], we have $$\label{e:ftheta} f \geq \theta.$$ The Yamagishi–Yamada operator [@YY] is $Z\colon X\to X$, defined at $x\in X$ by $$\label{e:Z} Zx= \begin{cases} x, &\text{if $f(x)\leq 0$;}\\ \displaystyle x - \frac{f(x)}{\|\nabla f(x)\|^2}\nabla f(x), &\text{if $f(x)>0$ and $\theta(x)\leq 0$;}\\ \displaystyle x - \frac{f(x)+\big(\sqrt{\theta(x)+\rho}-\sqrt{\rho}\big)^2}{\|\nabla f(x)\|^2}\nabla f(x), &\text{if $f(x)>0$ and $\theta(x)>0$.} \end{cases}$$ Note that if $f(x)\leq 0$ or $\theta(x)\leq 0$, then $Zx=Gx$. We now prove that if $X={\ensuremath{\mathbb R}}$, then $Z$ is itself a subgradient projector.
\[t:caifang\] Suppose that $X={\ensuremath{\mathbb R}}$ and that $f$ is also twice differentiable. Then for every $x\in {\ensuremath{\mathbb R}}$, \[e:Z\] can be rewritten as $$\label{e:Z1} Zx= \begin{cases} x, &\text{if $f(x)\leq 0$;}\\[+5mm] \displaystyle x - \frac{1}{f'(x)}\,f(x), &\text{if $f(x)>0$ and $|f'(x)|\leq\sqrt{2L\rho}$;}\\[+5mm] \displaystyle x - \frac{1}{f'(x)} \,\left(f(x)+\left(\frac{|f'(x)|}{\sqrt{2L}}-\sqrt{\rho}\right)^2\right), &\text{if $f(x)>0$ and $|f'(x)|>\sqrt{2L\rho}$.} \end{cases}$$ Set $D={\big\{{x\in X}~\big |~{\theta(x)\leq 0}\big\}}$ and assume that ${\ensuremath{\operatorname{bdry}}}D \subseteq X\smallsetminus C$. Then $D$ is a closed convex superset of $C$, and $Z$ is a subgradient projector of a function $y$, defined as follows. On $D$, we set $y$ equal to $f$. The set ${\ensuremath{\mathbb R}}\smallsetminus D$ is empty, or an open interval, or the disjoint union of two open intervals. Assume that $I$ is one of these nonempty intervals, and let $q$ be defined on $I$ such that $$(\forall x\in I)\quad q'(x) = \frac{1}{x-Zx}.$$ Now set $d=P_D(I)\in D\smallsetminus C$ and $$(\forall x\in I)\quad y(x) = \frac{f(d)}{e^{q(d)}}e^{q(x)}.$$ The so-constructed function $y\colon{\ensuremath{\mathbb R}}\to{\ensuremath{\mathbb R}}$ is convex, and it satisfies $Z=G_y$. It is easy to check that \[e:Z1\] is the same as \[e:Z\].
Let $x\in{\ensuremath{\mathbb R}}$ be such that $f(x)>0$ and $\theta(x)\geq 0$, and set $$\label{e:z} z(x) = \frac{|f'(x)|}{\sqrt{2L}}-\sqrt{\rho} = \frac{{\ensuremath{\operatorname{sgn}}}\big(f'(x)\big)f'(x)}{\sqrt{2L}}-\sqrt{\rho} = \sqrt{\theta(x)+\rho}-\sqrt{\rho}\geq 0.$$ Then $$\label{e:dz} z'(x) = \frac{{\ensuremath{\operatorname{sgn}}}\big(f'(x)\big)f''(x)}{\sqrt{2L}}.$$ Using the convexity of $f$, \[e:ftheta\], \[e:z\], and \[e:dz\], we obtain \[e:angela\] $$\begin{aligned} 0 &\leq f''(x)\big(f(x)-\theta(x)\big)\\ &= f''(x)\left( f(x) - \left(\frac{|f'(x)|}{\sqrt{2L}}+\sqrt{\rho}\right) \left(\frac{|f'(x)|}{\sqrt{2L}}-\sqrt{\rho}\right)\right)\\ &= f''(x)\left(f(x)+z(x)\left( z(x)-\frac{2|f'(x)|}{\sqrt{2L}}\right)\right)\\ &=f''(x)\big(f(x)+z^2(x)\big) - f'(x)2z(x)\frac{{\ensuremath{\operatorname{sgn}}}\big(f'(x)\big)f''(x)}{\sqrt{2L}}\\ &=f''(x)\big(f(x)+z^2(x)\big) - f'(x)\big(2z(x)z'(x)\big). \end{aligned}$$ Because $x-Zx= (f(x)+z^2(x))/f'(x)$ is continuous, it is clear that there is an antiderivative $q$ on $I$ such that $$\label{e:dq} q'(x) = \frac{1}{x-Zx} = \frac{f'(x)}{f(x)+z^2(x)}.$$ Calculus and \[e:angela\] now result in \[e:ddq\] $$\begin{aligned} q''(x) &= \frac{f''(x)\big(f(x)+z^2(x)\big) - f'(x)\big(f'(x)+2z(x)z'(x)\big)}{\big(f(x)+z^2(x)\big)^2}\\ &= \frac{f''(x)\big(f(x)-\theta(x)\big) - \big(f'(x)\big)^2}{\big(f(x)+z^2(x)\big)^2}.\end{aligned}$$ Observe that $y$ is clearly continuous everywhere. Furthermore, $y'(x) = \frac{f(d)}{e^{q(d)}} e^{q(x)}q'(x)$ and hence, using \[e:dq\] and \[e:ddq\], and again \[e:ftheta\], we obtain $$\begin{aligned} y''(x) &= \frac{f(d)}{e^{q(d)}}\Big( e^{q(x)}\big(q'(x)\big)^2 + e^{q(x)}q''(x)\Big)\\ &= \frac{f(d)}{e^{q(d)}} e^{q(x)}\Big( \big(q'(x)\big)^2 + q''(x)\Big)\\ &= y(x) \frac{f''(x)\big(f(x)-\theta(x)\big)}{\big(f(x)+z^2(x)\big)^2}\\ &\geq 0.\end{aligned}$$ Hence $y$ is convex on $I$.
As $x\in I$ approaches $d$, we deduce (because $d\notin C$, i.e., $f(d)>0$) that $q'(x)\to f'(d)\big(f(d)+z^2(d)\big)^{-1} = f'(d)/f(d)$ and hence that $y'(x)\to f(d)/e^{q(d)}e^{q(d)}f'(d)/f(d) = f'(d)$. It follows that $y$ is convex on ${\ensuremath{\mathbb R}}$. Finally, if $x\notin D$, then $G_y(x) = x - y(x)/y'(x) = x-1/q'(x) = x - (x-Zx)=Zx$. Consider Theorem \[t:caifang\] and assume that $f\colon x\mapsto x^2-1$, that $L=3$, and that $\rho=1$. Then \[e:Z1\] turns into $$Zx= \begin{cases} x, &\text{if $|x|\leq 1$;}\\[+3mm] \displaystyle \frac{x^2+1}{2x}, &\text{if $1<|x|\leq\sqrt{6}/2$;}\\[+3mm] \displaystyle \frac{x^2+2\sqrt{6}|x|}{6x}, &\text{if $|x|>\sqrt{6}/2$.} \end{cases}$$ Hence $D= \big[-\sqrt{6}/2,\sqrt{6}/2\big]$. Using elementary manipulations, we obtain $$(\forall x\in {\ensuremath{\mathbb R}}\smallsetminus D)\quad q(x) = \tfrac{6}{5}\ln\big(\tfrac{5}{6}|x|-\tfrac{\sqrt{6}}{3}\big);$$ consequently, the function $y$, given by $$(\forall x\in{\ensuremath{\mathbb R}}) \quad y(x) = \begin{cases} x^2-1, &\text{if $|x|\leq\sqrt{6}/2$;}\\[+3mm] \displaystyle\frac{72^{1/5}}{6}\big(5|x|-2\sqrt{6}\big)^{6/5}, &\text{if $|x|>\sqrt{6}/2$,} \end{cases}$$ satisfies $G_y = Z$ by Theorem \[t:caifang\]. Acknowledgments {#acknowledgments .unnumbered} =============== HHB was partially supported by a Discovery Grant and an Accelerator Supplement of the Natural Sciences and Engineering Research Council of Canada (NSERC) and by the Canada Research Chair Program. CW was partially supported by a grant from Shanghai Municipal Commission for Science and Technology (13ZR1455500). XW was partially supported by a Discovery Grant of NSERC. JX was partially supported by NSERC grants of HHB and XW. [999]{} H.H. Bauschke and J.M. Borwein, On projection algorithms for solving convex feasibility problems, *SIAM Review* 38(3) (1996), 367–426. H.H. Bauschke, J. Chen, and X.
Wang, A projection method for approximating fixed points of quasi nonexpansive mappings without the usual demiclosedness condition, *Journal of Nonlinear and Convex Analysis* 15 (2014), 129–135. H.H. Bauschke and P.L. Combettes, *Convex Analysis and Monotone Operator Theory in Hilbert Spaces*, Springer, 2011. H.H. Bauschke and P.L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert space, *Mathematics of Operations Research* 26 (2001), 248–264. H.H. Bauschke, F. Deutsch, H. Hundal, and S.-H. Park, Accelerating the convergence of the method of alternating projections, *Transactions of the AMS* 355(9) (2003), 3433–3461. J.M. Borwein and M. Fabián, On convex functions having points of Gateaux differentiability which are not points of Fréchet differentiability, *Canadian Journal of Mathematics* 45(6) (1993), 1121–1134. J.M. Borwein and J.D. Vanderwerff, *Convex Functions*, Cambridge University Press, 2010. A. Cegielski, *Iterative Methods for Fixed Point Problems in Hilbert Spaces*, Springer, 2012. Y. Censor and A. Lent, Cyclic subgradient projections, *Mathematical Programming* 24 (1982), 233–235. Y. Censor and A. Segal, Sparse string-averaging and split common fixed points, in “Nonlinear Analysis and Optimization I”, A. Leizarowitz, B.S. Mordukhovich, I. Shafrir, and A.J. Zaslavski (editors), *Contemporary Mathematics* 513 (2010), 125–142. Y. Censor and S.A. Zenios, *Parallel Optimization*, Oxford University Press, 1997. P.L. Combettes, The foundations of set theoretic estimation, *Proceedings of the IEEE* 81(2) (1993), 182–208. P.L. Combettes, Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections, *IEEE Transactions on Image Processing* 6 (1997), 493–506. P.L. Combettes and J. Luo, An adaptive level set method for nondifferentiable constrained image recovery, *IEEE Transactions on Image Processing* 11 (2002), 1295–1304. J.-L. 
Goffin, Subgradient optimization in nonsmooth optimization (including the Soviet revolution), in *Optimization Stories*, [Documenta Mathematica]{} book series vol. 6 (2012), 277–290. E. Kreyszig, *Introductory Functional Analysis with Applications*, Wiley, 1989. J. Milnor, *Dynamics in One Complex Variable*, third edition, Princeton University Press, 2006. J.-P. Penot, *Calculus Without Derivatives*, Springer, 2013. B.T. Polyak, Minimization of unsmooth functionals, *U.S.S.R. Computational Mathematics and Mathematical Physics* 9 (1969), 14–29. (The original version appeared in *Akademija Nauk SSSR. [Ž]{}urnal Vy[č]{}islitel’ noĭ Matematiki i Matematičeskoĭ Fiziki* 9 (1969), 509–521.) B.T. Polyak, *Introduction to Optimization*, Optimization Software, 1987. B.T. Polyak, Random algorithms for solving convex inequalities, in *Inherently Parallel Algorithms in Feasibility and Optimization and their Applications*, D. Butnariu, Y. Censor, and S. Reich (editors), pages 409–422, Elsevier 2001. V.V. Prasolov, *Polynomials*, Springer, 2004. K. Slavakis and I. Yamada, The adaptive projected subgradient method constrained by families of quasi-nonexpansive mappings and its application to online learning, *SIAM Journal on Optimization* 23 (2013), 126–152. I. Yamada and N. Ogura, Adaptive projected subgradient method for asymptotic minimization of sequence of nonnegative convex functions, *Numerical Functional Analysis and Optimization* 25 (2004), 593–617. I. Yamada and N. Ogura, Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings, *Numerical Functional Analysis and Optimization* 25 (2004), 619–655. I. Yamada, K. Slavakis, and K. Yamada, An efficient robust adaptive filtering algorithm based on parallel subgradient projection techniques, *IEEE Transactions on Signal Processing* 50 (2002), 1091–1101. M. Yamagishi and I.
Yamada, A deep monotone approximation operator based on the best quadratic lower bound of convex functions, *IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences* E91–A (2008), 1858–1866. [^1]: Mathematics, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada. E-mail: `heinz.bauschke@ubc.ca`. [^2]: Department of Mathematics, Shanghai Maritime University, China. E-mail: `cfwang@shmtu.edu.cn`. [^3]: Mathematics, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada. E-mail: `shawn.wang@ubc.ca`. [^4]: Mathematics, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada. E-mail: `jia.xu@ubc.ca`. [^5]: See also [@Goffin] for a historical account. [^6]: See, e.g., [@Kreyszig Theorem 9.4-2], where this is stated in a complex Hilbert space; however, the proof works unchanged in our real setting as well. [^7]: In fact, the operator $A$ in [@BDHP] need not necessarily be self-adjoint. [^8]: Since $G$ is monotone by Proposition \[p:changeclock\], its antiderivative $x\mapsto \tfrac{1}{2}x^2 - |x|-\exp(-|x|)$ is convex — although this does not look like a convex function at first glance! It is interesting to do this also for other instances of $f$. [^9]: Experiments with Maple suggest that this pattern may hold true for every even integer greater than or equal to $8$.
--- abstract: 'The microcanonical effective partition function, constructed from a Feynman-Hibbs potential, is derived using generalized ensemble theory. The form of the effective Hamiltonian is amenable to Monte Carlo simulation techniques and the relevant Metropolis function is presented. Using the derived expression for the microcanonical effective partition function, the low-temperature entropy of a proton in an anharmonic potential is numerically evaluated and compared with the exact quantum mechanical canonical result.' author: - 'Jonathan L. Belof' - Brian Space date: Submitted on title: Microcanonical Effective Partition Function for the Anharmonic Oscillator --- The Green’s function for the quantum dynamic propagator, $$\begin{aligned} G(x^\prime, t^\prime; x, t) = {\langle x^\prime|e^{-i\hat{H}(t^\prime-t)/\hbar}|x\rangle} \label{eq:greens_function}\end{aligned}$$ can be expressed in its path integral form[@feynman], after Trotter factorization and making use of the resolution of the identity, as:[@schulman] $$\begin{aligned} G(x^\prime, t^\prime; x, t) = \lim_{P \rightarrow \infty} \int\limits_{-\infty}^{\infty} dx_1...dx_{P-1} \left(\frac{m}{2\pi i \hbar \epsilon}\right)^{P/2} \nonumber \\ \times e^{\frac{i}{\hbar} \int\limits_t^{t^\prime} d\tau\, \left\{ \frac{1}{2}m \left( \frac{dx}{d\tau} \right)^2 - V[x(\tau)] \right\} } \label{eq:path_integral_qm}\end{aligned}$$ where the time interval $\epsilon = (t^\prime - t)/P$ and the path from $x \rightarrow x^\prime$ has been discretized among $P$ points. Analytically continuing Eq.
(\[eq:path\_integral\_qm\]) *via* the substitution $\beta = it/\hbar$ and letting the initial time $t = 0$ results in $$\begin{aligned} G(x^\prime, -i\beta\hbar; x) = \lim_{P \rightarrow \infty} \int\limits_{-\infty}^{\infty} dx_1...dx_{P-1} \left(\frac{m}{2\pi \hbar^2 \beta}\right)^{P/2} \nonumber \\ \times e^{-\frac{1}{\hbar} \int\limits_0^{\beta \hbar} d\tau\, \left\{ \frac{1}{2}m \left( \frac{dx}{d\tau} \right)^2 + V[x(\tau)] \right\} } \label{eq:path_integral_statmech}\end{aligned}$$ It can be shown that the canonical partition function $Q(N,V,\beta)$ results from taking the trace of expression (\[eq:path\_integral\_statmech\]), where the paths propagate from $x\rightarrow x$: $$\begin{aligned} Q(N,V,\beta) = \int\limits_{-\infty}^{\infty} dx\, G(x, -i\beta\hbar; x) \nonumber \\ = \int\limits_{-\infty}^{\infty} dx\, \lim_{P \rightarrow \infty} \int\limits_{-\infty}^{\infty} dx_1...dx_{P-1} \left(\frac{m}{2\pi \hbar^2 \beta}\right)^{P/2} \nonumber \\ \times e^{-\frac{1}{\hbar} \int\limits_0^{\beta \hbar} d\tau\, \left\{ \frac{1}{2}m \left( \frac{dx}{d\tau} \right)^2 + V[x(\tau)] \right\} } \label{eq:path_integral_partition_function}\end{aligned}$$ and the integration is done over all possible closed paths that start and end at $x$. A great deal of applied research has proceeded from the approximation whereby Eq. (\[eq:path\_integral\_partition\_function\]) is closed for a finite Trotter number $P$. Indeed, in such a form expression (\[eq:path\_integral\_partition\_function\]) then looks very much like a classical partition function (albeit with a $\beta$-dependent harmonic term) and can be numerically evaluated by many-dimensional integration techniques such as Monte Carlo. An alternative to this discretized approach is to write Eq. (\[eq:path\_integral\_partition\_function\]) as a Fourier expansion around the path and numerically solve for the Fourier coefficients.
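As a concrete illustration of closing Eq. (\[eq:path\_integral\_partition\_function\]) at finite Trotter number (our own sketch, not part of the original derivation): for a harmonic oscillator in units $m=\hbar=\omega=1$, the $P$-bead integral is Gaussian and reduces to a product over the eigenvalues of the circulant second-difference matrix, so the finite-$P$ result can be compared directly against the exact $Q = 1/(2\sinh(\beta/2))$.

```python
import math

def q_ho_exact(beta):
    # Exact canonical partition function of the harmonic oscillator
    # (units m = hbar = omega = 1).
    return 1.0 / (2.0 * math.sinh(beta / 2.0))

def q_ho_trotter(beta, P):
    # Finite-P closure of the discretized partition function: for
    # V(x) = x^2/2 the ring-polymer integral is Gaussian and equals
    # prod_k [4 sin^2(pi k / P) + (beta/P)^2]^(-1/2), the product
    # running over the eigenvalues of the circulant spring matrix.
    prod = 1.0
    for k in range(P):
        prod *= 4.0 * math.sin(math.pi * k / P) ** 2 + (beta / P) ** 2
    return 1.0 / math.sqrt(prod)
```

Checking at $\beta = 1$, `q_ho_trotter(1.0, 128)` agrees with `q_ho_exact(1.0)` to better than $10^{-4}$; the discretization error decays as $O(1/P^2)$.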
However, a third traditional approach is to approximate the integrals over $\int\limits_{-\infty}^{\infty} dx_1...dx_{P-1}$ *via* a variational principle[@feynman_hibbs] the result of which is then expressed as an exponential of an *effective potential* $\widetilde{W}(x)$: $$\begin{aligned} Q(N,V,\beta) \approx \widetilde{Q}(N,V,\beta) = \int\limits_{-\infty}^{\infty} dx\, \sqrt{\frac{m}{2\pi \hbar^2 \beta}} e^{-\beta \widetilde{W}(x)} \label{eq:Q}\end{aligned}$$ where $\widetilde{Q}(N,V,\beta)$ is the *canonical effective partition function*. At low temperature we hope to capture the quantum effects present in Eq. (\[eq:path\_integral\_partition\_function\]) (to what degree depends crucially on $\widetilde{W}$) and in the high-temperature limit $\widetilde{Q}$ is equivalent to the classical partition function. In its original formulation,[@feynman_hibbs] it can be shown that $\widetilde{W}(x)$ satisfies a variational principle if taken to be a Gaussian-smeared potential, $$\begin{aligned} \widetilde{W}(x) = \int\limits_{-\infty}^{\infty} dy\, \frac{1}{\sqrt{2\pi a^2}} e^{-\frac{(x-y)^2}{2a^2}} U(y) \label{eq:W}\end{aligned}$$ where in its first approximation the Gaussian width $a^2 = \beta \hbar^2/12m$ and the same fixed-width approximation is made here. Techniques that improve upon the fixed-width approximation have been developed previously.[@feynman_kleinert; @kleinert; @cowley] The Taylor series expansion of expression (\[eq:W\]) yields the familiar terms commonly used in molecular simulation. The canonical effective partition function has been of significant value in numerical statistical mechanics since it includes quantum fluctuations while preserving the readily understood mathematical structure of the classical partition function. In the simplest approximation of a fixed-Gaussian width of $\beta \hbar^2/12m$ the effective approach provides accuracy amenable to the semiclassical regime.
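To make the smearing in Eq. (\[eq:W\]) concrete, here is a small numerical sketch of ours (units $\hbar = m = 1$, with an arbitrary quartic example $U(y) = \tfrac{1}{2}ky^2 + \tfrac{1}{4}gy^4$): the Gaussian integral is evaluated by quadrature and compared against the closed form that follows from the Gaussian moments $\langle\xi^2\rangle = a^2$ and $\langle\xi^4\rangle = 3a^4$.

```python
import math

def smeared_potential(x, beta, k=1.0, g=2.0, npts=4001):
    # Gaussian smear of U(y) = k y^2/2 + g y^4/4, Eq. (W), with the
    # fixed width a^2 = beta hbar^2 / 12m (units hbar = m = 1),
    # evaluated by trapezoidal quadrature over +/- 8 standard deviations.
    a2 = beta / 12.0
    a = math.sqrt(a2)
    lo, hi = x - 8.0 * a, x + 8.0 * a
    h = (hi - lo) / (npts - 1)
    total = 0.0
    for i in range(npts):
        y = lo + i * h
        w = 0.5 if i in (0, npts - 1) else 1.0   # trapezoid end weights
        u = 0.5 * k * y * y + 0.25 * g * y ** 4
        total += w * math.exp(-(x - y) ** 2 / (2.0 * a2)) * u
    return total * h / math.sqrt(2.0 * math.pi * a2)

def smeared_potential_closed(x, beta, k=1.0, g=2.0):
    # Closed form from the Gaussian moments <xi^2> = a^2, <xi^4> = 3 a^4:
    # W = U(x) + (beta/24)(k + 3 g x^2) + (beta/24)^2 * 3 g.
    s = beta / 24.0
    return (0.5 * k * x * x + 0.25 * g * x ** 4
            + s * (k + 3.0 * g * x * x) + s * s * 3.0 * g)
```

The quadrature and the closed form agree to high precision for a polynomial potential, which is a useful check before moving to potentials whose smear must be done numerically.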
Curiously, while the path integral expression for the microcanonical partition function has been derived[@freeman; @lawson], to our knowledge the microcanonical effective partition function has not been reported in the literature. While the canonical ensemble is quite natural for the study of many physical systems, there are cases where the microcanonical ensemble is more convenient since the thermodynamic energy can be fixed. Our intention here is to develop the microcanonical effective partition function such that quantum fluctuations may be included in, for instance, $NVE$ Monte Carlo[@creutz; @ray] simulations. We derive the microcanonical effective partition function through application of generalized ensemble theory[@sack; @guggenheim; @tiller] which allows us to relate the constant energy shell ensembles *directly* to other ensembles in which thermal energy can flow between system and bath. Of immediate interest is the Laplace transform relationship between the canonical and microcanonical partition functions; this relation between ensembles has been long utilized in semiclassical theory, however to our knowledge it has not been employed in the context of effective potentials. We begin by demonstrating the following example: $$\begin{aligned} Q(N,V,\beta) = \int\limits_0^{\infty} dE\, e^{-\beta E} \Omega(N,V,E)\end{aligned}$$ since the energy spectrum may always be shifted such that the lower bound is zero. In solving for the microcanonical partition function, the inverse Laplace transform yields: $$\begin{aligned} \Omega(N,V,E) = \frac{1}{2\pi i} \oint d\beta\, e^{\beta E} Q(N,V,\beta) \nonumber \\ = \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{1}{2\pi i} \int\limits_{\gamma - i\infty}^{\gamma + i\infty} d\beta\, e^{\beta(E - H)}\end{aligned}$$ where $d\Gamma$ is the phase space differential form $(N! \, h)^{-1} dx\, dp$. 
Since $\beta = \sigma + i\tau$ and no singularity is present in the right-half of the complex plane, we may take the contour vertically through $\gamma = 0$. Since $\rm Re(\beta) = 0$ along the integration we may make the substitution $\beta = -i\tau$: $$\begin{aligned} \Omega(N,V,E) = \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{1}{2\pi} \int\limits_{-\infty}^{\infty} d\tau\, e^{i\tau(H - E)} \nonumber \\ = \int\limits_{-\infty}^{\infty} d\Gamma\, \delta(H - E) \label{eq:delta_function}\end{aligned}$$ which is the microcanonical partition function, as it should be. Another simple example is the quantum harmonic oscillator: $$\begin{aligned} \Omega_{HO} &=& \frac{1}{2\pi i} \oint d\beta\, e^{\beta E} Q_{HO} = \frac{1}{2\pi} \int\limits_{-\infty}^{\infty} d\tau\, e^{-i\tau E} \frac{e^{\frac{1}{2}i\tau \hbar \omega}}{1 - e^{i \tau \hbar \omega} } \nonumber \\ &=& \frac{1}{2\pi} \int\limits_{-\infty}^{\infty} d\tau\, e^{i\tau\left(\frac{1}{2} \hbar \omega - E\right)} \sum_n e^{i\tau \hbar \omega n} \nonumber \\ &=& \sum_n \delta \left[\hbar \omega \left( n + \frac{1}{2} \right) - E \right] \label{eq:nve_qho}\end{aligned}$$ where we note that in this case we have integrated over the quantum mechanical partition function, resulting in a discrete series over the eigenspectrum rather than a phase space integral. Along similar lines, we wish to construct the microcanonical effective partition function $\widetilde{\Omega}(N,V,E)$ from $\widetilde{Q}(N,V,\beta)$ through use of the same Laplace structure. Proceeding in this manner, $$\begin{aligned} \widetilde{\Omega}(N,V,E) = \frac{1}{2\pi i} \oint d\beta\, e^{\beta E} \widetilde{Q}(N,V,\beta) \nonumber \\ = \frac{1}{2\pi i} \oint d\beta\, e^{\beta E} \int\limits_{-\infty}^{\infty} dx\, \sqrt{\frac{m}{2\pi \hbar^2 \beta}} e^{-\beta \widetilde{W}} \nonumber \\\end{aligned}$$ Using Eq. 
(\[eq:W\]), the Gaussian smeared version of the anharmonic potential $U_{AHO} = \frac{1}{2}kx^2 + \frac{1}{4}gx^4$ is exactly integrable and yields $$\begin{aligned} \widetilde{W}_{AHO} = U_{AHO} + \frac{\beta \hbar^2}{24m}\left(k + 3gx^2\right) + {\left(\frac{\beta \hbar^2}{24m}\right)}^2 3g \\ = U_{AHO} + W_1(\beta) \label{eq:aho_smeared}\end{aligned}$$ for the effective potential. With the momentum integration having been undone, the inverse Laplace transform now becomes: $$\begin{aligned} \widetilde{\Omega}_{AHO}(E) = \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{1}{2\pi i} \oint d\beta\, e^{\beta \left[E - H - W_1\left(\beta\right)\right]} \nonumber \\ = \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{1}{2\pi i} \oint d\beta\, e^{\beta\left(E - H\right)} e^{-\beta^2 \left[ \frac{\hbar^2}{24m}\left( k + 3gx^2 \right) \right] } e^{-\beta^3 \left(\frac{\hbar^2}{24m}\right)^2 3g } \nonumber \\ = \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{1}{2\pi i} \oint d\beta\, e^{a\beta - b\beta^2 -c\beta^3}\end{aligned}$$ where the contour integral can be approximated by the method of steepest descent to yield $$\begin{aligned} = \frac{1}{\sqrt{2\pi}} \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{ e^{a\beta_0 -b\beta_0^2 -c\beta_0^3} }{\left( 2b + 6c\beta_0 \right)^{\frac{1}{2}} } \label{eq:sd_aho_nve_fh}\end{aligned}$$ where the saddle point, $\beta_0$, $$\begin{aligned} \beta_0 = \frac{2b - \sqrt{4b^2 + 12ac}}{6c} \label{eq:sad_point}\end{aligned}$$ Recalling that the following substitutions have been made, $$\begin{aligned} a = E - H \nonumber \\ b = \frac{\hbar^2}{24m} \left( k + 3gx^2 \right) \nonumber \\ c = \left(\frac{\hbar^2}{24m}\right)^2 3g \nonumber\end{aligned}$$ we arrive at the final expression for the microcanonical effective partition function for the anharmonic system $$\begin{aligned} \widetilde{\Omega}_{AHO}(E) = \int\limits_{-\infty}^{\infty} d\Gamma\, \frac{e^{-\frac{3\left(H - E\right)^2 }{4b} - \frac{c\left( H - E\right)^3}{8b^3}}}{\sqrt{2\pi\left( 
2b - \frac{6ac}{2b} \right)}} \label{eq:aho_nve_fh}\end{aligned}$$ ![The configurational distribution function, $\Omega(x)$, for $k=1$, $g=2$.[]{data-label="fig:omega_dist"}](dist_g_2.ps){width="3.3"} We may note some interesting features of the microcanonical effective partition function for this anharmonic system. The phase space distribution function is dominated by a Gaussian distribution in $H-E$, the Gaussian width being determined by the quantum mechanical factor $b$. In the classical limit of vanishing $b,c$ the Gaussian distribution narrows to approach the classical microcanonical distribution $\delta(H-E)$. Shown in Fig. \[fig:omega\_dist\] is the $k=1$ and $g=2$ distribution function $\Omega(x)$ (the momentum having been integrated) for several energy values corresponding to the low temperature regime. Numerical evaluation of Eq. (\[eq:aho\_nve\_fh\]) has been performed to obtain the entropy of a proton in an anharmonic well for $k=1$ and $g=4,40,200$. These values for the anharmonicity of the potential have been chosen since they are among those available for the calculated eigenspectrum of Ref. () and were also used by Feynman and Kleinert in Ref. () to illustrate the improvements possible by extending the original fixed-width Gaussian formalism. We calculate the cumulative entropy $\Sigma$ by integrating Eq. (\[eq:aho\_nve\_fh\]): $$\begin{aligned} \Sigma(E) = \int\limits_{0}^{E} dE^\prime\, \Omega(E^\prime) \label{eq:cumulative_sigma} \\ S = k \ln \Sigma(E) \label{eq:cumulative_entropy}\end{aligned}$$ where we note that, in the thermodynamic limit, the microcanonical entropy $k \ln \Omega(E)$, the level density entropy $k \ln \omega(E)$ and the cumulative entropy $k \ln \Sigma(E)$ are all equivalent to within an additive constant.[@huang] However, as was done in Ref.
, a comparison will be made with the canonical partition function for the exact quantum mechanical system, and the discrete sum over levels presented by the canonical partition function of $N=1$ anharmonic oscillators necessitates comparison with the cumulative entropy. In Fig. \[fig:entropy\_figure\], the Feynman-Hibbs microcanonical entropy, denoted “NVE FH”, is compared with the exact quantum mechanical entropy found from the canonical ensemble, $S = k \ln Q + E/T$. The quantum mechanical $Q$ has been calculated by direct summation of the Boltzmann factors for the first 9 levels[@montroll] of the eigenspectrum. Also shown for comparison is the canonical entropy found *via* the nearly exact method of Feynman and Kleinert[@feynman_kleinert] (in this method the width of the Gaussian smear is not held fixed) denoted “NVT FK”, the standard canonical Feynman-Hibbs method of Ref. () (where the Gaussian width is held fixed, as it is also in the current work) denoted “NVT FH”, and the classical canonical entropy. With respect to the accuracy of the method, we note several features from Fig. \[fig:entropy\_figure\]. First, we point out that it is well known that the higher-order FK method will reproduce the exact quantum mechanical result with near perfect accuracy even at very low temperature and for strong anharmonicity – especially where the more standard NVT FH method will fail.[@feynman_kleinert] In contrast, the method presented here can be seen as the microcanonical analog of the less accurate NVT FH method. Interestingly, however, this method agrees quite well with both the exact quantum and NVT FK results at low temperature and even for the strong anharmonicity value of $g=200$ (where the commonly used NVT FH method fails) and yet yields poor agreement at higher temperatures and also for the relatively harmonic $g=4$. This appears to be due to the fact that the integral expressed in Eq. 
(\[eq:aho\_nve\_fh\]) is only well sampled when the energy distribution is broadened (*i.e.* under non-classical conditions) since it becomes increasingly difficult to sample the distribution approaching a delta function with high accuracy – as the distribution narrows to the Dirac delta function the saddle point approximation breaks down. Expression (\[eq:aho\_nve\_fh\]) may also be derived for various other intermolecular potentials, and can be evaluated by multidimensional phase space integration techniques such as Hybrid Monte Carlo.[@duane; @mehlig] The Hybrid Monte Carlo (HMC) technique makes use of an $NVE$ molecular dynamics integrator, with a large non-energy conserving timestep, to sample the phase space integral *via* a Metropolis accept/reject scheme based upon the full Hamiltonian. Unlike a traditional canonical Monte Carlo scheme where only the configurational part of the integral is sampled, in HMC the momentum integration is performed explicitly by the algorithm through a randomized resampling of the momenta from an equilibrium distribution – a useful aspect for sampling Eq. (\[eq:aho\_nve\_fh\]) given that the momentum dependence in this equation cannot be analytically integrated out. Such methodology may prove practical for sampling the microcanonical ensemble for atomic and molecular systems, where the HMC algorithm would proceed with a modified Metropolis function based upon Eq. (\[eq:aho\_nve\_fh\]): $$\begin{aligned} \frac{Pr(i\rightarrow j)}{Pr(j\rightarrow i)} = e^{-\left( \epsilon_j - \epsilon_i \right)} \frac{\zeta_i}{\zeta_j}\label{eq:hmc_metropolis}\end{aligned}$$ where $$\begin{aligned} \epsilon_i = \frac{3}{4b_i} \left(H_i - E \right)^2 + \frac{c}{8b_i^3} \left(H_i - E \right)^3 \nonumber \\ \zeta_i = \sqrt{2\pi\left( 2b_i - \frac{6a_i c}{2b_i} \right) } \nonumber\end{aligned}$$ The authors acknowledge funding from the U.S. Department of Energy, Basic Energy Sciences (Grant No. DE0GG02-07ER46470). 
Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.
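As an implementation note (our own sketch, not part of the original text): the modified Metropolis function of Eq. (\[eq:hmc\_metropolis\]) is straightforward to code. Below, $\hbar^2/24m$ is replaced by an illustrative scale $s=1$ and $k$, $g$ take arbitrary sample values, so the numbers are not physical.

```python
import math

def state_terms(H, x, E, k=1.0, g=4.0, s=1.0):
    # epsilon_i and zeta_i as defined below Eq. (hmc_metropolis), with
    # a = E - H, b = s(k + 3 g x^2), c = 3 g s^2, where s stands in for
    # hbar^2/(24 m) (set to 1 here purely for illustration).
    a = E - H
    b = s * (k + 3.0 * g * x * x)
    c = 3.0 * g * s * s
    eps = 3.0 * (H - E) ** 2 / (4.0 * b) + c * (H - E) ** 3 / (8.0 * b ** 3)
    zeta = math.sqrt(2.0 * math.pi * (2.0 * b - 6.0 * a * c / (2.0 * b)))
    return eps, zeta

def metropolis_ratio(state_i, state_j, E):
    # Acceptance ratio Pr(i->j)/Pr(j->i) of Eq. (hmc_metropolis);
    # each state is a tuple (H, x) of Hamiltonian value and position.
    eps_i, zeta_i = state_terms(*state_i, E)
    eps_j, zeta_j = state_terms(*state_j, E)
    return math.exp(-(eps_j - eps_i)) * zeta_i / zeta_j
```

By construction the ratio of a state with itself is unity, and the forward and reverse ratios multiply to one, which is the detailed-balance structure an HMC sampler relies on.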
--- author: - Weikun Zhen and Sebastian Scherer bibliography: - 'mybib.bib' title: A Unified 3D Mapping Framework using a 3D or 2D LiDAR --- Introduction ============ ![*Left*: A customized DJI M100 drone carrying a rotating Hokuyo laser scanner. *Right*: A customized DJI M600 drone equipped with a rotating VLP-16.](figures/m100.png "fig:"){height="1.3in"} ![*Left*: A customized DJI M100 drone carrying a rotating Hokuyo laser scanner. *Right*: A customized DJI M600 drone equipped with a rotating VLP-16.](figures/m600.png "fig:"){height="1.3in"} \[fig:robot\] As one of the most fundamental problems of autonomous robots, LiDAR-based SLAM has been an active research area for many years. Recent advancements in LiDAR sensing and SLAM techniques have led to the fast growth of robot applications in many industrial fields such as autonomous inspection of civil engineering facilities. Usually, specialized systems are developed according to the particular requirements in different situations. For example, different Unmanned Aerial Vehicles (UAVs) (see Figure \[fig:robot\]) are used to inspect tunnels in our project depending on the scale of the operating environment. The robots are equipped with different LiDARs and typically different SLAM algorithms would be required. However, maintaining multiple algorithms needs significant effort which is especially undesirable in the field. Consequently, in this work we propose a *unified* mapping framework that handles multiple types of LiDARs including (1) a fixed 3D LiDAR, (2) a rotating 3D/2D LiDAR. The problem can be defined as follows: given a sequence of laser scans collected from a LiDAR sensor of any type, the algorithm will compute the motion of the sensor and build a 3D map in the meantime. As stated before, our work is based on the Cartographer SLAM [@carto] which contains a foreground localization component and a background SPG refinement component.
Originally designed to work with stationary 3D LiDARs, it doesn’t generalize to rotating 2D LiDARs since directly accumulating 2D scans using the IMU in the localization component will introduce distortion to the map. To address this problem, we apply a different localization method that has two major advantages. First, every single scan is matched to the map to compute a more accurate pose than pure IMU-based methods. Second, the pose is computed regardless of the LiDAR types, allowing the framework to be generalizable across different platforms. With a unified framework, identical parameter-tuning strategies can be shared between systems, which significantly simplifies the set-up procedure of multiple platforms during field tests. Additionally, we show that only a few parameters, such as the local map resolution and the number of accumulated scans, need to be adjusted when switching platforms. More details will be discussed in the experiments. The rest of this paper is structured as follows. Section \[sec:relatedwork\] summarizes the related work on the LiDAR-based SLAM problem. The proposed method is described in detail in Section \[sec:approach\]. Experiments and results are presented in Section \[sec:experiments\]. Finally, conclusions and insights are discussed in Section \[sec:conclusion\]. Related Work ============ \[sec:relatedwork\] There has been a vast amount of research on LiDAR-based SLAM over the past decades. Classic probabilistic approaches such as Kalman filters [@castellanos2012mobile] [@montemerlo2002fastslam] and particle filters [@dellaert1999monte] [@doucet2000rao] [@gmapping] infer the distribution of the robot state and the map based on measurements which are characterized by sensor noise models. [@thrun2002robotic] provides a comprehensive review of the techniques. These works establish the theoretical fundamentals of the SLAM problem and have achieved great success in robustness, accuracy and efficiency.
However, most of these approaches are limited to using fixed 2D LiDARs to solve the planar SLAM problem. Although in principle these algorithms are generalizable to 3D, the computational cost could become intractable as the dimension increases. In 3D situations, 2D LiDARs may be mounted on a rotating motor [@bosse2009continuous] [@zlot2014efficient] [@zhang2014loam] or a spring [@bosse2012zebedee] to build 3D maps. The additional degree of freedom significantly enlarges the sensor FOV, which, on the other hand, makes sequential scan matching impossible due to a lack of overlap. To account for this issue, a smooth continuous trajectory [@anderson2013towards] [@tong2013gaussian] may be used to represent robot motion instead of a set of pose nodes. However, the smooth motion assumption does not always hold true. More recently, as 3D ranging technology becomes widely used, methods to achieve real-time, large-scale and low-drift SLAM have emerged using accurate 3D LiDARs. [@martin2014two] developed a Differential Evolution-based scan matching algorithm that is shown to be of high accuracy in three dimensional spaces and contains a loop-closure algorithm which relies on surface features and numerical features to encode properties of laser scans. [@zhang2014loam] extract edge and planar features from laser scans and then adopt an ICP method [@chen1992object] for feature registration. An extension is presented in their later work [@zhang2015visual] where visual data is fused with range data to further reduce drifts. Although they do not compute the loop-closure, the generated map is of high accuracy even after travelling for several kilometers. [@carto] introduced the Cartographer SLAM where a local odometry relying on scan matching estimates the poses and meanwhile an SPG is updated and optimized regularly to refine pose estimates and generate consistent maps. 
Although existing methods vary in specific techniques, most share a similar pipeline, which estimates the pose using ICP or its variants as the front-end while solving an SPG or trajectory optimization problem as the back-end. Approach {#sec:approach} ======== Localization ------------ ![Pipeline of the localization algorithm that takes the sensor readings and a distance map as input and outputs the pose estimate. ](figures/local_pipeline.PNG){width="65.00000%"} \[fig:local\] The localization module (shown in Figure \[fig:local\]) combines an Error State Kalman Filter (ESKF) with a Gaussian Particle Filter (GPF) to estimate robot states inside a prior map. The GPF, originally proposed by [@bry2012state], converts raw laser scans to a pose measurement, which frees the ESKF from handling 2D or 3D range data directly. This is a key factor that ensures compatibility. More specifically, the ESKF (illustrated in Figure \[fig:eskf\]) numerically integrates IMU measurements to predict robot states and uses a pseudo pose measurement to update the prediction. In the GPF illustrated in Figure \[fig:gpf\], a set of particles (pose hypotheses) are sampled according to the prediction, then weighted, and finally averaged to find the posterior belief. By subtracting the prediction from the posterior belief, the pseudo pose measurement is recovered and used to update the ESKF. Finally, we refer the readers to our previous work [@eskf] for more details. ![An illustration of the ESKF in 2D. The circle denotes the robot with a bar indicating its orientation. The dashed ellipse represents the position uncertainty. Here the orientation uncertainty is omitted for simplicity.](figures/eskf.png){width="70.00000%"} \[fig:eskf\] ![An illustration of the GPF in 2D. Circles and ellipses share the same meaning as in Figure \[fig:eskf\].
Here, a darker color means a higher weight, namely a higher probability of a hypothesis being true.](figures/gpf.png){width="75.00000%"} \[fig:gpf\] Submaps and Distance Maps ------------------------- ![The pipeline modified based on Cartographer to manage submaps and distance maps. Modifications are highlighted in green. ](figures/submap_pipeline.PNG){width="70.00000%"} \[fig:graph\] A local occupancy grid map is defined as a submap by Cartographer. Since a different localization method is used, we need to adjust the submap management scheme so that the submaps can be accessed by the localization module. As shown in Figure \[fig:graph\], there exist two stages of scan accumulation. In the first stage, $N$ scans are accumulated to form a 3D scan, then matched and inserted into the active submaps. The active submaps include a matching submap (blue) and a growing submap (yellow). The formed 3D scan is matched and inserted into both submaps. In the second stage, once $M$ 3D scans have been inserted into the matching submap, the growing submap is switched to be the new matching submap and the old matching submap is erased. Meanwhile, a new submap (orange) is created and starts growing. During the two stages, whenever a 3D scan is formed or a new submap is created, new pose nodes are added to the SPG. The adjustments are done by adding an octomap [@hornung2013octomap] beside the original grid map. The formed 3D scan is inserted into the octomap and the corresponding distance map is updated immediately. The octomap library provides an efficient method to detect changes so that the distance map can be computed efficiently. Additionally, updating the octomap and the distance map uses multi-threading techniques to avoid the time delay caused by the distance map conversion. Experiments =========== \[sec:experiments\] In this section, experiments conducted in simulation and in the real world are discussed.
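The two-stage submap management described in the previous subsection can be sketched in a few lines (a simplified sketch of ours — real submaps are probability grids and each scan is matched before insertion, both elided here):

```python
class SubmapManager:
    # Simplified sketch of the two-stage accumulation of Figure [fig:graph]:
    # every N raw scans are accumulated into one 3D scan, which is inserted
    # into both active submaps; once M 3D scans have entered the matching
    # submap, the growing submap takes over and a fresh one is started.
    def __init__(self, n_scans, m_scans3d):
        self.N, self.M = n_scans, m_scans3d
        self.scan_buf = []                      # stage one: raw scans
        self.scans3d_in_matching = 0
        self.matching, self.growing = [], []    # stand-ins for grid maps
        self.pose_nodes = 0                     # nodes added to the SPG

    def add_scan(self, scan):
        self.scan_buf.append(scan)
        if len(self.scan_buf) < self.N:
            return
        scan3d = list(self.scan_buf)            # a "formed" 3D scan
        self.scan_buf.clear()
        self.matching.append(scan3d)            # insert into both submaps
        self.growing.append(scan3d)
        self.pose_nodes += 1                    # node for the new 3D scan
        self.scans3d_in_matching += 1
        if self.scans3d_in_matching == self.M:  # stage two: switch submaps
            self.matching, self.growing = self.growing, []
            self.scans3d_in_matching = 0
            self.pose_nodes += 1                # node for the new submap
```

With `n_scans=2` and `m_scans3d=3`, feeding twelve raw scans produces six 3D scans and two submap switches, i.e. eight SPG pose nodes in this toy accounting.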
For simulation, we focus on the rotating 2D LiDAR payload which is believed to be the most challenging case compared with other types of configurations. For the real-world tests, different configurations including stationary 3D LiDAR, rotating 2D/3D LiDAR are used. Simulated Tunnel Test --------------------- The main task of the simulated robot is to fly through a train car tunnel (see Figure \[fig:tunnel\]) and map its interior. The fly-through takes about 15min since we keep a relatively low velocity ($1.13$m/s at maximum) and frequently dwell so that enough laser points are collected to build submaps. Moving too fast will result in unstable localization since the submap is not well observed. This issue can be addressed using a 3D LiDAR which quickly scans many points from the environment. ![The tunnel is of size 108m$\times$5.8m$\times$7m ($l\times w\times h$) and is built based on the DOE-PUREX nuclear tunnel wherein 8 train cars are loaded with radioactive waste. The simulated robot shares an identical sensing setup with the DJI M100. The rotating Hokuyo LiDAR is inserted from Gazebo and IMU measurements are generated by adding white noise and biases to the ground truth.](figures/tunnel_sim.PNG){width="60.00000%"} \[fig:tunnel\] \[t\] ![*Up:* Built tunnel maps along the flight-through test. *Down:* Comparison of the ground truth (blue) and the estimated trajectory (red). ](figures/tunnel_map.PNG "fig:"){width="90.00000%"} ![*Up:* Built tunnel maps along the flight-through test. *Down:* Comparison of the ground truth (blue) and the estimated trajectory (red). ](figures/trajectory.PNG "fig:"){width="99.00000%"} \[fig:tunnel\_map\] \[t\] ![A plot of pose estimation errors. The rotation error is computed by $e_{\text{rotation}} = \log\left(q_{\text{estimated}}^{-1}\cdot q_{\text{groundtruth}}\right)$, where $q\in S^3$ is a unit quaternion.
The $\log(\cdot)$ function maps a unit quaternion to an angle-axis vector in $so(3)$.](figures/error.PNG "fig:"){width="90.00000%"} \[fig:error\] The built map is visualized in Figure \[fig:tunnel\_map\] (voxel filtered with resolution 0.1m). In simulation, we are able to compare the estimated poses with the ground truth. From Figure \[fig:error\], the maximum position error in three axes is observed to be 2.0m, 0.37m and 1.11m near the end of the flight. In particular, the drift along the $x$-axis is the largest, which is because the number of points on the train cars available to estimate $x$ is smaller than that on the side walls or ceiling available to estimate $y$ and $z$. In other words, the $x$-axis is under-constrained. The total traversed distance is $165$m and the translational drift rate is $1.34\%$. The rotation estimation, in contrast, is more consistent, and the averaged errors in roll, pitch and yaw are $0.14^{\circ},\; 0.15^{\circ},\;0.24^{\circ}$. There are error peaks in yaw due to occasional low-quality scan matching, but they are quickly recovered. The rotational drift rate is $0.003^{\circ}$/m. Real World Test --------------- ![Three tests are conducted in indoor and outdoor environments. *Left*: Test with a fixed 3D LiDAR in a hallway loop. *Middle*: Test with a rotating 3D LiDAR around the CMU patio. *Right*: Test with a rotating 2D LiDAR in a corridor loop.[]{data-label="fig:results"}](figures/results.PNG){width="99.00000%"} Real-world experiments are carried out on multiple platforms: (1) fixed VLP-16 (10Hz, range 100m) with an i7 (2.5GHz) computer, (2) rotating VLP-16, (3) rotating Hokuyo (40Hz, range 30m) with a 2.32GHz ARM processor. The first experiment is conducted inside a corridor loop (see Figure \[fig:results\] left). In this test, the VLP-16 is put horizontally and the rotating speed is set to be zero so that the LiDAR is stationary. We found that although the VLP-16 measures 3D structures, its $20^{\circ}$ FOV is still not enough to reliably estimate height.
The main reason is that inside the narrow corridor, most laser points come from side walls instead of the ground and ceiling. As a result, a larger drift in height is observed when the robot revisits the origin and more time is needed to detect the loop-closure. The second test is carried out around the patio on the CMU campus. Again the VLP-16 is used and the motor rotating speed is set to be 30 rpm. Since this is a larger area, the distance map used for localization has a coarser resolution of 0.3m and is constrained within a 40m$\times$40m$\times$40m bounding box. In the last test, the robot maps a narrow hallway with 1.1m width at minimum. To ensure enough laser points are collected, the robot is manually carried and moved slowly ($\approx 0.5$m/s). This time only a small drift in height is observed before closing the loop. This is because by rotating the LiDAR, the robot obtains a wider FOV, which could significantly improve the mapping performance. It is important to point out that only a few parameters (listed in Table \[tab:param\]) are changed in the above 3 cases. For localization related parameters, $\sigma_a$ and $\sigma_g$ characterize the noise level of the IMU. The distance map resolution is chosen according to the scale of the environment. The maximum range of the distance map sets a limit on the distance the robot can see. For mapping related parameters, $M$ and $N$ are as described in Figure \[fig:graph\] and $I$ governs how often the background SPG gets optimized.
\[t\] \[tab:param\]

| Localization related Params | Mapping related Params |
|-----------------------------|------------------------|
| accelerometer noise $\sigma_a$ | \# of scans per accumulation $N$ |
| gyroscope noise $\sigma_g$ | \# of scans per submap $M$ |
| distance map resolution | \# of scans per optimization $I$ |
| max range of distance map | |

Conclusions =========== \[sec:conclusion\] In this paper, the proposed algorithm is shown to allow different LiDAR configurations to be handled in a unified framework with only a few parameters needing to be tuned, which simplifies the development and application process. Some key insights obtained from the experiments are: - The FOV of a LiDAR matters. A fixed 3D LiDAR is simple to set up but has quite a limited vertical FOV, which results in unreliable height estimation. In our experiments, the LiDAR has to be pitched down to capture more ground points. A rotating LiDAR has a significantly wider FOV and is observed to be more robust to different environments. However, the rotating motor has to be carefully designed to ensure continuous data streaming and accurate synchronization. For example, an expensive slip-ring mechanism is needed to achieve continuous rotation. - Moving speed is critical in the case of a rotating 2D LiDAR. In our tests, a low speed is necessary so that the laser scanner can accumulate enough points to update a submap. Moving too fast may lead to unstable localization. In the case of a 3D LiDAR, a low moving speed is not a crucial requirement. - The choice of submap resolution will affect memory usage, computational complexity and mapping accuracy. From our experience, a low-resolution submap has low memory usage and is faster to query data from. However, that will sacrifice the map accuracy. On the other hand, a higher resolution consumes more memory but doesn’t necessarily improve the map accuracy. Therefore, the resolution has to be chosen carefully through trial and error.
Acknowledgments
===============

The authors are grateful to Sam Zeng, Yunfeng Ai and Xiangrui Tian for helping with the UAV development and the mapping experiments, and to Matthew Hanczor and Alexander Baikovitz for building the DOE-PUREX tunnel model. This work is supported by the Department of Energy under award number DE-EM0004478.
--- abstract: 'We present the results of an aperture masking interferometry survey for substellar companions around [67]{} members of the young ($\sim$8–200Myr), nearby ($\sim$5–86pc) AB Doradus, $\beta$ Pictoris, Hercules-Lyra, TW Hya, and Tucana-Horologium stellar associations. Observations were made at near infrared wavelengths between 1.2–3.8$\mu$m using the adaptive optics facilities of the Keck II, VLT UT4, and Palomar Hale Telescopes. Typical contrast ratios of $\sim$100–200 were achieved at angular separations between $\sim$40–320mas, with our survey being 100% complete for companion masses down to $\sim$0.25[$M_\odot$]{} across this range. We report the discovery of a $0.52 \pm 0.09$[$M_\odot$]{} companion to HIP14807, as well as the detections and orbits of previously known stellar companions to HD16760, HD113449, and HD160934. We show that the companion to HD16760 is in a face-on orbit, resulting in an upward revision of its mass from $M_2 \sin i \sim 14$[$M_{\rm J}$]{} to $M_2 = 0.28 \pm 0.04$[$M_\odot$]{}. No substellar companions were detected around any of our sample members, despite our ability to detect companions with masses below 80[$M_{\rm J}$]{} for 50 of our targets: of these, our sensitivity extended down to 40[$M_{\rm J}$]{} around 30 targets, with a subset of 22 subject to the still more stringent limit of 20[$M_{\rm J}$]{}. A statistical analysis of our non-detection of substellar companions allows us to place constraints on their frequency around $\sim$0.2–1.5[$M_\odot$]{} stars. In particular, considering companion mass distributions that have been proposed in the literature, we obtain an upper limit estimate of $\sim$9–11% for the frequency of 20–80$M_{\rm J}$ companions between 3–30AU at 95% confidence, assuming that their semimajor axes are distributed according to $d\mathcal{N}/da \propto a^{-1}$ in this range.' author: - 'Evans, T.M., Ireland, M.J., Kraus, A.L., Martinache, F., Stewart, P., Tuthill, P.G., Lacour, S.
, Carpenter, J.M. and Hillenbrand, L.A.' title: 'Mapping the Shores of the Brown Dwarf Desert III: Young Moving Groups' --- Introduction ============ In the past few years, direct imaging surveys have begun to build up a picture of the mass and semimajor axis distributions of substellar companions at separations beyond $\sim$20–30AU. Meanwhile, statistical analyses of radial velocity results have tended to focus on objects with masses below $\sim$10$M_{\rm J}$ out to separations of $\sim$3AU [@2008PASP..120..531C; @2010Sci...330..653H]. However, given the observational biases of radial velocity and direct imaging surveys, the separation range of $\sim$3–30AU has remained relatively unexplored. Aperture masking interferometry is a direct detection technique that is well suited for detecting substellar companions with masses of $\sim$10$M_{\rm J}$ and semimajor axes within $\sim$30AU around young, nearby stars. For instance, it has been used to conduct surveys for substellar companions around members of the Upper Scorpius [@2008ApJ...679..762K] and Taurus-Auriga [@2011ApJ...731....8K] associations, as well as to measure the dynamical mass of the brown dwarf companion GJ 802b [@2008ApJ...678..463I], show that CoKu Tau/4 is a binary system rather than a transitional disk [@2008ApJ...678L..59I], and place limits on possible companions existing within 10AU of HR8799 [@2011ApJ...730L..21H]. Recently, the technique has also produced the first direct detection of a young exoplanet still undergoing formation within the transitional disk of LkCa15 (Kraus & Ireland, submitted) and a similar detection of an object within the gap of the T Cha disk. This paper presents the results of an aperture masking survey of [67]{} members of the AB Doradus (AB Dor), $\beta$ Pictoris ($\beta$ Pic), Hercules-Lyra (Her-Lyr), Tucana-Horologium (Tuc-Hor), and TW Hya (TWA) moving groups.
At least 49 of our targets have been observed previously as part of deep imaging surveys, but these observations have typically been sensitive to different orbital separations than those that are probed here. We chose our targets based on their youth (8–200Myr) and proximity (5–86pc). The former ensures that any substellar companions are still glowing relatively brightly at infrared wavelengths following their recent formation, while the latter allows smaller absolute separations to be explored for a given angular separation. The paper is organized as follows. In Section \[sec:am\], we provide a brief overview of the aperture masking technique. In Section \[sec:sample\], we describe our survey sample. In Section \[sec:obsdatared\] we summarize the observations that were made and how the data were reduced. In Section \[sec:binaryfitting\], we explain how we searched for companions to the target stars in the reduced data and how we derived the survey detection limits. In Section \[sec:results\] we report our results, including the detections and orbits for stellar companions around HIP14807, HD16760, HD113449, and HD160934. However, no substellar companions were detected, and in Section \[sec:analysis\] we present a statistical analysis of this null result before concluding in Section \[sec:conclusion\]. Aperture Masking {#sec:am} ================ The aperture masking technique works by placing an opaque, perforated mask at or near the pupil plane of a telescope (@fizeau1868; @1891Natur..45..160M; more recently, see @2000PASP..112..555T [@2006SPIE.6272E.103T; @2010SPIE.7735E..56T]). This converts the single aperture into a multi-element interferometer. Each pair of holes in the mask acts as an interferometric baseline, resulting in an interferogram being projected onto the image plane. The complex visibility $V$ [@michelson1890] of the source brightness distribution $S$ is sampled by taking the 2D Fourier transform of the measured interferogram $I$. 
This follows from the Van Cittert-Zernike Theorem, which states that the normalized complex visibility is equal to the Fourier transform of the brightness distribution: $$\begin{aligned} V &=& \frac{\tilde{S}}{S_0} \label{eq:vczt}\end{aligned}$$ where the tilde denotes the Fourier transform and $S_0$ is the total source flux. Since the detected image is the convolution of the instrumental point spread function (PSF) and the source brightness distribution, this leads to: $$\begin{aligned} V &=& \frac{\tilde{I}}{S_0\,\tilde{P}} \label{eq:vis}\end{aligned}$$ where $\tilde{P}$ denotes the Fourier transform of the PSF. In practice, the PSF is measured by observing an unresolved calibrator star, i.e. a point source, which has unit complex visibility $V=1$. In this study, we used non-redundant aperture masks, with each baseline pair corresponding to a unique point on the spatial frequency plane. We used masks with 7, 9, and 18 holes, giving 21, 36, and 153 independent baselines, respectively. Hole diameters and transmission fractions are provided in Table \[table:masks\]. The subaperture configurations on the masks were designed to provide a uniform and isotropic sampling of the complex visibility function, with the specific mask chosen to observe a given target depending on the target’s brightness and the expected sources of systematic error. For example, although the 18 hole masks had slightly longer baselines than the 7 or 9 hole masks, they had lower total throughput with a broader PSF. This meant that they could only be used with narrow band filters, which restricted their use to brighter targets. To identify faint companions around our targets, we used a quantity derived from the complex visibility known as the closure phase $\Theta$ [@1958MNRAS.118..276J; @1986Natur.320..595B]. The closure phase is obtained by adding the complex visibility phases around a closure triangle of subapertures. 
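As a concrete check of Eq. \[eq:vis\], the short sketch below builds a synthetic interferogram as the convolution of a toy one-baseline fringe "PSF" with a binary source, then recovers the source visibility by dividing the image transform by the calibrator (PSF) transform. The grid size, baseline index, separation, and contrast are all illustrative values, not survey parameters.

```python
import numpy as np

# Numerical illustration of Eq. (vis): V = FT(I) / (S0 * FT(P)), sampled at a
# mask baseline. In the real pipeline FT(P) comes from a calibrator star; here
# a toy fringe pattern stands in for the PSF.
n, k, d, C = 128, 8, 10, 5.0             # grid size, baseline freq., separation, contrast
x = np.arange(n)
psf = np.tile(1.0 + np.cos(2 * np.pi * k * x / n), (n, 1))   # one-baseline fringe
src = np.zeros((n, n))
src[0, 0], src[0, d] = 1.0, 1.0 / C      # primary + companion (fluxes f_p, f_c)

# detected image I = P * S (circular convolution via the Fourier domain)
img = np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(src)).real

S0 = src.sum()                           # total source flux
V = np.fft.fft2(img)[0, k] / (S0 * np.fft.fft2(psf)[0, k])
V_model = (1.0 + np.exp(-2j * np.pi * k * d / n) / C) / (1.0 + 1.0 / C)
assert np.allclose(V, V_model)           # calibrated visibility matches the model
```

Dividing out the PSF transform removes the instrumental response, leaving the normalized source visibility of Eq. \[eq:vczt\].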
Explicitly, if we denote the measured complex visibility phase between the $i$th and $j$th subapertures as $\varphi_{ij}$, the intrinsic complex visibility phase as $\phi_{ij}$, and a phase error due to atmospheric and instrumental effects across the $i$th aperture as $\eta_{i}$, then we have: $$\begin{aligned} \varphi_{ij} &=& \phi_{ij} + \eta_i - \eta_j \nonumber \\ \varphi_{jk} &=& \phi_{jk}+\eta_j - \eta_k \nonumber \\ \varphi_{ki} &=& \phi_{ki}+\eta_k - \eta_i \end{aligned}$$ Importantly, the diameters of the mask holes are chosen to ensure that the wavefront phase variations across each subaperture are approximately constant so that they can be neglected. Combining aperture masking with adaptive optics allows subaperture diameters that are larger than the atmospheric Fried parameter and exposure times that are longer than the atmospheric coherence time to be used, providing a greater throughput of photons. It follows that the $\eta_i$ terms cancel out when we take the closure phase sum: $$\begin{aligned} \Theta_{ijk} &=& \phi_{ij}+\phi_{jk}+\phi_{ki} \label{eq:cp}\end{aligned}$$ where $\Theta_{ijk}$ is the closure phase of the triangle $ijk$. The independence of the closure phase quantity from major sources of systematic error allows us to achieve the full interferometric resolution according to the Michelson criterion, which is equal to $\lambda/2B$, where $\lambda$ is the observing wavelength and $B$ is the longest baseline on our mask. This is the smallest angular separation for which two point sources would be fully resolved. Given that the longest baselines of the masks used in this study span nearly the entire telescope aperture, this corresponds to angular scales of roughly half the single-aperture diffraction limit.
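The cancellation above is easy to verify numerically: each per-aperture piston error $\eta_i$ enters two baselines of the triangle with opposite signs and drops out of the sum. In this sketch the intrinsic phases and the piston error draws are arbitrary illustrative values.

```python
import numpy as np

# Sketch of the closure-phase cancellation: varphi_ab = phi_ab + eta_a - eta_b,
# so the eta terms cancel around a closed triangle of sub-apertures.
rng = np.random.default_rng(42)
phi = {("i", "j"): 0.30, ("j", "k"): -0.12, ("k", "i"): 0.05}  # intrinsic (rad)
eta = {a: rng.normal(scale=2.0) for a in "ijk"}                # piston errors (rad)

def measured_phase(a, b):
    """varphi_ab = phi_ab + eta_a - eta_b."""
    return phi[(a, b)] + eta[a] - eta[b]

closure = (measured_phase("i", "j") + measured_phase("j", "k")
           + measured_phase("k", "i"))
intrinsic = phi[("i", "j")] + phi[("j", "k")] + phi[("k", "i")]
assert np.isclose(closure, intrinsic)   # the eta terms cancel exactly
```

However large the piston errors, the closure sum recovers the intrinsic quantity $\phi_{ij}+\phi_{jk}+\phi_{ki}$ of Eq. \[eq:cp\].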
Survey Sample {#sec:sample} ============= In 2007, our group initiated a search for close, faint companions around young, nearby stars using the aperture masking facilities installed on the 5.1m Hale telescope at Palomar Observatory in California. In subsequent years (2007–2011) the survey was extended and made use of similar facilities installed on the 10m Keck II telescope at Keck Observatory in Hawaii and the 8.2m VLT UT4 telescope at the VLT Observatory in Chile. Our final survey sample consisted of [67]{} proposed members of the AB Dor [@2004ApJ...613L..65Z], $\beta$ Pic [@2001ApJ...562L..87Z], Her-Lyr [@2004AN....325....3F], Tuc-Hor [@2001ApJ...559..388Z], and TWA [@1997Sci...277...67K] moving groups. A concise summary of the sample is provided in Table \[table:mg\], while the full list is given in Table \[table:sample\]. Figure \[fig:sptmasses\] shows the sample members binned according to spectral type and mass. In selecting our targets, we noted that many of the moving group members have already been identified as binary systems. The presence of a binary companion within $\sim$1″ of a target star reduces the ability of aperture masking to detect additional companions, because the interferograms will overlap. Also, similar-brightness companions at separations of $\sim$1.5–3″ can prevent the adaptive optics system from achieving a stable lock on the target. For these reasons, we chose not to include any targets in our sample that were known to be affected by such issues. We also emphasize the difficulty of assigning moving group membership to individual stars. Consequently, it is possible that not all objects in our sample are necessarily young. In particular, the moving group memberships of nine of our targets (HD89744, HD92945, GJ466, EKDra, HIP30030, TWA-21, TWA-6, TWA-14, TWA-23) could either not be confirmed or were ruled to be unlikely by [@2008hsf2.book..757T] using a dynamical convergence analysis.
Furthermore, the existence of Her-Lyr as a genuine moving group is not yet as well-established as the others. To investigate how sensitive our statistical analysis presented in Section \[sec:mc\] is to the uncertain membership of these targets, we repeated the calculations separately with them included and then removed from the sample. Observations and Data Reduction {#sec:obsdatared} =============================== We observed our program objects over the course of [twelve]{} observing runs using the facility adaptive optics imagers at Palomar (PHARO), Keck (NIRC2), and VLT (CONICA) between April 2007 and April 2011. Each camera has aperture masks installed at (Palomar, VLT) or near (Keck) the pupil stop wheels. The central wavelengths and bandpass widths for each filter used are listed in Table \[table:filters\] and details of our observations are summarized in Table \[table:observations\]. Observing conditions varied widely, but we attempted to match the observations to the appropriate conditions. Most of our brighter targets were observed through clouds or marginal seeing, as they were the only ones on which we could lock the AO system under such conditions, while our fainter targets were typically observed under better conditions. Our observing strategy has been described previously in [@2008ApJ...679..762K]. Each observation consisted of 1–3 target-calibrator pairs, usually with $\sim$10–20 frames per block. We tried to choose calibrators with optical and near-infrared brightnesses that were similar to those of the target, rather than calibrators that were necessarily brighter. This was done due to concerns about the magnitude-dependence of non-common path errors in the adaptive optics system. For targets of brightness $R \lesssim 7$, calibrators were chosen from the stable radial velocity stars of [@2002ApJS..141..503N].
For fainter stars, we could not explicitly choose calibrators that had been vetted for close binaries, so we simply chose nearby 2MASS sources with similar colors and brightnesses. In all cases, we tried to select calibrators that appeared to be single and unblended in the 2MASS images, as well as close to the target on the sky ($\lesssim$10deg for the Nidever et al. sources and $\lesssim$3deg for the 2MASS sources). In addition to reducing overhead times, using nearby calibrators helped to minimize residual wavefront errors introduced by long telescope slews. Data reduction was performed using our group’s custom-written IDL pipeline (for further details, see @2006ApJ...650L.131L, @2008ApJ...678..463I, and @2008ApJ...679..762K). Complex visibilities were extracted by Fourier-inverting the cleaned data cubes and sampling the $uv$-plane at points corresponding to the mask baselines. Calibration was performed by subtracting the calibrator complex visibility phases from the complex visibility phases of the science targets. Binary Model Fitting {#sec:binaryfitting} ==================== We used the same method as [@2008ApJ...679..762K; @2011ApJ...731....8K] to search for companions to our targets over the separation range 20–320mas. We only used closure phases in our binary model fitting, discarding the visibility amplitudes as these are more affected by systematic errors. The parameters we fit for were the angular separation $\rho$ between the primary and companion, the position angle $\theta$ of the companion, and the brightness contrast ratio $C=f_p/f_c$, where $f_p$ and $f_c$ are the fluxes of the primary and companion, respectively. Fitting was performed by initially fixing a high contrast ratio of $C=250$ and generating the corresponding model closure phases for each point on a grid of angular separations spanning $20 < \rho < 320$mas and position angles spanning $0<\theta<360$deg.
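The forward model behind such a grid search can be sketched as follows: for trial values of $(\rho, \theta, C)$, the binary complex visibility is evaluated on each baseline of a closure triangle and the phases are summed. The hole positions, wavelength, and trial parameters below are illustrative choices, not the actual mask geometry used in the survey.

```python
import numpy as np

# Minimal sketch of generating model closure phases for a binary with
# separation rho (mas), position angle theta, and contrast C = f_p/f_c.
MAS = np.pi / (180.0 * 3600.0 * 1000.0)                 # milliarcsec -> radians
lam = 2.2e-6                                            # observing wavelength (m)
holes = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])  # sub-aperture positions (m)
triangle = [(0, 1), (1, 2), (2, 0)]                     # one closure triangle

def model_closure_phase(rho_mas, theta_deg, C):
    th = np.deg2rad(theta_deg)
    s = rho_mas * MAS * np.array([np.sin(th), np.cos(th)])  # on-sky offset (E, N)
    cp = 0.0
    for i, j in triangle:
        u = (holes[j] - holes[i]) / lam                 # baseline spatial frequency
        V = (1.0 + np.exp(-2j * np.pi * (u @ s)) / C) / (1.0 + 1.0 / C)
        cp += np.angle(V)
    return cp

# a companion produces a non-zero closure phase; a point source (C -> inf) none
assert abs(model_closure_phase(100.0, 30.0, 250.0)) > 1e-3
assert abs(model_closure_phase(100.0, 30.0, 1e12)) < 1e-9
```

A real fit evaluates this model on every closure triangle of the mask and compares it with the measured closure phases via $\chi^2$.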
The point on the $\rho$–$\theta$ grid giving the lowest $\chi^2$ for the measured closure phase values was then taken to be the starting point for a steepest-descent search in which all three model parameters ($C$, $\rho$, $\theta$) were allowed to vary. The initial grid search ensured that the final minimum reached corresponded to the global minimum. The binary fit was considered to be bona fide if it passed a 99.5% detection criterion, as explained in [@2008ApJ...679..762K; @2011ApJ...731....8K]. This was done by generating 10000 artificial closure phase data sets with Fourier plane sampling that was identical to that of the measured data. Each artificial closure phase was randomly sampled from a Gaussian distribution with a mean of zero, corresponding to an unresolved point source, and the same variance as the corresponding measured value. A best-fit companion contrast $C$ was then obtained for each set of artificial closure phases using $\chi^2$ minimization at each point on the $\rho$–$\theta$ grid. Once again, by searching over the entire grid we ensured that the global minimum was identified. A 99.9% detection threshold was then defined separately for five contiguous annuli (20–40mas, 40–80mas, 80–160mas, 160–240mas, 240–320mas), corresponding to the 0.1 percentile of the best-fit contrasts obtained for the artificial data sets within that annulus. In other words, if the target was a point source instead of a binary, there was only a 0.1% chance that the measured closure phases would give a best-fit contrast lower than the threshold value in the annulus corresponding to the best-fit separation. This corresponds to a $5 \times 0.1\%=0.5\%$ false alarm probability across the full 20–320mas range. Therefore, if the best-fit model satisfied this condition, the detection was considered to be real at 99.5% confidence.
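The Monte Carlo thresholding step can be sketched in simplified form. In the high-contrast limit the model closure phases are approximately linear in the flux ratio $\epsilon = 1/C$, so each grid point reduces to a one-parameter least-squares fit. Random stand-in kernels replace the true binary closure-phase response here, and all sizes and noise levels are illustrative.

```python
import numpy as np

# Simplified sketch of calibrating a 99.9% detection threshold from
# noise-only (point-source) artificial closure-phase data sets.
rng = np.random.default_rng(0)
n_cp, n_grid, n_sets = 35, 64, 10000
sigma = 1.0                                   # closure-phase noise (illustrative)
kernels = rng.normal(size=(n_grid, n_cp))     # stand-in response at each (rho, theta)
norms = np.sum(kernels**2, axis=1)

best_eps = np.empty(n_sets)
for i in range(n_sets):
    cp = rng.normal(0.0, sigma, n_cp)         # artificial point-source data set
    eps_hat = kernels @ cp / norms            # best-fit eps at every grid point
    best_eps[i] = np.abs(eps_hat).max()       # deepest spurious "companion"

# only 0.1% of noise-only data sets exceed this threshold (99.9% confidence)
eps_threshold = np.percentile(best_eps, 99.9)
contrast_limit = 1.0 / eps_threshold          # quoted as a contrast ratio
```

Because the deepest spurious fit is taken over the whole grid, the threshold automatically accounts for the multiple-comparison effect of searching many $(\rho, \theta)$ positions.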
It was important to ensure that any high probability ($>$99.5%) detections were not caused by companions around one of the calibrators rather than around the science target. A small number of such false alarms ($\sim$5) did occur during the course of our analysis. Such cases were usually quite straightforward to identify by systematically repeating the calibration and binary fitting, excluding one calibrator at a time. Given that the calibrators did not have known ages, but were likely to be $\sim$Gyr old, any companions detected around them were almost certainly not substellar, and so they were not considered further. Results {#sec:results} ======= Using the method described in Section \[sec:binaryfitting\], we identified stellar companions to four of our AB Dor targets (HIP14807, HD16760, HD113449, HD160934) and report our best-fit binary solutions in Table \[table:binariesdetected\]. Of these, the companion to HIP14807 is a new discovery, while the companions to HD16760, HD113449, and HD160934 are the same as those discovered independently using radial velocity. We describe the detected companions in Sections \[sec:hip14807\]–\[sec:hd160934\], and present our full survey detection limits in Section \[sec:generallimits\]. HIP14807 {#sec:hip14807} -------- A companion was clearly detected in our Keck observations of HIP14807 on 2009 November 21 (MJD 55156.2) at an angular separation of $\rho = 28.74 \pm 0.19$mas with a contrast ratio of $C = 3.00 \pm 0.06$ in the CO filter. Assuming a system age of $110 \pm 40$Myr, interpolation of the NextGen isochrones of gives an estimated companion mass of $0.52 \pm 0.09$[$M_\odot$]{}, which includes the uncertainty in the age and distance, as well as the uncertainty in the fitted contrast. The companion was also detected at high confidence in the Palomar data from 2007 November 29 (MJD 54433.1), with a fitted contrast ratio of $C=10.15 \pm 3.71$.
However, this error bar is neither symmetric nor realistic, as there is a strong degeneracy between contrast and separation for small separations in aperture masking data sets. This is illustrated in Figure \[fig:hip14807degen\] (see also Figure 7 in @2006ApJ...649..389P and Table 2 in @2008ApJ...678..463I). A fuller discussion of this degeneracy is provided in Section 2.1 of [@2009ApJ...695.1183M]. In cases such as these, quick data sets were taken with only one or two calibration observations. As a result, the quoted error bars are not necessarily accurate at the few tens of percent level, because the dispersion between calibrators is used to estimate the errors in the closure phases. Despite this, global orbital fitting to multiple aperture masking data sets has been performed successfully by using a single contrast for all epochs, with the resulting astrometric fits being consistent with those obtained using other techniques, and having reduced $\chi^2$ of order unity (e.g. @2009ApJ...699..168D). For these reasons, we redid the binary fitting to the MJD 54433.1 data using a prior on the contrast determined from the other well-constrained fit to the MJD 55156.2 data. The results of this revised fit are given in Table \[table:binariesdetected2\]. HD16760 {#sec:hd16760} ------- HD16760 is unusual because it shows signs of being both young and old. Its youth is implied by its high lithium abundance, as well as its physical association with the active star HIP12635 and a common proper motion with the AB Dor group [@2004ApJ...613L..65Z; @2008hsf2.book..757T].
Its old age is implied by its low $v \sin i$ value ($2.8 \pm 1.0$ kms${}^{-1}$, ; $0.5 \pm 0.5$ kms${}^{-1}$, @2009ApJ...703..671S) and its low Ca H & K activity index ($\log R^{\prime}_{HK} =-4.93$, @2009ApJ...703..671S; $\log R^{\prime}_{HK} =-5.0 \pm 0.1$, ), which is consistent with field dwarfs ($\log R^{\prime}_{HK}=-4.99\pm 0.07$, @2008ApJ...687.1264M) and inconsistent with high probability members of the 625Myr Hyades cluster ($\log R^{\prime}_{HK}=-4.47 \pm 0.09$, @2008ApJ...687.1264M) and other young stars (see Tables 5–8 of @2008ApJ...687.1264M). However, we note that this system is not the only example of a binary pair showing contradictory age indicators: when examining the activity consistency of known binary pairs, [@2008ApJ...687.1264M] identified a similar case of an inactive primary with an active companion (HD137763 A/B). Previous radial velocity measurements have shown that HD16760 possesses a close companion, for which Sato and coworkers derived a minimum mass $M_2 \, \sin i$ value of $13.13 \pm 0.56$[$M_{\rm J}$]{}, while Bouchy and coworkers obtained a similar value of $14.3 \pm 0.9$[$M_{\rm J}$]{}. We clearly detected this companion in our Keck data from 2008 December 23 (MJD 54823.2), 2009 August 6 (MJD 55049.6) and 2009 November 20 (MJD 55155.3) (Table \[table:binariesdetected\]). Taking the weighted mean of the well-constrained isochrone mass estimates, we obtain a mass of $M_2 = 0.28 \pm 0.04$[$M_\odot$]{} for the companion, which includes the uncertainty in the age, distance and fitted contrasts. This places it well within the stellar mass range. Meanwhile, due to the degeneracy between contrast and separation (see Section \[sec:hip14807\]), combined with mediocre data quality, the separation derived from the MJD 55155.3 K band data was inconsistent with the separation derived from the J and H band data taken on the same night.
For this reason, we obtained a further observation the following night with the CO filter, which has a very similar bandpass to the Kcont filter (see Table \[table:filters\]; we had intended to use the Kcont filter, but there was a mix-up in the filter selections). The binary parameters derived from this follow-up observation are in close agreement with the values obtained from the J and H band observations. We also repeated the fit to the degenerate K band data with a prior on the contrast obtained by combining the fitted contrasts to the other K band epochs. The system properties derived from this re-analysis are reported in Table \[table:binariesdetected2\], and agree with the values obtained for the J and H band data sets. We note, however, that the calibration error added in quadrature to obtain a reduced $\chi^2$ of unity for this fit was 1.8 degrees. This is unusually large and suggests that the quoted errors for the inferred parameters are likely to be underestimated somewhat. Using our multi-epoch data, we were able to derive an orbital solution for the companion. To do this, we fixed the values for the time of periastron $T_0$, orbital period $P$, orbital eccentricity $e$, and argument of periastron $\omega$ published for the radial velocity orbit from @2009ApJ...703..671S. We were not able to fit for the orbital inclination $i$ using our aperture masking astrometry data because we only measure the axis ratio of the visual orbit, and this varies with the cosine of the inclination. Hence, we are not sensitive to small changes in $i$ when $i$ is near zero, as is the case here. Instead, we combined the model-dependent mass estimate obtained from the aperture masking results with the value for $M_2 \, \sin i$ derived from the radial velocity results to calculate $i= 2.6 \pm 0.5$deg. 
Then with these parameters held fixed, we inferred values for the longitude of the ascending node $\Omega$ and semimajor axis $a$ by fitting to the aperture masking astrometry listed in Tables \[table:binariesdetected\] and \[table:binariesdetected2\]. The final orbital solution is reported in Table \[table:systemparameters\], and plotted in Figure \[fig:hd16760orbit\]. Lastly, we note that the rotational velocity of the primary is revised upwards from $v \sin i \sim 0.5$–$4$ kms${}^{-1}$ to $v \sim $20–25 kms${}^{-1}$, a value that is more in line with other members of AB Dor. However, the low Ca H & K emission of HD16760 remains unexplained. The only reason we might expect to see an inclination dependence in the strength of $\log R^{\prime}_{HK}$ is if the Ca H & K emission varies with latitude on a star, such that polar areas show little emission compared to equatorial regions. We are not aware of any model that would predict this effect. HD113449 {#sec:hd113449} -------- We detected a companion around HD113449 in six of our data sets taken at Palomar and Keck between 2007 Apr 6 and 2010 Apr 25 (Table \[table:binariesdetected\]). Four of these data sets (MJD 54196.3, MJD 54634.2, MJD 54821.7, MJD 55311.4) were well-constrained, and taken together imply a companion mass of $0.51 \pm 0.01$[$M_\odot$]{} based on the NextGen isochrones of , including the uncertainty in the age, distance and fitted contrasts. The other two data sets, however, gave fits that were degenerate in contrast and separation, as has been described above. For the first of these (MJD 54197.6), we repeated the analysis with a prior on the separation taken from the well-constrained fit to the previous night’s data (see footnote in Table \[table:binariesdetected\]). For the second case (MJD 54252.1), the analysis was repeated with a prior on the contrast taken from the well-constrained solutions obtained for the four other H band data sets (see Table \[table:binariesdetected2\]).
The companion we report here was first announced by [@2009AIPC.1094..788C; @2010RMxAC..38...34C] subsequent to the commencement of our survey. Using radial velocity measurements, those authors obtained a value of $F(M_1,M_2,i)=0.0467 \pm 0.0006$[$M_\odot$]{} for the spectroscopic mass function and estimated a secondary-to-primary mass ratio of $q=0.57 \pm 0.05$. In addition, using astrometry measurements made with the VLT-I they obtained a value of $i=57\pm 3^\circ$ for the inclination, $\Omega = 124 \pm 4^\circ$ for the longitude of the ascending node and $a = 0.750 \pm 0.030$AU for the semimajor axis. We computed an orbital solution for the companion using our aperture masking astrometry (Tables \[table:binariesdetected\] and \[table:binariesdetected2\]), allowing $i$, $a$, and $\Omega$ to vary as free parameters in our fitting, while holding $P$, $T_0$, $e$ and $\omega$ fixed at the values determined by Cusano and coworkers. However, we found that we could not obtain a reasonable $\chi^2$ value with the period of $P=215.9$ days reported by those authors. Instead, an acceptable fit was achieved when we allowed the period to be a free parameter, yielding $P=216.9$ days. The best-fit parameters are reported in Table \[table:systemparameters\] and the corresponding orbit is plotted in Figure \[fig:hd113449orbit\]. In particular, our fitted value of $\Omega=202.0 \pm 1.6^\circ$ does not agree with the value of $\Omega = 124 \pm 4^\circ$ reported in [@2010RMxAC..38...34C], but as the details of those VLT-I observations are not given, we cannot make a further comparison. Lastly, the dynamical mass of the system ($M_{\rm{tot}}=1.10 \pm 0.09$[$M_\odot$]{}) appears to be underestimated by $\sim 2\sigma$ when compared to the isochrone-determined masses ($M_1 = 0.84 \pm 0.08$[$M_\odot$]{} and $M_2 = 0.51 \pm 0.01$[$M_\odot$]{}).
As the orbital period is $\sim$1 year and the astrometric semimajor axis is comparable to the parallax, examining this discrepancy in more detail would require refitting to the raw HIPPARCOS data. HD160934 {#sec:hd160934} -------- We detected a companion around HD160934 in our Palomar data taken on 2008 June 23 (MJD 54640.3) and Keck data taken on 2010 April 26 (MJD 55312.6) and 2011 April 23 (MJD 55674.6). The binary solutions are all in excellent agreement (Table \[table:binariesdetected\]). We obtain a value of $0.54 \pm 0.01$[$M_\odot$]{} for the companion mass by combining the estimates from each epoch. This companion was first reported by , who identified HD160934 as a spectroscopic binary with an estimated period of $\sim$17.1 years and an eccentricity of $e$$\sim$0.8. However, these were preliminary values based on limited phase sampling, and a period of approximately half this is also consistent with the data. In fact, this shorter period is the one preferred by , who repeated the fit to the same data with a small number of more modern radial velocity measurements. Furthermore, in addition to our values presented in Table \[table:binariesdetected\], relative astrometry measurements have been published by [@2007AA...463..707H] and [@2007ApJ...670.1367L]. Using the combined data set, which is summarized in Table \[table:hd160934\_allastrom\], we performed a least-squares orbital fit and report the results in Table \[table:systemparameters\]. The solution is plotted in Figure \[fig:hd160934orbit\]. In order to achieve a reduced $\chi^2$ of 1.0, we had to add an extra position angle error of 0.4 degrees to all data, which may indicate a small position angle calibration mismatch between the three instruments used in this fit. 
Of these parameters, only $T_0$ has an uncertainty that would be significantly changed by the addition of radial velocity data, which have not been made available to us because at least one new paper including those data is in preparation (Montes, private communication). However, we can combine the semiamplitude of the radial velocity curve published in ($K_1$=7.39$\pm$0.22kms$^{-1}$) with our orbital fit and the HIPPARCOS parallax of 30.2$\pm$2mas [@2007ASSL..350.....V] to obtain a mass of 0.48$\pm$0.06$M_\odot$ for the companion. This value is consistent with the one derived above using isochrones, at the level of the uncertainties. Although the binary orbit is not taken into account in computing the HIPPARCOS parallax, the period is several times longer than the length of the HIPPARCOS mission and the system was near apastron at the time of observations, so we do not expect the orbital photocenter motion to have a significant effect on the measured parallax. As the parallax uncertainty dominates our mass uncertainty, we have repeated the orbital calculation at several fixed parallax values as given in Table \[table:systemparameters\_hd60934\_b\]. According to the NextGen models of , plausible ages for the companion range from $\sim$50Myr through to the zero-age main sequence. Therefore, the dynamical mass does not allow us to place a strong constraint on the system age, but the lower range of allowed values is compatible with the age of AB Dor. Survey Detection Limits {#sec:generallimits} ----------------------- We list our detection limits in Table \[table:detlims\], corresponding to the 99.9% threshold values for each of the separation annuli, as defined in Section \[sec:binaryfitting\]. These were translated into upper limits for companion masses by first converting the contrast ratios into absolute companion magnitudes using the distances listed in Table \[table:sample\]. 
Then combining these intrinsic luminosities with the assumed ages listed in Table \[table:mg\], we determined the corresponding companion mass by interpolating an appropriate set of isochrones: specifically, the DUSTY isochrones [@2000ApJ...542..464C] for objects with $1400 \, \rm{K} \, \lesssim T_{\rm{eff}} \lesssim 2800 \, \rm{K}$ and the NextGen isochrones for objects with $T_{\rm{eff}} \gtrsim 2800\,\rm{K}$. For the four targets with detected companions (HIP14807, HD16760, HD113449, HD160934) we quote the limits obtained for the residual closure phases. It should be emphasized that the mass limits quoted in Table \[table:detlims\] inherit the systematic errors of the models used to compute them. For instance, [@2007ApJ...655..541M] have shown that the predicted luminosities are highly dependent on the treatment of initial conditions, with “cold start” models generating luminosities that can be orders of magnitude fainter than those obtained by the “hot start” DUSTY models over Gyr time scales. However, objects in the mass range that our survey is sensitive to would most likely have formed by the gravitational collapse of instabilities in the stellar disk, a process that is more akin to the hot start scenario. Indeed, recent observational evidence appears to favor the hot start models down to masses of $\sim$10[$M_{\rm J}$]{} [eg. @2010Sci...329...57L] or even suggest that they overpredict the luminosity of such objects [@2009ApJ...692..729D; @2010ApJ...721.1725D]. In the latter case, the values quoted in Table \[table:detlims\] would be conservative estimates. With these considerations in mind, Figure \[fig:detlims\] shows the detection limits plotted in terms of equivalent companion mass as a function of angular separation. Due to the heterogeneous nature of our observations, which were made using different instruments with different filters, we have divided the targets into three groups in these plots. 
The top panel shows the detection limits for our older AB Dor ($\sim$110Myr) and Her-Lyr ($\sim$200Myr) targets, the middle panel shows the detection limits for our younger $\beta$ Pic ($\sim$12Myr) and Tuc-Hor ($\sim$30Myr) targets, and the bottom panel shows the detection limits for the TWA ($\sim$8Myr) targets. The TWA targets have been plotted on their own because all but three of them were observed during the same observing run at VLT using the L${}^\prime$ filter with a 7-hole mask. Substellar Companion Frequencies {#sec:analysis} ================================ We have used our detection limits listed in Table \[table:detlims\] to constrain the frequency of $\sim$20–80$M_{\rm J}$ companions in $\sim$3–30AU orbits around 0.2–1.5[$M_\odot$]{} stars. To do this, we employed the same methodology as [@2006AJ....132.1146C], [@2007ApJ...670.1367L], [@2008ApJ...674..466N], [@2009ApJS..181...62M], and . We present a brief outline of the approach here, but the reader may consult those works for further details. Mathematical Framework ---------------------- Firstly, if we denote the outcome of our survey of $N_s$ stars as the set $\{ d_j \}$, where $d_j$ is equal to zero if no companion was detected around the $j$th star or equal to one if a companion was detected, then the posterior probability that the fraction of stars with companions is equal to $f$ is given by: $$\begin{aligned} P\left( f | \{ d_j \}\right) &=& \frac{\mathcal{L} \left( \{ d_j \} | f \right) \, P(f)}{\int^1_0{\mathcal{L}\left( \{ d_j \} | f \right) \, P(f) \, df}} \label{eq:bayestheorem}\end{aligned}$$ where $\mathcal{L} \left( \{ d_j \} | f \right)$ is the likelihood of our data and $P(f)$ is the prior probability that the underlying companion frequency is equal to $f$. We adopt an ignorant prior of $P(f)=1$. 
The fact that we did not detect any 20–80$M_{\rm J}$ companions allows us to place an upper limit $f_u$ on their frequency by integrating Equation \[eq:bayestheorem\], such that: $$\begin{aligned} \alpha &=& \frac{ \int^{f_u}_{0}{ \mathcal{L}\left(\{ d_j \} | f \right) \, df} }{ \int^1_0{\mathcal{L}\left(\{ d_j \} | f \right) \, df} }\label{eq:ci2}\end{aligned}$$ where $\alpha$ is a fraction giving the confidence of our limit (eg. $\alpha=0.95$ corresponds to a confidence of 95%). Using Poisson statistics, it can be shown that a null result implies: $$\begin{aligned} \mathcal{L}\left(\{ d_j \} | f \right) &=& \prod^{N_s}_{j=1}{e^{-f p_j}}\end{aligned}$$ where $p_j$ is the probability that a substellar companion would have been detected around the $j$th star if there had been one present. Monte Carlo Analysis {#sec:mc} -------------------- The next task is to determine values for each of the $p_j$ terms, and we did this using a Monte Carlo (MC) approach. For each target star in our sample, we generated 10000 hypothetical companions, each with a mass $M_2$ and angular separation $\rho$. The companion masses were either obtained by randomly sampling from an appropriate distribution (see Section \[sec:mdist\] below) or else they were set to a fixed value (see Section \[sec:distindependent\] below). To obtain the angular separations, we had to properly take into account the companion orbital eccentricities, phases, and orientations. We did this using the approach described by [@2006ApJ...652.1572B] in their Appendices 1 and 2. As with the companion masses, this required either randomly sampling these properties from appropriate distributions (see Sections \[sec:edist\] and \[sec:adist\] below) or else setting them to fixed values (see Section \[sec:distindependent\] below). Having generated 10000 hypothetical companions with masses and angular separations for each of the targets in our sample, we then consulted the detection limits in Table \[table:detlims\]. 
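The integration in Equation \[eq:ci2\] is straightforward to carry out in practice: with the uniform prior, the Poisson null-result likelihood reduces to $e^{-fP}$ with $P=\sum_j p_j$, so both integrals are analytic. A minimal sketch in Python; the $p_j$ values here are illustrative placeholders, not the detection probabilities of this survey:

```python
import math

def upper_limit(p, alpha=0.95):
    """Upper limit f_u from Equation (ci2) for a survey with no detections.

    With the uniform prior, L(f) = prod_j exp(-f*p_j) = exp(-f*P) where
    P = sum(p_j), so the ratio of integrals gives
        alpha = (1 - exp(-f_u*P)) / (1 - exp(-P)),
    which can be solved for f_u in closed form.
    """
    P = sum(p)
    return -math.log(1.0 - alpha * (1.0 - math.exp(-P))) / P

# Placeholder example: 67 targets, each with a 50% chance of having
# detected a companion had one been present.
p = [0.5] * 67
f_u = upper_limit(p)   # roughly 0.09, i.e. <9% at 95% confidence
```

Note that the limit tightens as either the number of targets or the per-target detection probabilities increase, since both raise $P$.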
Companions with masses that fell above the minimum detectable mass in the corresponding separation annulus for their target star were counted as detections. The $p_j$ value for each target was thus given by the number $x_j$ of such detections divided by the total number of hypothetical companions generated, i.e. $p_j=x_j/10\,000$. Equipped with the $p_j$ values, we were then able to calculate an estimate for the companion frequency upper limit $f_u$ at some level of confidence $\alpha$ by integrating Equation \[eq:ci2\]. Mass Distributions {#sec:mdist} ------------------ The distribution of substellar companion masses in the separation range $\sim$3–30AU is not yet constrained by observations. To accommodate this uncertainty, we have repeated our MC analysis separately for three different assumed forms for the mass distribution. For the first of these, we extrapolated to 20–80$M_{\rm J}$ the power law distribution that has been uncovered by the Keck radial velocity survey for companions with masses $M_2 < 10$[$M_{\rm J}$]{} and periods $P$$<$2000 days [@2008PASP..120..531C], given by: $$\begin{aligned} \frac{d\mathcal{N}}{dM_2} &\propto & M_2^{-1.31} \label{eq:mpowerlaw}\end{aligned}$$ where $d\mathcal{N}$ is the number of objects with masses in the interval $[M_2, M_2+dM_2]$. The second distribution that we used was the universal mass function proposed by [@2009ApJS..181...62M] for companions to solar mass stars, suggested by those authors for companion masses between 0.01$M_\odot$ and 1.0$M_\odot$ and semimajor axes between 0AU and 1590AU. It is given by: $$\begin{aligned} \frac{d \mathcal N }{dq} &\propto& q^{-0.39} \label{eq:qpowerlaw}\end{aligned}$$ where $q$ is the secondary-to-primary mass ratio. The last distribution that we used was a log-normal parameterization proposed by [@2008ApJ...679..762K], derived using an ad hoc physical model of binary formation. 
It is given by: $$\begin{aligned} \frac{d \mathcal N }{dq} &\propto& \frac{1}{q}\exp{\left[-\frac{1}{2} \left( \frac{\log_{10}q}{\sigma} \right)^2 \right]} \label{eq:qlognormal}\end{aligned}$$ and we used the authors’ proposed value of $\sigma=0.428$. Eccentricity Distributions {#sec:edist} -------------------------- As with masses, the distribution of substellar companion orbital eccentricities in the semimajor axis range $\sim$3–30AU is not yet constrained by observations. We chose to draw companion eccentricities from a distribution of the form: $$\begin{aligned} \frac{d \mathcal N }{de} &\propto& 2e \label{eq:edist}\end{aligned}$$ which, as noted in Appendix 2 of [@2006ApJ...652.1572B], is an approximation that has been derived from physical considerations. However, to test how sensitive our results were to the distribution of companion eccentricities, we repeated all of our MC analyses for two limiting cases: (1) fixing the orbital eccentricity of all hypothetical companions to $e=0.9$; (2) fixing all hypothetical companion eccentricities to $e=0$. Semimajor Axis Distributions {#sec:adist} ---------------------------- We drew substellar companion semimajor axes from an inverse power law of the form: $$\begin{aligned} \frac{d \mathcal N }{da} &\propto& a^{-1} \label{eq:adist}\end{aligned}$$ over the separation range 3–30AU. This distribution was also used by [@2009ApJS..181...62M] for $a>30$AU (see their Appendix 2 for a discussion) and it is consistent with recent results for stellar binaries between $\sim 5$–$500$AU [eg. @2008ApJ...679..762K; @2011ApJ...731....8K]. Furthermore, in the event that $>$10$M_{\rm J}$ objects can form by the same mechanism as lower-mass gas giant planets, Equation \[eq:adist\] is a reasonable extrapolation from the results of [@2008PASP..120..531C], who found a nearly-log-flat distribution for $<$10$M_{\rm J}$ gas giant planets at separations $a<3$AU. 
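All of the random draws described in Sections \[sec:mdist\]–\[sec:adist\] can be made by inverting the relevant cumulative distribution. A minimal sketch in Python, using the mass power law of Equation \[eq:mpowerlaw\] together with Equations \[eq:edist\] and \[eq:adist\], and assuming the 20–80$M_{\rm J}$ and 3–30AU truncation ranges quoted in the text (function and variable names are ours, not from the survey code):

```python
import math
import random

ALPHA_M = -1.31            # dN/dM2 ∝ M2**-1.31 (Eq. mpowerlaw), extrapolated
M_LO, M_HI = 20.0, 80.0    # assumed companion mass range (Jupiter masses)
A_LO, A_HI = 3.0, 30.0     # assumed semimajor axis range in AU (Eq. adist)

def draw_mass(u):
    """Inverse-CDF draw from the truncated power law dN/dM ∝ M**ALPHA_M."""
    k = ALPHA_M + 1.0      # = -0.31
    return (M_LO**k + u * (M_HI**k - M_LO**k)) ** (1.0 / k)

def draw_ecc(u):
    """dN/de ∝ 2e on [0, 1]  =>  CDF(e) = e**2  =>  e = sqrt(u)."""
    return math.sqrt(u)

def draw_sma(u):
    """dN/da ∝ 1/a  =>  log-uniform between A_LO and A_HI."""
    return A_LO * (A_HI / A_LO) ** u

# 10000 hypothetical (mass, eccentricity, semimajor axis) companions,
# matching the number of draws per target used in the text.
rng = random.Random(0)
companions = [(draw_mass(rng.random()), draw_ecc(rng.random()), draw_sma(rng.random()))
              for _ in range(10000)]
```

Each sampled eccentricity and semimajor axis would then be converted to an angular separation following the orbital-projection prescription of [@2006ApJ...652.1572B] before being compared against the detection limits.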
Distribution-independent Approach {#sec:distindependent} --------------------------------- In addition to assuming specific forms for the distribution of companion properties, we repeated the analysis with them set to fixed values. This allowed us to obtain conservative upper limit estimates for the companion frequencies. For instance, the less massive a companion is, the more difficult it is to detect because it is fainter and hence the required contrasts are higher. Therefore, by setting all of our hypothetical substellar companions to have some mass $M^\prime_2$, the subsequent result we obtain from the MC analysis is an upper limit on the frequency of all companions with masses $M_2 \geq M^\prime_2$. Similarly, our ability to detect companions varied with angular separation (Figure \[fig:detlims\]), which is related to the semimajor axis of the companion via the distance to the system and the orientation of the orbit. Now suppose we fix the semimajor axes of the hypothetical companions to a certain value $a^\prime$ and repeat the MC analysis for values over some interval $a^\prime$$\in$$[a_1,a_2]$. Then the maximum value of $f_u$ obtained from these analyses is the most conservative upper limit estimate for the frequency of companions with semimajor axes on that interval. We present the results of these distribution-independent calculations in Section \[sec:mcresults\], as well as the results obtained by assuming the specific distribution forms described in Sections \[sec:mdist\]–\[sec:adist\]. Previous Imaging Observations {#sec:previmaging} ----------------------------- Ideally, when performing the calculations described above, we would like to combine the results of our aperture masking survey with those of other imaging surveys targeting wider angular scales. This would allow us to put tighter constraints on the companion frequencies across a larger range of separations. 
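The fixed-semimajor-axis scan of Section \[sec:distindependent\] can be sketched as follows. The `detect_prob` function below is a purely hypothetical stand-in for the per-target Monte Carlo detection probabilities derived from the real limits in Table \[table:detlims\], and the target distances are toy values; only the scan-and-take-the-maximum logic reflects the procedure in the text:

```python
import math

def upper_limit(p, alpha=0.95):
    # Analytic Equation (ci2) upper limit for a null result (uniform prior):
    # alpha = (1 - exp(-f_u*P)) / (1 - exp(-P)) with P = sum(p_j).
    P = sum(p)
    return -math.log(1.0 - alpha * (1.0 - math.exp(-P))) / P

def detect_prob(a_fixed, dist_pc):
    """HYPOTHETICAL stand-in for the per-target detection probability:
    detectability rises with angular scale a/d, saturating at 1."""
    rho_arcsec = a_fixed / dist_pc
    return min(1.0, max(0.0, (rho_arcsec - 0.02) / 0.25))

distances = [10.0, 25.0, 40.0] * 22 + [30.0]   # 67 toy target distances (pc)

# Scan fixed a' over 3-30 AU in 0.5 AU steps and keep the LARGEST
# (i.e. most conservative) upper limit over the interval.
grid = [3.0 + 0.5 * i for i in range(55)]
f_u_max = max(upper_limit([detect_prob(a, d) for d in distances]) for a in grid)
```

Because this toy detection probability grows monotonically with separation, the most conservative limit is set by the smallest fixed semimajor axis; for the real, non-monotonic limits the maximum must be found by the full scan.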
To this end, we identified 49 of our targets that have previously been observed as part of published direct imaging surveys and list these in Table \[table:imassumpt\]. For each of these targets, we quote the inner separation angle that was probed by the imaging observations as well as the corresponding sensitivity of the observations. In most cases, these values were taken directly from the published survey limits, but when these were not provided explicitly we attempted to make conservative estimates. We also list each of the sensitivities in Table \[table:imassumpt\] as an equivalent minimum detectable companion mass, calculated by interpolating the DUSTY isochrones of [@2000ApJ...542..464C] in the same manner outlined in Section \[sec:generallimits\]. We incorporated these limits into our analysis described in Section \[sec:mc\] by treating hypothetical companions as “detected” (i.e. by increasing $x_j$ by 1) whenever they came within the detectability range of the previous imaging observations (i.e. if they had separations and masses above the values quoted in Table \[table:imassumpt\]). In the next sections, we present the results obtained from this combined approach (aperture masking + previous imaging) together with the results obtained using the aperture masking limits alone. Calculated $p_j$ Values {#sec:pjvalues} ----------------------- The $p_j$ values calculated separately for each of the three companion mass distributions that we considered (Equations \[eq:mpowerlaw\], \[eq:qpowerlaw\], and \[eq:qlognormal\]) are plotted in ascending order in Figure \[fig:pjvals\] for the case of a companion orbital eccentricity distribution given by $d \mathcal{N} /de \propto 2e$ (Equation \[eq:edist\]) and semimajor axis distribution given by $d \mathcal{N} / da \propto a^{-1}$ (Equation \[eq:adist\]). 
In this figure, we immediately see the advantage of combining our aperture masking results with the results from imaging surveys: the overall effect is roughly equivalent to an upwards shift of the $p_j$ values by $\sim$10–30% for the majority of targets. Calculated $f_u$ Values {#sec:mcresults} ----------------------- ### Assuming $d\mathcal{N}/da \propto a^{-1}$ {#sec:mcresults1} In Table \[table:mcresults\], we present 95% confidence (i.e. $\alpha=0.95$ in Equation \[eq:ci2\]) upper limit estimates $f_u$ for the frequency of 20–80$M_{\rm J}$ substellar companions in the separation range 3–30AU, with companion semimajor axes randomly drawn from the inverse power law distribution $d \mathcal{N} / da \propto a^{-1}$ (Equation \[eq:adist\]). Also presented are calculations made separately for each permutation of the companion mass and eccentricity distributions described in Sections \[sec:mdist\] and \[sec:edist\], respectively, as well as for different fixed companion masses of 20$M_{\rm J}$, 40$M_{\rm J}$, and 60$M_{\rm J}$ (see Section \[sec:distindependent\]). All calculations reported in Table \[table:mcresults\] are reasonably robust to the different assumptions made for the companion eccentricities, with the calculated upper limits only differing by $\lesssim$1–2% depending on whether all companion eccentricities are fixed to $e=0$ or $e=0.9$, or if they are drawn randomly from a distribution of the form $d \mathcal{N} /de \propto 2e$ (Equation \[eq:edist\]). When the previous imaging observations are incorporated into the calculations and the ages and distances listed in Tables \[table:mg\] and \[table:sample\] are used, the upper limit estimates vary between 9–12%, depending on which form is assumed for the distribution of companion masses, but irrespective of what is assumed about the orbital eccentricities. When the previous imaging observations are not included in the analysis, the equivalent limits vary between 13–19%. 
For fixed companion masses of 20[$M_{\rm J}$]{}, 40[$M_{\rm J}$]{} and 60[$M_{\rm J}$]{}, the upper limit estimates vary between 15–16%, 11–12% and 9–10%, respectively, when the imaging observations are included, and between 25–27%, 18–20% and 14–15%, respectively, when the imaging observations are not included. To investigate how sensitive our results are to the uncertainties in the distances and ages of our targets, we repeated the above calculations using the $1\sigma$ upper limits for the ages and distances of each target provided in Tables \[table:mg\] and \[table:sample\]. For instance, instead of using a distance of 28pc and an age of 110Myr for PW And, we used $28+7=35$pc and $110+40=150$Myr, respectively. Assuming upper values for the ages and distances in this way results in a downward revision of our sensitivities to faint companions at smaller separations. Therefore, we had to re-calculate the survey detection limits presented in Table \[table:detlims\] before repeating the analysis described in Sections \[sec:mc\]–\[sec:previmaging\]. Depending on which of the companion mass distributions is used, the upper limit estimates obtained from this analysis vary between 11–15% when the imaging observations are included, and between 17–24% when the imaging observations are not included. For fixed companion masses of 20[$M_{\rm J}$]{}, 40[$M_{\rm J}$]{} and 60[$M_{\rm J}$]{}, when the imaging observations are included the calculated upper limits vary between 20–23%, 14–15% and 11–12%, respectively, and when the imaging observations are not included they vary between 38–40%, 24–25% and 18–20%, respectively. We also investigated how sensitive our results are to the 9 targets of less certain membership identified in Section \[sec:sample\] (i.e. HD89744, HD92945, GJ466, EKDra, HIP30030, TWA-21, TWA-6, TWA-14, TWA-23) by removing them and the 7 Her-Lyr targets from the analysis. 
When all 16 of these targets are removed and we randomly sample the companion masses, the upper limit estimates vary between 12–15% when the imaging observations are included and between 16–23% when the imaging observations are not included, depending on which of the three companion mass distributions from Section \[sec:mdist\] is used. For fixed companion masses of 20[$M_{\rm J}$]{}, 40[$M_{\rm J}$]{} and 60[$M_{\rm J}$]{}, the upper limit estimates vary between 18–20%, 14–15% and 12–13%, respectively, when the imaging observations are included, and between 31–32%, 22–23% and 17–19%, respectively, when the imaging observations are not included. ### $d\mathcal{N}/da$ distribution-independent We repeated all of the calculations presented in the previous section for fixed companion semimajor axes between 3–30AU. As before, when the imaging observations were included in the analysis, the upper limit estimates only change by $\sim$1–2% over the entire 3–30AU range depending on which assumption is made for the companion eccentricities. However, when the aperture masking observations are used on their own this variation increases to $\sim$5–10% over the range 3–10AU and becomes as high as $\sim$20% over the 10–30AU range (Figure \[fig:mcecc\]). In Figure \[fig:mc1\] we plot the calculated upper limit estimates obtained for fixed companion masses and randomly-sampled companion masses, while holding the semimajor axes fixed at successive values between 3–30AU using a step size of 0.5AU and randomly drawing the companion eccentricities from a distribution of the form $d \mathcal{N} /de \propto 2e$ (Equation \[eq:edist\]). On their own, the aperture masking results place the tightest constraints over the $\sim$3–10AU semimajor axis range, with upper limit estimates of 20%, 16% and 13% for fixed companion masses of 20[$M_{\rm J}$]{}, 40[$M_{\rm J}$]{} and 60[$M_{\rm J}$]{}, respectively. 
With the imaging observations included in the analysis, these limits improve to 19%, 13% and 10%, respectively. At larger separations between 10–30AU, our upper limit estimates are 12%, 9%, and 8%, respectively, for the same companion masses when we include the imaging observations, while the companion frequencies are poorly constrained by the aperture masking observations alone. Meanwhile, the right panel in Figure \[fig:mc1\] shows the results that were obtained when we sampled the companion masses from each of the three distributions given in Section \[sec:mdist\]. Over the 3–10AU semimajor axis range, the aperture masking observations on their own constrain the frequency of 20–80[$M_{\rm J}$]{} companions to be less than 16%, 15% or 13% at 95% confidence, depending on whether the mass power law (Equation \[eq:mpowerlaw\]), mass ratio power law (Equation \[eq:qpowerlaw\]) or mass ratio log-normal parameterization (Equation \[eq:qlognormal\]) is assumed for the companions. These constraints improve to 13%, 12% and 10%, respectively, when the imaging observations are included in the analysis. At wider separations between 10–30AU, the equivalent values obtained when the aperture masking observations are combined with the previous imaging observations are 9%, 9% and 8%, respectively. Finally, Figures \[fig:mc2\] and \[fig:mc3\] have been included for completeness. They are the same as Figure \[fig:mc1\] except that they show respectively the results obtained when upper values for the target ages and distances are used as described in Section \[sec:mcresults1\], and the results obtained when the 9 targets of less certain membership identified in Section \[sec:sample\] and the Her-Lyr targets are not included in the calculations. 
Implications for Formation Theories {#sec:implications} ----------------------------------- A well-known result from radial velocity surveys is the discovery of a “brown dwarf desert” at separations $\lesssim$3AU, where $\lesssim$0.5–1% of solar-like stars are found to possess a 13–75[$M_{\rm J}$]{} companion [@2000PASP..112..137M; @2006ApJ...640.1051G] compared with $\sim$10% possessing a 0.3–10[$M_{\rm J}$]{} companion [@2008PASP..120..531C] and $\sim$13% possessing a $>$0.1[$M_\odot$]{} stellar companion . Meanwhile, at wider separations, imaging surveys have started to place upper limits on the frequency of substellar companions: - [@2006AJ....132.1146C] obtained a 95% confidence upper limit of 12.1% on the frequency of 13–73[$M_{\rm J}$]{} companions between 25–100AU. - [@2007ApJ...670.1367L] obtained a 95% confidence interval of $1.9^{+8.3}_{-1.5}$% for the frequency of 13–75[$M_{\rm J}$]{} companions between 25–250AU. - [@2009ApJS..181...62M] obtained a 95% confidence interval of $3.2^{+7.3}_{-2.7}$% for the frequency of 13–75[$M_{\rm J}$]{} companions between 28–1590AU. This is consistent with the results of [@2011ApJ...731....8K], who obtained a lower bound of $3.9^{+2.6}_{-1.2}$% for the frequency of substellar companions over the range 5–5000AU by combining the results of their aperture masking survey of Taurus-Auriga members with previous direct imaging results. - By jointly analyzing the results from three of the largest and deepest surveys for substellar companions to date [@2005ApJ...625.1004M; @2007ApJS..173..143B; @2007ApJ...670.1367L], [@2010ApJ...717..878N] obtained 95% confidence upper limits of $<$20% and $<$5% for the frequency of companions with masses between 10–15[$M_{\rm J}$]{} in the ranges 13–600AU and 40–200AU, respectively. 
The aperture masking survey reported in this paper has allowed us to place similar constraints on the frequency of 20–80[$M_{\rm J}$]{} companions over the 3–30AU separation range (Sections \[sec:mdist\]–\[sec:adist\]). These results are broadly in line with expectations from current models of substellar companion formation. Firstly, population synthesis models predict that core accretion only produces companions with masses up to $\sim$10[$M_{\rm J}$]{} [@2004ApJ...604..388I], or else, if objects are formed with masses above 20[$M_{\rm J}$]{}, then these are extremely rare. Unsurprisingly, the observational studies outlined above provide no evidence to the contrary, despite the aperture masking surveys in particular probing the range of separations where core accretion is expected to be most efficient. Indeed, 20–80[$M_{\rm J}$]{} companions are much more likely to form by either gravoturbulent fragmentation during the initial collapse of the molecular cloud [eg. @2009MNRAS.392..590B; @2009ApJ...703..131O] or by the fragmentation of gravitational instabilities in the protostellar disk once the initial free-fall collapse of the molecular cloud has ended [eg. @2009MNRAS.396.1066C; @2009MNRAS.392..413S]. For the gravoturbulent fragmentation scenario, the low frequencies of substellar companions deduced for separations $\lesssim$200AU from observational studies are in qualitative agreement with the hydrodynamical simulations of [@2009MNRAS.392..590B], who found that the separation of binary pairs consisting of a stellar primary and a very low-mass secondary increases strongly with decreasing mass ratio. Meanwhile, the disk fragmentation mechanism is not expected to occur within $\sim$40–70AU of the primary, where radiative cooling time scales are too long for the disk to be Toomre unstable [eg. @2007ApJ...662..642R; @2009ApJ...695L..53B]. 
Alternatively, 20–80[$M_{\rm J}$]{} objects might form by gravitational disk instabilities at separations beyond $\sim$40–70AU and then migrate inwards. [@2009MNRAS.392..413S] considered this for the case of a massive disk extending between 40–400AU around a 0.7[$M_\odot$]{} star. However, they found that when low-mass ($<$80[$M_{\rm J}$]{}) companions did form at closer separations, they were subsequently scattered outwards by dynamical interactions with more massive companions in the same disk, leading to a brown dwarf desert that extended out to $\sim$100–200AU. Again, the low occurrence of 20–80[$M_{\rm J}$]{} companions inferred from observational studies over this separation range is consistent with such a scenario, though the constraints are not yet tight enough to make a definitive statement. Conclusion {#sec:conclusion} ========== This paper has presented the results of an aperture masking survey of [67]{} young nearby stars for substellar companions. Our detection limits extend down to $\sim$40[$M_{\rm J}$]{} for 30 of our targets, and of these, we are sensitive down to $\sim$20[$M_{\rm J}$]{} or less for a subset of 22. Although we did not uncover any substellar companions, we detected four stellar companions. One of these, a $0.52 \pm 0.09$[$M_\odot$]{} companion to HIP14807, is a new discovery. We have also shown that the companion to HD16760 is on a low-inclination orbit with a mass of $0.28 \pm 0.04$[$M_\odot$]{}, much higher than the minimum mass of $M_2 \, \sin i \sim $13–14[$M_{\rm J}$]{} inferred from radial velocity measurements. If we do not make any assumptions about the distribution of companion masses or semimajor axes, we calculate that the frequency of 20–80[$M_{\rm J}$]{} companions is less than $\sim$19% in the range 3–10AU and less than $\sim$12% in the range 10–30AU at 95% confidence. 
If, however, we assume that the semimajor axes of 20–80[$M_{\rm J}$]{} companions are distributed according to $d\mathcal{N}/da \propto a^{-1}$ and that their masses are distributed according to a log-normal parameterization of the secondary-to-primary mass ratio, this limit becomes $\sim$9% over the 3–30AU separation range. Similar values of $\sim$10% and $\sim$11% are obtained if we assume instead that the companion masses or secondary-to-primary mass ratios, respectively, are distributed according to power laws. These results are consistent with models that predict a low occurrence of substellar companions relative to stellar companions at these separations, possibly hinting at the extension of the brown dwarf desert beyond $\sim$3AU. M.I. was the recipient of the Australian Research Council postdoctoral fellowship (project number DP0878674). A.K. was previously supported by a NASA/Origins grant to Lynne Hillenbrand and is currently supported by a NASA Hubble Fellowship grant. This work was also partially supported by the National Science Foundation under Grant Numbers 0506588 and 0705085. This work made use of data products from 2MASS, which is a joint project of the University of Massachusetts and IPAC/Caltech, funded by NASA and the NSF. Our research has also made use of the USNOFS Image and Catalogue Archive operated by the United States Naval Observatory, Flagstaff Station (http://www.nofs.navy.mil/data/fchpix/). We recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. [88]{} natexlab\#1[\#1]{} , J. E., [Haniff]{}, C. A., [Mackay]{}, C. D., & [Warner]{}, P. J. 1986, , 320, 595 , I., [Chabrier]{}, G., [Allard]{}, F., & [Hauschildt]{}, P. H. 1998, , 337, 403 —. 2002, , 382, 563 , S. A. 2007, , 669, 1167 , M. R. 2009, , 392, 590 , B. A., [Close]{}, L. 
M., [Masciadri]{}, E., [Nielsen]{}, E., [Lenzen]{}, R., [Brandner]{}, W., [McCarthy]{}, D., [Hartung]{}, M., [Kellner]{}, S., [et al.]{} 2007, , 173, 143 , A. C. 2009, , 695, L53 , F., [H[é]{}brard]{}, G., [Udry]{}, S., [Delfosse]{}, X., [Boisse]{}, I., [Desort]{}, M., [Bonfils]{}, X., [Eggenberger]{}, A., [et al.]{} 2009, , 505, 853 , A., [Jayawardhana]{}, R., [Khavari]{}, P., [Haisch]{}, Jr., K. E., & [Mardones]{}, D. 2006, , 652, 1572 , R. P., [Wright]{}, J. T., [Marcy]{}, G. W., [Fischer]{}, D. A., [Vogt]{}, S. S., [Tinney]{}, C. G., [Jones]{}, H. R. A., [Carter]{}, B. D., [Johnson]{}, J. A., [McCarthy]{}, C., & [Penny]{}, A. J. 2006, , 646, 505 , J. C., [Eikenberry]{}, S. S., [Brandl]{}, B. R., [Wilson]{}, J. C., & [Hayward]{}, T. L. 2005, , 130, 1212 , J. C., [Eikenberry]{}, S. S., [Smith]{}, J. J., & [Cordes]{}, J. M. 2006, , 132, 1146 , G., [Baraffe]{}, I., [Allard]{}, F., & [Hauschildt]{}, P. 2000, , 542, 464 , G., [Lagrange]{}, A., [Bonavita]{}, M., [Zuckerman]{}, B., [Dumas]{}, C., [Bessell]{}, M. S., [Beuzit]{}, J., [Bonnefoy]{}, M., [et al.]{} 2010, , 509, A52 , C. J. 2009, , 396, 1066 , A., [Butler]{}, R. P., [Marcy]{}, G. W., [Vogt]{}, S. S., [Wright]{}, J. T., & [Fischer]{}, D. A. 2008, , 120, 531 , F., [Guenther]{}, E. W., [Esposito]{}, M., & [Gandolfi]{}, D. 2010, in Revista Mexicana de Astronomia y Astrofisica, vol. 27, Vol. 38, Revista Mexicana de Astronomia y Astrofisica Conference Series, 34–34 , F., [Guenther]{}, E. W., [Esposito]{}, M., [Mundt]{}, M., [Covino]{}, E., & [Alcal[à]{}]{}, J. M. 2009, in American Institute of Physics Conference Series, Vol. 1094, American Institute of Physics Conference Series, ed. [E. Stempels]{}, 788–791 , R. M., [Skrutskie]{}, M. F., [van Dyk]{}, S., [Beichman]{}, C. A., [Carpenter]{}, J. M., [Chester]{}, T., [Cambresy]{}, L., [Evans]{}, T., [et al.]{} 2003, [2MASS All Sky Catalog of point sources.]{}, ed. [Cutri, R. M., Skrutskie, M. F., van Dyk, S., et al]{} , R., [Jilinski]{}, E., & [Ortega]{}, V. G. 
2006, , 131, 2609 , T. J., [Liu]{}, M. C., [Bowler]{}, B. P., [Cushing]{}, M. C., [Helling]{}, C., [Witte]{}, S., & [Hauschildt]{}, P. 2010, , 721, 1725 , T. J., [Liu]{}, M. C., & [Ireland]{}, M. J. 2009, , 692, 729 —. 2009, , 699, 168 , A. & [Mayor]{}, M. 1991, , 248, 485 , E. D., [Lawson]{}, W. A., [Stark]{}, M., [Townsley]{}, L., & [Garmire]{}, G. P. 2006, , 131, 1730 , H. 1868, Comptes Rendus de l’Académie des Sciences (Paris), 66, 932 , K. 2004, Astronomische Nachrichten, 325, 3 , M. C., [Montes]{}, D., [Fern[á]{}ndez-Figueroa]{}, M. J., & [L[ó]{}pez-Santiago]{}, J. 2006, , 304, 59 , G., [Laws]{}, C., [Tyagi]{}, S., & [Reddy]{}, B. E. 2001, , 121, 432 , D. & [Lineweaver]{}, C. H. 2006, , 640, 1051 , R. F. & [Filiz Ak]{}, N. 2010, , 330, 47 , A. N., [Hinz]{}, P. M., [Kenworthy]{}, M., [Meyer]{}, M., [Sivanandam]{}, S., & [Miller]{}, D. 2010, , 714, 1570 , A. N., [Hinz]{}, P. M., [Sivanandam]{}, S., [Kenworthy]{}, M., [Meyer]{}, M., & [Miller]{}, D. 2010, , 714, 1551 , S., [Carpenter]{}, J. M., [Ireland]{}, M. J., & [Kraus]{}, A. L. 2011, , 730, L21+ , F., [Brandner]{}, W., [Hippler]{}, S., [Janson]{}, M., & [Henning]{}, T. 2007, , 463, 707 , A. W., [Marcy]{}, G. W., [Johnson]{}, J. A., [Fischer]{}, D. A., [Wright]{}, J. T., [Isaacson]{}, H., [Valenti]{}, J. A., [Anderson]{}, J., [Lin]{}, D. N. C., & [Ida]{}, S. 2010, Science, 330, 653 , N., [Lacour]{}, S., [Tuthill]{}, P., [Ireland]{}, M., [Kraus]{}, A., & [Chauvin]{}, G. 2011, , 528, L7+ , S. & [Lin]{}, D. N. C. 2004, , 604, 388 , M. J., [Kraus]{}, A., [Martinache]{}, F., [Lloyd]{}, J. P., & [Tuthill]{}, P. G. 2008, , 678, 463 , M. J. & [Kraus]{}, A. L. 2008, , 678, L59 , R. C. 1958, , 118, 276 , D., [Zuckerman]{}, B., & [Becklin]{}, E. 2003, in Astronomical Society of the Pacific Conference Series, Vol. 294, Scientific Frontiers in Research on Extrasolar Planets, ed. [D. Deming & S. Seager]{}, 91–94 , D., [Zuckerman]{}, B., [Song]{}, I., [Macintosh]{}, B. A., [Weinberger]{}, A. J., [Becklin]{}, E. 
E., [Konopacky]{}, Q. M., & [Patience]{}, J. 2004, , 414, 175 , M., [Apai]{}, D., [Janson]{}, M., & [Brandner]{}, W. 2007, , 472, 321 , J. H., [Zuckerman]{}, B., [Weintraub]{}, D. A., & [Forveille]{}, T. 1997, Science, 277, 67 , S. G., [Brown]{}, T. M., [Fischer]{}, D. A., [Nisenson]{}, P., & [Noyes]{}, R. W. 2000, , 533, L147 , A. L., [Ireland]{}, M. J., [Martinache]{}, F., & [Hillenbrand]{}, L. A. 2011, , 731, 8 , A. L., [Ireland]{}, M. J., [Martinache]{}, F., & [Lloyd]{}, J. P. 2008, , 679, 762 , D., [Doyon]{}, R., [Marois]{}, C., [Nadeau]{}, D., [Oppenheimer]{}, B. R., [Roche]{}, P. F., [Rigaut]{}, F., [et al.]{} 2007, , 670, 1367 , A., [Bonnefoy]{}, M., [Chauvin]{}, G., [Apai]{}, D., [Ehrenreich]{}, D., [Boccaletti]{}, A., [Gratadour]{}, D., [et al.]{} 2010, Science, 329, 57 , J. P., [Martinache]{}, F., [Ireland]{}, M. J., [Monnier]{}, J. D., [Pravdo]{}, S. H., [Shaklan]{}, S. B., & [Tuthill]{}, P. G. 2006, , 650, L131 , J., [Montes]{}, D., [Crespo-Chac[ó]{}n]{}, I., & [Fern[á]{}ndez-Figueroa]{}, M. J. 2006, , 643, 1160 , P. J., [Becklin]{}, E. E., [Schneider]{}, G., [Kirkpatrick]{}, J. D., [Weinberger]{}, A. J., [Zuckerman]{}, B., [Dumas]{}, C., [Beuzit]{}, J., [et al.]{} 2005, , 130, 1845 , K. L., [Stauffer]{}, J. R., & [Mamajek]{}, E. E. 2005, , 628, L69 , E. E. 2005, , 634, 1385 , E. E. & [Hillenbrand]{}, L. A. 2008, , 687, 1264 , E. E. & [Meyer]{}, M. R. 2007, , 668, L175 , G. W. & [Butler]{}, R. P. 2000, , 112, 137 , M. S., [Fortney]{}, J. J., [Hubickyj]{}, O., [Bodenheimer]{}, P., & [Lissauer]{}, J. J. 2007, , 655, 541 , F., [Rojas-Ayala]{}, B., [Ireland]{}, M. J., [Lloyd]{}, J. P., & [Tuthill]{}, P. G. 2009, , 695, 1183 , E., [Mundt]{}, R., [Henning]{}, T., [Alvarez]{}, C., & [Barrado y Navascu[é]{}s]{}, D. 2005, , 625, 1004 , C. & [Zuckerman]{}, B. 2004, , 127, 2871 , S. A. & [Hillenbrand]{}, L. A. 2009, , 181, 62 , A. A. 1891, , 45, 160 —. 1891, , 3, 217 , C., [Alibert]{}, Y., & [Benz]{}, W. 2009, , 501, 1139 , Y. K. & [Bertelli]{}, G. 
1998, , 329, 943 , D. L., [Marcy]{}, G. W., [Butler]{}, R. P., [Fischer]{}, D. A., & [Vogt]{}, S. S. 2002, , 141, 503 , E. L. & [Close]{}, L. M. 2010, , 717, 878 , E. L., [Close]{}, L. M., [Biller]{}, B. A., [Masciadri]{}, E., & [Lenzen]{}, R. 2008, , 674, 466 , S. S. R., [Klein]{}, R. I., [McKee]{}, C. F., & [Krumholz]{}, M. R. 2009, , 703, 131 , S. H., [Shaklan]{}, S. B., [Wiktorowicz]{}, S. J., [Kulkarni]{}, S., [Lloyd]{}, J. P., [Martinache]{}, F., [Tuthill]{}, P. G., & [Ireland]{}, M. J. 2006, , 649, 389 , R. R. 2007, , 662, 642 , B., [Fischer]{}, D. A., [Ida]{}, S., [Harakawa]{}, H., [Omiya]{}, M., [Johnson]{}, J. A., [Marcy]{}, G. W., [Toyota]{}, E., [et al.]{} 2009, , 703, 671 , D. & [Whitworth]{}, A. P. 2009, , 392, 413 , C. A. O., [Quast]{}, G. R., [Melo]{}, C. H. F., & [Sterzik]{}, M. F. 2008, [Young Nearby Loose Associations]{}, ed. [Reipurth, B.]{}, 757 , P., [Lacour]{}, S., [Amico]{}, P., [Ireland]{}, M., [Norris]{}, B., [Stewart]{}, P., [Evans]{}, T., [Kraus]{}, A., [Lidman]{}, C., [Pompei]{}, E., & [Kornweibel]{}, N. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7735, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series , P., [Lloyd]{}, J., [Ireland]{}, M., [Martinache]{}, F., [Monnier]{}, J., [Woodruff]{}, H., [ten Brummelaar]{}, T., [Turner]{}, N., & [Townes]{}, C. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6272, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series , P. G., [Monnier]{}, J. D., [Danchi]{}, W. C., [Wishnow]{}, E. H., & [Haniff]{}, C. A. 2000, , 112, 555 , M. E., [P[é]{}rez]{}, M. R., [de Winter]{}, D., & [McCollum]{}, B. 2000, , 363, L25 , F., ed. 2007, Astrophysics and Space Science Library, Vol. 350, [Hipparcos, the New Reduction of the Raw Data]{} , R. A., [Zuckerman]{}, B., [Platais]{}, I., [Patience]{}, J., [White]{}, R. J., [Schwartz]{}, M. J., & [McCarthy]{}, C. 1999, , 512, L63 , D. 
A., [Saumon]{}, D., [Kastner]{}, J. H., & [Forveille]{}, T. 2000, , 530, 867 , S., [Demarque]{}, P., [Kim]{}, Y., [Lee]{}, Y., [Ree]{}, C. H., [Lejeune]{}, T., & [Barnes]{}, S. 2001, , 136, 417 , B. & [Song]{}, I. 2004, , 42, 685 , B., [Song]{}, I., & [Bessell]{}, M. S. 2004, , 613, L65 , B., [Song]{}, I., [Bessell]{}, M. S., & [Webb]{}, R. A. 2001, , 562, L87 , B., [Song]{}, I., & [Webb]{}, R. A. 2001, , 559, 388 [cccccc]{} Palomar & 5.1 &PHARO & 9H & 0.4 & 6\ & && 18H & 0.2 & 3\ Keck & 10 &NIRC2 & 9H & 1.1 & 11\ & && 18H & 0.5 & 5\ VLT & 8.2 &CONICA & 7H & 1.2 & 15\ [rrccc]{} AB Dor & [31]{}& $110 \pm 40$ & $34.1 \pm 12.8$ & Lu05, T08\ $\beta$ Pic & [11]{}& $12 \pm 5$ & $34.5 \pm 1.4$ & Z01a, F06, T08\ Her-Lyr & [7]{}& $200 \pm 80$ & $14.6 \pm 4.1$ & LS06\ Tuc-Hor & [2]{}& $30 \pm 10$ & $45.1 \pm 0.6$ & Z01b, T08\ TWA & [16]{}& $8 \pm 4$ & $55.0 \pm 2.7$ & dR06, T08\ [lrrccrcrrrccc]{}\ \ PW And & 00 18 20.8 & +30 57 24 & K2 & 0.81 & 28 $\pm$ 7 & ZS04 &6.39 & 6.51 & 7.02 & 2M & ZS04, T08 & MZ04, L05, L07,\ & & & & & & & & & & & & MH09, H10\ HIP3589 & 00 45 50.9 & +54 58 40 & F8 & 1.12 & 52.5 $\pm$ 2.5 & HIP & 6.69 & 6.72 & 6.93 & 2M & ZS04, T08 & $\cdots$\ HIP5191 & 01 06 26.1 & $-14$ 17 46 & K1 & 0.86 & 47.3 $\pm$ 2.8 & HIP & 7.34 & 7.43 & 7.91 & 2M & ZS04, T08 & C10\ HIP6276 & 01 20 32.2 & $-11$ 28 03 & G9 & 0.89 & 34.4 $\pm$ 1.2 & HIP & 6.55 & 6.65 & 7.03 & 2M & ZS04, T08 & MH09\ HIP12635 & 02 42 20.9 & +38 37 22 & K2 & 0.79 & 50.4 $\pm$ 6.7 & HIP & 7.76 & 7.90 & 8.38 & 2M & ZS04, T08 & $\cdots$\ HD16760 & 02 42 21.3 & +38 37 08 & G5 & 0.91 & 45.5 $\pm$ 4.9 & HIP & 7.11 & 7.15 & 7.47 & 2M & ZS04, T08 & $\cdots$\ HIP13027 & 02 47 27.4 & +19 22 19 & G1 & 1.02 & 33.6 $\pm$ 0.9 & HIP & 6.05 & 6.10 & 6.37 & 2M & ZS04, T08 & $\cdots$\ HD19668 & 03 09 42.3 & $-09$ 34 47 & G0 & 0.90 & 37.4 $\pm$ 1.6 & HIP & 6.70 & 6.79 & 7.16 & 2M & LS06, T08 & MH09\ HIP14807 & 03 11 12.3 & +22 25 24 & K6 & 0.76 & 52.5 $\pm$ 8.6 & HIP & 7.96 & 8.10 & 8.67 & 2M & ZS04, T08 & $\cdots$\ 
HIP14809 & 03 11 13.8 & +22 24 58 & G5 & 1.04 & 53.7 $\pm$ 3.3 & HIP & 6.97 & 7.07 & 7.27 & 2M & ZS04, T08 & $\cdots$\ HIP16563A & 03 33 13.5 & +46 15 27 & G5 & 0.88 & 34.4 $\pm$ 1.2 & HIP & 6.62 & 6.70 & 7.03 & 2M & ZS04, T08 & B07\ HIP16563B & 03 33 14.0 & +46 15 19 & M0 & 0.55 & 34.4 $\pm$ 1.2 & HIP & 8.07 & 8.21 & 8.83 & 2M & ZS04, T08 & $\cdots$\ HIP17695 & 03 47 23.2 & $-01$ 58 18 & M3 & 0.46 & 16.1 $\pm$ 0.7 & HIP& 6.93 & 7.17 & 7.80 & 2M & ZS04, T08 & L07\ HIP18859 & 04 02 36.7 & $-00$ 16 06 & F6 & 1.16 & 18.8 $\pm$ 0.1 & HIP & 4.18 & 4.34 & 4.71 & 2M & ZS04, T08 & L07, H10\ HIP19183 & 04 06 41.5 & +01 41 03 & F5 & 1.17 & 55.2 $\pm$ 2.8 & HIP & 6.58 & 6.70 & 6.89 & 2M & ZS04, T08 & $\cdots$\ BD+20 1790 & 07 23 44.0 & +20 25 06 & K5 & 0.76 & 32 $\pm$ 8 & LS06 & 6.88 & 7.03 & 7.64 & 2M & LS06, T08 & L05, MH09, H10\ HD89744 & 10 22 10.6 & +41 13 46 & F7 & 1.52 & 39.4 $\pm$ 0.5 & HIP & 4.45 & 4.53 & 4.86 & 2M & LS06 & $\cdots$\ HIP51317 & 10 28 55.6 & +00 50 28 & M2 & 0.43 & 7.1 $\pm$ 0.1 & HIP & 5.31 & 5.61 & 6.18 & 2M & LS06, T08 & M05, L07\ HD92945 & 10 43 28.3 & $-$29 03 51 & K1 & 0.85 & 21.4 $\pm$ 0.3 & HIP & 5.66 & 5.77 & 6.18 & 2M & LS06 & B07, L07\ GJ466 & 12 25 58.6 & +08 03 44 & M0 & 0.73 & 37.4 $\pm$ 3.2 & HIP & 7.33 & 7.45 & 8.12 & 2M & LS06 & MZ04\ HD113449 & 13 03 49.8 & $-05$ 09 41 & K1 & 0.84 & 21.7 $\pm$ 0.4 & HIP & 5.72 & 5.89 & 6.27 & 2M & ZS04, T08 & L07, H10\ EK Dra & 14 39 00.2 & +64 17 30 & G1.5 & 1.06 & 34.1 $\pm$ 0.4 & HIP & 5.91 & 6.01 & 6.32 & 2M & LS06 & MZ04, B07, L07,\ & & & & & & & & & & & & MH09\ HIP81084 & 16 33 41.7 & $-09$ 33 10 & M0 & 0.58 & 30.7 $\pm$ 2.3 & HIP & 7.55 & 7.78 & 8.38 & 2M & ZS04, T08 & L07\ HIP82688 & 16 54 08.2 & $-04$ 20 24 & G0 & 1.12 & 46.7 $\pm$ 2.0 & HIP & 6.36 & 6.48 & 6.70 & 2M & ZS04, T08 & MH09\ HD160934 & 17 38 39.7 & +61 14 16 & K7 & 0.70 & 33.1 $\pm$ 2.2 & HIP & 7.22 & 7.37 & 7.98 & 2M & ZS04, T08 & L05, H07, L07,\ & & & & & & & & & & & & MZ04\ HIP106231 & 21 31 01.6 & +23 20 09 & K5 & 0.75 & 24.8 
$\pm$ 0.7 & HIP & 6.38 & 6.52 & 7.08 & 2M & ZS04, T08 & L05, L07, MZ04\ HIP110526 & 22 23 29.1 & +32 27 34 & M3 & 0.48 & 15.5 $\pm$ 1.6 & HIP & 6.05 & 6.28 & 6.90 & 2M & ZS04, T08 & $\cdots$\ HIP113579 & 23 00 19.2 & $-26$ 09 12 & G5 & 0.99 & 30.8 $\pm$ 0.7 & HIP & 5.94 & 6.04 & 6.29 & 2M & ZS04, T08 & MH09, C10\ HIP114066 & 23 06 04.6 & +63 55 35 & M1 & 0.60 & 24.5 $\pm$ 1.0 & HIP & 6.98 & 7.17 & 7.82 & 2M & ZS04, T08 & L07\ HIP115162 & 23 19 39.5 & +42 15 10 & G4 & 0.94 & 50.2 $\pm$ 2.9 & HIP & 7.22 & 7.28 & 7.61 & 2M & ZS04, T08 & $\cdots$\ HIP118008 & 23 56 10.5 & $-39$ 03 07 & K2 & 0.81 & 22.0 $\pm$ 0.4 & HIP & 5.91 & 6.01 & 6.51 & 2M & ZS04, T08 & B07, C10\ \ \ \ HR9 & 00 06 50.1 & $-23$ 06 27 & F3 & 1.40 & 39.4 $\pm$ 0.6 & HIP & 5.24 & 5.33 & 5.45 & 2M & ZS04, T08 & K03\ HIP10680 & 02 17 25.2 & +28 44 43 & F5 & 1.15 & 34.5 $\pm$ 0.6 & HIP & 5.79 & 5.84 & 6.05 & 2M & ZS04, T08 & $\cdots$\ HIP11437B & 02 27 28.1 & +30 58 41 & M2 & 0.35 & 40.0 $\pm$ 3.6 & HIP & 7.92 & 8.14 & 8.82 & 2M & ZS04, T08 & $\cdots$\ HIP11437A & 02 27 29.2 & +30 58 25 & K6 & 0.63 & 40.0 $\pm$ 3.6 & HIP & 7.08 & 7.24 & 7.87 & 2M & ZS04, T08 & $\cdots$\ HIP12545 & 02 41 25.8 & +05 59 19 & K6 & 0.67 & 42.0 $\pm$ 2.7 & HIP & 7.07 & 7.23 & 7.9 & 2M & ZS04, T08 & B07\ 51 Eri & 04 37 36.1 & $-02$ 28 25 & F0 & 1.41 & 29.4 $\pm$ 0.3 & HIP & 4.54 & 4.77 & 4.74 & 2M & ZS04, T08 & H10\ HIP25486 & 05 27 04.8 & $-11$ 54 04 & F7 & 1.25 & 27.0 $\pm$ 0.4 & HIP & 4.93 & 5.09 & 5.27 & 2M & ZS04, T08 & L05, K07, MH09\ GJ803 & 20 45 09.5 & $-31$ 20 27 & M1 & 0.44 & 9.9 $\pm$ 0.1 & HIP & 4.53 & 4.83 & 5.44 & 2M & ZS04, T08 & K03, MZ04, M05,\ & & & & & & & & & & & & B07, L07\ BD$-$17 6128 & 20 56 02.7 & $-17$ 10 54 & K6 & 0.73 & 45.7 $\pm$ 1.6 & HIP & 7.12 & 7.25 & 7.92 & K04 & ZS04, T08 & M05\ HIP112312A & 22 44 57.9 & $-33$ 15 02 & M4 & 0.31 & 23.3 $\pm$ 2.0 & HIP & 6.93 & 7.15 & 7.79 & 2M & ZS04, T08 & B07\ HIP112312B & 22 45 00.0 & $-33$ 15 26 & M4.5 & 0.17 & 23.3 $\pm$ 2.0 & HIP & 7.79 & 8.06 & 8.68 & 2M 
& ZS04, T08 & $\cdots$\ \ \ \ HD166 & 00 06 36.8 & 29 01 17.4 & K0 & 0.93 & 13.7 $\pm$ 0.1 & HIP & 4.31 & 4.63 & 4.73 & 2M & LS06 & L07, H10\ HD10008 & 01 37 35.5 & -06 45 37.5 & G5 & 0.89 & 24.0 $\pm$ 0.4 & HIP & 5.75 & 5.90 & 6.23 & 2M & LS06 & L07\ HD233153 & 05 41 30.7 & +53 29 23 & M0.5 & 0.58 & 12.4 $\pm$ 0.3 & HIP & 5.76 & 5.96 & 6.59 & 2M & LS06 & C05\ HIP37288 & 07 39 23.0 & +02 11 01 & K7 & 0.61 & 14.6 $\pm$ 0.3 & HIP & 5.87 & 6.09 & 6.77 & 2M & LS06 & M05, L07\ HD70573 & 08 22 50.0 & 01 51 33.6 & G6 & 0.89 & 46 $\pm$ 11 & LS06 & 7.19 & 7.28 & 7.56 & 2M & LS06 & L05, MH09\ HIP53020 & 10 50 52.1 & +06 48 29 & M4 & 0.25 & 6.8 $\pm$ 0.2 & HIP & 6.37 & 6.71 & 7.32 & 2M & LS06 & L07\ HN Peg & 21 44 31.3 & +14 46 19 & G0 & 1.06 & 17.9 $\pm$ 0.1 & HIP & 4.56 & 4.6 & 4.79 & 2M & LS06 & MZ04, L07\ \ \ \ HIP9141 & 01 57 48.9 & $-21$ 54 05 & G4 & 0.97 & 40.9 $\pm$ 1.1 & HIP & 6.47 & 6.56 & 6.86 & 2M & ZS04, T08 & B07, MH09\ HIP30030 & 06 19 08.1 & $-03$ 26 20 & G0 & 1.03 & 49.2 $\pm$ 2.0 & HIP & 6.55 & 6.59 & 6.85 & 2M & ZS04 & B07, MH09\ \ \ \ TWA-21 & 10 13 14.8 & $-52$ 30 54 & K3/4 & 0.63 & 48 $\pm$ 4 & MM05 & 7.19 & 7.35 & 7.87 & 2M & ZS04 & $\cdots$\ TWA-6 & 10 18 28.8 & $-31$ 50 02 & K7 & 0.43 & 55 $\pm$ 5 & MM05 & 8.04 & 8.18 & 8.87 & 2M & ZS04 & W99, MZ04, L05,\ & & & & & & & & & & & & M05\ TWA-7 & 10 42 30.3 & $-33$ 40 17 & M2 & 0.35 & 29 $\pm$ 2 & MM05 & 6.9 & 7.13 & 7.79 & 2M & ZS04, T08 & W99, MZ04, L05\ TW Hya & 11 01 51.9 & $-34$ 42 17 & K6 & 0.64 & 53.7 $\pm$ 6.2 & HIP & 7.30 & 7.56 & 8.22 & 2M & ZS04, T08 & W99, MZ04, L05\ TWA-3 & 11 10 28.0 & $-37$ 31 53 & M4 & 0.37 & 36 $\pm$ 4 & MM05 & 7.28 & 7.60 & $\cdots$ & W00 & ZS04, T08 & W99, C10\ TWA-14 & 11 13 26.5 & $-45$ 23 43 & M0 & 0.57 & 86 $\pm$ 8 & MM05 & 8.50 & 8.73 & 9.42 & 2M & ZS04 & B07, MZ04, C10\ TWA-13B & 11 21 17.2 & $-34$ 46 45 & M1 & 0.63 & 57 $\pm$ 10 & MM05 & 7.49 & 7.73 & 8.43 & 2M & ZS04, T08 & $\cdots$\ TWA-13A & 11 21 17.5 & $-34$ 46 50 & M1 & 0.61 & 57 $\pm$ 10 & MM05 & 7.46 & 
7.68 & 8.43 & 2M & ZS04, T08 & $\cdots$\ TWA-8B & 11 32 41.4 & $-26$ 52 08 & M5 & 0.14 & 42 $\pm$ 5 & MM05 & 9.01 & 9.36 & $\cdots$ & W00 & ZS04, T08 & W99, M05, L05\ TWA-8A & 11 32 41.5 & $-26$ 51 55 & M3 & 0.40 & 41 $\pm$ 4 & MM05 & 7.44 & 7.72 & $\cdots$ & W00 & ZS04, T08 & W99, M05\ TWA-9 & 11 48 24.2 & $-37$ 28 49 & K5 & 0.38 & 46.8 $\pm$ 5.4 & HIP & 7.85 & 8.03 & 8.68 & 2M & ZS04, T08 & W99, M05\ TWA-23 & 12 07 27.4 & $-32$ 47 00 & M1 & 0.58 & 61 $\pm$ 5 & MM05 & 7.75 & 8.03 & 8.62 & 2M & ZS04 & C10\ TWA-25 & 12 15 30.8 & $-39$ 48 42 & M1 & 0.68 & 55 $\pm$ 4 & MM05 & 7.31 & 7.50 & 8.17 & 2M & ZS04, T08 & B07, C10\ TWA-10 & 12 35 04.3 & $-41$ 36 39 & M2 & 0.39 & 57 $\pm$ 9 & MM05 & 8.19 & 8.48 & 9.12 & 2M & ZS04, T08 & W99, MZ04, L05\ TWA-11B & 12 36 00.6 & $-39$ 52 16 & M2 & 0.52 & 72.8 $\pm$ 1.7 & HIP & 8.35 & 8.53 & 9.15 & 2M & ZS04, T08 & W99\ TWA-11A & 12 36 01.0 & $-39$ 52 10 & A0 & 2.31 & 72.8 $\pm$ 1.7 & HIP & 5.77 & 5.79 & 5.78 & 2M & ZS04, T08 & W99, C10 [ccccc]{} PHARO &CH4s & 1.57 & 0.10\ & H & 1.64 & 0.30\ & Ks & 2.15 & 0.31\ NIRC2 &Jcont & 1.21 & 0.02\ & Hcont & 1.58 & 0.02\ & CH4s & 1.59 & 0.13\ & Kp & 2.12 & 0.35\ & Kcont & 2.27 & 0.03\ & CO & 2.29 & 0.03\ CONICA & L${}^\prime$ & 3.80 & 0.62\ [lccrcrc]{}\ \ PW And & PHARO & CH4s & 18H & 3.9 & 2007 Jun 01 & 54252.5\ HIP3589 & PHARO & CH4s & 9H & 6.5 & 2007 Nov 27 & 54431.1\ HIP5191 & NIRC2 & Hcont & 18H & 4.0 & 2007 Jun 06 & 54257.6\ HIP6276 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 23 & 54427.2\ HIP12635 & NIRC2 & Kcont & 18H & 800 & 2008 Dec 23 & 54823.2\ HD16760 & NIRC2 & Kcont & 18H & 2.7 & 2008 Dec 23 & 54823.2\ & NIRC2 & Kcont & 18H & 13 & 2009 Aug 06 & 55049.6\ & NIRC2 & Jcont & 18H & 2.7 & 2009 Nov 20 & 55155.3\ & NIRC2 & Hcont & 18H & 2.7 & 2009 Nov 20 & 55155.3\ & NIRC2 & Kcont & 18H & 2.7 & 2009 Nov 20 & 55155.3\ & NIRC2 & CO & 18H & 2.7 & 2009 Nov 21 & 55156.2\ HIP13027 & PHARO & Ks & 9H & 19 & 2007 Nov 27 & 54431.2\ HD19668 & PHARO & Ks & 9H & 5.8 & 2007 Nov 29 & 54433.3\ HIP14807 & 
PHARO & Ks & 9H & 5.8 & 2007 Nov 29 & 54433.1\ & NIRC2 & CO & 18H & 5.0 & 2009 Nov 21 & 55156.2\ HIP14809 & NIRC2 & CO & 18H & 2.7 & 2009 Nov 21 & 55156.2\ HIP16563A & PHARO & Ks & 9H & 5.8 & 2007 Nov 27 & 54431.3\ HIP16563B & NIRC2 & CO & 18H & 5.3 & 2009 Nov 21 & 55156.2\ HIP17695 & PHARO & Ks & 9H & 12 & 2007 Nov 29 & 54433.3\ HIP18859 & PHARO & Ks & 9H & 8.7 & 2007 Nov 27 & 54431.2\ HIP19183 & PHARO & Ks & 9H & 8.7 & 2007 Nov 27 & 54431.3\ BD+20 1790 & PHARO & Ks & 9H & 5.8 & 2007 Nov 27 & 54431.5\ HD89744 & PHARO & CH4s & 18H & 8.6 & 2007 Apr 05 & 54195.3\ HIP51317 & PHARO & Ks & 9H & 13 & 2007 Apr 06 & 54196.3\ HD92945 & NIRC2 & Kcont & 18H & 1.3 & 2007 Jun 06 & 54257.2\ GJ466 & PHARO & Ks & 9H & 2.2 & 2008 Jun 19 & 54636.2\ HD113449 & PHARO & CH4s & 18H & 13 & 2007 Apr 06 & 54196.3\ & PHARO & Ks & 9H & 7.5 & 2007 Apr 07 & 54197.4\ & PHARO & CH4s & 18H & 3.9 & 2007 Jun 01 & 54252.1\ & NIRC2 & Hcont & 18H & 1.3 & 2008 Jun 17 & 54634.2\ & NIRC2 & CH4s & 9H & 1.7 & 2008 Dec 21 & 54821.7\ & NIRC2 & Hcont & 18H & 2.7 & 2010 Apr 25 & 55311.4\ EK Dra & PHARO & Ks & 9H & 4.3 & 2008 Jun 20 & 54637.2\ HIP81084 & PHARO & CH4s & 18H & 4.3 & 2007 May 30 & 54250.2\ HIP82688 & PHARO & CH4s & 18H & 4.3 & 2007 Jun 02 & 54253.4\ HD160934 & PHARO & H & 9H & 9.7 & 2008 Jun 23 & 54640.3\ & PHARO & Ks & 9H & 8.6 & 2008 Jun 23 & 54640.3\ & NIRC2 & Kcont & Clear & 0.2 & 2010 Apr 26 & 55312.6\ & NIRC2 & Jcont & 18H & 2.7 & 2011 Apr 23 & 55674.6\ & NIRC2 & Hcont & 18H & 2.7 & 2011 Apr 23 & 55674.6\ HIP106231 & PHARO & CH4s & 18H & 4.8 & 2007 May 31 & 54251.4\ HIP110526 & PHARO & CH4s & 9H & 1.9 & 2007 May 31 & 54251.4\ HIP113579 & NIRC2 & Hcont & 18H & 2.7 & 2007 Jun 05 & 54256.6\ HIP114066 & PHARO & CH4s & 18H & 3.9 & 2007 Jun 01 & 54252.4\ HIP115162 & NIRC2 & Kcont & 18H & 5.3 & 2009 Nov 21 & 55156.2\ HIP118008 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 23 & 54427.2\ \ \ \ HR9 & NIRC2 & Hcont & 18H & 2.7 & 2007 Jun 06 & 54257.6\ HIP10680 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 24 & 
54428.4\ HIP11437A & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 24 & 54428.4\ HIP11437B & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 24 & 54428.4\ HIP12545 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 24 & 54428.4\ 51 Eri & NIRC2 & Kcont & 18H & 5.3 & 2008 Dec 21 & 54821.4\ HIP25486 & PHARO & Ks & 9H & 12 & 2007 Nov 27 & 54431.4\ GJ803 & NIRC2 & Hcont & 18H & 4.0 & 2007 Jun 05 & 54256.5\ BD$-$17 6128 & PHARO & CH4s & 9H & 4.3 & 2007 May 30 & 54250.5\ HIP112312A & NIRC2 & Kp & 9H & 10 & 2008 Jun 17 & 54634.6\ HIP112312B & NIRC2 & Hcont & 18H & 2.7 & 2007 Jun 05 & 54256.6\ \ \ \ HD166 & PHARO & CH4s & 18H & 5.8 & 2007 May 31 & 54251.5\ HD10008 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 23 & 54427.3\ HD233153 & PHARO & Ks & 9H & 8.7 & 2007 Nov 27 & 54431.5\ HIP37288 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 24 & 54428.6\ HD70573 & PHARO & Ks & 9H & 13 & 2007 Nov 27 & 54431.5\ HIP53020 & NIRC2 & Kcont & 18H & 2.3 & 2007 Jun 06 & 54257.3\ HN Peg & PHARO & CH4s & 18H & 3.9 & 2007 Jun 01 & 54252.5\ \ \ \ HIP9141 & PHARO & Ks & 9H & 9.7 & 2007 Nov 29 & 54433.2\ HIP30030 & NIRC2 & Kcont & 18H & 5.3 & 2007 Nov 24 & 54428.6\ \ \ \ TWA-21 & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 06 & 54896.1\ TWA-6 & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 07 & 54897.1\ TWA-7 & CONICA & L${}^\prime$ & 7H & 40 & 2009 Mar 06 & 54896.1\ TW Hya & CONICA & L${}^\prime$ & 7H & 60 & 2009 Mar 07 & 54897.2\ TWA-3 & NIRC2 & Kp & 9H & 11 & 2008 Dec 22 & 54822.7\ TWA-14 & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 07 & 54897.1\ TWA-13B & NIRC2 & Kp & 9H & 5.3 & 2008 Dec 21 & 54821.6\ TWA-13A & CONICA & L${}^\prime$ & 7H & 40 & 2009 Mar 06 & 54896.2\ TWA-8B & NIRC2 & Kp & 9H & 5.3 & 2008 Dec 23 & 54823.7\ TWA-8A & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 05 & 54895.3\ TWA-9 & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 05 & 54895.3\ TWA-23 & CONICA & L${}^\prime$ & 7H & 25 & 2009 Mar 05 & 54895.3\ TWA-25 & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 06 & 54896.2\ TWA-10 & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 06 & 
54896.3\ TWA-11B & CONICA & L${}^\prime$ & 7H & 20 & 2009 Mar 07 & 54897.3\ TWA-11A & CONICA & L${}^\prime$ & 7H & 50 & 2009 Mar 07 & 54897.4\ [lclccccccc]{}\ HIP14807 & PHARO & 2007 Nov 29 &54433.1 & Ks & 2.15 & $63.22$ & $246.86$ & $10.15$ & $\cdots$\ & NIRC2 & 2009 Nov 21 &55156.2 & CO & 2.29 & $28.74 \pm 0.19$ & $89.74 \pm 0.29$ & $3.00 \pm 0.06$ & $0.52 \pm 0.09$\ \ \ HD16760 & NIRC2 & 2008 Dec 23 &54823.2 & Kcont & 2.27 & $26.11 \pm 2.59$ & $46.20 \pm 1.26$ & $13.48 \pm 3.15$ & $0.32 \pm 0.11$\ & NIRC2 & 2009 Aug 6 &55049.6 & Kcont & 2.27 & $26.78 \pm 0.90$ & $204.54 \pm 0.45$ & $13.11 \pm 1.00$ & $0.32 \pm 0.09$\ & NIRC2 & 2009 Nov 20 &55155.3 & Jcont & 1.21 & $28.13 \pm 1.93$ & $286.50 \pm 3.62$ & $31.04 \pm 5.92$ & $0.24 \pm 0.08$\ & NIRC2 & 2009 Nov 20 &55155.3 & Hcont & 1.58 & $26.06 \pm 1.75$ & $286.87 \pm 1.94$ & $20.53 \pm 1.88$ & $0.27 \pm 0.08$\ & NIRC2 & 2009 Nov 20 &55155.3 & Kcont & 2.27 & $39.07$ & $286.67$ & $26.58$ & $\cdots$\ & NIRC2 & 2009 Nov 21 &55156.2 & CO & 2.29 & $25.37 \pm 4.51$ & $290.25 \pm 1.95$ & $15.18 \pm 6.44$ & $0.29 \pm 0.15$\ \ \ HD113449 & PHARO & 2007 Apr 6 &54196.3 & CH4s & 1.57 & $35.62 \pm 0.51$ & $225.19 \pm 0.44$ & $4.27 \pm 0.25$ & $0.51 \pm 0.02$\ & PHARO & 2007 Apr 7 &54197.4 & Ks & 2.15 & $40.46$ & $223.30$ & $6.68$ & $\cdots$\ & PHARO & 2007 Jun 1 &54252.1 & CH4s& 1.57 & $28.46$ & $179.92$ & $13.73$ & $\cdots$\ & NIRC2 & 2008 Jun 17 &54634.2 & Hcont & 1.58 & $36.68 \pm 0.13$ & $222.91 \pm 0.21$ & $4.65 \pm 0.05$ & $0.50 \pm 0.02$\ & NIRC2 & 2008 Dec 21 &54821.7 & CH4s & 1.59 & $27.87 \pm 0.13$ & $250.14 \pm 0.18$ & $4.62 \pm 0.03$ & $0.51 \pm 0.03$\ & NIRC2 & 2010 Apr 25 &55311.4 & Hcont & 1.58 & $35.81 \pm 0.17$ & $202.38 \pm 0.23$ & $4.58 \pm 0.06$ & $0.51 \pm 0.02$\ \ \ HD160934 & PHARO & 2008 Jun 23 &54640.3 & H & 1.64 & $169.24 \pm 0.13$ & $273.35 \pm 0.05$ & $2.22\pm0.01$ & $0.54 \pm 0.03$\ & PHARO & 2008 Jun 23 &54640.3 & Ks & 2.15 & $169.79 \pm 0.25$ & $273.29 \pm 0.09$ & $2.11 \pm 0.02$ & $0.54 \pm 
0.04$\ & NIRC2 & 2010 Apr 26 &55312.6 & Kcont & 2.27 & $68.8 \pm 0.7$ & $290.0 \pm 0.6$ & $2.1 \pm 0.2$ & $0.54 \pm 0.04$\ & NIRC2 & 2011 Apr 23 &55674.6 & Jcont & 1.21 & $19.96 \pm 0.05$ & $18.44 \pm 0.12$ & $2.21 \pm 0.01$ & $0.54 \pm 0.03$\ & NIRC2 & 2011 Apr 23 &55674.6 & Hcont & 1.58 & $20.00 \pm 0.03$ & $18.42 \pm 0.09$ & $2.18 \pm 0.01$ & $0.54 \pm 0.03$\ [lclcccccc]{} HIP14807 & PHARO & 2007 Nov 29 & 54433.1 & Ks & $3.00 \pm 0.06$ & $3.00 \pm 0.06$ & $45.43 \pm 1.15$ & $248.08 \pm 2.30$\ HD16760 & NIRC2 & 2009 Nov 20 & 55155.3 & Kcont & $13.19 \pm 0.94$ & $14.05 \pm 0.92$ & $25.80 \pm 1.03$ & $287.99 \pm 1.89$\ HD113449 & PHARO & 2007 Jun 01 & 54252.1 & CH4s & $4.62 \pm 0.02$ & $4.62 \pm 0.02$ & $21.97 \pm 0.73$ & $179.96 \pm 2.84$ [lccc]{} $P$ (days) & $466.5 \pm 0.4$ & $216.9 \pm 0.2$ & $3764.0 \pm 12.4$\ $T_0$ (MJD) & $53336.5 \pm 3$ & $53410.5 \pm 1$ & $52389.5 \pm 64$\ $e$ & $0.084 \pm 0.003$ & $0.300 \pm 0.005$ & $0.636 \pm 0.020$\ $i$ (deg) & $2.6 \pm 0.5$ & $57.5 \pm 1.5$ & $82.3 \pm 0.8$\ $a$ (mas) & $25.5 \pm 2.8$ & $33.7 \pm 0.4$ & $152.5 \pm 4.7$\ $a$ (AU) & $1.16 \pm 0.18$ & $0.73 \pm 0.02$ & $5.05 \pm 0.37$\ $\Omega$ (deg) & $86.9 \pm 1.1$ & $201.8 \pm 1.6$ & $266.7 \pm 0.6$\ $\omega$ (deg) & $243 \pm 2$ & $114.5 \pm 0.5$ & $216.0 \pm 3.1$\ $M_{\rm{total}}$ ([$M_\odot$]{}) & $0.96 \pm 0.44$ & $1.10 \pm 0.09$ & $1.21 \pm 0.27$\ [lcccc]{} 1998 Jun 30 & 50994 & $155 \pm 1$ & $275.5 \pm 0.2$ & [@2007AA...463..707H]\ 2005 Apr 18 & 53478.9 & $213 \pm 2$ & $268.5 \pm 0.7$ & [@2007ApJ...670.1367L]\ 2006 Jul 8 & 53924 & $215 \pm 2$ & $270.9 \pm 0.3$ & [@2007AA...463..707H]\ 2006 Sep 17 & 53995.2 & $218 \pm 2$ & $271.3 \pm 0.7$ & [@2007ApJ...670.1367L]\ 2008 Jun 23 & 54640.3 & $169.4 \pm 0.3$ & $273.3 \pm 0.1$ & This study\ 2010 Apr 26 & 55312.6 & $68.8 \pm 0.7$ & $290.0 \pm 0.6$ & This study\ 2011 Apr 23 & 55674.6 & $20.0 \pm 0.1$ & $18.43 \pm 0.1$ & This study\ [cccc]{} 28.2 & 0.603$\pm$0.042 & 5.29$\pm$0.02 & $\ga$100\ 30.2 & 0.526$\pm$0.037 & 
5.44$\pm$0.02 & 100$^{+100}_{-50} $\ 32.3 & 0.463$\pm$0.032 & 5.59$\pm$0.02 & 55$\pm$10\ [lcccccccccccccc]{}\ \ PW And & PHARO & CH4s && 2.27 & 4.52 & 4.81 & 4.74 & 4.73 && 381 & 99 & 91 & 95 & 96\ HIP3589 & PHARO & CH4s && 3.23 & 5.31 & 5.60 & 5.54 & 5.57 && 443 & 125 & 105 & 109 & 107\ HIP5191 & NIRC2 & Hcont & & 4.56 & 4.84 & 4.79 & 4.73 & 4.23 & & 111 & 103 & 96 & 100 & 136\ HIP6276 & NIRC2 & Kcont & & 4.07 & 5.33 & 5.25 & 5.22 & 5.21 & & 141 & 72 & 75 & 76 & 77\ HIP12635 & NIRC2 & Kcont & & 3.85 & 4.93 & 4.88 & 4.81 & 4.83 & & 127 & 73 & 75 & 78 & 77\ HD16760 & NIRC2 & Kcont & & 5.16 & 6.20 & 6.17 & 6.13 & 6.13 & & 85 & 49 & 50 & 50 & 50\ HIP13027 & PHARO & Ks & & 1.53 & 4.38 & 5.30 & 5.42 & 5.39 & & 606 & 155 & 96 & 89 & 91\ HD19668 & PHARO & Ks && 0.960 & 4.05 & 4.99 & 5.09 & 5.03 & & 609 & 146 & 89 & 84 & 87\ HIP14807 & NIRC2 & CO & & 3.98 & 5.20 & 5.12 & 5.09 & 5.08 & & 132 & 70 & 73 & 74 & 74\ HIP14809 & NIRC2 & CO & & 3.84 & 4.95 & 4.91 & 4.84 & 4.80 && 233 & 114 & 118 & 123 & 126\ HIP16563A & PHARO & Ks & & 2.02 & 4.71 & 5.66 & 5.72 & 5.67 & & 442 & 101 & 58 & 56 & 58\ HIP16563B & NIRC2 & CO & & 4.12 & 5.26 & 5.19 & 5.15 & 5.13 & & 60 & 34 & 36 & 36 & 37\ HIP17695 & PHARO & Ks & & 0.52 & 3.71 & 4.74 & 4.77 & 4.78 && 275 & 57 & 34 & 34 & 34\ HIP18859 & PHARO & Ks & & 2.10 & 4.76 & 5.73 & 5.80 & 5.77 & & 632 & 180 & 96 & 104 & 94\ HIP19183 & PHARO & Ks & & 0.32 & 3.49 & 4.65 & 4.68 & 4.69 & & 955 & 384 & 186 & 183 & 181\ BD+20 1790 & PHARO & Ks & & 0.89 & 4.02 & 5.04 & 5.08 & 5.09 & & 532 & 106 & 64 & 63 & 63\ HD89744 & PHARO & CH4s & & 3.63 & 5.69 & 5.98 & 5.94 & 5.95 && 627 & 269 & 223 & 228 & 227\ HIP51317 & PHARO & Ks && 2.52 & 5.07 & 6.01 & 6.08 & 6.07 & & 100 & 28 & 21 & 17 & 20\ HD92945 & NIRC2 & Kcont & & 4.09 & 5.22 & 5.15 & 5.13 & 5.10 && 127 & 71 & 73 & 74 & 75\ GJ466 & PHARO & Ks & & 1.66 & 4.44 & 5.39 & 5.45 & 5.46 && 398 & 85 & 51 & 49 & 49\ HD113449 & PHARO & CH4s & & 2.88 & 5.12 & 5.52 & 5.47 & 5.45 & & 325 & 90 & 71 & 73 & 74\ EK Dra & 
PHARO & Ks & & 0.80 & 3.98 & 4.94 & 4.98 & 4.98 & & 776 & 224 & 121 & 117 & 118\ HIP81084 & PHARO & CH4s & & 2.23 & 4.54 & 4.94 & 4.90 & 4.90 && 200 & 58 & 49 & 49 & 49\ HIP82688 & PHARO & CH4s & & 2.71 & 4.90 & 5.21 & 5.17 & 5.16 && 520 & 161 & 132 & 136 & 137\ HD160934 & PHARO & H & & 2.11 & 4.22 & 4.76 & 4.81 & 4.64 & & 309 & 96 & 70 & 68 & 75\ HIP106231 & PHARO & CH4s & & 3.18 & 5.26 & 5.55 & 5.51 & 5.46 && 190 & 61 & 53 & 54 & 55\ HIP110526 & PHARO & CH4s & & 0.67 & 3.76 & 3.85 & 3.85 & 3.72 & & 282 & 61 & 58 & 58 & 62\ HIP113579 & NIRC2 & Hcont & & 5.24 & 5.51 & 5.42 & 5.36 & 5.14 & & 98 & 90 & 96 & 99 & 104\ HIP114066 & PHARO & CH4s & & 1.82 & 4.19 & 4.51 & 4.50 & 4.41 & & 271 & 74 & 63 & 63 & 66\ HIP115162 & NIRC2 & Kcont & & 4.34 & 5.60 & 5.51 & 5.47 & 5.49 && 130 & 67 & 71 & 72 & 71\ HIP118008 & NIRC2 & Kcont & & 3.88 & 5.11 & 5.02 & 5.01 & 4.97 & & 129 & 67 & 71 & 72 & 73\ \ \ \ HR9 & NIRC2 & Hcont & & 4.25 & 4.54 & 4.49 & 4.41 & 4.04 & & 134 & 110 & 114 & 120 & 154\ HIP10680 & NIRC2 & Kcont & & 3.80 & 5.15 & 5.05 & 5.00 & 4.97 && 86 & 30 & 33 & 34 & 35\ HIP11437B & NIRC2 & Kcont & & 3.71 & 4.91 & 4.79 & 4.78 & 4.79 & & 23 & 16 & 16 & 16 & 16\ HIP11437A & NIRC2 & Kcont & & 4.27 & 5.42 & 5.30 & 5.30 & 5.31 & & 27 & 17 & 18 & 18 & 18\ HIP12545 & NIRC2 & Kcont & & 3.50 & 4.89 & 4.84 & 4.78 & 4.81 & & 68 & 21 & 22 & 22 & 22\ 51 Eri & NIRC2 & Kcont & & 5.19 & 6.29 & 6.21 & 6.21 & 6.18 && 73 & 24 & 26 & 26 & 27\ HIP25486 & PHARO & Ks & & 1.47 & 4.33 & 5.32 & 5.37 & 5.37 & & 489 & 75 & 35 & 33 & 33\ GJ803 & NIRC2 & Hcont & & 5.46 & 5.75 & 5.68 & 5.64 & 5.36 && 16 & 15 & 15 & 15 & 16\ BD$-$17 6128 & PHARO & CH4s & & 3.20 & 5.57 & 5.57 & 5.56 & 5.49 & & 90 & 20 & 20 & 20 & 20\ HIP112312A & NIRC2 & Kp & & 4.29 & 5.31 & 5.19 & 5.08 & 5.04 & & 18 & 13 & 13 & 14 & 14\ HIP112312B & NIRC2 & Hcont & & 3.34 & 3.70 & 3.57 & 3.53 & 3.25 & & 19 & 17 & 18 & 18 & 20\ \ \ \ HD166 & PHARO & CH4s & & 2.74 & 4.88 & 5.13 & 5.10 & 5.08 & & 396 & 126 & 110 & 112 & 113\ HD10008 & 
NIRC2 & Kcont & & 3.74 & 4.89 & 4.80 & 4.79 & 4.78 && 213 & 112 & 118 & 118 & 119\ HD233153 & PHARO & Ks & & 0.54 & 3.73 & 4.73 & 4.80 & 4.80 & & 430 & 94 & 60 & 58 & 58\ HIP37288 & NIRC2 & Kcont & & 4.74 & 5.86 & 5.82 & 5.78 & 5.75 && 66 & 40 & 41 & 42 & 43\ HD70573 & PHARO & Ks & & 0.43 & 3.60 & 4.58 & 4.70 & 4.66 && 688 & 227 & 132 & 123 & 126\ HIP53020 & NIRC2 & Kcont & & 4.16 & 5.34 & 5.26 & 5.21 & 5.21 & & 34 & 20 & 20 & 20 & 20\ HNPeg & PHARO & CH4s && 2.76 & 4.92 & 5.17 & 5.12 & 5.12 && 497 & 174 & 152 & 156 & 156\ \ \ \ HIP9141 & PHARO & Ks & & 0.66 & 3.86 & 4.84 & 4.90 & 4.88 && 699 & 113 & 60 & 57 & 58\ HIP30030 & NIRC2 & Kcont & & 4.64 & 5.77 & 5.71 & 5.67 & 5.63 && 83 & 41 & 43 & 44 & 44\ \ \ \ TWA-21 & CONICA & L${}^\prime$ && 1.43 & 4.41 & 5.38 & 5.30 & 5.15 & & 163 & 19 & 12 & 13 & 14\ TWA-6 & CONICA & L${}^\prime$ && 1.25 & 4.34 & 5.28 & 5.12 & 4.55 && 118 & 15 & 8 & 10 & 14\ TWA-7 & CONICA & L${}^\prime$ && 1.64 & 4.56 & 5.38 & 5.26 & 4.02 && 75 & 12 & 6 & 7 & 16\ TW Hya & CONICA & L${}^\prime$ & & 2.64 & 5.28 & 6.28 & 6.20 & 6.13 && 79 & 14 & 7 & 7 & 7\ TWA-3 & NIRC2 & Kp && 2.43 & 3.61 & 3.47 & 3.24 & 2.96 && 69 & 27 & 31 & 40 & 48\ TWA-14 & CONICA & L${}^\prime$ & & 0.120 & 3.38 & 4.34 & 4.23 & 3.86 & & 316 & 43 & 19 & 20 & 26\ TWA-13B & NIRC2 & Kp & & 3.22 & 4.27 & 4.19 & 4.03 & 4.03 & & 73 & 32 & 36 & 41 & 40\ TWA-13A & CONICA & L${}^\prime$ & & 1.70 & 4.57 & 5.52 & 5.47 & 5.32 & & 150 & 18 & 12 & 12 & 13\ TWA-8B & NIRC2 & Kp && 2.69 & 3.89 & 3.78 & 3.59 & 3.44 & & 19 & 13 & 14 & 15 & 15\ TWA-8A & CONICA & L${}^\prime$ && 1.78 & 4.65 & 5.60 & 5.50 & 5.22 & & 81 & 13 & 6 & 7 & 9\ TWA-9 & CONICA & L${}^\prime$ && 1.22 & 4.29 & 5.32 & 5.22 & 4.96 && 106 & 15 & 7 & 8 & 10\ TWA-23 & CONICA & L${}^\prime$ && 1.45 & 4.44 & 5.42 & 5.31 & 5.17 && 157 & 18 & 11 & 12 & 13\ TWA-25 & CONICA & L${}^\prime$ && 1.56 & 4.50 & 5.34 & 5.29 & 4.79 && 173 & 19 & 14 & 14 & 17\ TWA-10 & CONICA & L${}^\prime$ && 0.224 & 3.50 & 4.56 & 4.43 & 4.31 && 190 & 21 & 13 & 
14 & 15\ TWA-11B & CONICA & L${}^\prime$ & & 0.736 & 4.05 & 5.07 & 5.01 & 4.85 && 199 & 19 & 12 & 13 & 14\ TWA-11A & CONICA & L${}^\prime$ & & 3.91 & 6.40 & 7.35 & 7.23 & 7.17 && 160 & 21 & 14 & 15 & 15 [lcccccc]{} PWAnd & 500 & CH4s & 9 & $\leq 20$ & L07 & MZ04, L05,\ &&&&&& MH09, H10\ HIP5191 & 400 & H & 7 & 38 & C10 & $\cdots$\ HIP6276 & 500 & Ks & 7 & 30 & MH09 & $\cdots$\ HD19668 & 500 & Ks & 7 & 34 & MH09 & $\cdots$\ HIP16563A & 500 & H & 7 & 36 & B07 & $\cdots$\ HIP17695 & 500 & CH4s & 10 & $\leq 20$ & L07 & $\cdots$\ HIP18859 & 1000 & L${}^\prime$ & See footnote & $\leq 20$ & H10 & L07\ BD$+$20 1790 & 500 & Ks & 7 & 26 & MH09 & L05, H10\ HIP51317 & 420 & H & See footnote & $\leq 20$ & M05 & L07\ HD92945 & 300 & H & 7 & 36 & B07 & L07\ GJ466 & 1000 & K & 5 & 67 & MZ04 & $\cdots$\ HD113449 & 750 & CH4s & 11.5 & $\leq 20$ & L07 & H10\ EK Dra & 300 & H & 7 & 47 & B07 & MZ04, L07,\ &&&&&& MH09\ HIP81084 & 500 & CH4s & 9.5 & $\leq 20$ & L07 & $\cdots$\ HIP82688 & 500 & Ks & 7 & 48 & MH09 & $\cdots$\ HD160934 & 500 & CH4s & 9.5 & $\leq 20$ & L07 & MZ04, L05,\ &&&&&& H07\ HIP106231 & 500 & CH4s & 10.5 & $\leq 20$ & L07 & MZ04, L05\ HIP113579 & 400 & Ks & 7 & 40 & C10 & MH09\ HIP114066 & 500 & CH4s & 11.3 & $\leq 20$ & L07 & $\cdots$\ HIP118008 & 300 & H & 7 & 33 & B07 & C10\ HR9 & 400 & Kp & See footnote & $\leq 20$ & K03 & $\cdots$\ HIP12545 & 300 & H & 7 & $\leq 20$ & B07 & $\cdots$\ 51 Eri & 1000 & L${}^\prime$ & See footnote & $\leq 20$ & H10 & $\cdots$\ HIP25486 & 500 & Ks & 7 & $\leq 20$ & MH09 & K07, L05\ GJ803 & 200 & Ks & See footnote & $\leq 20$ & M05 & K03, MZ04,\ &&&&&& B07, L07\ BD$-$17 6128 & 290 & Ks & See footnote & $\leq 20$ & M05 & $\cdots$\ HIP112312A & 300 & H & 7 & $\leq 20$ & B07 & $\cdots$\ HD166 & 850 & L${}^\prime$ & See footnote & $\leq 20$ & H10 & L07\ HD10008 & 600 & CH4s & 10 & 27 & L07 & $\cdots$\ HD233153 & 1000 & Ks & 5 & 54 & C05 & $\cdots$\ HIP37288 & 400 & H & See footnote & $\leq 20$ & M05 & L07\ HD70573 & 500 & Ks & 7 & 40 & 
MH09 & L05\ HIP53020 & 500 & CH4s & 8.4 & $\leq 20$ & L07 & $\cdots$\ HN Peg & 750 & CH4s & 12.2 & $\leq 20$ & L07 & MZ04\ HIP9141 & 300 & H & 7 & 24 & B07 & MH09\ HIP30030 & 300 & H & 7 & 27 & B07 & MH09\ TWA-6 & 320 & Ks & See footnote & $\leq 20$ & M05 & MZ04, L05\ TWA-7 & 400 & H & 7 & $\leq 20$ & L05 & MZ04\ TW Hya & 400 & H & 7 & $\leq 20$ & L05 & MZ04\ TWA-3 & 200 & K & 4 & $\leq 20$ & W99 & C10\ TWA-14 & 300 & H & 7 & $\leq 20$ & B07 & MZ04, C10\ TWA-8B & 100 & Ks & See footnote & $\leq 20$ & M05 & L05\ TWA-8A & 140 & Ks & See footnote & $\leq 20$ & M05 & $\cdots$\ TWA-9 & 300 & Ks & See footnote & $\leq 20$ & M05 & $\cdots$\ TWA-23 & 400 & H & 7 & $\leq 20$ & C10 & $\cdots$\ TWA-25 & 300 & H & 7 & $\leq 20$ & B07 & C10\ TWA-10 & 400 & H & 7 & $\leq 20$ & L05 & MZ04\ TWA-11B & 200 & K & 4 & $\leq 20$ & W99 & $\cdots$\ TWA-11A & 400 & H & 7 & 24 & C10 & W99\ [rcccc]{}\ \ $M$ power law & 11 (18) & 14 (24) & 13 (22) & 14 (22)\ $q$ power law & 10 (17) & 13 (21) & 12 (20) & 13 (20)\ $q$ log-normal& 9 (14) & 11 (18) & 11 (16) & 12 (17)\ $M_2 = 20\,M_{\rm J}$ & 15 (26) & 21 (40) & 17 (32) & 19 (32)\ $M_2 = 40\,M_{\rm J}$ & 11 (19) & 14 (24) & 13 (22) & 14 (23)\ $M_2 = 60\,M_{\rm J}$ & 9 (15) & 11 (19) & 11 (17) & 12 (18)\ \ \ $M$ power law & 12 (17) & 15 (23) & 14 (21) & 15 (21)\ $q$ power law & 11 (16) & 14 (21) & 13 (19) & 14 (19)\ $q$ log-normal & 10 (13) & 12 (17) & 11 (16) & 12 (16)\ $M_2 = 20\,M_{\rm J}$ & 16 (25) & 23 (38) & 18 (31) & 20 (31)\ $M_2 = 40\,M_{\rm J}$ & 12 (18) & 15 (24) & 14 (21) & 15 (22)\ $M_2 = 60\,M_{\rm J}$ & 10 (14) & 12 (18) & 11 (17) & 13 (17)\ \ \ $M$ power law & 11 (19) & 13 (24) & 13 (22) & 14 (23)\ $q$ power law & 10 (17) & 12 (22) & 12 (20) & 13 (21)\ $q$ log-normal & 9 (14) & 11 (19) & 10 (17) & 12 (18)\ $M_2 = 20\,M_{\rm J}$ & 15 (27) & 20 (40) & 16 (32) & 18 (32)\ $M_2 = 40\,M_{\rm J}$ & 11 (20) & 14 (25) & 13 (23) & 14 (23)\ $M_2 = 60\,M_{\rm J}$ & 9 (15) & 11 (20) & 11 (18) & 12 (19)\ \
--- abstract: 'We use atomistic stochastic Landau-Lifshitz-Slonczewski simulations to study the interaction between large thermal fluctuations and spin transfer torques in the magnetic layers of spin valves. At temperatures near the Curie temperature $T_{\rm C}$, spin currents measurably change the size of the magnetization (i.e. there is a [*longitudinal*]{} spin transfer effect). The change in magnetization of the free magnetic layer in a spin valve modifies the temperature dependence of the applied field-applied current phase diagram for temperatures near $T_{\rm C}$. These atomistic simulations can be accurately described by a Landau-Lifshitz-Bloch + Slonczewski equation, which is a thermally averaged mean field theory. Both the simulation and the mean field theory show that a longitudinal spin transfer effect can be a substantial fraction of the magnetization close to $T_{\rm C}$.' author: - 'Paul M. Haney and M. D. Stiles' title: Magnetic dynamics with spin transfer torques near the Curie temperature --- Introduction ============ Spin transfer torque describes the interaction between the spin of itinerant, current-carrying electrons and the spins of the equilibrium electrons which comprise the magnetization of a ferromagnet. This torque results from the spin-dependent exchange-correlation electron-electron interaction, and leads to the mutual precession of equilibrium and non-equilibrium spins around the total spin. In spin valves with sufficiently high current density, spin transfer torque can excite a free ferromagnetic layer to irreversibly switch between two stable configurations (typically along an easy-axis, parallel or anti-parallel to an applied magnetic field), or to undergo microwave oscillations. Previous considerations of spin transfer torque mostly focus on the [*transverse*]{} response of the magnetization to spin currents [@slonczewski; @berger; @stiles; @brataas]. 
This is appropriate since the temperatures used in spin valve experiments are substantially below the Curie temperature $T_{\rm C}$ of the ferromagnets, so that longitudinal fluctuations can be ignored. Near $T_{\rm C}$, one expects an interplay between the large thermal fluctuations and the nonequilibrium spin transfer torque. Generally speaking, theories of critical phenomena in out-of-equilibrium systems have only recently been developed [@mitra; @feldman], and there remain many open questions on this topic. Even far from the Curie temperature, temperature plays an important role in quantitatively analyzing the dependence of the magnetic orientation on the applied field and applied current. The effect of finite temperature on spin dynamics in the presence of spin transfer torque has been modeled in the macrospin approximation (fixed magnetization length) by adding a Slonczewski torque to the Langevin equation describing the stochastic spin dynamics [@li; @xiao], and by solving the Fokker-Planck equation with the spin transfer torque term added to the deterministic dynamics [@visscher]. The Keldysh formalism provides a formal derivation of the stochastic equation of motion [@nunez] for the non-equilibrium (i.e., current-carrying) system for a single spin of fixed magnitude. These treatments successfully describe the thermal characteristics of nanomagnets under the action of spin torques, such as dwell times and other details of thermally activated switching. For materials like GaMnAs, experiments are done near $T_{\rm C}$, so that the [*size*]{} of the magnetization is substantially reduced from its zero temperature value, and undergoes sizeable fluctuations. In this case, the applicability of a macrospin model is not clear. For field-driven dynamics, there is theoretical work which accounts for longitudinal fluctuations near $T_{\rm C}$ [@garanin1].
This formal treatment culminates in the construction of the Landau-Lifshitz-Bloch equation (LLB), which is an extension of the familiar Landau-Lifshitz equation with an additional longitudinal degree of freedom. In this work, we consider temperatures near the Curie temperature and include both longitudinal fluctuations of the magnetization and the influence of spin transfer torque. There are a number of issues that complicate magnetic dynamics near $T_{\rm C}$, including the temperature dependence of more basic magnetic properties such as magnetic damping and magneto-crystalline anisotropy, as well as the temperature dependence of the spin transfer torque itself. We use an atomistic approach for the stochastic dynamics of a local moment ferromagnet with the inclusion of spin transfer torque. Such a model is more appropriate for systems like the dilute magnetic semiconductor GaMnAs. Our use of simple approximations for the temperature dependence of the magnetic anisotropy, demagnetization field, and damping allows us to focus on the interplay between thermal fluctuations and spin transfer torque. We find that within this model, spin currents can change the [*size*]{} of the magnetization. We give an expression for this “spin-current longitudinal susceptibility”, and propose an experimental scheme to measure this effect. We construct a Landau-Lifshitz-Bloch + Slonczewski (LLBS) equation to describe both longitudinal fluctuations and spin transfer torques. Following Ref. , we verify the applicability of the LLBS equation by comparing its results to the atomistic results. We then analyze the LLBS equation to find the applied field-applied current phase diagram for different temperatures. We find that critical switching currents are reduced by the same mechanism exploited in heat assisted magnetic recording, namely the temperature-induced reduction in the magnetic anisotropy [@rottmayer].
We also find that regions of the phase diagram which have been experimentally unattainable become relevant at high temperatures. The dependence of critical currents on temperature in these regions can provide quantitative details about the temperature dependence of spin transfer torque. Method ====== To study the interplay between temperature and spin transfer torque, we consider a spin valve with a fixed layer magnetization in the $+\hat z$-direction with Curie temperature $T_{\rm C}^1$, and a free layer with a smaller Curie temperature $T_{\rm C}^2$ (see Fig. (\[fig:stack\])). This allows for a nearly temperature independent spin current flux incident on the free layer. We make the approximation that all of the incoming spin current is absorbed uniformly throughout the free layer magnetization. This approximation is based on two expectations. Substantial spatial and temporal inhomogeneities in the magnetization should induce rather irregular spatial patterns in the spin currents carried by propagating states. This will lead to large dephasing effects, so that the total spin current should rapidly decay away from the interface as in the conventional picture of spin transfer torques[@stiles]. In addition, in this temperature regime, and for thin layers ($\approx 3~{\rm nm}$), magnetic non-uniformities in the direction transverse to current flow should be more substantial than non-uniformities [*along*]{} the current flow resulting from a localized spin transfer torque. 0.2 cm ![Schematic of system, two ferromagnetic layers with different Curie temperatures. We suppose that $T_{\rm C}^1 > T_{\rm C}^2$.[]{data-label="fig:stack"}](stack.eps "fig:"){width="2.in"} 0.2 cm Stochastic Landau-Lifshitz with spin transfer --------------------------------------------- We adopt three approaches to model the system. The first is an atomistic lattice model of normalized spins $\bf S$, which results in a stochastic Landau-Lifshitz equation (SLL). 
We include nearest-neighbor Heisenberg coupling with exchange constant $J$, and an easy-axis anisotropy field of magnitude $H_{\rm an}$ in the $\hat z$-direction. To model the temperature dependence of the anisotropy, we make the ansatz that the magnitude of anisotropy at temperature $T$ is proportional to the reduced magnetization $m(T) = M_{\rm s}(T)/M_{\rm s}^0$: $$\begin{aligned} H_{\rm an}(T) = H_{\rm an}(T=0) m(T), \end{aligned}$$ so that the anisotropy field on spin $i$ is given by $H_{\rm an}^i(T) = H_{\rm an}(T=0) \overline{| {\bf S} |}S_i^z$ , where the bar indicates a spatial average. A hard-axis anisotropy field with magnitude $H_{\rm d}$ in the $\hat y$-direction is added to model the demagnetization field of the thin layer. We make an ansatz for the form of this field to make the numerics more tractable. We take the demagnetization field to be uniform on all spins and given by $H_{\rm d}^i(T) = -H_{\rm d}(T=0) \overline{ S^y } \hat y$. This form of the hard-axis field ensures that $H_{\rm d} \sim M_{\rm s}(T)$, and roughly captures the non-local nature of the field. Finally, we include an applied field $H_{\rm app}$ in the $\hat z$-direction. The Hamiltonian for spin $i$ is then: $$\begin{aligned} H_i &=& J \sum_{j \in {\rm n.n.}} {\bf S}_i \cdot {\bf S}_j + \mu_{\rm B} \mu_0 \left( \frac{H_{\rm an}(T=0) \overline {|{\bf S} |}}{2}\left(S^z_i\right)^2 \right.\nonumber \\&&~~~~~~ - H_{\rm d}(T=0) S^y_i \left(\overline{ S^y }\right) + H_{\rm app} S_i^z \Bigg),\label{eq:H}\end{aligned}$$ where the sum in the first term is over nearest neighbors, $\mu_{\rm B}$ is the Bohr magneton, and $\mu_0$ is the permeability of free space. To model nonzero temperatures, we add damping $\alpha$ and a stochastic field ${\bf H}_{\rm fl}$ to the equation of motion implied by Eq. 
(\[eq:H\]), with the standard statistical properties: $$\begin{aligned} \langle H_{\rm fl}^\alpha(t) \rangle &=& 0, \\ \langle H_{\rm fl}^\alpha(t) H_{\rm fl}^\beta(t') \rangle &=& \frac{\alpha}{1 + \alpha^2} \frac{2k_B T}{\gamma \rho } \delta_{\alpha\beta}\delta(t-t'),\end{aligned}$$ where $\alpha,\beta$ are the Cartesian components of the field, $k_B$ is the Boltzmann constant, $\rho$ is the magnetic moment on each lattice site, and $\gamma$ is the gyromagnetic ratio. We numerically integrate the equation of motion using a second-order Heun scheme [@palacios]. We add a Slonczewski-like spin transfer torque term to the equation of motion for the $i$th spin, which is finally given as: $$\begin{aligned} \dot{{\bf S}_i} &=& -\gamma\mu_0 \left[{\bf S}_i \times \left({\bf H}_{\rm eff} + {\bf H}_{\rm fl}\right) -\alpha \left({\bf S}_i \times {\bf S}_i \times {\bf H}_{\rm eff}\right) \right.\nonumber \\&& ~~~~~~\left.+ H_{I} \left({\bf S}_i \times {\bf S}_i \times \hat z\right)\right]. \label{eq:LLS}\end{aligned}$$ $H_{I}$ parameterizes the spin transfer torque: $H_{I}= -I p\mu_{\rm B} / \mu_0 e \gamma M_{\rm s}^0 \ell A$, where $I$ is the applied current, $p$ is the spin polarization of the current, $M_{\rm s}^0$ is the zero temperature magnetization, $\ell$ is the free layer thickness, $A$ is the transverse layer area, and $-|e|$ is the electron charge. The effective magnetic field is given by ${\bf H}_{\rm eff} = H_{\rm app} \hat z + H_{\rm an} \overline{| {\bf S} |}S^z_i \hat z - H_{\rm d} \left(\overline{ S^y}\right) \hat y + J/\left({\mu_{\rm B} \mu_0}\right) \sum_{j \in {\rm n.n.}} {\bf S}_j$. We use both a bulk geometry consisting of an $N=48^3$ periodic array of spins in 3 dimensions (simple cubic lattice), and a layer geometry with an array of 100 $\times$ 100 $\times$ 15 spins.
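The Heun predictor-corrector update can be sketched for a single toy spin (a macrospin, rather than the full $100\times100\times15$ lattice used here). The parameter values below are illustrative assumptions, and the damping sign follows the standard Landau-Lifshitz convention of relaxation toward the effective field; this is a minimal sketch, not the actual simulation code.

```python
import numpy as np

# Macrospin toy integration of a stochastic LLS-type equation with the
# second-order Heun scheme.  Dimensionless units (fields scaled by the
# exchange field); all parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
alpha, h_app, h_I, kT = 0.1, 0.1, 0.0, 0.0   # kT = 0: deterministic check
dt, n_steps = 0.025, 20000
zhat = np.array([0.0, 0.0, 1.0])

def rhs(S, h_fl):
    """Precession + damping + Slonczewski torque for one unit spin."""
    h_eff = h_app * zhat + h_fl
    return (-np.cross(S, h_eff)
            - alpha * np.cross(S, np.cross(S, h_eff))
            - h_I * np.cross(S, np.cross(S, zhat)))

S = np.array([np.sin(0.9), 0.0, np.cos(0.9)])   # start tilted from +z
for _ in range(n_steps):
    # one noise realization is shared by predictor and corrector stages
    h_fl = np.sqrt(2.0 * alpha * kT / dt) * rng.standard_normal(3)
    k1 = rhs(S, h_fl)                 # Euler predictor
    k2 = rhs(S + dt * k1, h_fl)       # corrector at the predicted point
    S = S + 0.5 * dt * (k1 + k2)
    S /= np.linalg.norm(S)            # the continuum equation conserves |S|
```

With the noise switched off, the spin should simply spiral down onto the applied field direction while its length stays fixed, which provides a quick sanity check of the integrator.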
We employ the bulk geometry in comparing the stochastic model behavior with predictions from mean field theory, and the layer geometry for studying the effect of spin current on magnetization size. Landau-Lifshitz-Bloch + Slonczewski equation -------------------------------------------- In the second approach, we add a Slonczewski torque term to the LLB equation. To derive the LLB equation, a probability distribution for the spin orientation is assumed, which is used to find the ensemble average of Eq. (\[eq:LLS\]). In addition, the nearest neighbor exchange field is replaced by its mean-field value. The details of the derivation follow closely those in Ref. , so we omit them here. The final LLB+Slonczewski equation takes the form: $$\begin{aligned} \dot{{\bf m}} &=& -\gamma \mu_0 \left[\left({\bf m} \times {\bf H}_{\rm eff} \right) + \frac{2 k_B T}{J_0 m^2} {\bf m} \cdot\left( \alpha{\bf H}_{\rm eff}+ H_{I} \hat z\right) {\bf m} \nonumber \right. \\&& \left.- \frac{1}{m^2}\left(1-\frac{k_B T}{J_0}\right) {\bf m} \times {\bf m} \times \left( \alpha{\bf H}_{\rm eff} + H_{I} \hat z\right) \right],\label{eq:LLB}\end{aligned}$$ with an effective field given by: $$\begin{aligned} {\bf H}_{\rm eff} &=& H_{\rm app} \hat z + H_{\rm an} m^2 m_z \hat z - H_{d} m_y \hat y \nonumber \\&& ~~~~-\frac{M_{\rm s}^0}{2\chi}\left(\frac{m^2}{m_e^2}-1\right) {\bf m},\end{aligned}$$ where $M_{\rm s}^0$ is the zero temperature saturation magnetization, ${\bf m}= {\bf M}/ M_{\rm s}^0 $ is the dimensionless magnetization with magnitude between zero and one, $m_e(T)$ is the zero field, zero current equilibrium magnetization: $m_e(T)=B(J_0 m_e/k_B T)$, and $B$ is the Brillouin function. $\chi(T)$ is the longitudinal susceptibility: $\chi(T)= M_{\rm s}^0 \left(\partial m_e(T)/\partial H_{\rm app}\right)$. $J_0$ is the zeroth Fourier component of the exchange interaction.
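The mean-field input $m_e(T)$ can be obtained by iterating the self-consistency condition to a fixed point. The sketch below assumes the spin-1/2 case for illustration, in which the Brillouin function reduces to $\tanh$ and $m_e$ solves $m = \tanh(m/T')$ with $T' = T/T_{\rm C}$:

```python
import math

def m_equilibrium(t_red, tol=1e-12, max_iter=20000):
    """Zero-field, zero-current mean-field magnetization m_e at T' = T/T_C.

    Illustrative spin-1/2 case: the Brillouin function reduces to tanh,
    and the self-consistency condition is m = tanh(m / T')."""
    if t_red >= 1.0:
        return 0.0                    # only the paramagnetic fixed point
    m = 1.0                           # iterate downward from saturation
    for _ in range(max_iter):
        m_new = math.tanh(m / t_red)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

m_half, m_hot = m_equilibrium(0.5), m_equilibrium(0.9)
```

The fixed-point iteration converges geometrically below $T_{\rm C}$ and reproduces the expected behavior: $m_e$ decreases monotonically with temperature and vanishes in the paramagnetic phase.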
The spin transfer torque is parameterized by $H_I$, as described in the previous section. The double cross product in Eq. (\[eq:LLB\]) is the familiar Landau-Lifshitz damping term, which describes the relaxation of the magnetization [*direction*]{} to the nearest energy minimum. The term longitudinal to ${\bf m}$ distinguishes the LLB equation from the Landau-Lifshitz equation. This longitudinal term describes the relaxation of the [*size*]{} of the magnetization to its steady state value, which is determined by the temperature, applied fields, and applied currents. The detailed dependence of the magnetic anisotropy on temperature is generally material specific. In our model, the anisotropy and demagnetization fields depend on temperature through their $m$ dependence, and vary as $m^3(T)$ and $m(T)$, respectively. The magnetic exchange $J_0$ can also depend on temperature. This dependence is stronger for ferromagnets with indirect exchange interactions (such as GaMnAs, where the magnetic interactions are mediated by hole carriers), and weaker for local moment systems with direct exchange (such as Fe). For simplicity we treat $J_0$ as temperature-independent. Finally we consider the standard Landau-Lifshitz equation with a reduced but fixed saturation magnetization. We find in Sec. (\[sec:LL\]) that it is possible to appropriately modify the damping coefficient in a standard Landau-Lifshitz approach so that the phase diagram it predicts agrees qualitatively with those predicted by the more complicated models. Results ======= Longitudinal spin current susceptibility {#sec:long} ---------------------------------------- In transition metal ferromagnets, longitudinal spin transfer, which is another way of saying spin accumulation, is typically quite small compared to the magnetization and has a negligible effect on the magnetization dynamics. 
However, for temperatures close to the Curie temperatures, the longitudinal spin transfer can be a sizeable fraction of the magnetization and can significantly affect the dynamics. Using the LLB+Slonczewski equation, it is straightforward to show that the change in the magnetization in the presence of spin current is $$\begin{aligned} \delta m(I,T) = \frac{H_{I}}{M_{\rm s}^0}\frac{\chi(T)}{\alpha}~. \label{eq:chiI}\end{aligned}$$ This longitudinal spin transfer effect is demonstrated in Fig. \[fig:chi\], which shows the longitudinal susceptibility to magnetic field and spin current for a full stochastic simulation with 100$\times$100$\times$15 spins. (In the figure, $\chi$ is rescaled: the magnetic field is scaled by the exchange field $J_0/\mu_B \mu_0$, and the magnetization is scaled by $M_{\rm s}^0$.) In the simulation, the spins’ polar angle is initialized to a uniform distribution between $\theta=0$ and $\theta=\theta_{\rm max}$, where $\theta_{\rm max}$ is chosen so that the initial spins’ average is equal to the equilibrium value. We allow the system to relax to steady state, and find the value of the magnetization and its fluctuations by finding the average and standard deviation over an interval of time (the appropriate time interval is temperature dependent). The fluctuations lead to the statistical uncertainty shown in Fig. (\[fig:chi\]). The spin current susceptibility $\chi_I$ is defined as $\chi_I = M_{\rm s}^0 \left(\partial m / \partial H_{I}\right)$. We find that $\chi$ and $\chi_I \alpha$ correspond very well, demonstrating that Eq. (\[eq:chiI\]) accurately describes the numerical stochastic model. The change in magnetization should be measurable. 
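Eq. (\[eq:chiI\]) can be checked against the longitudinal fixed point of the LLBS equation: with no applied field or anisotropy, stationarity of $\bf m$ along $\hat z$ requires $\alpha H_{{\rm eff},z} + H_I = 0$, where the effective field reduces to its longitudinal term. The sketch below solves this condition numerically in dimensionless units; the parameter values are illustrative, not taken from the paper.

```python
# Numerical check of the longitudinal shift: with h_app = h_an = 0, the
# fixed point satisfies  alpha * h_long(m) + j = 0,  with
#   h_long(m) = -(1/(2*chi)) * (m^2/m_e^2 - 1) * m,
# and the linearized solution is  delta_m = chi * j / alpha.
# Dimensionless illustrative parameters:
m_e, chi, alpha, j = 0.5, 0.05, 0.01, 1.0e-5

def residual(m):
    h_long = -(1.0 / (2.0 * chi)) * (m**2 / m_e**2 - 1.0) * m
    return alpha * h_long + j

# Bisection on a bracket just above m_e (residual changes sign there).
lo, hi = m_e, m_e + 10.0 * chi * j / alpha
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
delta_m_numeric = 0.5 * (lo + hi) - m_e
delta_m_linear = chi * j / alpha       # Eq. (chiI) in dimensionless form
```

For small currents the exact fixed-point shift and the linear-response expression agree to within a fraction of a percent, which is the statement of Eq. (\[eq:chiI\]).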
The fractional change in the magnetization compared to the zero-temperature saturation magnetization is $$\begin{aligned} \delta m = \left(\frac{p \mu_B }{e\gamma\mu_0\ell A \left(M_{\rm s}^0\right)^2 } \right)\left(\frac{\chi(T)}{\alpha}\right) I ~.\label{eq:chi_dim}\end{aligned}$$ For $T/T_{\rm C} = 0.95$, Fig. (\[fig:chi\]) gives $\left(\chi\cdot J_0/\mu_0\mu_B M_{\rm s}^0\right)=7$; taking an exchange field of $J_0/\mu_{\rm B}\mu_0=1.2\times 10^{8} ~{\rm A/m}$ (which corresponds to a $T_{\rm C}$ of 150 K in a cubic nearest neighbor Heisenberg model), $M_{\rm s}^0=10^6 ~{\rm A/m}$, $I/A=10^{11} {\rm A/m^2}$, $p=0.5$, $\alpha=0.01$, and $\ell=3 ~{\rm nm}$ then gives a change compared to the zero temperature value of $\delta m=$5 %. Since the magnetization is reduced to approximately 20 % of its zero temperature value at $T/T_{\rm C} = 0.95$, the fractional change in the magnetization is approximately 25 %. 0.2 cm ![The magnetic field and spin current susceptibility versus temperature for the stochastic Landau-Lifshitz equation in the layer geometry. The spin current susceptibility is multiplied by $\alpha$. The error bars indicate statistical uncertainty (one standard deviation). In the plot, $\chi$ is rescaled by $\mu_0 \mu_B M_{\rm s}^0 /J$. []{data-label="fig:chi"}](chiWIthError.eps "fig:"){width="3.25in"} 0.2 cm A notable aspect of this longitudinal spin transfer is that the size of the magnetization can either be increased or decreased according to the direction of current flow. For electron flow from fixed to free layer, the free layer moment [*increases*]{}, while electron flow in the opposite direction [*decreases*]{} the free layer moment. This contrasts with current-induced Joule heating, which always decreases the magnetization. This distinction can be exploited to probe the longitudinal spin transfer by using the experimental scheme shown in Fig. (\[fig:rh\]). We consider the case where $T_{\rm C}^1 \gg T>T_{\rm C}^2$.
We choose sign conventions such that a positive $H_{\rm app}$ aligns with the fixed layer, and a positive current represents electron flow from fixed to free layer. In the absence of a longitudinal spin transfer ($\chi_I$=0, black line in Fig. (\[fig:rh\])), the application of a magnetic field will partially order the free layer to align or anti-align with the fixed layer. This should cause the resistance $R$ of the device to change in some way, according to the giant magnetoresistance effect and magnetic order induced in the free layer (the detailed dependence of $R$ on $H_{\rm app}$ is not important here). If a positive current $I^0$ is applied, then the longitudinal spin transfer induces partial ordering of the free layer, so that $m\left(H_{\rm app}=0\right) = +\chi_I H_{I^0} /M_{\rm s}^0$. Then the curve of $m\left(H_{\rm app}\right)$, and therefore the curve $R\left(H_{\rm app}\right)$ is simply shifted by $+\chi_I H_{I^0}/\chi$ (the red dashed curve in Fig. (\[fig:rh\])). If a negative current density $-|I^0|$ is applied, then $m\left(H_{\rm app}=0\right) = -\chi_I H_{I^0}/M_{\rm s}^0$ and the $m\left(H_{\rm app}\right)$ and $R\left(H_{\rm app}\right)$ curves are shifted by $-\chi_I H_{I^0}/\chi$ (black dotted curve in Fig. (\[fig:rh\])). This shift represents a unique signature of longitudinal spin transfer. Using the same parameters as before, we estimate a total shift $\delta=2\chi_I H_{I^0} / \chi$ between $R\left(H_{\rm app}\right)$ for positive and negative current to be on the order of $8 \times 10^{5}~\rm A/m~(\approx 1~{\rm T})$. Eq. (\[eq:chi\_dim\]) indicates that materials with small exchange field (or small $T_{\rm C}$), and those that can support large current densities show the effect most strongly. This suggests that weak metallic ferromagnets such as ${\rm Gd}~(T_{\rm C} = 300 ~{\rm K})$, and Fe alloys such as ${\rm Fe S_2}$ and ${\rm FeBe_5}~(T_{\rm C} = 270 ~{\rm K})$ [@bozorth] may be good candidates for free layer material. 
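The size of the quoted shift can be checked by evaluating $\delta = 2\chi_I H_{I^0}/\chi = 2H_{I^0}/\alpha$ (using $\chi_I = \chi/\alpha$) with the SI estimates given in the text; the value of the gyromagnetic ratio, $\gamma = 1.76\times10^{11}~{\rm (rad/s)/T}$, is an assumption of this sketch.

```python
import math

# Order-of-magnitude check of the R(H_app) shift between +I0 and -I0:
#   delta = 2 * chi_I * H_I / chi = 2 * H_I / alpha,
# with H_I = (I/A) * p * mu_B / (mu_0 * e * gamma * Ms0 * ell).
# Parameter values are the estimates quoted in the text; gamma is assumed.
mu_B   = 9.274e-24        # J/T
mu_0   = 4.0e-7 * math.pi # T m / A
e      = 1.602e-19        # C
gamma  = 1.76e11          # (rad/s)/T, assumed value
p, alpha = 0.5, 0.01
Ms0    = 1.0e6            # A/m
ell    = 3.0e-9           # m, free-layer thickness
JoverA = 1.0e11           # A/m^2, current density I/A

H_I   = JoverA * p * mu_B / (mu_0 * e * gamma * Ms0 * ell)
delta = 2.0 * H_I / alpha # total field shift between the two R(H) curves
```

This reproduces the quoted order of magnitude: $\delta \approx 9\times10^{5}~{\rm A/m}$, i.e. $\mu_0\delta \approx 1~{\rm T}$.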
0.2 cm ![Experimental scheme for detecting longitudinal spin transfer: for $T_{\rm C}^1 \gg T>T_{\rm C}^2$, an applied field $H_{\rm app}$ changes the resistance $R$ via the magnetoresistance effect. The application of a positive and negative current density of magnitude $I^0$ shifts $m(H_{\rm app})$ in the positive and negative direction, respectively, via longitudinal spin transfer. The $R(H_{\rm app})$ curves therefore shift to the positive and negative directions.[]{data-label="fig:rh"}](RH_shift.eps "fig:"){width="3.25in"} 0.2 cm Landau-Lifshitz-Bloch-Slonczewski vs Stochastic Landau-Lifshitz {#sec:LLBSvsSLL} --------------------------------------------------------------- In this section, we compare the results obtained from the full 3-dimensional stochastic LL+S equation with those obtained from the mean-field LLBS equation. In our numerics, we rescale time $t$ as $\tau = (\gamma J/\mu_{\rm B})t$, which rescales the magnetic fields $H_{\rm eff}$ by the exchange field $H_{\rm ex} = J/\mu_0\mu_{\rm B}$. Dimensionless fields are denoted by lowercase: $h_{\rm app} = H_{\rm app} \mu_0\mu_{\rm B} / J$, etc. The dimensionless spin torque is denoted by $j_{\rm app}$, where $j_{\rm app} = H_{I}\mu_0\mu_{\rm B} / J$. We consider a current-induced magnetic excitation for the bulk lattice geometry at various temperatures. The average magnetization is initialized at a $45^\circ$ angle with respect to the $+\hat z$-direction (the individual spins’ initial direction is distributed uniformly within $3^\circ$ in the $\theta,~\phi$ direction about $\theta=45^\circ,~\phi=0^\circ$). The spin transfer torque is applied to excite the magnetization away from the $\hat z$-direction.
The parameters used are an applied field of $h_{\rm app}=0.0001$, a demagnetization field of $h_{\rm d} = 0.01$, a current of $j_{\rm app}=-0.0002$, and damping of $\alpha=0.1$ (the artificially high damping was chosen to allow the numerical simulation of the switching to be carried out in a reasonable time). The time step used for the numerical integration is $d\tau = 0.0002$. We vary the temperature $T$, and present results in terms of the scaled temperature $T' = T/T_{\rm C}$. As we increase temperature, we obtain trajectories of varying complexity. Fig. (\[fig:traj\]) compares the LLBS and several realizations of the stochastic Landau-Lifshitz equation. For this range of parameters, the magnetic dynamics evolves from steady oscillations to current induced switching as the temperature is increased. Generally, the level of correspondence between the two is qualitatively good, although it varies between different realizations of the stochastic dynamics. We can conclude from this data that the LLBS equation qualitatively captures the features of the full stochastic simulations. The trajectories for $t=0.08$ indicate that a realization of stochastic dynamics can exhibit the crossover from precession to stable switching, whereas at this temperature the trajectory obtained with the LLBS equation shows only oscillations. This illustrates an important distinction between the stochastic Landau-Lifshitz and LLBS models. The LLBS is an equation for the thermally averaged magnetization, derived using an assumed probability distribution function (in this case, a distribution function most appropriate for temperatures well above and below energy barriers). For this reason, the LLBS does not contain information about fluctuations, and in particular does not capture stochastic switching over the energy barrier. The fluctuations may be obtained by solving the Fokker-Planck equation, or by supplementing the LLBS equation with stochastic fields, as done in Ref. . 
0.2 cm ![Comparison of the ($\hat z$-component) magnetization time evolution with spin transfer torque for the atomistic stochastic simulation and the LLB+Slonczewski equation for various reduced temperatures $T' = T /T_{\rm C}$. The dashed line gives the LLB+Slonczewski trajectory, while the solid lines show various realizations of the stochastic trajectory. The dimensionless time $\tau$ is given by $\tau = \left(\gamma J / \mu_{\rm B} \right) t$.[]{data-label="fig:traj"}](trajFinal3.eps "fig:"){width="3.25in"} 0.2 cm Applied field-applied current phase diagram ------------------------------------------- Both high temperatures and the longitudinal degree of freedom change the applied field-applied current phase diagram of the free magnetic layer. Fig. (\[fig:phase\]) shows the generic topology for regions of stability for the parallel (“P”, or $+\hat z$-direction) and antiparallel (“AP”, or $-\hat z$-direction) fixed points. We focus on the stability of the AP configuration for positive applied fields (the dashed boundary in the upper-half-plane of Fig. (\[fig:phase\])). We first briefly describe the main qualitative features before providing a mathematical description. For applied fields between $h_{\rm an}m^3$ and $h_{\rm an}m^3 + h_{\rm d}m$, the stability boundary is a horizontal parabola, while for other values of applied field, the stability boundary is linear with slope $1/\alpha$. For applied fields with magnitude less than $h_{\rm an}m$, there is hysteresis in the current switching. For $T=0$, this phase diagram reduces to the known form found experimentally [@ralph]. As $T$ increases, the size of the hysteretic region (and the switching current) decreases. Also, the range of field with the parabolic boundary decreases, and the outer edge of the parabola gets pulled in closer to 0. For sufficiently high temperatures, this parabolic stability boundary should be experimentally accessible. A quantitative description of the phase diagram follows from Eq.
(\[eq:LLB\]). We determine the stability of fixed points using the standard method of linearizing Eq. (\[eq:LLB\]) about a fixed point and finding parameter-dependent eigenvalues $\lambda$. A positive real part of $\lambda$ indicates a loss of stability. This analysis leads to the following condition for instability of the antiparallel configuration (where it should be noted that $m$ depends on $j_{\rm app}$ through $m = m_e + \tilde{\chi}\left(h + \frac{j_{\rm app}}{\alpha}\right)$, and $\tilde{\chi}$ is the rescaled susceptibility, given by $\tilde{\chi}=\chi\left( J_0/\mu_0\mu_{\rm B} M_{\rm s}^0\right)$): $$\begin{aligned} {\rm Re}\left[j_{\rm app}^{\rm crit} + \alpha\left(h + h_{\rm an} m^3 + \frac{h_{\rm d}}{2} m \frac{1-T'}{1-3T'} - \frac{m}{2\tilde{\chi}}\left(1-\frac{m^2}{m_e^2} \right) \frac{2T'}{1-3T'}\right) - \frac{m\sqrt{- \left( h + h_{\rm an} m^3\right) \left( h + h_{\rm an} m^3 + h_{\rm d} m\right)}}{ 1-3T'}\right]=0. \label{eq:bst1} \nonumber\end{aligned}$$ This leads to a cubic equation for $j_{\rm app}^{\rm crit}$. Assuming $m_e \gg \tilde{\chi}\left(h + \frac{j_{\rm app}}{\alpha}\right)$, and expanding to 0th order in $\tilde{\chi}$ leads to an approximate, closed form for $j_{\rm app}^{\rm crit}$. Again we distinguish between different regimes of applied field. For $h \not\in [h_{\rm an}m^3,h_{\rm an}m^3+h_{\rm d}m]$ $$\begin{aligned} j_{\rm app}^{\rm crit} &=& \alpha \left(h + \frac{h_{\rm d}}{2}m_e + h_{\rm an} m_e^3 \frac{1-3T'}{1-T'} \right) \label{eq:hst1},\end{aligned}$$ where again $m_e$ is the equilibrium magnetization in the absence of applied field and applied current. Eq. (\[eq:hst1\]) shows that the slope of the boundary is temperature independent, and is given by $1/\alpha$ (the intrinsic damping $\alpha$ is assumed to be temperature independent). The temperature independence of the slope follows from the fact that the spin transfer torque increases like $1/m(T)$, but the effective damping rate also increases as $1/m(T)$. 
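Eq. (\[eq:hst1\]) is simple enough to evaluate directly. The sketch below assumes a spin-1/2 mean-field $m_e(T')$ purely for illustration, and exhibits both features just described: the boundary moves to smaller currents as $T \to T_{\rm C}$, while its slope with respect to applied field stays temperature independent.

```python
import math

def m_eq(t_red, tol=1e-12, max_iter=20000):
    """Illustrative spin-1/2 mean-field m_e(T'): solves m = tanh(m/T')."""
    if t_red >= 1.0:
        return 0.0
    m = 1.0
    for _ in range(max_iter):
        m_new = math.tanh(m / t_red)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def j_crit(h, t_red, alpha=0.1, h_d=0.01, h_an=0.0001):
    """Approximate AP instability current of Eq. (hst1), dimensionless."""
    me = m_eq(t_red)
    return alpha * (h + 0.5 * h_d * me
                    + h_an * me**3 * (1.0 - 3.0 * t_red) / (1.0 - t_red))

# The boundary moves to smaller currents as temperature increases ...
jc_cold, jc_hot = j_crit(1e-4, 0.5), j_crit(1e-4, 0.9)
# ... while dj/dh = alpha, independent of temperature.
slope_cold = (j_crit(2e-4, 0.5) - j_crit(1e-4, 0.5)) / 1e-4
slope_hot = (j_crit(2e-4, 0.9) - j_crit(1e-4, 0.9)) / 1e-4
```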
The intercepts of this boundary line are temperature dependent due to the temperature dependence of $m$. The contribution from the easy-axis anisotropy field has an additional temperature dependence, but the magnitude of this field is much smaller than the demagnetization field, so it does not play an important role. The critical current at zero field is reduced by $m(T)$ because of the reduction in the demagnetization field. This is important because the demagnetization field is usually larger than applied fields, and is therefore the primary impediment to current induced switching. Its reduction through increased temperature offers a route to reduced critical switching currents. For $h_{\rm an}m^3 < h <h_{\rm an}m^3+h_{\rm d}m$, a very large spin torque is required to stabilize the AP configuration. The values of current for which the AP configuration is stabilized are much higher than those attainable experimentally, so that for this range of fields the AP configuration is not seen [@BJZ]. The approximate critical current along the AP stability boundary is: $$\begin{aligned} j_{\rm app}^{\rm crit} &=& \frac{m_e\sqrt{h(h_{\rm d} m_e-h)}}{ 1-T'}.\end{aligned}$$ The outer boundary of the parabolic stability line is pulled inward at high temperature, and this reduction can also be traced back to the reduced magnetic anisotropy. For low temperatures, the application of spin transfer torques results in an elliptical precession mostly in the easy plane about the $-\hat z$ fixed point. To stabilize the AP configuration in this regime, the spin transfer torque must overcome the [*precessional*]{} torque (usually, the spin transfer torque must overcome the much weaker [*damping*]{} torque). Assuming $h=h_{\rm d}m/2$ for definiteness, the precessional torque decreases with $T$ as $h_{\rm d} m(T)$, while the spin transfer torque increases like $1/m$. This implies a value for the maximum reach of the parabola of $j_{\rm app}= m^2(T)h_{\rm d}/(2(1-T'))$.
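The quoted maximum follows because $\sqrt{h(h_{\rm d}m_e - h)}$ peaks at $h = h_{\rm d}m_e/2$, where it equals $h_{\rm d}m_e/2$. A quick numerical scan over the parabolic boundary confirms the closed form; the values of $m_e$, $h_{\rm d}$, and $T'$ below are illustrative.

```python
import math

# The parabolic AP-stability boundary j = m_e*sqrt(h*(h_d*m_e - h))/(1-T')
# peaks at h = h_d*m_e/2, giving j_max = m_e^2*h_d/(2*(1-T')).
# Illustrative dimensionless values:
m_e, h_d, t_red = 0.525, 0.01, 0.9

def j_parabola(h):
    return m_e * math.sqrt(h * (h_d * m_e - h)) / (1.0 - t_red)

# Scan h over [0, h_d*m_e] and compare the grid maximum to the closed form.
n = 100000
peak = max(j_parabola(i * h_d * m_e / n) for i in range(n + 1))
closed_form = m_e**2 * h_d / (2.0 * (1.0 - t_red))
```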
Plugging in typical values for material parameters (the same as used in Sec. (\[sec:long\])) leads to a critical current of $10^{12} {\rm A / m^2}$ for $T=0.95 T_{\rm C}$. This is an order of magnitude smaller than the zero temperature case. The behavior of this critical current versus temperature at a fixed applied field is shown in Fig. (\[fig:ic\]). (The solid line gives the LLBS result.) It should also be noted that the stochastic trajectories (shown in Fig. (\[fig:traj\])) indicate that thermal fluctuations can effectively drive the system out of the precessional state and into the static antiparallel configuration. 0.2 cm ![Schematic of parallel/anti-parallel stability versus applied field and applied current. The hysteretic box near the origin and the fully unstable regions (white parabolic shapes) contract in size with increasing temperature.[]{data-label="fig:phase"}](PhaseDiagram3.eps "fig:"){width="3.in"} 0.2 cm Comparison with Landau-Lifshitz-Slonczewski {#sec:LL} ------------------------------------------- The Landau-Lifshitz-Slonczewski (LLS) equation can be modified to emulate the LLBS equation. Based on the qualitative behavior of the LLBS equation, a suitable form for a temperature dependent LLS equation for a nanomagnet of reduced magnetization size $m$ and orientation $\hat n$ is: $$\begin{aligned} \dot{\hat n} = - \gamma\mu_0\left( \hat n \times {\bf H}_{\rm eff} - \frac{\alpha}{m} \hat n \times \hat n \times {\bf H}_{\rm eff} - \frac{H_{I}}{m} \hat n \times \hat n \times \hat z\right) \nonumber\end{aligned}$$ where ${\bf H}_{\rm eff} = {\bf H}_{\rm app} - m H_{\rm d} n_y \hat y + m^3 H_{\rm an} n_z \hat z$, and the temperature dependence is contained entirely in $m(T)$. Clearly the divergence of the damping at $T=T_{\rm C}$ is unphysical; however, a more detailed treatment of damping near $T_{\rm C}$ is beyond the scope of this paper. The differences between this LLS equation and the LLBS equation are quantitative (as opposed to qualitative) in nature.
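The modified right-hand side above can be sketched directly in dimensionless units. The code mirrors the $1/m$ scaling of the damping and spin-torque coefficients and the $m$ and $m^3$ scaling of the anisotropy fields; the parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the temperature-dependent LLS right-hand side described in
# the text: damping and spin-torque coefficients scale as 1/m(T), the
# hard-axis field as m(T), and the easy-axis field as m(T)^3.
# Dimensionless units; parameter values are illustrative.
def lls_rhs(n, m, h_app=0.001, h_d=0.01, h_an=0.0001,
            alpha=0.1, h_I=0.0002):
    n = np.asarray(n, dtype=float)
    zhat = np.array([0.0, 0.0, 1.0])
    h_eff = np.array([0.0,
                      -m * h_d * n[1],
                      h_app + m**3 * h_an * n[2]])
    return -(np.cross(n, h_eff)
             - (alpha / m) * np.cross(n, np.cross(n, h_eff))
             - (h_I / m) * np.cross(n, np.cross(n, zhat)))
```

Since every term is a cross product with $\hat n$, the right-hand side is always perpendicular to $\hat n$, so this equation evolves only the orientation and leaves the fixed length $m$ untouched; the spin-torque contribution doubles when $m$ is halved, as the $1/m$ scaling requires.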
One difference is in the dependence of the critical current on temperature for $h_{\rm an}m^3 < h <h_{\rm an}m^3+h_{\rm d}m$. Fig. (\[fig:ic\]) shows the prediction based on the LLS equation. 0.2 cm ![Critical current versus temperature for LLBS and LLS equations. The parameters are: $h_{\rm app} = -0.001$, $h_{\rm d} = 0.01, h_{\rm an} = 0.0001$. Recall that all fields are scaled by the exchange field.[]{data-label="fig:ic"}](Icnew2.eps "fig:"){width="3.in"} 0.2 cm The LLS equation neglects the longitudinal spin transfer and applied field susceptibility, which are responsible for dynamically changing the size of the magnetization (and therefore the size of the effective fields) during a switching event, or other magnetization dynamics. However, Fig. (\[fig:ic\]) shows qualitative agreement between the critical currents found in both LLBS and LLS models. This is indicative of the fact that for the applied field-applied current phase diagram, the spin-current and applied-field longitudinal susceptibilities play a role that is secondary to the more pronounced effects of temperature reduced anisotropies. Discussion ========== Spin transfer torques can affect the longitudinal fluctuations of a ferromagnet near its critical temperature. To consider these effects, we studied an atomistic, stochastic Landau-Lifshitz-Slonczewski simulation at high temperatures. We find that there is a longitudinal spin transfer effect, and estimate that at temperatures near $T_{\rm C}$, spin currents can measurably change the size of the magnetization. We then supplemented the Landau-Lifshitz-Bloch equation with a Slonczewski torque term, and verified that this model captures the qualitative features of the stochastic simulations.
We showed that the applied field-applied current phase diagram undergoes large changes in the presence of high temperatures, and that these changes may be useful for reducing critical switching currents and for studying the detailed behavior of the temperature dependence of the spin transfer torque. It should be emphasized that these results are predicated on a disordered local moment model of a ferromagnetic phase transition. This model leads to an effective damping that increases with temperature as $1/m(T)$, which effectively counteracts the similar $1/m(T)$ increase in the magnitude of spin transfer torque. Materials that undergo a Stoner transition should also have a $1/m(T)$ dependence for the spin transfer torque, but a different temperature dependence for damping. Such materials should therefore behave differently than the model considered here. The experimental system relevant for the effects we describe (shown schematically in Fig. (\[fig:stack\])) should be relatively straightforward to fabricate. Jiang [*et al.*]{} considered a similar system [@jiang], although that work dealt with other issues such as the ferrimagnet compensation point for magnetization and total angular momentum. By considering simpler ferromagnets with different Curie temperatures, the role of temperature may be more easily inferred. It is of course necessary to account for Joule heating in assessing the detailed temperature dependence of the spin transfer torque. However, recent experiments on domain wall motion illustrate the feasibility of compensating for this effect [@yamanouchi]. On the other hand, experiments conducted at fixed current with varying ambient temperatures and applied fields may offer a more straightforward route to observing the longitudinal spin transfer effect. 
Many experiments done with dilute magnetic semiconductors deal with domain wall motion, where thermal effects play an important role in even the qualitative aspects of the domain wall behavior[@yamanouchi]. There are additional challenges associated with extending this work from spin valves to continuous magnetic textures. Among these is the renormalization of the exchange interaction associated with the coarse graining of the magnetization, which becomes more important at higher temperatures [@grinstein]. In addition, the crucial role played by the demagnetization field in intrinsic domain wall pinning implies that the finite temperature treatment of the demagnetization field must also be handled more carefully. For these reasons the spin valve geometry may provide greater experimental control and admit a simpler theoretical description. [00]{} J. C. Slonczewski, J. Magn. Magn. Mat. [**62**]{}, 123 (1996). L. Berger, Phys. Rev. B [**54**]{}, 9353 (1996). M. D. Stiles and A. Zangwill, Phys. Rev. B [**66**]{}, 014407 (2002). A. Brataas, G. E. W. Bauer, and P. J. Kelly, Phys. Rep. [**427**]{}, 157 (2006). A. Mitra, S. Takei, Yong Baek Kim, and A. J. Millis, Phys. Rev. Lett. [**97**]{}, 236808 (2006). D. E. Feldman, Phys. Rev. Lett. [**95**]{}, 177201 (2005). Z. Li and S. Zhang, Phys. Rev. B [**69**]{}, 134416 (2004). J. Xiao, A. Zangwill, and M. D. Stiles, Phys. Rev. B [**72**]{}, 014446 (2005). D. M. Apalkov and P. B. Visscher, Phys. Rev. B [**72**]{}, 180405 (2005). A. S. Núñez and R. A. Duine, Phys. Rev. B [**77**]{}, 054401 (2008). D. A. Garanin, Phys. Rev. B [**55**]{}, 3050 (1997). O. Chubykalo-Fesenko, U. Nowak, R. W. Chantrell, and D. Garanin. Phys. Rev. B [**74**]{}, 094436 (2006). R. E. Rottmayer [*et al.*]{}, IEEE Trans. Magn. [**42**]{}, 2417 (2006). J. L. Garcia-Palacios and F. J. Lazaro, Phys. Rev. B [**58**]{}, 14937 (1998). R. M. Bozorth, [*Ferromagnetism*]{}, D. Van Nostrand Company, New York (1951). D. Garanin and O. Chubykalo-Fesenko, Phys. Rev. 
B [**70**]{}, 212409 (2004). S. I. Kiselev, J. C. Sankey, I. N. Krivorotov, N. C. Emley, R. J. Schoelkopf, R. A. Buhrman, and D. C. Ralph, Nature [**425**]{}, 380 (2003). Ya. B. Bazaliy, B. A. Jones, and S.-C. Zhang, Phys. Rev. B [**69**]{}, 094421 (2002). X. Jiang, L. Gao, J. Z. Sun, and S. S. P. Parkin, Phys. Rev. Lett. [**97**]{}, 217202 (2006). M. Yamanouchi, D. Chiba, F. Matsukura, T. Dietl, and H. Ohno, Phys. Rev. Lett. [**96**]{}, 106601 (2006). G. Grinstein and R. H. Koch, Phys. Rev. Lett. [**90**]{}, 207201 (2003).
--- abstract: 'We study uniform perturbations of crossed product C$^*$-algebras by amenable groups. Given a unital inclusion of C$^*$-algebras $C\subseteq D$ and sufficiently close separable intermediate C$^*$-subalgebras $A$, $B$ for this inclusion with a conditional expectation from $D$ onto $B$, if $A=C\rtimes G$ with $G$ discrete amenable, then $A$ and $B$ are isomorphic. Furthermore, if $C\subseteq D$ is irreducible, then $A=B$.' address: 'Department of Mathematical Sciences, Kyushu University, Motooka, Fukuoka, 819-0395, Japan.' author: - SHOJI INO title: 'Perturbations of crossed product C$^*$-algebras by amenable groups' --- introduction ============ Kadison and Kastler started the study of perturbation theory of operator algebras with [@KK] in 1972. They equipped the set of operator algebras on a fixed Hilbert space with a metric induced by Hausdorff distance between the unit balls. Examples of close operator algebras are obtained by conjugating by a unitary near to the identity. They conjectured sufficiently close operator algebras must be unitarily equivalent. For injective von Neumann algebras, this conjecture was settled in [@Chris2; @RT; @Johnson1; @Chris5] with earlier special cases [@Chris1; @P1]. Cameron et al. [@CCSSWW] and Chan [@Chan] gave examples of non-injective von Neumann algebras satisfying the Kadison-Kastler conjecture. In Christensen [@Chris3], this conjecture was solved positively for von Neumann subalgebras of a common finite von Neumann algebra. For C$^*$-algebras, the separable nuclear case was solved positively in Christensen et al. [@CSSWW], building on the earlier special cases in [@Chris5; @PR1; @PR2; @Khoshkam1]. In full generality, there are examples of arbitrarily close non-separable nuclear C$^*$-algebras which are not $*$-isomorphic in Choi and Christensen [@CC]. 
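The basic source of close algebras mentioned above, conjugation by a unitary near the identity, rests on the elementary estimate $\|a-uau^*\|\le 2\|u-I\|$ for $a$ in the unit ball, which yields $d(A,uAu^*)\le 2\|u-I\|$. A finite-dimensional sanity check (the algebra $A$ of diagonal $3\times 3$ matrices and the Hermitian generator $K$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
op_norm = lambda x: np.linalg.norm(x, 2)     # operator (spectral) norm

# A = diagonal 3x3 matrices; u = exp(0.05i K) is a unitary near the identity.
K = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
K = (K + K.conj().T) / 2                     # Hermitian generator
w, V = np.linalg.eigh(K)
u = V @ np.diag(np.exp(0.05j * w)) @ V.conj().T

bound = 2 * op_norm(u - np.eye(3))

# For each a in the unit ball of A, b = u a u* lies in the unit ball of
# uAu* and witnesses dist(a, (uAu*)_1) <= 2||u - I||; by symmetry this
# gives d(A, uAu*) <= 2||u - I||.
max_gap = 0.0
for _ in range(200):
    a = np.diag(rng.uniform(-1, 1, 3)).astype(complex)   # a in A_1
    max_gap = max(max_gap, op_norm(a - u @ a @ u.conj().T))
print(max_gap, bound)
```

The inequality itself is immediate from $\|a-uau^*\|=\|au-ua\|=\|a(u-I)-(u-I)a\|\le 2\|u-I\|\,\|a\|$, so the numerical check is only a guard against implementation slips.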
In [@Johnson2], Johnson gave examples of arbitrarily close pairs of separable nuclear C$^*$-algebras which are conjugate by unitaries, but for which the implementing unitaries cannot be chosen close to the identity. The author and Watatani [@IW] showed that for an inclusion of simple C$^*$-algebras $C\subseteq D$ with finite index in the sense of Watatani [@Watatani], sufficiently close intermediate C$^*$-subalgebras are unitarily equivalent. The implementing unitary can be chosen close to the identity and in the relative commutant algebra $C'\cap D$. Our estimates depend on the inclusion $C\subseteq D$, since we use the finite basis for $C\subseteq D$. Dickson obtained uniform estimates independent of all inclusions in [@Dickson]. To get this, Dickson showed that the row metric is equivalent to the Kadison-Kastler metric. The author [@I] showed that von Neumann subalgebras of a common von Neumann algebra with finite probabilistic index in the sense of Pimsner-Popa [@PP] satisfy the Kadison-Kastler conjecture. The implementing unitary can be chosen close to the identity. Compared with the setting of the author and Watatani [@IW], we do not assume that the von Neumann subalgebras have a common subalgebra with finite index. In this paper, we study perturbations of crossed product C$^*$-algebras by discrete amenable groups. We introduce crossed product-like inclusions of C$^*$-algebras in Definition \[amenable\]. For a unital inclusion of ${\mathrm{C}}^*$-algebras $A\subseteq B$, we say that $A\subseteq B$ is crossed product-like if there exists a discrete group $U$ in the normalizer $\mathcal{N}_B(A)$ of $A$ in $B$ such that $A$ and $U$ generate $B$. An example of crossed product-like inclusions is $A\subseteq A\rtimes G$, where $G$ is a discrete group. Now suppose that we have a unital inclusion $C\subseteq D$ of ${\mathrm{C}}^*$-algebras and two close separable intermediate ${\mathrm{C}}^*$-subalgebras $A,B$ for this inclusion. 
If there is a conditional expectation $E\colon D\to B$, then we get a map from $A$ into $B$ which is uniformly close to the identity map of $A$ by restricting $E$ to $A$. Since $C$ is a subalgebra of $A\cap B$, $E|_A\colon A\to B$ is a $C$-fixed map, that is, $E|_A(c)=c$ for any $c\in C$. Furthermore, if $C\subseteq A$ is crossed product-like by a discrete amenable group $U$ in $\mathcal{N}_A(C)$, then we can consider the point-norm averaging technique from [@CSSWW] by using the amenability of $U$. To apply this technique to $E|_A$ we need that $E|_A$ is a $C$-fixed map. Then in Lemma \[1.4\], we can obtain a $C$-fixed $(X,\varepsilon)$-approximate $*$-homomorphism from $A$ into $B$ for a finite subset $X$ in $A_1$ and $\varepsilon>0$. To show this, we modify [@CSSWW Lemma 3.2] to a $C$-fixed version. In Lemma \[1.5\], we obtain unitaries which conjugate these maps by modifying [@CSSWW Lemma 3.4] to a $C$-fixed version. The unitaries can be chosen in the relative commutant $C'\cap D$ of $C$ in $D$. Therefore, if $C\subseteq D$ is irreducible, then the unitaries are scalars. Then by these lemmas, we show our first main result, Theorem A, which appears as Theorem \[irreducible\]. Let $C\subseteq D$ be a unital irreducible inclusion of ${\mathrm{C}}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate ${\mathrm{C}}^*$-subalgebras for $C\subseteq D$ with a conditional expectation from $D$ onto $B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and ${d(A,B)}<140^{-1}$. Then $A = B$. In Theorem B, we show our second main result. By an intertwining argument which modifies [@CSSWW Lemma 4.1] to a $C$-fixed version, we show that $A$ is $*$-isomorphic to $B$. The implementing surjective $*$-isomorphism can be chosen to be $C$-fixed. Theorem B is provided in Section \[isomorphism\] as Theorem \[3.3\]. 
Let $C\subseteq D$ be a unital inclusion of ${\mathrm{C}}^*$-algebras and let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation from $D$ onto $B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<10^{-3}$. Then there exists a $C$-fixed surjective $*$-isomorphism $\alpha \colon A\to B$. In Section \[von\], we consider crossed product-like inclusions of von Neumann algebras. Given an inclusion $N\subseteq M$ of von Neumann algebras, we say that $N\subseteq M$ is crossed product-like if there is a discrete group $U$ in $\mathcal{N}_M(N)$ such that $M$ is generated by $N$ and $U$. For a crossed product-like inclusion $A\subseteq B$ of ${\mathrm{C}}^*$-algebras acting non-degenerately on $H$, the inclusion $\overline{A}^{{\mathrm{w}}}\subseteq \overline{B}^{{\mathrm{w}}}$ of von Neumann algebras is crossed product-like. In Theorem C, we consider the perturbations of crossed product von Neumann algebras by discrete amenable groups. This result is based on Christensen’s work [@Chris2] and appears as Theorem \[2.3.5\]. Let $N\subseteq M$ be an inclusion of von Neumann algebras in $\mathbb{B}(H)$ and let $A,B$ be intermediate von Neumann subalgebras for $N \subseteq M$ with a normal conditional expectation from $M$ onto $B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-2}$. Then there exists a unitary $u\in N' \cap (A\cup B)''$ such that $u A u^*= B$ and $\| u - I \| \le 2(8+\sqrt{2})\gamma$. By the theorem above, we can consider the perturbations of the second dual C$^*$-algebras of crossed product algebras by amenable groups in Corollary \[2.4\]. 
Given a unital inclusion $C\subseteq D$ of ${\mathrm{C}}^*$-algebras and sufficiently close intermediate ${\mathrm{C}}^*$-subalgebras $A,B$ for this inclusion, if $C\subseteq A$ is a crossed product-like inclusion by a discrete amenable group and there is a conditional expectation $E\colon D\to B$, then $A^{**}$ and $B^{**}$ are unitarily equivalent. To show this, we use a normal conditional expectation $E^{**}\colon D^{**}\to B^{**}$ and identify $A^{**},B^{**},C^{**}$ and $D^{**}$ with $\pi(A)'',\pi(B)'',\pi(C)''$ and $\pi(D)''$, respectively, where $\pi$ is the universal representation of $D$. In Proposition \[4.3\], we obtain a unitary that implements a $*$-isomorphism under the assumption $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{{\mathrm{w}}}$. To show Proposition \[4.3\] we prepare Lemmas \[4.1\] and \[4.2\] by using Lemmas \[1.5\] and \[1.8\] and Theorem \[3.3\]. Combining Proposition \[4.3\] with Corollary \[2.4\] gives Theorem D, which appears as Theorem \[main\]. To show this, we modify the arguments of Section 5 in Christensen et al. [@CSSWW]. Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate ${\mathrm{C}}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{{\mathrm{w}}}$. If ${d(A,B)}<10^{-7}$, then there exists a unitary $u\in C'\cap (A\cup B)''$ such that $u A u^* = B$. Preliminaries ============= Given a C$^*$-algebra $A$, we denote by $A_1$ and $A^u$ the unit ball of $A$ and the unitaries in $A$, respectively. We recall Kadison and Kastler’s metric on the set of all C$^*$-subalgebras of a C$^*$-algebra from [@KK]. Let $A$ and $B$ be C$^*$-subalgebras of a C$^*$-algebra $C$. 
Then we define a metric between $A$ and $B$ by $$d(A,B):= \max \left\{ \sup_{a\in A_1} \inf_{b\in B_1} \| a - b\| , \ \sup_{b\in B_1}\inf_{a\in A_1} \| a-b\| \right\}.$$ In the definition above, if $d(A,B)<\gamma$, then for any $x$ in either $A_1$ or $B_1$, there exists $y$ in the other unit ball such that $\| x - y\| <\gamma$. Let $A$ be a C$^*$-algebra in $\mathbb{B}(H)$ and $u$ be a unitary in $\mathbb{B}(H)$. Then $d(A, u A u^*)\le 2\| u-I_H\|$. Near inclusions of ${\mathrm{C}}^*$-algebras are defined by Christensen in [@Chris5]. Let $A$ and $B$ be C$^*$-subalgebras of a C$^*$-algebra $C$ and let $\gamma>0$. We write $A\subseteq_{\gamma}B$ if for any $x\in A_1$ there exists $y\in B$ such that $\| x-y\| \le \gamma$. If there is $\gamma'<\gamma$ with $A\subseteq_{\gamma'}B$, then we write $A\subset_{\gamma}B$. The next two propositions are folklore. The second can be found as [@CSSWW Proposition 2.10]. \[surjective\] Let $A$ and $B$ be ${\mathrm{C}}^*$-algebras with $A\subseteq B$. If $B\subset_1 A$, then $A=B$. Let $A$ and $B$ be ${\mathrm{C}}^*$-subalgebras of a ${\mathrm{C}}^*$-algebra $C$. If $B\subset_{1/2} A$ and $A$ is separable, then $B$ is separable. The following lemma appears in [@KK Lemma 5]. \[weak-closure\] Let $A$ and $B$ be ${\mathrm{C}}^*$-subalgebras acting on a Hilbert space $H$. Then $d(\overline{A}^{{\mathrm{w}}},\overline{B}^{{\mathrm{w}}})\le {d(A,B)}$. The lemma below shows some standard estimates. \[polar\] Let $A$ be a unital ${\mathrm{C}}^*$-algebra. 1. Given $x\in A$ with $\| I-x \|<1$, let $u\in A$ be the unitary in the polar decomposition $x=u|x|$. Then, $$\| I-u\| \le \sqrt{2}\| I-x \|.$$ 2. Let $p\in A$ be a projection and $a\in A$ a self-adjoint operator. Suppose that $\delta:=\| a-p\| <1/2$. Then $q:=\chi_{[1-\delta,1+\delta]}(a)$ is a projection in $C^*(a,I)$ satisfying $$\| q-p\|\le 2\| a-p\| <1.$$ 3. Let $p,q\in A$ be projections with $\| p-q\| <1. 
Then there exists a unitary $w\in A$ such that $$w p w^* =q \ \ \text{and} \ \ \| I-w \| \le \sqrt{2}\| p-q\|.$$ In this paper, we also measure distances between maps restricted to finite sets. The following notions are introduced in [@CSSWW]. Let $A$ and $B$ be C$^*$-algebras and let $\phi_1,\phi_2\colon A\to B$ be maps. Given a subset $X\subseteq A$ and $\varepsilon>0$, write $\phi_1\approx_{X,\varepsilon}\phi_2$ if $$\|\phi_1(x)-\phi_2(x)\|\le \varepsilon, \ \ x\in X.$$ Let $A$ and $B$ be C$^*$-algebras, $X$ a subset of $A$ and $\varepsilon>0$. Given a completely positive contractive map (cpc map) $\phi\colon A\to B$, we call $\phi$ an ($X,\varepsilon$)-[*approximate*]{} $*$-[*homomorphism*]{} if it satisfies $$\| \phi (x) \phi(x^*)-\phi(xx^*)\| \le \varepsilon, \ \ x\in X\cup X^*.$$ By the following proposition, which can be found as [@Paulsen Lemma 7.11], it suffices to consider pairs of the form $(x,x^*)$ in the previous definition. \[1/2\] Let $A$ and $B$ be $\mathrm{C}^*$-algebras and $\phi\colon A\to B$ a cpc map. Then for $x,y\in A$, $$\| \phi(x y) -\phi(x) \phi(y) \| \le \| \phi(x x^*) -\phi(x) \phi(x^*) \|^{1/2} \|y \|.$$ Let $A$ and $B$ be C$^*$-algebras and let $C$ be a C$^*$-subalgebra of $A\cap B$. A map $\phi:A\to B$ is [*$C$-fixed*]{} if $\phi|_C={\mathrm{id}}_C$. Given a map $\phi\colon A\to B$ between ${\mathrm{C}}^*$-algebras and a ${\mathrm{C}}^*$-subalgebra $C$ of $A\cap B$, if $\phi$ is $C$-fixed, then $\phi$ is a $C$-bimodule map, that is, for $x,z\in C$ and $y\in A$, $$\phi(x y z)=x \phi(y) z.$$ The following lemma appears in [@Arveson p.332]. We will need it in Lemmas \[1.4\], \[1.5\] and \[1.8\]. \[Arveson\] Let $X\subseteq \mathbb{C}$ be a compact set and $\varepsilon,M>0$. 
Then given a continuous function $f\in C(X)$, there exists $\eta>0$ such that for any Hilbert space $H$, normal operator $s\in\mathbb{B}(H)$ with $\mathrm{sp}(s)\subseteq X$ and $a\in \mathbb{B}(H)$ with $\| a \| \le M$, the inequality $\| s a - a s \| <\eta$ implies that $\| f(s)a - a f(s) \| <\varepsilon$. Let $p$ be a polynomial such that $\| f- p\|<\varepsilon/(4M)$, where this norm is the supremum norm of $C(X)$. Let $p$ have the form $p(t)= c_0+c_1 t +\cdots + c_n t^n$. Set $R:=\max\{1,\max_{t\in X}|t|\}$ and define $$\eta:= \frac{\varepsilon}{2} \left( \sum_{k=1}^n k |c_k| R^{k-1} \right)^{-1}.$$ Let a Hilbert space $H$ be given and let a normal operator $s\in \mathbb{B}(H)$ with $\mathrm{sp}(s)\subseteq X$ and $a\in \mathbb{B}(H)$ with $\| a\| \le M$ satisfy $\| s a - a s \|<\eta$. Note that $\| s\| \le R$. Let $D$ be the derivation $D(x)=x a - a x$. Since $D(s^{k+1})=s D(s^k) + D(s) s^k$, we have $\| D(s^{k+1})\| \le R \| D(s^k)\| + R^k \| D(s)\|$, and hence, by induction, $\| D(s^k) \| \le k R^{k-1} \| D(s) \|$. Therefore, $$\begin{aligned} \| f(s) a - a f(s)\| &\le \| f(s) a - p(s) a\| +\| D(p(s))\| + \| a p(s) - a f(s)\| \\ &\le 2 \| f-p\| \| a\| + \sum_{k=1}^n k |c_k| R^{k-1} \| D(s) \| <\varepsilon,\end{aligned}$$ and the lemma follows. The next lemma appears in the proof of [@CSSWW Lemma 3.7]. \[1.7\] Let $H$ be a Hilbert space. 
Then for any $\mu_0>0$, there exists $\mu>0$ with the following property$\colon$ given a finite set $S\subseteq H_1$ and a self-adjoint operator $h\in \mathbb{B}(H)_1$, there exists a finite set $S'\subseteq H_1$ such that for any self-adjoint operator $k\in \mathbb{B}(H)_1$, if $$\| (h-k) \xi' \| < \mu , \ \ \xi' \in S',$$ then $$\| ( e^{i\pi h} - e^{i\pi k}) \xi \| <\mu_0 \ \ and \ \ \| ( e^{i\pi h} - e^{i\pi k})^* \xi \| <\mu_0, \ \ \xi\in S.$$ There exists a polynomial $p(t)=\sum_{j=0}^r \lambda_j t^j$ such that $$\begin{aligned} \label{1.7.1} | p(t) - e^{i\pi t} | <\frac{\mu_0}{3}, \ \ -1\le t \le 1.\end{aligned}$$ Let $$\begin{aligned} \label{1.7.2} \mu:= \frac{\mu_0}{3 r \sum_{j=0}^r | \lambda_j | }.\end{aligned}$$ Given a finite set $S\subseteq H_1$ and a self-adjoint operator $h \in \mathbb{B}(H)_1$, define $$\begin{aligned} S':= \{ h^m \xi : \xi\in S, \, m \le r-1 \}.\end{aligned}$$ Let $k \in\mathbb{B}(H)_1$ be a self-adjoint operator with $$\begin{aligned} \label{1.7.3} \| (h-k) \xi'\| <\mu, \ \ \xi'\in S'.\end{aligned}$$ For any $\xi \in S$ and $0\le j\le r$, by (\[1.7.3\]), $$\begin{aligned} \| (h^j - k^j) \xi \| &\le \| (h^j - k h^{j-1}) \xi \| + \| ( k h^{j-1} - k^2 h^{j-2}) \xi \| + \dots + \| (k^{j-1} h - k^j) \xi \| \\ &\le \| (h - k) h^{j-1} \xi \| + \| k ( h-k) h^{j-2} \xi \| + \dots + \| k^{j-1} (h - k ) \xi\| \\ &\le \sum_{m=0}^{j-1} \| (h - k ) h^m \xi \| < r \mu.\end{aligned}$$ Thus, for $\xi\in S$, $$\begin{aligned} \| (p(h) - p(k)) \xi \| \le \sum_{j=0}^r |\lambda_j| \| (h^j- k^j) \xi \| \le \sum_{j=0}^r |\lambda_j| r\mu = \frac{\mu_0}{3},\end{aligned}$$ by (\[1.7.2\]). $$\begin{aligned} \| ( e^{i\pi h} - e^{i\pi k})\xi \| &\le \| (e^{i\pi h} -p(h)) \xi\| + \| (p(h) -p(k) )\xi\| + \| (p(k) - e^{i\pi k}) \xi \| \\ &\le \frac{\mu_0}{3} + \frac{\mu_0}{3} + \frac{\mu_0}{3} = \mu_0,\end{aligned}$$ by (\[1.7.1\]). Similarly, we have $\|(e^{i\pi h}-e^{i\pi k})^* \xi\| <\mu_0$. 
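A crude global counterpart of the estimate just proved is the Lipschitz bound $\|e^{i\pi h}-e^{i\pi k}\|\le\pi\|h-k\|$ for self-adjoint $h,k$, which follows from the Duhamel formula $e^{i\pi h}-e^{i\pi k}=\int_0^1 e^{is\pi h}\, i\pi(h-k)\, e^{i(1-s)\pi k}\,ds$. It can be checked numerically via the spectral calculus; this sketch (random $4\times 4$ matrices, purely an illustration) tests the global bound, not the lemma's pointwise vector estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_sa(n):
    x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    x = (x + x.conj().T) / 2
    return x / np.linalg.norm(x, 2)          # self-adjoint contraction

def expi(a):
    # e^{ia} for self-adjoint a, computed via the spectral theorem.
    w, V = np.linalg.eigh(a)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

worst = 0.0
for _ in range(50):
    h = rand_sa(4)
    k = h + 0.05 * rand_sa(4)                # a nearby self-adjoint operator
    lhs = np.linalg.norm(expi(np.pi * h) - expi(np.pi * k), 2)
    rhs = np.pi * np.linalg.norm(h - k, 2)
    worst = max(worst, lhs - rhs)
print(worst)
```

The proof of the lemma above needs the sharper, vector-by-vector control through the polynomial $p$ precisely because the averaging arguments later only give commutator bounds against finitely many vectors, not in norm.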
Crossed product-like inclusions and approximate averaging ========================================================= In this section, we introduce the crossed product-like inclusions of C$^*$-algebras. Moreover, we use the Følner condition of discrete amenable groups to modify the averaging results in [@CSSWW Section 3]. In Theorem \[irreducible\], we show our first main result: Theorem A. Given an inclusion $A\subseteq B$ of C$^*$-algebras, we denote by $\mathcal{N}_B(A)$ the normalizer of $A$ in $B$, that is, $\mathcal{N}_B(A)= \{ u \in B^u : u A u^*=A\}$. \[amenable\]Let $A\subseteq B$ be a unital inclusion of C$^*$-algebras. Then we say that the inclusion $A\subseteq B$ is [*crossed product-like*]{} if there exists a discrete group $U$ in $\mathcal{N}_B(A)$ such that $B= C^*(A,U)$. Since $U$ is in $\mathcal{N}_B(A)$, $B=C^*(A,U)$ is the norm closure of $\mathrm{span}\{ a u : a\in A, u\in U\}$. Throughout this paper, we only consider crossed product-like inclusions by discrete [*amenable*]{} groups. For any $x\in B$ and $\varepsilon>0$, there exist $\{ a_1,\dots,a_N \}\subseteq A_1$ and $\{u_1,\dots,u_N\}\subseteq U$ such that $\| x- \sum_{i=1}^N a_i u_i \| <\varepsilon$; that is, the coefficients can always be taken in the unit ball. In fact, given an approximation $\| x-\sum_{i=1}^N a_i u_i \| <\varepsilon$ with arbitrary coefficients $a_i\in A$, let $K$ be a positive integer with $K \ge \max\{ \| a_1\|,\dots,\|a_N\| \}$. Define $$a_{(i-1)K+j}':= \frac{1}{K} a_i, \ \ i=1,2,\dots,N, j=1,2,\dots,K.$$ Then $a_k'\in A_1$ and $$\sum_{i=1}^N a_i u_i = \sum_{j=1}^K \sum_{i=1}^N a_{(i-1)K+j}' u_i.$$ Let $G$ be a discrete amenable group acting on a ${\mathrm{C}}^*$-algebra $A$. Then the inclusion $A\subseteq A\rtimes G$ is crossed product-like by $\{\lambda_g\}_{g\in G}$. Let $(A,G,\alpha,\sigma)$ be a twisted ${\mathrm{C}}^*$-dynamical system and let $A\rtimes_{\alpha,r}^{\sigma} G$ be the reduced twisted crossed product. Then the inclusion $A\subseteq A\rtimes_{\alpha,r}^{\sigma} G$ is crossed product-like by $\{\lambda_{\sigma}(g)\}_{g\in G}$. Let $A\subseteq B$ be a crossed product-like inclusion of ${\mathrm{C}}^*$-algebras by $U$. 
Then for a unital ${\mathrm{C}}^*$-algebra $C$, $A\otimes C\subseteq B\otimes C$ is a crossed product-like inclusion by $U\otimes I$. If $\mathbb{C}I \subseteq A$ is a crossed product-like inclusion of ${\mathrm{C}}^*$-algebras by a discrete amenable group, then $A$ is strongly amenable. Hence, the Cuntz algebras $\mathcal{O}_n$ are nuclear but $\mathbb{C}I\subseteq \mathcal{O}_n$ is not crossed product-like by discrete amenable groups. In the next lemma, to get a point-norm version of [@Chris2 Lemma 3.3] we modify the argument of [@CSSWW Lemma 3.2] for crossed product-like inclusions by amenable groups. \[1.4\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A,B$ be intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group $U$ and $d(A,B)<\gamma<1/4$. Then for any finite subset $X\subseteq A_1$ and $\varepsilon>0$, there exists a unital $C$-fixed $(X,\varepsilon)$-approximate $*$-homomorphism $\phi\colon A\to B$ such that $$\| \phi -{\mathrm{id}}_A \| \le \left(8 \sqrt{2} + 2 \right)\gamma .$$ Let a finite set $X\subseteq A_1$ and $0<\varepsilon<1$ be given. Let $D$ act on a Hilbert space $H$. By Stinespring’s theorem, we can find a Hilbert space $K \supseteq H$ and a unital $*$-homomorphism $\pi\colon D\to \mathbb{B}(K)$ such that $$E(d)= P_H \pi(d) |_H, \ \ d\in D,$$ because $E\colon D\to B$ is a unital cpc map. Furthermore, $P_H\in \pi(B)'$, since $E$ is a $B$-fixed map. By Lemma \[Arveson\], there exists $\eta>0$ such that for any self-adjoint operator $t\in \mathbb{B}(K)$ with $\mathrm{sp}(t)\subseteq [0,2\gamma]\cup[1-2\gamma,1]$ and $x\in\mathbb{B}(K)$ with $\|x\|\le 2$, the inequality $\| x t-t x\| <\eta$ implies $\| x p - p x\| < \varepsilon^2/18$, where $p$ is the spectral projection of $t$ for $[1-2\gamma,1]$. 
There exist $\{ u_1,\dots,u_N\}\subseteq U$ and $\{c_i^{(x)} : 1\le i \le N, x\in X\}\subseteq C_1$ such that $$\left\| x - \sum_{i=1}^{N} c_i^{(x)} u_i \right\| <\frac{\varepsilon}{3}, \ \ x\in X.$$ Let $\tilde{x}:=\sum_{i=1}^N c_i^{(x)} u_i$ for $x\in X$. Then $\|\tilde{x} \| \le \|x\|+\varepsilon<2$. Since $U$ is amenable, we may choose a finite subset $F\subseteq U$ satisfying $$\frac{| u_i F\bigtriangleup F |}{|F|} <\frac{\eta}{N}, \ \ 1\le i \le N.$$ Define $$t := \frac{1}{|F|}\sum_{v\in F} \pi(v) P_H \pi(v^*) \in \mathbb{B}(K).$$ Since $U\subseteq \mathcal{N}_A(C)$ and $P_H\in \pi(C)'$, we have $t \in \pi(C)'$. For any $x\in X$, $$\begin{aligned} \pi(\tilde{x}) t &=\sum_{i=1}^{N} \pi( c_i^{(x)} u_i ) \frac{1}{|F|} \sum_{v\in F} \pi(v) P_H \pi(v^*) \\ &=\frac{1}{|F|} \sum_{i=1}^{N} \sum_{v\in F} \pi( c_i^{(x)}u_i v) P_H \pi (v^* ) \\ &=\frac{1}{|F|} \sum_{i=1}^{N} \sum_{\tilde{v}\in u_i F} \pi( c_i^{(x)}\tilde{v}) P_H \pi (\tilde{v}^*u_i )\end{aligned}$$ and $$\begin{aligned} t \pi(\tilde{x}) &= \frac{1}{|F|} \sum_{v\in F} \pi(v) P_H \pi(v^*) \sum_{i=1}^{N} \pi( c_i^{(x)} u_i ) \\ &= \frac{1}{|F|} \sum_{i=1}^{N} \sum_{v\in F} \pi( c_i^{(x)} v) P_H \pi ( v^* u_i ) .\end{aligned}$$ Therefore, $$\label{eta} \left\| \pi(\tilde{x}) t - t \pi(\tilde{x}) \right\| \le \sum_{i=1}^{N} \frac{| u_i F\bigtriangleup F|}{|F|} <\eta, \ \ x \in X.$$ For $v\in F$, there exists $v'\in B_1$ such that $\| v - v' \| <\gamma$. Since $P_H\in \pi(B)'$, we have $$\| \pi(v) P_H - P_H \pi(v) \| \le \| \pi(v) P_H - \pi(v') P_H\| + \| P_H \pi(v') - P_H \pi (v) \| \le 2 \gamma.$$ Thus, $\mathrm{s p}(t) \subseteq [0,2\gamma]\cup [1-2\gamma,1]$, since $$\begin{aligned} \| t -P_H\| &=\left\| \frac{1}{|F|}\sum_{v\in F} \pi(v) P_H \pi(v^*) - \frac{1}{|F|}\sum_{v\in F} P_H \pi(v) \pi(v^*) \right\| \\ &\le \frac{1}{|F|} \sum_{v\in F} \| \pi(v) P_H - P_H \pi(v) \| \| \pi(v^*)\| \le 2 \gamma.\end{aligned}$$ Let $q=\chi_{[1-2\gamma,1]}(t) \in C^* ( t, I_K)$. 
By (\[eta\]), $$\label{ab} \| \pi(\tilde{x}) q- q\pi(\tilde{x})\| < \frac{\varepsilon^2}{18}, \ \ x\in X.$$ Since $\| q- P_H\| \le 2\| t- P_H\| <1$, there exists a unitary $w\in C^*( t, P_H , I_K)$ such that $w P_H w^* =q$ and $\| w-I_K\| \le \sqrt{2} \| q- P_H\|$. Define $\phi\colon A\to \mathbb{B}(K)$ by $$\phi(a)= P_H w^* \pi(a) w |_H, \ \ a\in A.$$ Since $w\in C^*(t, P_H, I_K) \subseteq C^*( \pi(A) ,P_H)$ and $P_H \pi(A)|_H = \mathrm{ran}(E) \subset B$, the range of $\phi$ is contained in $B$. Furthermore, $\phi|_C={\mathrm{id}}_C$ because $w\in C^*(t,P_H,I_K) \subseteq \pi(C)'$. For $x \in X\cup X^*$, by using $P_H w^* =P_H w^* q$ and (\[ab\]), $$\begin{aligned} \begin{split} \label{cc} \| \phi(\tilde{x} \tilde{x}^*) -\phi(\tilde{x})\phi(\tilde{x}^*)\| &=\| P_H w^* \pi(\tilde{x}\tilde{x}^*) w P_H - P_H w^* \pi(\tilde{x}) w P_H w^* \pi(\tilde{x}^*)w P_H \| \\ &=\| P_H w^* q \pi(\tilde{x} \tilde{x}^*) w P_H - P_H w^* \pi(\tilde{x}) q \pi(\tilde{x}^*) w P_H \| \\ &\le \| q \pi(\tilde{x}) -\pi (\tilde{x}) q \| \| \pi (\tilde{x}^*) \| < \frac{\varepsilon^2}{9} . \end{split}\end{aligned}$$ Therefore, by (\[cc\]) and Proposition \[1/2\], $$\begin{aligned} &\| \phi(x x^*) -\phi(x)\phi(x^*)\| \\ &\le \| \phi(x x^*) - \phi(x \tilde{x}^*) \| + \| \phi(x \tilde{x}^*) -\phi(x) \phi(\tilde{x}^*)\| + \| \phi(x)\phi(\tilde{x})-\phi(x)\phi(x^*)\| \\ &\le \| x x^* - x \tilde{x}^*\| + \| \phi( \tilde{x}\tilde{x}^*)-\phi(\tilde{x})\phi(\tilde{x}^*)\|^{1/2}\| x\| + \|\phi(x)\| \| \phi(\tilde{x}^*)-\phi(x^*)\| \\ &\le \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.\end{aligned}$$ For $a\in A_1$, we have $$\begin{aligned} \| \phi(a) - a \| &\le \| \phi(a) - E(a) \| + \| E(a) -a \| \\ &\le \| P_H w^* \pi(a) w P_H - P_H \pi(a) P_H\| + 2 d(A,B) \\ &\le 2 \|w-I_K\| +2d(A,B) \le (8\sqrt{2}+2)\gamma,\end{aligned}$$ and the lemma follows. The next lemma is a version of [@CSSWW Lemma 3.4] for crossed product-like inclusions by amenable groups. 
\[1.5\] Let $A,B$ and $C$ be $\mathrm{C}^*$-algebras with a common unit. Suppose that $C\subseteq A\cap B$ and $C\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then for any finite set $X\subseteq A_1$ and $\varepsilon>0$, there exist a finite set $Y\subseteq A_1$ and $\delta>0$ with the following property$:$ Given $\gamma<1/10$ and two unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphisms $\phi_1,\phi_2\colon A\to B$ with $\phi_1\approx_{Y,\gamma} \phi_2$, there exists a unitary $u\in C'\cap B$ such that $$\phi_1 \approx_{X,\varepsilon} \mathrm{Ad}(u) \circ \phi_2 \ \ \text{and} \ \ \| u-I\| \le \sqrt{2}(\gamma +\delta).$$ Let a finite set $X\subseteq A_1$ and $0<\varepsilon<1$ be given. There exist $\{ u_1,\dots,u_N \} \subseteq U$ and $\{ c_i^{(x)} : 1\le i \le N, x\in X \} \subseteq C_1$ such that $$\left\| x- \sum_{i=1}^N c_i^{(x)} u_i \right\| <\frac{\varepsilon}{3} , \ \ x\in X.$$ Let $\tilde{x}:=\sum_{i=1}^{N} c_i^{(x)} u_i$ for $x\in X$. Then $\| \tilde{x} \| \le 1+ \varepsilon/3< 2$. By Lemma \[Arveson\], there exists $\eta>0$ such that for any $s\in B_1$ and $a\in B$ with $\| a\|\le 2$, the inequality $\| s s^* a- a s s^*\| <\eta$ implies $\| |s| a- a |s| \| < \varepsilon/12$. Let $$0<\delta < \min \left\{ \left(\frac{\varepsilon}{60}\right)^2, \frac{\eta^2}{100} \right\}.$$ There exists a finite set $Y \subseteq U$ such that $$\frac{ | u Y \bigtriangleup Y | }{ | Y | } <\frac{\delta}{N}, \ \ u\in \left\{ u_i, u_i^* : 1\le i\le N \right\}.$$ Let $\gamma<1/10$ and $\phi_1, \phi_2\colon A\to B$ be $C$-fixed $(Y,\delta)$-approximate $*$-homomorphisms with $\phi_1 \approx_{Y,\gamma} \phi_2$. Define $$s:= \frac{1}{ | Y | } \sum_{v\in Y} \phi_1(v) \phi_2(v^*).$$ Since $\phi_1$ and $\phi_2$ are $C$-fixed maps and for $u\in U$, $u C u^*=C$, we have $s\in C'\cap B$. 
By Proposition \[1/2\], for $x\in X$ and $v\in Y$, $$\begin{aligned} &\| \phi_1 (\tilde{x} v)-\phi_1 (\tilde{x}) \phi_1 (v) \| \le \| \phi_1 (v v^* ) - \phi_1 (v) \phi_1 (v^*) \|^{1/2} \|\tilde{x} \| \le 2 \sqrt{\delta}, \label{ad} \\ &\| \phi_2( v^* \tilde{x} ) -\phi_2(v^*) \phi_2( \tilde{x})\| \le \| \phi_2( v^* v)-\phi_2(v^*)\phi_2(v)\|^{1/2}\| \tilde{x} \| \le 2 \sqrt{\delta}. \label{ae}\end{aligned}$$ Furthermore, $$\begin{aligned} \begin{split}\label{aa} \frac{1}{|Y|} \sum_{v\in Y} \phi_1(\tilde{x} v) \phi_2(v^*) &= \frac{1}{|Y|} \sum_{v\in Y} \sum_{i=1}^{N} \phi_1 \left( c_i^{(x)} u_i v \right) \phi_2(v^*) \\ &= \frac{1}{|Y|} \sum_{i=1}^{N} \sum_{v\in u_i Y} \phi_1 \left( c_i^{(x)} v \right) \phi_2 \left( v^* u_i \right) \end{split}\end{aligned}$$ and $$\begin{aligned} \begin{split}\label{ba} \frac{1}{|Y|} \sum_{v\in Y} \phi_1( v) \phi_2(v^* \tilde{x} ) &=\frac{1}{|Y|} \sum_{v\in Y} \sum_{i=1}^{N} \phi_1(v) \phi_2 \left( v^* c_i^{(x)} u_i \right) \\ &=\frac{1}{|Y|} \sum_{i=1}^{N} \sum_{v\in Y} \phi_1 \left( c_i^{(x)} v \right) \phi_2 \left( v^* u_i \right) \end{split}\end{aligned}$$ By (\[aa\]), (\[ba\]) and the choice of $Y$, for $x\in X$, $$\begin{aligned} \begin{split}\label{ac} \left\| \frac{1}{|Y|} \sum_{v\in Y} \phi_1(\tilde{x} v) \phi_2(v^*) - \frac{1}{|Y|} \sum_{v\in Y} \phi_1( v) \phi_2(v^* \tilde{x} ) \right\| \le \sum_{i=1}^N \frac{ | u_i Y \bigtriangleup Y | }{ | Y | } <\delta. 
\end{split}\end{aligned}$$ Similarly, we have $$\label{az} \left\| \frac{1}{|Y|} \sum_{v\in Y} \phi_1(\tilde{x}^* v) \phi_2(v^*) - \frac{1}{|Y|} \sum_{v\in Y} \phi_1( v) \phi_2(v^* \tilde{x}^* ) \right\| <\delta, \ \ x\in X.$$ By (\[ad\]), (\[ae\]) and (\[ac\]), $$\label{af} \| \phi_1( \tilde{x} ) s- s \phi_2(\tilde{x}) \| \le \delta + 4 \sqrt{\delta} < 5 \sqrt{\delta}, \ \ x \in X \cup X^*.$$ By taking adjoints, $$\| s^* \phi_1(\tilde{x}) - \phi_2(\tilde{x}) s^* \| \le 5 \sqrt{\delta} , \ \ x \in X\cup X^*.$$ Thus, for $x\in X\cup X^*$, $$\begin{aligned} \| \phi_2(\tilde{x}) s^* s - s^* s \phi_2(\tilde{x}) \| &\le \| \phi_2(\tilde{x}) s^*s - s^* \phi_1( \tilde{x}) s \| + \| s^* \phi_1(\tilde{x} ) s - s^* s \phi_2(\tilde{x}) \| \\ &\le \| \phi_2(\tilde{x}) s^* - s^* \phi_1(\tilde{x}) \| \| s\| + \| s^* \| \| \phi_1(\tilde{x}) s -s \phi_2(\tilde{x}) \| \\ &\le 10 \sqrt{\delta}<\eta.\end{aligned}$$ By the choice of $\eta$ and the inequality above, $$\label{ah} \| \phi_2(\tilde{x}) |s| - |s| \phi_2(\tilde{x}) \| < \frac{\varepsilon}{12}, \ \ x\in X\cup X^*.$$ Since $\phi_1$ is a $(Y,\delta)$-approximate $*$-homomorphism and $\phi_1\approx_{Y,\gamma} \phi_2$, we have $$\begin{aligned} \begin{split}\label{aj} \| s -I \| &= \left\| \frac{1}{ |Y| } \sum_{v\in Y} \phi_1(v) \phi_2(v^*) - \frac{1}{ |Y| } \sum_{v\in Y} \phi_1(v v^*) \right\| \\ &\le \frac{1}{ |Y| } \sum_{v\in Y} \left\| \phi_1(v) \phi_2(v^*) - \phi_1(v) \phi_1(v^*) \right\| + \frac{1}{ |Y| } \sum_{v\in Y} \left\| \phi_1(v) \phi_1(v^*) - \phi_1(v v^*) \right\| \\ &\le \gamma + \delta<1. \end{split}\end{aligned}$$ Since this inequality gives invertibility of $s$, the unitary $u$ in the polar decomposition $s=u|s|$ lies in $C^*(s, I)\subseteq C'\cap B$ and satisfies $\| u- I \| \le \sqrt{2}(\gamma+\delta)$. 
Then, by (\[aj\]), $$\begin{aligned} \| |s | -I \| \le \| u^*s- I \| \le \| s- I\| + \| I -u\| \le (1+ \sqrt{2})(\gamma+ \delta)< \frac{1}{2}.\end{aligned}$$ Hence $\| |s|^{-1} \| \le 2$, so $$\begin{aligned} \label{ak} \begin{split} \| \phi_1(\tilde{x}) - u \phi_2(\tilde{x})u^* \| &= \| \phi_1(\tilde{x})u - u \phi_2(\tilde{x}) \| \\ &\le \| \phi_1(\tilde{x}) u|s| - u \phi_2(\tilde{x}) |s| \| \, \| |s|^{-1}\| \\ &\le 2 \| \phi_1(\tilde{x}) u|s| - u \phi_2(\tilde{x}) |s| \| \\ &\le 2 \| \phi_1(\tilde{x}) s- s \phi_2(\tilde{x})\| +2 \| s \phi_2(\tilde{x}) - u \phi_2(\tilde{x}) |s| \| \\ &\le 10 \sqrt{\delta}+ 2\| |s| \phi_2(\tilde{x}) - \phi_2(\tilde{x}) |s| \| \\ &\le 10 \sqrt{\delta} + \frac{\varepsilon}{6} < \frac{\varepsilon}{3}, \end{split}\end{aligned}$$ for $x\in X$, by (\[af\]), (\[ah\]) and (\[aj\]). For $x\in X$, by (\[ak\]), $$\begin{aligned} \| \phi_1(x) - u \phi_2(x) u^* \| &\le \| \phi_1(x) - \phi_1(\tilde{x}) \| + \| \phi_1(\tilde{x}) - u \phi_2(\tilde{x}) u^* \| + \| u \phi_2(\tilde{x}) u^* -u \phi_2(x) u^* \| \\ &< \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.\end{aligned}$$ Therefore, $\phi_1 \approx_{X, \varepsilon} \mathrm{Ad}(u) \circ \phi_2$. Note that if a pair $(Y,\delta)$ satisfies the conclusion of Lemma \[1.5\], then so does any pair $(Y',\delta')$ with $Y'\supseteq Y$ a finite set and $0<\delta'<\delta$. By Lemmas \[1.4\] and \[1.5\], we can now prove Theorem A. \[irreducible\] Let $C\subseteq D$ be a unital irreducible inclusion of $\mathrm{C}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate ${\mathrm{C}}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group. If ${d(A,B)}<140^{-1}$, then $A= B$. Let $a\in A_1$, $\varepsilon>0$ and ${d(A,B)}<\gamma<140^{-1}$ be given.
By Lemma \[1.5\], there exist a finite subset $Y\subseteq A_1$ and $\delta>0$ with the following property: Given $\gamma'<1/10$ and two unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphisms $\phi_1,\phi_2\colon A\to D$ with $\phi_1\approx_{Y,\gamma'}\phi_2$, there exists a unitary $u\in C'\cap D$ such that $$\| \phi_1(a) - ({\mathrm{Ad}}(u) \circ \phi_2)(a) \| \le \varepsilon.$$ By Lemma \[1.4\], there exists a unital $C$-fixed $(Y,\delta)$-approximate $*$-homomorphism $\phi\colon A\to B$ such that $\| \phi - {\mathrm{id}}_A\| \le \left(8 \sqrt{2} + 2 \right)\gamma$. Then there exists a unitary $u\in C'\cap D$ such that $$\| \phi(a) - {\mathrm{Ad}}(u) (a) \| \le \varepsilon$$ by the definition of $Y$ and $\delta$. Since $u\in C'\cap D=\mathbb{C}I$, we have $\| \phi(a) - a\| \le \varepsilon$. Therefore, since $\phi(a)\in B$ and $\varepsilon$ is arbitrary, $a\in B$, that is, $A\subseteq B$. Furthermore, by Proposition \[surjective\], the theorem follows. In the following lemma, we modify [@CSSWW Lemma 3.6], which is a Kaplansky-density-style result for approximate commutants. \[1.6\] Let $C\subseteq A$ be a unital inclusion of non-degenerate $\mathrm{C}^*$-algebras in $\mathbb{B}(H)$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then for any finite set $X\subseteq A_1$ and $\varepsilon, \mu >0$, there exist a finite set $Y\subseteq A_1$ and $\delta>0$ with the following property$\colon$ Given a finite set $S\subseteq H_1$ and a self-adjoint operator $m\in \overline{C'\cap A_1}^{{\mathrm{w}}}$ with $$\| m y - y m\| \le \delta, \ \ y\in Y,$$ there exists a self-adjoint operator $a\in C'\cap A_1$ such that $\| a\| \le \| m\|$, $$\begin{aligned} \| ax- x a\| <\varepsilon, \ \ x\in X, \end{aligned}$$ and $$\begin{aligned} \| (a-m)\xi\| <\mu \ \ \text{and} \ \ \| (a-m)^* \xi\| <\mu , \ \ \xi \in S.\end{aligned}$$ Let a finite set $X\subseteq A_1$ and $\varepsilon,\mu>0$ be given.
There exist $\{u_1,\dots,u_{N}\}\subseteq U$ and $\{ c_i^{(x)}: 1\le i\le N, x\in X\}\subseteq C_1$ such that $$\left\| x- \sum_{i=1}^{N} c_i^{(x)} u_i \right\| <\frac{\varepsilon}{3}, \ \ x\in X.$$ Let $\tilde{x}:= \sum_{i=1}^{N} c_i^{(x)} u_i$ for $x\in X$. Since $U$ is amenable, there exists a finite set $F\subseteq U$ such that $$\begin{aligned} \label{1.6.1} \frac{ | u_i F\bigtriangleup F | }{|F|} <\frac{\varepsilon}{3N}, \ \ 1\le i\le N.\end{aligned}$$ Define $Y:=F\cup F^*$ and $\delta:=\mu/2$. Let $S$ be a finite set in $H_1$ and $m \in \overline{C'\cap A_1}^{{\mathrm{w}}}$ be a self-adjoint operator with $$\begin{aligned} \label{1.6.2} \| m y - y m \| < \delta, \ \ y\in Y.\end{aligned}$$ By Kaplansky’s density theorem, there exists a self-adjoint operator $a_0\in C'\cap A_1$ such that $\| a_0\| \le \| m\|$, $$\begin{aligned} \label{1.6.3} \| (a_0-m) v^*\xi\| <\frac{\mu}{2} \ \ \text{and} \ \ \| (a_0-m)^* v \xi\| <\frac{\mu}{2} , \ \ v\in Y, \ \xi\in S.\end{aligned}$$ Define $$a:= \frac{1}{|F|}\sum_{v\in F} v a_0 v^*.$$ Then, $\| a \| \le \| a_0 \| \le \|m \|$. For any $x\in X$, $$\begin{aligned} \label{1.6.4} \begin{split} \| \tilde{x}a- a\tilde{x}\| &= \left\| \frac{1}{|F|} \sum_{i=1}^{N}\sum_{v\in F} c_i^{(x)} u_i v a_0v^* - \frac{1}{|F|} \sum_{i=1}^{N} \sum_{v\in F} c_i^{(x)} v a_0 v^* u_i \right\| \\ &= \left\| \frac{1}{|F|} \sum_{i=1}^{N} c_i^{(x)} \left( \sum_{\tilde{v}\in u_i F} \tilde{v} a_0 \tilde{v}^* u_i - \sum_{v\in F} v a_0 v^* u_i \right) \right\| \\ &\le \sum_{i=1}^{N} \frac{| u_i F \bigtriangleup F|}{|F|} <\frac{\varepsilon}{3}, \end{split}\end{aligned}$$ by (\[1.6.1\]). For $x\in X$, since $\| x- \tilde{x}\| <\varepsilon/3$, $$\begin{aligned} \| x a - a x\| &\le \| x a- \tilde{x} a\| + \| \tilde{x}a- a \tilde{x}\| + \| \tilde{x}a- x a\| \\ &\le \frac{\varepsilon}{3} + \frac{\varepsilon}{3}+ \frac{\varepsilon}{3}=\varepsilon\end{aligned}$$ by (\[1.6.4\]).
For $\xi \in S$, by (\[1.6.2\]) and (\[1.6.3\]), $$\begin{aligned} &\| (a-m) \xi\| \\ &\le \left\| \left( \frac{1}{|F|}\sum_{v\in F} v a_0 v^* - \frac{1}{|F|}\sum_{v\in F} v m v^* \right) \xi \right\| + \left\| \left( \frac{1}{|F|}\sum_{v\in F} v m v^* - \frac{1}{|F|}\sum_{v\in F} v v^* m \right) \xi \right\| \\ &\le \max_{v\in F} \|(a_0-m) v^*\xi \| + \max_{v\in F}\| m v^* - v^* m \| \\ &< \frac{\mu}{2}+ \delta = \mu.\end{aligned}$$ Similarly, for $\xi\in S$, $$\begin{aligned} \| (a-m)^*\xi\| &\le \max_{v\in F} \| (a_0 -m )^* v \xi\| + \max_{v\in F} \| m v - v m\| < \frac{\mu}{2}+ \delta =\mu,\end{aligned}$$ and the lemma follows. By Lemmas \[1.7\] and \[1.6\], we obtain the following version of Lemma \[1.6\] for unitary operators. We need the next lemma in Section \[unitary\]. \[1.8\] Let $C\subseteq A$ be a unital inclusion of non-degenerate $\mathrm{C}^*$-algebras in $\mathbb{B}(H)$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then for any finite set $X\subseteq A_1$, $\varepsilon_0, \mu_0 >0$ and $0<\alpha<2$, there exist a finite set $Y\subseteq A_1$ and $\delta_0>0$ with the following property$\colon$ Given a finite set $S\subseteq H_1$ and a unitary $u \in \overline{C'\cap A}^{{\mathrm{w}}}$ with $\| u - I_H\| \le \alpha$ and $$\| u y - y u \| \le \delta_0 , \ \ y\in Y,$$ there exists a unitary $v\in C'\cap A$ such that $\| v - I_H \| \le \| u - I_H \|$, $$\begin{aligned} \| v x- x v \| <\varepsilon_0, \ \ x\in X, \end{aligned}$$ and $$\begin{aligned} \| (v-u)\xi\| <\mu_0 \ \ \text{and} \ \ \| (v-u)^* \xi\| <\mu_0 , \ \ \xi \in S.\end{aligned}$$ Let a finite set $X\subseteq A_1$, $\varepsilon_0, \mu_0 >0$ and $0<\alpha<2$ be given. There exists $0<c <\pi$ such that $| 1 - e^{i \theta}| \le \alpha$ if and only if $\theta \in [ -c , c]$ modulo $2\pi$.
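Concretely, $c$ admits a closed form (a standard computation, spelled out here for convenience): for real $t$ one has $$| 1 - e^{it} | = 2 \left| \sin \frac{t}{2} \right|,$$ so for $0<\alpha<2$ the condition $| 1 - e^{it} | \le \alpha$ holds precisely when $t\in [-c,c]$ modulo $2\pi$ with $c= 2\arcsin (\alpha/2)<\pi$.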
By Lemma \[Arveson\], there exists $\varepsilon>0$ such that given a self-adjoint operator $k \in \mathbb{B}(H)_1$ and $a \in \mathbb{B}(H)_1$, if $\| a k - k a \| <\varepsilon$, then $\| a e^{i\pi k} - e^{i\pi k} a\| <\varepsilon_0$. By Lemma \[1.7\], there exists $\mu>0$ with the following property: Given a finite set $S\subseteq H_1$ and a self-adjoint operator $h\in \mathbb{B}(H)_1$, there exists a finite set $S'\subseteq H_1$ such that for any self-adjoint operator $k\in \mathbb{B}(H)_1$, if $$\begin{aligned} \| (h-k)\xi'\| <\mu, \ \ \xi'\in S',\end{aligned}$$ then $$\begin{aligned} \| (e^{i\pi h} - e^{i\pi k}) \xi\| <\mu_0 \ \ \text{and} \ \ \| (e^{i\pi h} - e^{i\pi k})^* \xi\| <\mu_0, \ \ \xi \in S.\end{aligned}$$ By Lemma \[1.6\], there exist a finite set $Y\subseteq A_1$ and $\delta>0$ with the following property: For any finite set $S\subseteq H_1$ and self-adjoint operator $m\in \overline{C'\cap A_1}^{{\mathrm{w}}}$ with $$\begin{aligned} \| m y - y m \| < \delta, \ \ y\in Y,\end{aligned}$$ there exists a self-adjoint operator $a \in C'\cap A_1$ such that $\| a \| \le \| m \|$, $$\begin{aligned} \| a x &- x a\| <\varepsilon, \ \ x\in X, \\ \| (a - m) \xi \| <\mu& \ \ \text{and} \ \ \| (a-m)^* \xi\| <\mu , \ \ \xi \in S.\end{aligned}$$ By Lemma \[Arveson\], there exists $\delta_0>0$ such that for any $y \in \mathbb{B}(H)$ and unitary $u \in \mathbb{B}(H)$ with $\| u - I_H\| \le \alpha$, if $\| u y - y u\| \le \delta_0$, then $$\begin{aligned} \left\| \frac{\log u}{\pi} y - y \frac{\log u}{\pi} \right\| \le \delta.\end{aligned}$$ Now suppose that a finite set $S\subseteq H_1$ and a unitary $ u\in \overline{C'\cap A}^{{\mathrm{w}}}$ are given with $\| u - I_H\| \le \alpha$ and $$\begin{aligned} \| u y - y u \| \le \delta_0, \ \ y\in Y.\end{aligned}$$ Let $$h:= -i \frac{\log u}{\pi} \in \overline{C'\cap A_1}^{{\mathrm{w}}}.$$ By the definition of $\delta_0$, $$\begin{aligned} \| h y - y h \| \le \delta, \ \ y\in Y.\end{aligned}$$ By the definition of $\mu$, there exists a finite set $S'\subseteq H_1$ such that
for any self-adjoint operator $k \in \mathbb{B}(H)_1$, if $$\begin{aligned} \| (h - k) \xi' \| <\mu, \ \ \xi'\in S',\end{aligned}$$ then $$\begin{aligned} \| (e^{i\pi h}- e^{i\pi k})\xi \| <\mu_0 \ \ \text{and} \ \ \| (e^{i\pi h} - e^{i\pi k})^* \xi\| <\mu_0, \ \ \xi\in S.\end{aligned}$$ By the definitions of $Y$ and $\delta$, there exists a self-adjoint operator $k\in C'\cap A_1$ such that $\|k\| \le \|h\|$, $$\begin{aligned} \| k x &- x k \| <\varepsilon, \ \ x\in X, \\ \| (h - k) \xi' \| <\mu& \ \ \text{and} \ \ \| (h - k)^* \xi' \| <\mu , \ \ \xi' \in S'.\end{aligned}$$ Define $v:= e^{i\pi k}$. Then, we have $\| v- I_H\| \le \| e^{i\pi h} - I_H \| = \| u- I_H\| $. By the definition of $\varepsilon$ and $S'$, we have $$\| v x - x v \| <\varepsilon_0 , \ \ x\in X$$ and $$\| (v - u) \xi \| <\mu_0 \ \ \text{and} \ \ \| (v - u )^* \xi \| <\mu_0 , \ \ \xi \in S.$$ Hence the lemma is proved. Isomorphisms {#isomorphism} ============ In this section, we show Theorem B. Given a unital inclusion $C\subseteq D$ of C$^*$-algebras and intermediate C$^*$-subalgebras $A,B$ for this inclusion with a conditional expectation from $D$ onto $B$, if $A=C\rtimes G$, where $G$ is a discrete amenable group, and if $A$ and $B$ are sufficiently close, then $A$ must be $*$-isomorphic to $B$. To do this, we modify [@CSSWW Lemma 4.1] in the next lemma. The approximation approach of [@CSSWW Lemma 4.1] is inspired by the intertwining arguments of [@Chris5 Theorem 6.1]. \[3.1\] Let $C\subseteq D$ be a unital inclusion of ${\mathrm{C}}^*$-algebras and let $A,B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E \colon D\to B$. Let $\{a_n\}_{n=0}^{\infty}$ be a dense subset in $A_1$ with $a_0=0$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-3}$. Put $\eta:=(8\sqrt{2}+2)\gamma$.
Then for any finite set $X \subseteq A_1$, there exist finite subsets $\{X_n\}_{n=0}^{\infty}, \{ Y_n\}_{n=0}^{\infty}\subseteq A_1$, positive constants $\{ \delta_n \}_{n=0}^{\infty}$, $C$-fixed cpc maps $\{ \theta_n\colon A\to B\}_{n=0}^{\infty}$ and unitaries $\{u_n\}_{n=1}^{\infty}\subseteq C'\cap B$ with the following conditions$\colon$ (a) For $n\ge 0$, $\delta_n < \min\{ 2^{-n}, \gamma \}$, $a_n\in X_n\subseteq X_{n+1}$ and $X \subseteq X_1;$ (b) For $n\ge0$ and two unital $C$-fixed $(Y_n,\delta_n)$-approximate $*$-homomorphisms $\phi_1,\phi_2 \colon A\to B$ with $\phi_1\approx_{Y_n,2\eta}\phi_2$, there exists a unitary $u\in C'\cap B$ such that $\mathrm{Ad}(u)\circ \phi_1\approx_{X_n,\gamma/2^n}\phi_2$ and $\| u- I\| \le \sqrt{2}(2\eta+\delta_n);$ (c) For $n\ge 0$, $X_n\subseteq Y_n;$ (d) For $n\ge 0$, $\theta_n$ is a $(Y_n,\delta_n)$-approximate $*$-homomorphism with $\| \theta_n-{\mathrm{id}}_A\| \le \eta;$ (e) For $n\ge 1$, $\mathrm{Ad}(u_n)\circ \theta_n\approx_{X_{n-1},\gamma/2^{n-1}}\theta_{n-1}$ and $\| u_n- I\| \le \sqrt{2}(2\eta+\delta_{n-1})$. We prove this lemma by induction. Let a finite subset $X\subseteq A_1$ be given. Let $X_0=\{ 0 \}=\{ a_0\}=Y_0$, $\delta_0=1$ and $\theta_0:=E|_A \colon A\to B$. Suppose that the construction has been carried out up to the $n$-th stage. We will write the condition (a) for $n$ as (a)$_n$. Let $X_{n+1}:= X_n\cup X \cup \{a_{n+1} \} \cup Y_n$. By Lemma \[1.5\], there exist a finite set $Y_{n+1}\subseteq A_1$ and $0<\delta_{n+1} < \min\{ \delta_n, 2^{-(n+1)}, \gamma \}$ satisfying condition (b)$_{n+1}$ and $X_{n+1}\subseteq Y_{n+1}$. By Lemma \[1.4\], there exists a unital $C$-fixed $(Y_{n+1},\delta_{n+1})$-approximate $*$-homomorphism $\theta_{n+1}\colon A\to B$ such that $\| \theta_{n+1}- {\mathrm{id}}_A\| \le \eta$. Therefore, $X_{n+1}, Y_{n+1}, \delta_{n+1}$ and $\theta_{n+1}$ satisfy (a)$_{n+1}$, (b)$_{n+1}$, (c)$_{n+1}$ and (d)$_{n+1}$.
Since $Y_n\subseteq Y_{n+1}$ and $\delta_{n+1}<\delta_n$, $\theta_n$ and $\theta_{n+1}$ are unital $C$-fixed $(Y_n,\delta_n)$-approximate $*$-homomorphisms with $\| \theta_n- \theta_{n+1} \| \le 2\eta$. Thus, by (b)$_n$, there exists a unitary $u_{n+1}\in C'\cap B$ such that $\mathrm{Ad}(u_{n+1})\circ \theta_{n+1}\approx_{X_n, \gamma/2^n}\theta_n$ and $\| u_{n+1}- I\|\le \sqrt{2}(2\eta +\delta_n)$. Then (e)$_{n+1}$ follows. \[3.2\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-3}$. Then for any finite subset $X\subseteq A_1$, there exists a $C$-fixed surjective $*$-isomorphism $\alpha\colon A\to B$ such that $$\alpha\approx_{X, 15\gamma} {\mathrm{id}}_A.$$ Let $\{a_n\}_{n=0}^{\infty}$ be a dense subset in $A_1$ with $a_0=0$. Put $\eta:=(8\sqrt{2}+2)\gamma$. By Lemma \[3.1\], we can construct $\{X_n\}_{n=0}^{\infty}, \{Y_n\}_{n=0}^{\infty}\subseteq A_1$, $\{\delta_n \}_{n=0}^{\infty}$, $\{ \theta_n\colon A\to B\}_{n=0}^{\infty}$ and $\{u_n\}_{n=1}^{\infty}\subseteq C'\cap B$ which satisfy conditions (a)-(e) of that lemma. For any $n\ge 1$, define $$\alpha_n:= \mathrm{Ad}(u_1\cdots u_n)\circ \theta_n.$$ Fix $k\in \mathbb{N}$ and $x\in X_k$. For any $n\ge k$, $$\begin{aligned} \label{3.2.1} \begin{split} \| \alpha_{n+1}(x)-\alpha_n(x) \| &=\| \left( \mathrm{Ad}(u_1\cdots u_n)\circ \mathrm{Ad} (u_{n+1}) \circ \theta_{n+1} - \mathrm{Ad}(u_1\cdots u_n) \circ \theta_n \right)(x) \| \\ &= \| \left( \mathrm{Ad}(u_{n+1}) \circ \theta_{n+1} - \theta_n \right) (x) \| \le \frac{\gamma}{2^n}. \end{split}\end{aligned}$$ For any $\varepsilon>0$, there exists $N\ge k$ such that $\gamma/2^{N-1} < \varepsilon$.
For any two natural numbers $m \ge n\ge N$, by (\[3.2.1\]), $$\| \alpha_m(x) - \alpha_n(x) \| \le \sum_{j=n}^{m-1} \| \alpha_{j+1}(x) -\alpha_j(x) \| \le \sum_{j=n}^{m-1} \frac{\gamma}{2^j} <\frac{\gamma}{2^{N-1}} <\varepsilon.$$ Thus, $\{ \alpha_n(x) \}$ is a Cauchy sequence. Since $\bigcup_{n=0}^{\infty} X_n$ is dense in $A_1$, the sequence $\{ \alpha_n \}$ converges in the point-norm topology to a $C$-fixed cpc map $\alpha\colon A\to B$. Moreover, $\alpha$ is a $*$-homomorphism, since $\lim_{n\to \infty}\delta_n=0$ and $\bigcup_{n=0}^{\infty} Y_n$ is dense in $A_1$. For any $n\in \mathbb{N}$ and $x\in X_n$, $$\begin{aligned} \label{3.2.2} \begin{split} \| \alpha(x) - \alpha_n(x) \| \le \sum_{j=n}^{\infty} \| \alpha_{j+1}(x) - \alpha_j(x) \| \le \sum_{j=n}^{\infty} \frac{\gamma}{2^j} = \frac{\gamma}{2^{n-1}}. \end{split}\end{aligned}$$ Hence, by (\[3.2.2\]) and (d) in Lemma \[3.1\], $$\begin{aligned} \label{3.2.3} \begin{split} \| \alpha(x) \| &\ge \left| \| \alpha(x) - \alpha_n(x) \| - \| \alpha_n(x) \| \right| \\ &\ge \| \alpha_n(x)\| - \frac{\gamma}{2^{n-1}} \\ &= \| \theta_n(x) \| - \frac{\gamma}{2^{n-1}} \\ &\ge | \| \theta_n(x) - x \| - \| x \| | - \frac{\gamma}{2^{n-1}} \\ &\ge \| x \| - \eta - \frac{\gamma}{2^{n-1}}. \end{split}\end{aligned}$$ Let $n\to \infty$ in (\[3.2.3\]). Then, for any $x$ in the unit sphere of $A$, we have $$\begin{aligned} \| \alpha(x) \| \ge 1-\eta >0,\end{aligned}$$ by the density of $\bigcup_{n=0}^{\infty}X_n$ in $A_1$. Therefore, $\alpha$ is an injective map. For any $b\in B_1$ and $n\in \mathbb{N}$, there exists $x\in A_1$ such that $\| x- u_n^* \cdots u_1^* b u_1 \cdots u_n \| <\gamma$. Then $$\begin{aligned} \| \alpha_n(x) - b \| &= \| u_1\cdots u_n \theta_n(x) u_n^* \cdots u_1^* - b \| \\ &\le \| \theta_n(x) -x \| + \| x - u_n^* \cdots u_1^* b u_1 \cdots u_n \| \\ &< \eta + \gamma <1.\end{aligned}$$ Thus, $d(\alpha(A),B)<1$, that is, $\alpha$ is a surjective map by Proposition \[surjective\].
For any $x\in X$, $$\begin{aligned} \| \alpha(x) - x \| \le \| \alpha(x) - \alpha_1(x) \| + \| \theta_1(x) - x \| \le \gamma +\eta <15 \gamma,\end{aligned}$$ by (\[3.2.2\]) and (d) in Lemma \[3.1\]. \[3.3\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-3}$. Then for any finite subset $X\subseteq A_1$ and finite set $Y\subseteq B_1$, there exists a $C$-fixed surjective $*$-isomorphism $\alpha \colon A\to B$ such that $$\alpha\approx_{X, 15\gamma} {\mathrm{id}}_A \ \ \text{and} \ \ \alpha^{-1} \approx_{Y, 17\gamma} {\mathrm{id}}_B.$$ There exists a finite set $\tilde{X}\subseteq A_1$ such that $Y\subseteq_{\gamma} \tilde{X}$. By Proposition \[3.2\], there exists a $C$-fixed surjective $*$-isomorphism $\alpha\colon A\to B$ such that $$\alpha \approx_{X\cup \tilde{X}, 15\gamma} {\mathrm{id}}_A.$$ Fix $y\in Y$. Since $Y\subseteq_{\gamma}\tilde{X}$, there exists $\tilde{x}\in \tilde{X}$ such that $\| \tilde{x} - y \| <\gamma$. Then, we have $$\begin{aligned} \| \alpha^{-1}(y) - y\| &\le \| \alpha^{-1}(y) - \tilde{x}\| + \| \tilde{x} - y\| \\ &\le \| y- \alpha(\tilde{x}) \| + \gamma \\ &\le \| y - \tilde{x} \| + \| \tilde{x} - \alpha(\tilde{x}) \| +\gamma \\ &\le \gamma + 15\gamma + \gamma = 17 \gamma.\end{aligned}$$ Therefore, $\alpha^{-1} \approx_{Y,17\gamma} {\mathrm{id}}_B$.
Crossed product von Neumann algebras by amenable groups {#von} ======================================================= In Theorem \[2.3.5\], we now show that given a unital inclusion $N\subseteq M$ of von Neumann algebras and intermediate von Neumann subalgebras $A,B$ for this inclusion with a normal conditional expectation from $M$ onto $B$, if $A=N\rtimes G$, where $G$ is a discrete amenable group, and if $A$ and $B$ are sufficiently close, then there exists a unitary $u \in N' \cap M$ such that $u A u^* = B$. This unitary can be chosen to be close to the identity. Let $N\subseteq M$ be a unital inclusion of von Neumann algebras in $\mathbb{B}(H)$. We say that the inclusion $N\subseteq M$ is [*crossed product-like*]{} if there exists a discrete group $U$ in $\mathcal{N}_M(N)$ such that $M= (N \cup U)''$. Let $G$ be a discrete amenable group acting on a von Neumann algebra $N$. Then an inclusion $N\subseteq N\rtimes G$ is crossed product-like by $\{\lambda_g\}_{g\in G}$. Let $A\subseteq B$ be a crossed product-like inclusion of ${\mathrm{C}}^*$-algebras acting non-degenerately on $H$. Then an inclusion $\bar{A}^{{\mathrm{w}}}\subseteq \bar{B}^{{\mathrm{w}}}$ of von Neumann algebras is crossed product-like. Let $N\subseteq M$ be a crossed product-like inclusion of von Neumann algebras in $\mathbb{B}(H)$ by a discrete amenable group $U\subseteq \mathcal{N}_M(N)$. Then there is a left-invariant mean $m\colon \ell^{\infty}(U)\to \mathbb{C}$ with a net of finite subsets $\{ F_{\mu} \} \subseteq U$ such that $$\lim_{\mu} \frac{1}{|F_{\mu}| } \sum_{g\in F_{\mu}} f(g) = m(f) , \ \ f\in \ell^{\infty}(U).$$ Now let a bounded map $\phi\colon U\to \mathbb{B}(H)$ be given.
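As a simple illustration of such a net (a standard example, not taken from the text): for $U=\mathbb{Z}$ one may take the intervals $F_n=\{-n,\dots,n\}$, since for every $g\in \mathbb{Z}$ and $n\ge |g|$, $$\frac{ | (g+F_n) \bigtriangleup F_n | }{ |F_n| } = \frac{2|g|}{2n+1} \longrightarrow 0 \quad (n\to\infty),$$ and any weak-$*$ cluster point of the functionals $f\mapsto \frac{1}{|F_n|}\sum_{g\in F_n} f(g)$ on $\ell^{\infty}(\mathbb{Z})$ is a left-invariant mean.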
For $\xi,\eta\in H$, define $\phi_{\xi,\eta}\in \ell^{\infty}(U)$ by $$\phi_{\xi,\eta}(u)=\langle \phi(u) \xi, \eta \rangle, \ \ u\in U.$$ Then there is an operator $T_{\phi}\in \mathbb{B}(H)$ which we will often write in the form $$T_{\phi}= \int_{u\in U} \phi(u) d m$$ such that $$\langle T_{\phi}\xi,\eta \rangle = m( \phi_{\xi,\eta} )= \int_{u\in U} \langle \phi(u) \xi,\eta \rangle d m , \ \ \xi,\eta\in H.$$ By the construction of $m$, we have $$\langle T_{\phi} \xi, \eta \rangle = \lim_{\mu} \frac{1}{|F_{\mu}|} \sum_{u\in F_{\mu}} \left\langle\phi(u) \xi,\eta \right\rangle, \ \ \xi,\eta\in H,$$ that is, $T_{\phi}\in \overline{\mathrm{conv}}^{\mathrm{w}} \{ \phi(u) : u \in U \}$. Furthermore, $$\begin{aligned} \| T_{\phi} \| &= \sup_{\xi,\eta\in H_1} \left| \int_{u\in U} \langle \phi(u) \xi,\eta \rangle d m \right| \\ &\le \int_{u\in U} \sup_{\xi,\eta\in H_1} \left| \langle \phi(u) \xi,\eta \rangle \right| d m =\int_{u\in U} \| \phi(u) \| d m.\end{aligned}$$ In the next lemma, we shall find a unital normal $*$-homomorphism between von Neumann algebras. This lemma originates in Christensen’s work [@Chris2 Lemma 3.3], which discusses perturbation theory for injective von Neumann algebras. \[2.2\] Let $N\subseteq M$ be an inclusion of von Neumann algebras in $\mathbb{B}(H)$ and let $A,B$ be intermediate von Neumann subalgebras for $N\subseteq M$ with a normal conditional expectation $E \colon M\to B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group $U$ and $d(A,B)<\gamma<1/4$. Then there exists a unital $N$-fixed normal $*$-homomorphism $\Phi \colon A \to B$ such that $$\| \Phi -{\mathrm{id}}_A \| \le ( 8 \sqrt{2}+2)\gamma .$$ Let $A_0:= \mathrm{span} \{ x u : x\in N, u\in U\}$.
By Stinespring’s theorem, there exist a Hilbert space $\tilde{H} \supseteq H$ and a unital normal $*$-homomorphism $\pi \colon M \to \mathbb{B}(\tilde{H})$ such that $$\begin{aligned} E (x)= P_H \pi (x) |_H, \ \ x\in M .\end{aligned}$$ Let $m\colon \ell^{\infty}(U)\to \mathbb{C}$ be a left-invariant mean with a net of finite subsets $\{ F_{\mu} \} \subseteq U$ such that $$\lim_{\mu} \frac{1}{|F_{\mu}| } \sum_{g\in F_{\mu}} f(g) = m(f) , \ \ f\in \ell^{\infty}(U).$$ Define $$t:= \int_{u\in U} \pi(u)P_H \pi(u^*) d m.$$ Since $P_H\in \pi( N )'$, we have $t\in \pi(N )'$. Fix $x\in A_0$. Then there exist $\{ u_1,\dots, u_N\} \subseteq U$ and $\{ x_1,\dots,x_N\}\subseteq N_1$ such that $x=\sum_{i=1}^N x_i u_i $. For any $\xi,\eta\in H$, $$\begin{aligned} \left\langle \pi(x) t \xi, \eta \right\rangle &=\int_{u\in U} \left\langle \pi(x) \pi(u) P_H \pi(u^*) \xi, \eta \right\rangle d m \\ &=\int_{u\in U} \sum_{i=1}^N \left\langle \pi( x_i u_i u)P_H\pi(u^*) \xi, \eta \right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \pi(x_i v)P_H\pi(v^* u_i ) \xi, \eta \right\rangle d (u_i^* m) \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \pi( x_i v)P_H\pi(v^* u_i ) \xi, \eta \right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \pi(v)P_H\pi(v^* x_i u_i ) \xi, \eta \right\rangle d m \\ &=\int_{u\in U} \left\langle \pi(u) P_H \pi(u^*) \pi(x) \xi, \eta \right\rangle d m = \left\langle t \pi(x) \xi, \eta \right\rangle.\end{aligned}$$ Therefore, $t\in \pi( A)'$, since $A_0$ is weakly dense in $A$ and $\pi$ is normal. Furthermore, for $u\in U$, there is $v\in B_1$ such that $\| u- v\| <\gamma$. Then since $P_H\in \pi(B)'$, we have $$\| \pi (u)P_H - P_H\pi(u)\| \le \| \pi(u) P_H- \pi(v)P_H\| + \| P_H\pi(v) - P_H\pi(u)\| <2\gamma.$$ Therefore, $$\begin{aligned} \| t - P_H\| &\le \int_{u\in U} \| \pi(u)P_H \pi(u^*) - P_H\| d m \\ &=\int_{u\in U} \| \pi(u) P_H - P_H \pi(u) \| d m \le 2 \gamma <\frac{1}{2}.\end{aligned}$$ Define $\delta:=\| t - P_H\|$ and $q:=\chi_{[1-\delta,1]}(t)$.
Since $\|q-P_H\|\le 2\delta<1$, there exists a unitary $w\in C^*(t, P_H, I_{\tilde{H}})$ such that $w P_H w^*=q$ and $\| w- I_{\tilde{H}} \| \le 2 \sqrt{2}\delta$ by Lemma \[polar\](3). Define a map $\Phi\colon A \to \mathbb{B}(\tilde{H})$ by $$\Phi(x):= P_H w^* \pi (x) w |_H, \ \ x\in A .$$ Since $t\in \overline{\mathrm{conv}}^{{\mathrm{w}}} \{ \pi(u) P_H \pi(u^*) : u\in U\}$ and $P_H \pi( A ) |_H= E( A )\subseteq B$, we have $\Phi( A )\subseteq B$. For any $x,y\in A$, $$\begin{aligned} \Phi(x)\Phi(y) &=P_H w^*\pi(x) w P_H w^*\pi(y)w P_H \\ &=P_H w^* \pi(x) q \pi(y) w P_H \\ &=P_H w^* q \pi(x y)w P_H \\ &=P_H w^* \pi (x y) w P_H =\Phi(x y).\end{aligned}$$ Therefore, $\Phi$ is a $*$-homomorphism. Furthermore, for any $x\in A_1$, $$\begin{aligned} \| \Phi(x) - x \| &\le \| \Phi(x) - E(x)\| + \| E (x) - x\| \\ &\le \| P_H w^* \pi(x) w P_H - P_H \pi(x) P_H \| + 2d(A , B) \\ &\le 2\| w - I_{\tilde{H}}\| + 2 d(A , B) \\ &\le (8 \sqrt{2}+2) \gamma.\end{aligned}$$ Since $w\in C^*(t, P_H, I_{\tilde{H}})\subseteq \pi(N)'$, $\Phi$ is an $N$-fixed map. We base the next lemma on Christensen’s work [@Chris2 Propositions 4.2 and 4.4], which show similar results for injective von Neumann algebras. \[2.3\] Let $A,B$ and $N$ be von Neumann algebras in $\mathbb{B}(H)$ with $N\subseteq A\cap B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group $U$. Then given two unital $N$-fixed normal $*$-homomorphisms $\Phi_1,\Phi_2 \colon A \to B$ with $\| \Phi_1-\Phi_2\| <1$, there exists a unitary $u\in N' \cap B$ such that $\Phi_1=\mathrm{Ad}(u)\circ \Phi_2$ and $\| u - I \| \le \sqrt{2}\| \Phi_1- \Phi_2\|$.
Let $A_0:= \mathrm{span}\{ x u : x\in N ,u\in U\}$ and let $m\colon\ell^{\infty}(U)\to \mathbb{C}$ be a left-invariant mean for which there is a net of finite subsets $\{ F_{\mu} \} \subseteq U$ such that $m_{\mu}\to m$ in the weak-$*$ topology, where $$m_{\mu}(f)= \frac{1}{|F_{\mu}| } \sum_{g\in F_{\mu}} f(g), \ \ f\in \ell^{\infty}(U).$$ Define $$s := \int_{u\in U} \Phi_1(u) \Phi_2(u^*) d m .$$ Since $\Phi_1$ and $\Phi_2$ are $N$-fixed maps and $U\subseteq \mathcal{N}_A(N)$, we have $s \in N ' \cap B$. For $x\in A_0$, there exist $\{ u_1,\dots,u_N\} \subseteq U$ and $\{ x_1,\dots ,x_N\} \subseteq N$ such that $x=\sum_{i=1}^N x_i u_i $. For any $\xi ,\eta\in H$, $$\begin{aligned} \langle \Phi_1(x)s\xi,\eta \rangle &=\int_{u\in U} \left\langle \Phi_1(x) \Phi_1(u)\Phi_2(u^*) \xi, \eta \right\rangle d m \\ &=\int_{u\in U} \sum_{i=1}^N \left\langle \Phi_1(x_i u_i u) \Phi_2(u^* ) \xi, \eta\right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \Phi_1(x_i v) \Phi_2(v^* u_i ) \xi, \eta\right\rangle d (u_i^* m) \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \Phi_1(x_i v) \Phi_2(v^* u_i ) \xi, \eta\right\rangle d m \\ &=\sum_{i=1}^N \int_{v\in U} \left\langle \Phi_1( v) \Phi_2(v^* x_i u_i ) \xi, \eta\right\rangle d m \\ &= \int_{v\in U} \left\langle \Phi_1(v) \Phi_2(v^* x) \xi, \eta\right\rangle d m =\langle s \Phi_2(x) \xi,\eta \rangle.\end{aligned}$$ Therefore, by the normality of $\Phi_1$ and $\Phi_2$, $$\begin{aligned} \label{2.3.1} \Phi_1(x)s= s\Phi_2(x), \ \ x\in A.\end{aligned}$$ By taking adjoints, $$\begin{aligned} \label{2.3.2} s^*\Phi_1(x)=\Phi_2(x)s^*, \ \ x\in A.\end{aligned}$$ By (\[2.3.1\]) and (\[2.3.2\]), for $x\in A$, $s^* s \Phi_2(x)= s^* \Phi_1(x) s= \Phi_2(x) s^* s$.
Thus $s^*s$, and hence $|s|$, commutes with each $\Phi_2(x)$; since $s$ is invertible, as shown below, $$\begin{aligned} \label{2.3.3} |s|^{-1} \Phi_2(x)= \Phi_2(x)|s|^{-1}, \ \ x\in A.\end{aligned}$$ Furthermore, $$\begin{aligned} \|s- I_H\| \le \int_{u\in U} \| \Phi_1(u)\Phi_2(u^*)- \Phi_1(u)\Phi_1(u^*) \| d m \le \| \Phi_1-\Phi_2\| <1.\end{aligned}$$ Hence by Lemma \[polar\](1), we can choose the unitary $u\in C^*(s,I )\subseteq N'\cap B$ in the polar decomposition of $s$ with $\| u- I \| \le \sqrt{2}\| s- I \|$. By (\[2.3.1\]) and (\[2.3.3\]), $$\begin{aligned} \Phi_1(x)u=\Phi_1(x)s |s|^{-1}= s\Phi_2(x) |s|^{-1}= s |s|^{-1} \Phi_2(x) =u \Phi_2(x), \ \ x\in A.\end{aligned}$$ Therefore, $\Phi_1=\mathrm{Ad}(u)\circ \Phi_2$. Using Lemmas \[2.2\] and \[2.3\], we obtain Theorem C. \[2.3.5\] Let $N\subseteq M$ be an inclusion of von Neumann algebras in $\mathbb{B}(H)$ and let $A,B$ be intermediate von Neumann subalgebras for $N \subseteq M$ with a normal conditional expectation from $M$ onto $B$. Suppose that $N\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-2}$. Then there exists a unitary $u\in N ' \cap (A\cup B)''$ such that $u A u^*= B$ and $\| u - I \| \le 2(8+\sqrt{2})\gamma$. By Lemma \[2.2\], there exists a unital $N$-fixed normal $*$-homomorphism $\Phi\colon A \to B$ such that $$\| \Phi- {\mathrm{id}}_A \| \le (8 \sqrt{2}+2)\gamma.$$ Since $(8 \sqrt{2}+2)\gamma <1$, there exists a unitary $u\in N' \cap (A\cup B)''$ such that $\Phi=\mathrm{Ad}(u)$ and $\| u - I \| \le \sqrt{2}\| \Phi- {\mathrm{id}}_A \| $ by Lemma \[2.3\]. Thus, $$u A u^* = \Phi(A) \subseteq B.$$ Fix $x\in B_1$. There exists $y \in A_1$ such that $\| x -y \|\le \gamma$. Then, $$\begin{aligned} \| y - u x u^*\| \le \| y- x\| + \| x- u x u^*\| \le \gamma + 2\| u- I\| <1.\end{aligned}$$ Thus, $d(u A u^*, B )<1$, that is, $u A u^*= B $ by Proposition \[surjective\].
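For the record, the norm bound in Theorem \[2.3.5\] follows from the two estimates above by a direct computation: $$\| u - I \| \le \sqrt{2}\, \| \Phi - {\mathrm{id}}_A \| \le \sqrt{2} \left( 8\sqrt{2}+2 \right) \gamma = \left( 16 + 2\sqrt{2} \right) \gamma = 2 ( 8+ \sqrt{2} ) \gamma.$$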
\[2.4\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras and let $A,B$ be intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $d(A,B)<\gamma<10^{-2}$. Then there exists a unitary $u\in (C^{**})' \cap W^*(A^{**}, B^{**})$ such that $u A^{**} u^*= B^{**}$ and $\| u - I \| \le 2(8+\sqrt{2})\gamma$. By a general construction, there exists a normal conditional expectation $E ^{**}\colon D^{**}\to B^{**}$. Let $(\pi, H)$ be the universal representation of $D$ and identify $A^{**}$, $B^{**}$, $C^{**}$ and $D^{**}$ with $\pi(A)''$, $\pi(B)''$, $\pi(C)''$ and $\pi(D)''$, respectively. Then by Theorem \[2.3.5\] and Lemma \[weak-closure\], the corollary follows. Unitary equivalence {#unitary} =================== In this section, we show the fourth main result: Theorem D. For a unital inclusion $C\subseteq D$ of C$^*$-algebras acting on a separable Hilbert space $H$ and sufficiently close separable intermediate C$^*$-subalgebras $A$, $B$ for $C\subseteq D$ with a conditional expectation of $D$ onto $B$, if $A=C \rtimes G$ with $G$ discrete amenable and if $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{{\mathrm{w}}}$, then $A$ and $B$ are unitarily equivalent. The unitary can be chosen in $C'\cap (A\cup B)''$, the relative commutant of $C$ in $(A\cup B)''$. To show this, we modify the arguments of Section 5 in Christensen et al. [@CSSWW]. \[4.1\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting non-degenerately on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{{\mathrm{w}}}$.
If $d(A,B)<\gamma<10^{-4}$, then for any finite subsets $X\subseteq B_1$, $Z_A\subseteq A_1$ and $\varepsilon,\mu>0$, there exist finite subsets $Y\subseteq B_1$, $Z\subseteq A_1$, a constant $\delta>0$, a unitary $u\in C'\cap C^*(A,B)$ and a $C$-fixed surjective $*$-isomorphism $\theta\colon B\to A$ with the following conditions$\colon$ (i) $\delta<\varepsilon;$ (ii) $X\subseteq_{\varepsilon}Y;$ (iii) $\| u - I\| \le 75\gamma;$ (iv) $\theta\approx_{Y,\delta} \mathrm{Ad}(u);$ (v) $\theta \approx_{X,117\gamma} {\mathrm{id}}_B$ and $\theta^{-1} \approx_{Z_A, 115\gamma} {\mathrm{id}}_A;$ (vi) For any $C$-fixed surjective $*$-isomorphism $\phi\colon B\to A$ with $ \phi^{-1}\approx_{Z, 365\gamma} {\mathrm{id}}_A, $ there exists a unitary $w\in C'\cap A$ such that $$\mathrm{Ad}(w) \circ \phi \approx_{Y, \delta/2} \theta \ \ \text{and} \ \ \| w - u\| \le 665\gamma;$$ (vii) For any finite subset $S\subseteq H_1$ and unitary $v\in C'\cap C^*(A,B)$ with $\mathrm{Ad} (v) \approx_{Y,\delta}\theta$ and $\| v -u\| \le 740\gamma$, there exists a unitary $\tilde{v} \in C'\cap A$ such that $\mathrm{Ad}(\tilde{v} v) \approx_{X,\varepsilon} \theta$, $\| \tilde{v}- I\| \le 740\gamma$ and $$\| (\tilde{v}v-u)\xi \| <\mu \ \ \text{and} \ \ \| (\tilde{v}v-u)^* \xi\| <\mu, \ \ \xi \in S.$$ Let $X, Z_A, \varepsilon$ and $\mu$ be given. By Lemma \[1.5\], there exists a finite subset $Z_1\subseteq B_1$ satisfying the following condition: given two unital $C$-fixed $*$-homomorphisms $\phi_1,\phi_2\colon B\to B$ with $\phi_1\approx_{Z_1,32\gamma}\phi_2$, there exists a unitary $w_1\in C'\cap B$ such that $\phi_1\approx_{X,\varepsilon/3}\mathrm{Ad} (w_1) \circ \phi_2$ and $\| w_1- I_H\| \le 32\sqrt{2}\gamma$. By Proposition \[3.3\], there exists a $C$-fixed surjective $*$-isomorphism $\beta\colon B\to A$ such that $$\begin{aligned} \label{4.1.1} \beta\approx_{Z_1, 17\gamma} {\mathrm{id}}_B.\end{aligned}$$ Let $X_0:=\beta(X)$.
By Lemma \[1.8\], there exist a finite set $Y_0\subseteq A_1$ and $\delta>0$ with the following properties: $\delta<\varepsilon/6$, $X_0\subseteq Y_0$ and given a finite set $S_0\subseteq H_1$ and a unitary $u\in C'\cap C^*(A,B)$ with $\| u - I\| \le 740\gamma$ and $$\begin{aligned} \| u y_0 - y_0 u \| \le 3\delta, \ \ y_0\in Y_0,\end{aligned}$$ there exists a unitary $v\in C'\cap A$ such that $\| v -I\| \le 740\gamma$, $$\| v x_0 - x_0 v \| \le \frac{\varepsilon}{6}, \ \ x_0\in X_0$$ and $$\| (v-u)\xi_0\| <\mu \ \ \mathrm{and} \ \ \|(v-u)^*\xi_0\|<\mu, \ \ \xi_0\in S_0,$$ since $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{{\mathrm{w}}}$. By Lemma \[1.5\], there exists a finite set $Z\subseteq A_1$ with the following properties: $\beta(Z_1)\cup Z_A\subseteq Z$ and given $\gamma_0<1/10$ and two unital $C$-fixed $*$-homomorphisms $\phi_1,\phi_2\colon A\to C^*(A,B)$ with $\phi_1\approx_{Z,\gamma_0}\phi_2$, there exists a unitary $u_0\in C'\cap C^*(A,B)$ such that $\mathrm{Ad} (u_0) \circ \phi_1\approx_{Y_0,\delta/2}\phi_2$ and $\| u_0 - I\| \le \sqrt{2}\gamma_0$. By Proposition \[3.3\], there exists a $C$-fixed surjective $*$-isomorphism $\sigma\colon A\to B$ such that $$\begin{aligned} \label{4.1.2} \sigma\approx_{Z,15\gamma} {\mathrm{id}}_A \ \ \mathrm{and} \ \ \sigma^{-1}\approx_{X,17\gamma} {\mathrm{id}}_B.\end{aligned}$$ Hence, by the choice of $Z$, there exists a unitary $u_0\in C'\cap C^*(A,B)$ such that $$\begin{aligned} \label{4.1.3} \sigma\approx_{Y_0,\delta/2} \mathrm{Ad} (u_0)\end{aligned}$$ and $\| u_0 - I\| \le 15\sqrt{2}\gamma <25\gamma$. Since $\beta(Z_1)\subseteq Z$, it follows from (\[4.1.1\]) and (\[4.1.2\]) that $$\begin{aligned} \label{4.1.4} \sigma\circ\beta\approx_{Z_1,32\gamma} {\mathrm{id}}_B.\end{aligned}$$ By the definition of $Z_1$, there exists a unitary $w_1\in C'\cap B$ such that $$\begin{aligned} \label{4.1.5} \sigma\circ \beta \approx_{X,\varepsilon/3} {\mathrm{Ad}}(w_1)\end{aligned}$$ and $\| w_1 - I\| \le 32\sqrt{2}\gamma < 50\gamma$.
Now define $\theta:=\sigma^{-1}\circ {\mathrm{Ad}}(w_1)$, $Y:=\theta^{-1}(Y_0)$ and $u:= u_0^*w_1$. Fix $y\in Y$. Let $y_0:=\theta(y)\in Y_0$. Then, $$\begin{aligned} \| \theta(y) - {\mathrm{Ad}}(u) (y)\| &=\| y_0 - ({\mathrm{Ad}}(u) \circ\theta^{-1})(y_0) \| \\ &=\| y_0- ({\mathrm{Ad}}(u_0^*) \circ \sigma)(y_0) \| \\ &=\| {\mathrm{Ad}}(u_0) (y_0) - \sigma(y_0) \| \le \frac{\delta}{2},\end{aligned}$$ since $\theta^{-1}={\mathrm{Ad}}(w_1^*) \circ \sigma$ and by (\[4.1.3\]). Thus, $\theta\approx_{Y,\delta/2}{\mathrm{Ad}}(u)$, so that condition (iv) holds. By the definition of $u$, we have $$\begin{aligned} \| u- I\| = \| w_1- u_0\| \le \| w_1- I\| + \| I - u_0 \| <75 \gamma.\end{aligned}$$ Hence, condition (iii) holds. For any $x\in X$, $$\begin{aligned} \label{4.1.6} \begin{split} \| \theta(x) - x \| &\le \| (\sigma^{-1}\circ{\mathrm{Ad}}(w_1))(x) - \sigma^{-1}(x) \| + \| \sigma^{-1}(x) - x\| \\ &\le 2\| w_1- I\| +17\gamma \\ &\le 100\gamma +17\gamma= 117\gamma. \end{split}\end{aligned}$$ For any $z\in Z$, $$\begin{aligned} \| \theta^{-1}(z) - z \| &\le \| ({\mathrm{Ad}}(w_1^*) \circ \sigma)(z) - {\mathrm{Ad}}(w_1^*)(z) \| + \| {\mathrm{Ad}}(w_1^*)(z) - z\| \\ &\le \| \sigma(z) - z\| + 2\| w_1-I\| \\ &\le 15\gamma +100\gamma=115\gamma.\end{aligned}$$ Therefore, $$\begin{aligned} \label{4.1.7} \theta^{-1}\approx_{Z,115\gamma} {\mathrm{id}}_A.\end{aligned}$$ Since $Z_A\subseteq Z$, we have $\theta^{-1}\approx_{Z_A,115\gamma} {\mathrm{id}}_A$, so that condition (v) holds. By (\[4.1.5\]), $$\begin{aligned} \label{4.1.8} \theta=\sigma^{-1}\circ {\mathrm{Ad}}(w_1) \approx_{X,\varepsilon/3} \sigma^{-1}\circ \sigma\circ\beta=\beta.\end{aligned}$$ Fix $x_0\in X_0$. Let $x:= \beta^{-1}(x_0)\in X$.
Then, by (\[4.1.8\]), $$\begin{aligned} \| \theta^{-1}(x_0)-\beta^{-1}(x_0)\| = \| (\theta^{-1}\circ\beta)(x) - x \| =\| \beta(x) -\theta(x)\| \le \frac{\varepsilon}{3}.\end{aligned}$$ Therefore, $$\begin{aligned} \label{4.1.9} \theta^{-1}\approx_{X_0, \varepsilon/3} \beta^{-1}.\end{aligned}$$ Hence, $$X=\beta^{-1}(X_0)\subseteq_{\varepsilon/3} \theta^{-1}(X_0) \subseteq \theta^{-1}(Y_0) =Y,$$ so that condition (ii) holds. We now verify condition (vi). Let $\phi\colon B\to A$ be a $C$-fixed surjective $*$-isomorphism with $\phi^{-1}\approx_{Z,365\gamma} {\mathrm{id}}_A$. By (\[4.1.2\]), $$\phi^{-1}\approx_{Z,380\gamma}\sigma.$$ Thus, by the definition of $Z$, there exists a unitary $w_0\in C'\cap B$ such that $$\begin{aligned} \label{4.1.10} {\mathrm{Ad}}(w_0)\circ \phi^{-1}\approx_{Y_0,\delta/2}\sigma\end{aligned}$$ and $\| w_0 - I\| \le 380\sqrt{2}\gamma<540\gamma$. Fix $y\in Y$. Let $y_0:= \theta(y)\in Y_0$. Then, since $w_0^*w_1\in B$, we have $$\begin{aligned} \| \theta(y) - ({\mathrm{Ad}}(\phi(w_0^*w_1))\circ \phi)(y) \| &= \| \theta(y)- (\phi\circ{\mathrm{Ad}}(w_0^*w_1))(y) \| \\ &= \| y_0 - (\phi\circ{\mathrm{Ad}}(w_0^*) \circ\sigma)(y_0)\| \\ &=\| ({\mathrm{Ad}}(w_0) \circ\phi^{-1})(y_0)-\sigma(y_0)\| \le \frac{\delta}{2}\end{aligned}$$ by (\[4.1.10\]). Define $w:=\phi(w_0^*w_1)$, so that $\theta\approx_{Y,\delta/2}{\mathrm{Ad}}(w) \circ\phi$. Since $\phi$ is a $C$-fixed map and $w_0,w_1\in C'$, $w$ is in $C'\cap A$. Moreover, $$\begin{aligned} \| w-u\| &\le \| w -I\|+ \| I-u\| \le \| w_0^*w_1-I\| + 75\gamma \\ &\le \|w_0-I\|+\|I-w_1\|+75\gamma \le (540+50+75)\gamma \\ &=665\gamma.\end{aligned}$$ Therefore, condition (vi) is proved. It only remains to prove condition (vii). Let $S\subseteq H_1$ be a finite set and $v\in C'\cap C^*(A,B)$ be a unitary with $\| v-u\| \le 740\gamma$ and $$\begin{aligned} \label{4.1.11} {\mathrm{Ad}}(v)\approx_{Y,\delta}\theta.\end{aligned}$$ Fix $y_0\in Y_0$. Let $y:=\theta^{-1}(y_0)\in Y$.
Then, $$\begin{aligned} \label{4.1.99} \begin{split} \| \sigma(y_0)-{\mathrm{Ad}}(w_1v^*)(y_0)\| &=\| ({\mathrm{Ad}}(w_1^*) \circ \sigma)(y_0)- {\mathrm{Ad}}(v^*) (y_0) \| \\ &=\| y - ({\mathrm{Ad}}(v^*) \circ\theta)(y) \| \\ &=\| {\mathrm{Ad}}(v) (y) - \theta(y) \| \le\delta . \end{split}\end{aligned}$$ This and (\[4.1.3\]) give ${\mathrm{Ad}}(u_0) \approx_{Y_0,3\delta/2}{\mathrm{Ad}}(w_1v^*)$. Therefore, for any $y_0\in Y_0$, $$\begin{aligned} \label{4.1.12} \| (v w_1^* u_0) y_0 - y_0(v w_1^* u_0) \| =\|u_0 y_0u_0^*- (w_1 v^*) y_0 (w_1 v^*)^* \| \le \frac{3}{2}\delta.\end{aligned}$$ Furthermore, $$\begin{aligned} \label{4.1.13} \| v w_1^* u_0 - I\| =\| w_1^* u_0- v^*\| = \| u^* - v^*\| \le 740\gamma.\end{aligned}$$ Let $S_0:=S\cup w_1 S\cup v S$. By the definition of $Y_0$ and $\delta$, with $v w_1^* u_0$ and $S_0$, there exists a unitary $v_0\in C'\cap A$ such that $\| v_0-I\|\le 740\gamma$, $$\begin{aligned} \label{4.1.14} \| v_0 x_0- x_0 v_0 \| \le \frac{\varepsilon}{6}, \ \ x_0\in X_0\end{aligned}$$ and $$\begin{aligned} \label{4.1.15} \| (v_0- v w_1^*u_0)\xi_0\| <\mu \ \ \mathrm{and} \ \ \| (v_0- v w_1^*u_0)^*\xi_0\| <\mu, \ \ \xi_0\in S_0.\end{aligned}$$ Let $\tilde{v}:=v_0^*$. Then, $\| \tilde{v}-I\| \le \| v_0-I\| \le 740\gamma $. For any $\xi\in S$, $$\begin{aligned} \| (\tilde{v}v -u)\xi\| &=\| ( v_0^* v - u_0^* w_1)\xi \| =\| ( v_0^* - u_0^* w_1 v^*) v \xi\| \\ &=\| (v_0- v w_1^* u_0)^* v \xi \| <\mu\end{aligned}$$ by (\[4.1.15\]) and $v \xi \in S_0$. Moreover, $$\| (\tilde{v} v -u)^*\xi\| =\| (v_0^*v- u_0^*w_1)^*\xi\| = \| v^*(v_0 - v w_1^* u_0)\xi\| <\mu$$ by (\[4.1.15\]).
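The numerical constants in Lemma \[4.1\] all arise from chains of triangle inequalities. As a quick sanity check (not part of the proof; all bounds are linear in $\gamma$, so only the coefficients matter), the bookkeeping can be verified numerically:

```python
import math

sqrt2 = math.sqrt(2)

assert 15 * sqrt2 < 25       # ||u_0 - I|| <= 15*sqrt(2)*gamma < 25*gamma
assert 32 * sqrt2 < 50       # ||w_1 - I|| <= 32*sqrt(2)*gamma < 50*gamma
assert 50 + 25 == 75         # condition (iii): ||u - I|| < 75*gamma
assert 2 * 50 + 17 == 117    # condition (v): ||theta(x) - x|| <= 117*gamma
assert 15 + 2 * 50 == 115    # condition (v): ||theta^{-1}(z) - z|| <= 115*gamma
assert 380 * sqrt2 < 540     # ||w_0 - I|| <= 380*sqrt(2)*gamma < 540*gamma
assert 540 + 50 + 75 == 665  # condition (vi): ||w - u|| <= 665*gamma
```

None of these checks replace the estimates in the text; they only confirm that the stated coefficients are mutually consistent.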
For any $x_0\in X_0$, by (\[4.1.99\]) and (\[4.1.14\]), $$\begin{aligned} \label{4.1.16} \begin{split} \| \theta^{-1}(x_0) - {\mathrm{Ad}}(v^* v_0)(x_0)\| &\le \| \theta^{-1}(x_0) - {\mathrm{Ad}}(v^*)(x_0) \| + \| {\mathrm{Ad}}(v^*)(x_0) - {\mathrm{Ad}}(v^* v_0)(x_0) \| \\ &= \| ({\mathrm{Ad}}(w_1^*) \circ \sigma)(x_0) - {\mathrm{Ad}}(v^*)(x_0) \| + \| x_0 - {\mathrm{Ad}}(v_0)(x_0) \| \\ &= \| \sigma(x_0) - {\mathrm{Ad}}(w_1v^*)(x_0) \| + \| v_0 x_0 - x_0 v_0 \| \\ &< \delta + \frac{\varepsilon}{6} \le \frac{\varepsilon}{3}. \end{split}\end{aligned}$$ Let $x\in X$ and $x_0:=\beta(x)\in X_0$. By (\[4.1.8\]), (\[4.1.9\]) and (\[4.1.16\]), $$\begin{aligned} \| {\mathrm{Ad}}(\tilde{v}v)(x) - \theta(x) \| &\le \| {\mathrm{Ad}}(\tilde{v}v)(x) - \beta(x) \| + \| \beta(x)- \theta(x)\| \\ &\le \| ({\mathrm{Ad}}(\tilde{v}v)\circ \beta^{-1})(x_0) - x_0 \| + \frac{\varepsilon}{3} \\ &= \| \beta^{-1}(x_0) - {\mathrm{Ad}}(v^* v_0)(x_0) \| + \frac{\varepsilon}{3} \\ &\le \| \beta^{-1}(x_0) - \theta^{-1}(x_0) \| + \| \theta^{-1}(x_0) - {\mathrm{Ad}}(v^* v_0)(x_0)\| + \frac{\varepsilon}{3} \\ &\le \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.\end{aligned}$$ Therefore, condition (vii) holds. \[4.2\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting non-degenerately on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate $\mathrm{C}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Let $\{ a_n\}_{n=1}^{\infty}$, $\{ b_n\}_{n=1}^{\infty}$ and $\{ \xi_n\}_{n=0}^{\infty}$ be dense subsets in $A_1$, $B_1$ and $H_1$, respectively. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{{\mathrm{w}}}$. 
If $d(A,B)<\gamma<10^{-5}$, then there exist finite subsets $\{ X_n\}_{n=0}^{\infty}, \{Y_n\}_{n=0}^{\infty}\subseteq B_1$, $\{Z_n\}_{n=0}^{\infty}\subseteq A_1$, positive constants $\{\delta_n\}_{n=0}^{\infty}$, unitaries $\{u_n\}_{n=0}^{\infty}\subseteq C'\cap C^*(A,B)$ and $C$-fixed surjective $*$-isomorphisms $\{\theta_n\colon B\to A\}_{n=0}^{\infty}$ with the following conditions$\colon$

1. For $n\ge 1$, $b_1,\dots,b_n\in X_n;$

2. For $n\ge 0$, $X_n\subseteq_{2^{-n}/3}Y_n$ and $\delta_n<2^{-n};$

3. For $n\ge 1$, $\theta_n\approx_{X_{n-1},2^{-(n-1)}}\theta_{n-1};$

4. For $n\ge 0$, $\theta_n\approx_{Y_n,\delta_n}{\mathrm{Ad}}(u_n);$

5. For $1\le j \le n$, $\|(u_n- u_{n-1})\xi_j\|<2^{-n}$ and $\| (u_n- u_{n-1})^* \xi_j \| <2^{-n};$

6. For $1\le j\le n$, there exists $x\in X_n$ such that $\| \theta_n(x)-a_j\| \le 9/10;$

7. For $n\ge 0$ and a $C$-fixed surjective $*$-isomorphism $\phi\colon B\to A$ with $\phi^{-1}\approx_{Z_n, 365\gamma} {\mathrm{id}}_A$, there exists a unitary $w\in C'\cap A$ such that $\mathrm{Ad} (w) \circ \phi \approx_{Y_n, \delta_n/2} \theta_n$ and $\| w - u_n\| \le 665\gamma;$

8. For $n\ge 0$, a finite subset $S\subseteq H_1$ and a unitary $v\in C'\cap C^*(A,B)$ with $\mathrm{Ad} (v) \approx_{Y_n,\delta_n}\theta_n$ and $\| v -u_n\| \le 740\gamma$, there exists a unitary $\tilde{v} \in C'\cap A$ such that $\mathrm{Ad}(\tilde{v} v) \approx_{X_n, 2^{-(n+1)}} \theta_n$, $\| \tilde{v}- I\| \le 740\gamma$ and $$\| (\tilde{v}v-u_n)\xi \| < \frac{1}{2^{n+1}} \ \ \text{and} \ \ \| (\tilde{v}v-u_n)^* \xi\| <\frac{1}{2^{n+1}}, \ \ \xi \in S;$$

9. For $n\ge 0$, there is a unitary $z\in A$ such that $\| z - u_n\| \le 75\gamma$.

We prove this lemma by induction. Denote by (a)$_n$ condition (a) for the index $n$. Let $X_0=Y_0=Z_0=\emptyset$, $\delta_0=1/2$, $u_0=I$ and let $\theta_0\colon B\to A$ be any $C$-fixed surjective $*$-isomorphism, which exists by Proposition \[3.2\]. Then conditions (1)$_0$, (3)$_0$, (5)$_0$ and (6)$_0$ are vacuous.
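The inductive step below repeatedly uses the relations between $\varepsilon=\delta_n/6$, $\delta_{n+1}$ and the tolerances $2^{-n}$. As a small numeric sketch (hypothetical admissible values, checking only the arithmetic used in the proof):

```python
# delta_n < 2^{-n} and delta_{n+1} < eps = delta_n / 6 together imply
# (a) delta_{n+1} < 2^{-(n+1)} / 3                        -- condition (2)_{n+1}
# (b) delta_n/6 + delta_{n+1} + delta_n/6 <= delta_n/2    -- the Y_n estimate
for n in range(12):
    delta_n = 0.99 * 2.0 ** (-n)   # any admissible delta_n < 2^{-n}
    eps = delta_n / 6
    delta_np1 = 0.99 * eps         # any delta_{n+1} < eps
    assert delta_np1 < 2.0 ** (-(n + 1)) / 3
    assert delta_n / 6 + delta_np1 + delta_n / 6 <= delta_n / 2
```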
Conditions (2)$_0$ and (4)$_0$ are clear, since $X_0=Y_0=\emptyset$. Conditions (7)$_0$, (8)$_0$ and (9)$_0$ are satisfied by taking $w=I$, $\tilde{v}=v^*$ and $z=I$, respectively. Assume the statement holds for $n$; we will prove it for $n+1$. By (9)$_n$, there exists a unitary $z\in A$ such that $\| z - u_n\| \le 75\gamma$. For $1\le j\le n+1$, there exists $x_j\in B_1$ such that $\| x_j- z^* a_j z\| \le \gamma$. Define $X_{n+1}:=X_n\cup Y_n\cup \{b_{n+1}\}\cup \{x_1,\ldots,x_{n+1}\}$. Applying Lemma \[4.1\] with $X=X_{n+1}$, $Z_A=Z_n$, $\varepsilon=\delta_n/6$ and $\mu=2^{-(n+2)}$, we obtain $Y_{n+1}\subseteq B_1$, $Z_{n+1}\subseteq A_1$, $\delta_{n+1}>0$, $u\in C'\cap C^*(A,B)$ and $\theta\colon B\to A$ with conditions (i)-(vii) of that lemma. By Lemma \[4.1\] (i), $\delta_{n+1}<\varepsilon=\delta_n/6<2^{-(n+1)}/3$. By Lemma \[4.1\] (ii), $X_{n+1}\subseteq_{2^{-(n+1)}/3} Y_{n+1}$. Thus, condition (2)$_{n+1}$ holds. Applying condition (7)$_n$ to $\theta$, we may find a unitary $w\in C'\cap A$ such that $$\begin{aligned} \label{4.2.1} {\mathrm{Ad}}(w) \circ \theta\approx_{Y_n,\delta_n/2}\theta_n\end{aligned}$$ and $\| w-u_n\|\le665\gamma$. Fix $y\in Y_n$. Since $Y_n\subseteq X_{n+1}\subseteq_{\delta_n/6}Y_{n+1}$, there exists $\tilde{y}\in Y_{n+1}$ such that $\| y-\tilde{y}\|\le \delta_n/6$. Then, by Lemma \[4.1\] (iv), $$\begin{aligned} \| {\mathrm{Ad}}(u)(y)- \theta(y)\| &\le \| {\mathrm{Ad}}(u) (y)-{\mathrm{Ad}}(u) (\tilde{y})\|+\|{\mathrm{Ad}}(u) (\tilde{y})-\theta(\tilde{y})\|+\|\theta(\tilde{y})-\theta(y)\| \\ &\le \frac{\delta_n}{6}+\delta_{n+1}+\frac{\delta_n}{6}\le \frac{\delta_n}{2}. \end{aligned}$$ This and (\[4.2.1\]) give $$\begin{aligned} \label{4.2.2} {\mathrm{Ad}}(w u)\approx_{Y_n,\delta_n} \theta_n.\end{aligned}$$ Moreover, $$\begin{aligned} \label{4.2.3} \begin{split} \| w u - u_n\| \le \| w (u - I) \| + \| w - u_n \| \le 75\gamma+665\gamma=740\gamma.
\end{split}\end{aligned}$$ By (\[4.2.2\]) and (\[4.2.3\]), we can apply $w u$ and $\{\xi_1,\ldots,\xi_{n+1}\}$ to condition (8)$_n$. Hence, there exists a unitary $\tilde{v}\in C'\cap A$ such that $$\begin{aligned} \label{4.2.4} \mathrm{Ad}(\tilde{v} w u) \approx_{X_n, 2^{-(n+1)}} \theta_n,\end{aligned}$$ $\| \tilde{v}- I\| \le 740\gamma$ and $$\begin{aligned} \label{4.2.5} \| (\tilde{v}w u-u_n)\xi_j \| < \frac{1}{2^{n+1}} \ \ \mathrm{and} \ \ \| (\tilde{v}w u-u_n)^* \xi_j\| <\frac{1}{2^{n+1}}, \ \ 1\le j\le n+1.\end{aligned}$$ Define $\theta_{n+1}:={\mathrm{Ad}}(\tilde{v}w)\circ \theta$ and $u_{n+1}:= \tilde{v}w u$. By (\[4.2.5\]), condition (5)$_{n+1}$ is trivial. Since $\tilde{v}w\in A$ and $$\| \tilde{v}w-u_{n+1}\|=\|\tilde{v}w-\tilde{v}w u\|=\|I-u\|\le 75\gamma,$$ condition (9)$_{n+1}$ holds. By Lemma \[4.1\] (iv), $\theta_{n+1}={\mathrm{Ad}}(\tilde{v}w)\circ\theta\approx_{Y_{n+1},\delta_{n+1}}{\mathrm{Ad}}(\tilde{v}w u)={\mathrm{Ad}}(u_{n+1})$. Thus, condition (4)$_{n+1}$ is satisfied. Fix $x\in X_n$. Let $y\in Y_{n+1}$ satisfy $\| x- y\|\le 2^{-(n+1)}/3$. Then, by (4)$_{n+1}$, $$\begin{aligned} &\| \theta_{n+1}(x)- {\mathrm{Ad}}(u_{n+1}) (x) \| \\ &\le \| \theta_{n+1}(x) -\theta_{n+1}(y)\|+\|\theta_{n+1}(y)-{\mathrm{Ad}}(u_{n+1}) (y)\| +\| {\mathrm{Ad}}(u_{n+1}) (y)- {\mathrm{Ad}}(u_{n+1}) (x) \| \\ &\le \frac{1}{3\cdot 2^{n+1}}+\delta_{n+1}+\frac{1}{3\cdot 2^{n+1}} < \frac{1}{2^{n+1}}.\end{aligned}$$ Therefore, $$\begin{aligned} \theta_{n+1}\approx_{X_n,2^{-(n+1)}} {\mathrm{Ad}}(u_{n+1}).\end{aligned}$$ This and (\[4.2.4\]) give $\theta_{n+1}\approx_{X_n,2^{-n}}\theta_n$. Hence, condition (3)$_{n+1}$ holds. 
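The estimate establishing condition (3)$_{n+1}$ combines the two $2^{-(n+1)}/3$ approximations with $\delta_{n+1}$ and then doubles the tolerance. Numerically (a sketch of the arithmetic only, with a hypothetical admissible $\delta_{n+1}$):

```python
# 1/(3*2^{n+1}) + delta_{n+1} + 1/(3*2^{n+1}) < 2^{-(n+1)}   -- the (4)_{n+1} step
# and then 2^{-(n+1)} + 2^{-(n+1)} = 2^{-n}                  -- condition (3)_{n+1}
for n in range(12):
    delta_np1 = 0.99 * 2.0 ** (-(n + 1)) / 3   # condition (2)_{n+1}
    lhs = 2.0 ** (-(n + 1)) / 3 + delta_np1 + 2.0 ** (-(n + 1)) / 3
    assert lhs < 2.0 ** (-(n + 1))
    assert 2.0 ** (-(n + 1)) + 2.0 ** (-(n + 1)) == 2.0 ** (-n)
```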
For any $x\in A_1$, $$\begin{aligned} \label{4.2.6} \begin{split} &\| {\mathrm{Ad}}(\tilde{v}w)(x)- {\mathrm{Ad}}(z)(x) \| \\ &\le \| {\mathrm{Ad}}(\tilde{v}w)(x)- {\mathrm{Ad}}(w) (x)\| + \| {\mathrm{Ad}}(w) (x)- {\mathrm{Ad}}(u_n) (x)\| + \| {\mathrm{Ad}}(u_n) (x)- {\mathrm{Ad}}(z) (x)\| \\ &\le 2\|\tilde{v}-I\|+ 2\| w-u_n\|+ 2\|u_n-z\| \\ &\le (1480+ 1330+ 150)\gamma=2960\gamma. \end{split}\end{aligned}$$ For $1\le j\le n+1$, there exists $x_j\in X_{n+1}$ such that $\| x_j- z^* a_j z\| \le \gamma$ by the definition of $X_{n+1}$. Then (\[4.2.6\]) and Lemma \[4.1\] (v) give $$\begin{aligned} &\| \theta_{n+1}(x_j) - a_j\| \\ &\le \| \theta_{n+1}(x_j)-{\mathrm{Ad}}(\tilde{v}w)(x_j)\|+\|{\mathrm{Ad}}(\tilde{v}w)(x_j)-{\mathrm{Ad}}(z) (x_j)\| +\|{\mathrm{Ad}}(z) (x_j)-a_j\| \\ &\le \| ({\mathrm{Ad}}(\tilde{v}w)\circ\theta)(x_j)-{\mathrm{Ad}}(\tilde{v}w)(x_j)\| + 2960\gamma + \| x_j - z^* a_j z\| \\ &\le \| \theta(x_j) - x_j\| + 2960\gamma+\gamma \\ &\le 3078\gamma <\frac{9}{10}.\end{aligned}$$ Therefore, condition (6)$_{n+1}$ is proved. Let $\phi\colon B\to A$ be a $C$-fixed surjective $*$-isomorphism with $\phi^{-1}\approx_{Z_{n+1},365\gamma}{\mathrm{id}}_A$. By Lemma \[4.1\] (vi), there exists a unitary $\tilde{w}\in C'\cap A$ such that $$\begin{aligned} \label{4.2.7} {\mathrm{Ad}}(\tilde{w}) \circ \phi \approx_{Y_{n+1},\delta_{n+1}/2} \theta\end{aligned}$$ and $\| \tilde{w}- u \| \le 665\gamma$. For any $y\in Y_{n+1}$, by (\[4.2.7\]), $$\begin{aligned} \| ({\mathrm{Ad}}(\tilde{v}w \tilde{w})\circ \phi)(y) - \theta_{n+1}(y) \| =\| ({\mathrm{Ad}}(\tilde{w}) \circ \phi)(y) - \theta(y) \| \le \frac{\delta_{n+1}}{2}.\end{aligned}$$ Furthermore, we have $$\begin{aligned} \| \tilde{v}w\tilde{w}-u_{n+1}\| =\| \tilde{v}w\tilde{w} - \tilde{v}w u \| =\| \tilde{w}-u\| \le 665 \gamma.\end{aligned}$$ Thus, $\tilde{v}w\tilde{w}$ satisfies (7)$_{n+1}$. It remains to prove condition (8)$_{n+1}$.
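Before turning to condition (8)$_{n+1}$, the constants used for condition (6)$_{n+1}$ can be checked numerically (a sanity check only; all bounds are linear in $\gamma$):

```python
# (4.2.6): the three doubled bounds 2*740, 2*665 and 2*75 sum to 2960
assert 2 * 740 + 2 * 665 + 2 * 75 == 2960
# condition (6)_{n+1}: 117 + 2960 + 1 = 3078, and 3078*gamma < 9/10
assert 117 + 2960 + 1 == 3078
gamma = 1e-5                      # Lemma 4.2 assumes gamma < 10^{-5}
assert 3078 * gamma < 9 / 10
```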
Let $S\subseteq H_1$ be a finite set and $v\in C'\cap C^*(A,B)$ be a unitary with $\| v-u_{n+1}\|\le 740\gamma$ and ${\mathrm{Ad}}(v) \approx_{Y_{n+1},\delta_{n+1}}\theta_{n+1}$. Then, we have $$\| w^*\tilde{v}^*v - u\|=\| v-\tilde{v}w u\| =\| v- u_{n+1}\|\le 740\gamma$$ and ${\mathrm{Ad}}(w^*\tilde{v}^*v)\approx_{Y_{n+1},\delta_{n+1}} {\mathrm{Ad}}(w^*\tilde{v}^*)\circ\theta_{n+1} =\theta$. Hence, by applying Lemma \[4.1\] (vii) to $w^*\tilde{v}^*v$ and $S':=S\cup \{ w^*\tilde{v}^*\xi : \xi\in S\}$, there exists a unitary $v'\in C'\cap A$ such that ${\mathrm{Ad}}(v' w^*\tilde{v}^*v)\approx_{X_{n+1},\delta_n/6}\theta$, $\| v'-I\|\le 740\gamma$ and $$\begin{aligned} \| (v' w^*\tilde{v}^*v- u) \xi'\|<\frac{1}{2^{n+2}} \ \ \mathrm{and} \ \ \| (v' w^*\tilde{v}^*v- u)^* \xi'\|<\frac{1}{2^{n+2}}, \ \ \xi'\in S'.\end{aligned}$$ For any $x\in X_{n+1}$, we have $$\begin{aligned} \| {\mathrm{Ad}}(\tilde{v}w v' w^* \tilde{v}^* v)(x)- \theta_{n+1}(x) \| =\| {\mathrm{Ad}}( v' w^* \tilde{v}^* v)(x) - \theta(x) \|\le \frac{\delta_n}{6}<\frac{1}{2^{n+2}}\end{aligned}$$ and $$\begin{aligned} \| \tilde{v}w v' w^* \tilde{v}^* -I\| =\| v'-I \| \le 740\gamma.\end{aligned}$$ For $\xi\in S$, we have $$\begin{aligned} \| ( \tilde{v}w v' w^* \tilde{v}^* v - u_{n+1})\xi \| = \| (v' w^* \tilde{v}^* v - u)\xi\| < \frac{1}{2^{n+2}}\end{aligned}$$ and $$\begin{aligned} \| (\tilde{v} w v' w^* \tilde{v}^* v - u_{n+1})^*\xi \| =\| (v' w^*\tilde{v}^* v - u )^* w^* \tilde{v}^* \xi\| <\frac{1}{2^{n+2}}.\end{aligned}$$ Therefore, $\tilde{v}w v' w^*\tilde{v}^*$ satisfies $(8)_{n+1}$, and the lemma follows. \[4.3\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting non-degenerately on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate ${\mathrm{C}}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$.
Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap C^*(A,B)\subseteq \overline{C'\cap A}^{{\mathrm{w}}}$. If ${d(A,B)}<10^{-5}$, then there exists a unitary $u\in C'\cap (A\cup B)''$ such that $u A u^* = B$. Let $\{ a_n\}_{n=1}^{\infty}$, $\{ b_n\}_{n=1}^{\infty}$ and $\{ \xi_n\}_{n=0}^{\infty}$ be dense subsets of $A_1$, $B_1$ and $H_1$, respectively. In Lemma \[4.2\], we may choose $\{X_n\}_{n=0}^{\infty}$, $\{Y_n\}_{n=0}^{\infty}$, $\{Z_n\}_{n=0}^{\infty}$, $\{\delta_n\}_{n=0}^{\infty}$, $\{u_n\}_{n=0}^{\infty}$ and $\{\theta_n\}_{n=0}^{\infty}$ with (1)–(8). For any $b_k$ and $\varepsilon>0$, there is $N\in\mathbb{N}$ such that $2^{-(N-1)}<\varepsilon$ and $k< N$. For $m \ge n \ge N$, $$\| \theta_m(b_k)-\theta_n(b_k)\| \le \sum_{j=n}^{m-1} \| \theta_{j+1}(b_k) - \theta_j(b_k) \| \le\sum_{j=n}^{m-1} \frac{1}{2^j} <\frac{1}{2^{N-1}} <\varepsilon.$$ Thus, for any $b_k$, $\{\theta_n(b_k)\}_{n=0}^{\infty}$ is a Cauchy sequence. Since $\| \theta_n\| \le 1$, the sequence $\{\theta_n\}$ converges to a $C$-fixed $*$-homomorphism $\theta\colon B\to A$ in the point-norm topology. For any $a_j$ and $n\ge j$, there is $x\in X_n$ such that $\| \theta_n(x) - a_j\| \le 9/10$. Then $$\begin{aligned} \| a_j - \theta(x) \| &\le \| a_j - \theta_n(x) \| + \sum_{m=n}^{\infty} \| \theta_{m+1}(x) - \theta_m(x) \| \\ &\le \frac{9}{10}+ \sum_{m=n}^{\infty}\frac{1}{2^m} \le \frac{9}{10}+\frac{1}{2^{n-1}}.\end{aligned}$$ Since $n\ge j$ was arbitrary and $\{a_n\}$ is a dense subset of $A_1$, we have $d(A,\theta(B))<1$. Therefore, $\theta$ is surjective by Corollary \[surjective\]. By Lemma \[4.2\] (5), $\{u_n\}$ converges to a unitary $u\in C'\cap (A\cup B)''$ in the $*$-strong topology. Moreover, by Lemma \[4.2\] (4), we have $\theta={\mathrm{Ad}}(u)$. Therefore, $A=u B u^*$, and hence $u^* A u = B$, since $\theta$ is surjective. Finally, we show Theorem D by using Proposition \[4.3\] and Corollary \[2.4\].
\[main\] Let $C\subseteq D$ be a unital inclusion of $\mathrm{C}^*$-algebras acting on a separable Hilbert space $H$. Let $A$ and $B$ be separable intermediate ${\mathrm{C}}^*$-subalgebras for $C\subseteq D$ with a conditional expectation $E\colon D\to B$. Suppose that $C\subseteq A$ is crossed product-like by a discrete amenable group and $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{{\mathrm{w}}}$. If ${d(A,B)}<10^{-7}$, then there exists a unitary $u\in C'\cap (A\cup B)''$ such that $u A u^* = B$. Let ${d(A,B)}<\gamma<10^{-7}$. By Corollary \[2.4\], there exists a unitary $u_0\in (C^{**})'\cap W^*(A^{**}, B^{**})$ such that $u_0 A^{**} u_0^* = B^{**}$ and $\| u_0-I\| \le 19\gamma$. Let $e_D$ be the support projection of $D$ and define $K:=\mathrm{ran}(e_D) \subseteq H$. Now restrict $A,B,C$ and $D$ to $K$. By the universal property, there exists a unique normal representation $\pi\colon D^{**}\to \mathbb{B}(K)$ such that $\pi|_D={\mathrm{id}}_D$ and $\pi(D^{**})=D''$. Define $\tilde{A}:= \pi(u_0) A \pi(u_0^*)\subseteq \mathbb{B}(K)$, then $d(\tilde{A},B)\le 2\|u_0-I\| + {d(A,B)}<39\gamma<10^{-5}$. Since $\tilde{A}'' = \pi(u_0) \pi(A^{**}) \pi(u_0^*)=\pi(B^{**}) = B''$ and $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{{\mathrm{w}}}$, $$C'\cap C^*(\tilde{A},B)\subseteq C'\cap \tilde{A}'' = \pi(u_0) ( C'\cap A'')\pi(u_0)^* = \pi(u_0) (\overline{C'\cap A}^{{\mathrm{w}}}) \pi(u_0)^* =\overline{C'\cap \tilde{A}}^{{\mathrm{w}}}.$$ Therefore, there exists a unitary $u_1\in C'\cap B''\subseteq \mathbb{B}(K)$ such that $u_1 \tilde{A} u_1^* = B$ by Proposition \[4.3\]. Hence, the unitary $u$ is given by $$u=u_1\pi(u_0)+(I_K-e_D) \in C'\cap (A\cup B)''\subseteq \mathbb{B}(H),$$ so that $u A u^*=B$. Let $C=C(\mathbb{T})$ and $A=C(\mathbb{T})\rtimes \mathbb{Z}$ act on $H=\mathcal{L}^2(\mathbb{T})\otimes \ell^2(\mathbb{Z})$. 
Then we have $C'\cap A=C$ and $C'\cap \overline{A}^{{\mathrm{w}}}=\mathcal{L}^{\infty}(\mathbb{T})$, that is, $C'\cap A$ is weakly dense in $C'\cap \overline{A}^{{\mathrm{w}}}$. Note, however, that $C'\cap \overline{A}^{{\mathrm{w}}}$ need not equal the weak closure of $C'\cap A$ in general. Let $\alpha$ be a free action of a group $G$ on a simple C$^*$-algebra $C$ and $A=C\rtimes_{\alpha}G$ act irreducibly on a Hilbert space $H$. Then $C'\cap A=\mathbb{C}$ but $C'\cap \overline{A}^{{\mathrm{w}}}=C'\cap \mathbb{B}(H)$. The author would like to thank Professor Yasuo Watatani for his encouragement and advice. [99]{} W. B. Arveson, Notes on extensions of $C^*$-algebras, *Duke Math. J.* 44 (1977) 329–355. J. Cameron, E. Christensen, A. M. Sinclair, R. R. Smith, S. White and A. D. Wiggins, Kadison-Kastler stable factors, *Duke Math. J.* 163 (2014) 2639–2686. W. K. Chan, Perturbations of certain crossed product algebras by free groups, *J. Funct. Anal.* 267 (2014) 3994–4027. M. D. Choi and E. Christensen, Completely order isomorphic and close $C^*$-algebras need not be $*$-isomorphic, *Bull. London Math. Soc.* 15 (1983) 604–610. E. Christensen, Perturbations of type I von Neumann algebras, *J. London Math. Soc.* 9 (1974/75) 395–405. E. Christensen, Perturbation of operator algebras, *Invent. Math.* 43 (1977) 1–13. E. Christensen, Perturbation of operator algebras II, *Indiana Univ. Math. J.* 26 (1977) 891–904. E. Christensen, Near inclusions of $C^*$-algebras, *Acta Math.* 144 (1980) 249–265. E. Christensen, A. M. Sinclair, R. R. Smith, S. A. White and W. Winter, Perturbations of nuclear $C^*$-algebras, *Acta Math.* 208 (2012) 93–150. L. Dickson, A Kadison Kastler row metric and intermediate subalgebras, *Internat. J. Math.* 25 (2014) 16 pp. S. Ino, Perturbations of von Neumann subalgebras with finite index, *Canad. Math. Bull.* http://dx.doi.org/10.4153/CMB-2015-081-8 S. Ino and Y.
Watatani, Perturbations of intermediate C$^*$-subalgebras for simple C$^*$-algebras, *Bull. London Math. Soc.* 46 (2014) 469–480. B. Johnson, Perturbations of Banach algebras, *Proc. London Math. Soc.* 34 (1977) 439–458. B. Johnson, A counterexample in the perturbation theory of $C^*$-algebras, *Canad. Math. Bull.* 25 (1982) 311–316. R. V. Kadison and D. Kastler, Perturbations of von Neumann algebras. I. Stability of type, *Amer. J. Math.* 94 (1972) 38–54. M. Khoshkam, On the unitary equivalence of close $C^*$-algebras, *Michigan Math. J.* 31 (1984) 331–338. V. Paulsen, *Completely bounded maps and operator algebras*, Cambridge Studies in Advanced Mathematics, 78. Cambridge University Press, Cambridge, (2002) xii+300 pp. J. Phillips, Perturbations of type I von Neumann algebras, *Pacific J. Math.* 31 (1979) 1012–1016. J. Phillips and I. Raeburn, Perturbations of AF-algebras, *Canad. J. Math.* 31 (1979) 1012–1016. J. Phillips and I. Raeburn, Perturbations of $C^*$-algebras II, *Proc. London Math. Soc.* 43 (1981) 46–72. M. Pimsner and S. Popa, Entropy and index for subfactors, *Ann. Sci. Ecole Norm. Sup.* 19 (1986) 57–106. I. Raeburn and J. L. Taylor, Hochschild cohomology and perturbations of Banach algebras, *J. Funct. Anal.* 25 (1977) 258–266. Y. Watatani, *Index for $C^*$-subalgebras*, Mem. Amer. Math. Soc. 424 (1990) vi+117 pp.
David Kastor and K. Z. Win (kastor@phast.umass.edu, win@phast.umass.edu) *Department of Physics and Astronomy* *University of Massachusetts* *Amherst, MA 01003-4525* **Abstract** Non-extreme black hole solutions of four dimensional, $N=2$ supergravity theories with Calabi-Yau prepotentials are presented, which generalize certain known double-extreme and extreme solutions. The boost parameters characterizing the nonextreme solutions must satisfy certain constraints, which effectively limit the functional independence of the moduli scalars. A necessary condition for being able to take certain boost parameters independent is found to be block diagonality of the gauge coupling matrix. We present a number of examples aimed at developing an understanding of this situation and speculate about the existence of more general solutions. Considerable effort has been devoted recently to studying black hole solutions in four-dimensional, $N=2$ supergravity theories . Interest has been focused, so far, on extreme black holes, which satisfy additional supersymmetry constraints and saturate a BPS bound. A key discovery  in this case is that the values of the scalar moduli fields of the $N=2$ vector multiplets are actually fixed at the black hole horizon in terms of the electric and magnetic charges carried by the black hole. In particular, the horizon values of the scalar fields are independent of the values of the scalar fields at infinity. The evolution of the scalar fields moving inward from infinity towards the horizon can then be thought of as motion in a kind of attractor . Of particular interest are the “double-extreme” solutions, for which the scalar fields stay fixed at their horizon values throughout the spacetime .
These are “doubly" extreme in the sense that, in addition to having degenerate horizons, the black hole mass, for these solutions, is minimized for the given charges. “Singly" extreme solutions with non-constant scalars are given in . In this paper we will look at non-extreme black hole solutions in $N=2$ theories in four dimensions, obtained by dimensional reduction of Type II supergravity on a Calabi-Yau threefold. Since the basic form of the extreme solutions in this case is quite similar to certain supersymmetric, intersecting brane solutions of torus compactifications , a simple ansatz for the non-extreme $N=2$ black holes arises from the known non-extreme intersecting brane solutions in torus compactifications . This ansatz is also analogous to the non-extreme generalization of the extreme black branes solution of M-theory . In this ansatz, given below, there is a single “non-extremality" parameter $\mu$ and a number of “boost parameters" $\gamma_\Lambda$ related to the individual charges. We find below, however, that this ansatz does not in general solve the equations of motion. Rather, the equations of motion reduce to a condition which may be regarded as a constraint on the boost parameters. The only general ([*i.e.*]{} for all Calabi-Yau manifolds) solution to this constraint, which we have found, is when all the boost parameters are taken to be equal. For specific models, such as the $STU$ model and others discussed below, it is possible to take separate boost parameters. We have not yet explored these constraints fully. In the case of torus compactifications of $D=11$ supergravity, the general non-extreme solutions of  may be obtained from the $D=10$ Schwarzschild solution via various combinations of boosts, dimensional upliftings and reductions and duality symmetries. We note that these same methods cannot be used to similarly construct the non-extreme $N=2$ solutions. We give only a brief summary of the formalism here. 
A more complete treatment may be found in, [*e.g.*]{}, . An $N=2$ supergravity theory in four dimensions includes, in addition to the graviton multiplet, $n_v$ vector multiplets and $n_h$ hypermultiplets. In our work we consistently take the hypermultiplet fields to be constant and will ignore them below. The bosonic part of the action is then given by where $G_{\mu\nu}$ is the spacetime metric, $z^A$ with $A=1,\dots ,n_v$ are complex scalar moduli fields parametrizing a special Kähler manifold and $F_{\mu\nu}^\Lambda =2\partial_{[\mu} A_{\nu]}^\Lambda$ with $\Lambda=0,1,\dots,n_v$ are the field strengths of $n_v+1$ $U(1)$ gauge fields $A^\Lambda_\mu$. Here, the complex scalars are related to the holomorphic symplectic sections $X^\Lambda$ by the inhomogeneous coordinates condition The Kähler potential $K$, scalar metric $g_{A\bar B}$ and gauge couplings $N_{\Lambda\Sigma}$ are all determined in terms of the prepotential $F(X)$, which is a holomorphic function, homogeneous of degree two. The Kähler potential $K$ is given by where $F_\Lambda= \partial F/\partial X^\Lambda$. The Kähler metric on the scalar moduli space is then given by $g_{A\bar B}= \partial_A\partial_{\bar B}K(z,\bar z)$ where $\partial_{\bar A}=\partial/\partial{\bar z^A}$ and the gauge field couplings $N_{\Lambda\Sigma}$ by where $F_{\Lambda\Sigma}=\partial F_\Lambda/\partial X^\Sigma$. For type II supergravity compactified on a Calabi-Yau space, the prepotential takes the form where the constants $d_{ABC}$, with $ABC$ completely symmetric, are the topological intersection numbers of the manifold. We further restrict our interest here to the axion free case, in which all the moduli scalars $z^A$ are pure imaginary.
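Several display equations in this passage were lost in transcription. For orientation only, the standard special-geometry expressions being referred to (standard formulas of the literature, not reconstructions of the lost displays themselves) are the inhomogeneous coordinates, the Kähler potential and the cubic Calabi-Yau prepotential:

```latex
z^A \;=\; \frac{X^A}{X^0}\,, \qquad
K \;=\; -\log\Big[\, i\big(\bar{X}^\Lambda F_\Lambda - X^\Lambda \bar{F}_\Lambda\big)\Big]\,, \qquad
F(X) \;=\; \frac{d_{ABC}\,X^A X^B X^C}{X^0}\,.
```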
The gauge coupling matrix $N_{\Lambda\Sigma}$ is then pure imaginary, having nonzero components and the Kähler metric is given by The equations of motion following from the action (with ${\rm Re}N=0$) are given by $$\eqalignno{ \partial_\mu\left(\sqrt{-G}F^{\Lambda\mu\nu}{\Im} N_{\Lambda\Sigma}\right)&=0 &\eomA\cr 16g_{A\bar B}\nabla^\nu\partial_\nu\bar z^B +8(\partial_A g_{B\bar C})\partial^\mu z^B\partial_\mu\bar z^C -\left(\partial_A\n{\Lambda\Sigma}\right)F^\Lambda_{\mu\nu}F^{\Sigma\mu\nu}&=0&\eomB\cr R_{\mu\nu}-2g_{A\bar B}(\partial_\mu z^A)\partial_\nu \bar z^B-\hbox{${1\over 2}$}\left( F^\Lambda_{\mu\sigma}F^{\Sigma\sigma}_\nu-{g_{\mu\nu}\over 4} F^\Lambda_{\rho\sigma}F^{\Sigma\rho\sigma}\right)\n{\Lambda\Sigma}&=0.&\eomC\cr }$$ We want to generalize certain double-extreme and extreme solutions, which were given in  and  respectively. In these solutions, the gauge field $F_{\mu\nu}^0$ carries only electric charge, while each gauge field $F_{\mu\nu}^A$ carries only magnetic charge. As discussed in , regarded as a compactification of M-theory on $S^1\times CY$, these solutions correspond to fivebranes wrapping 4-cycles of the Calabi-Yau space, with a boost along the common string. For the special case of a torus compactification, the corresponding non-extreme solutions are given in . 
It is straightforward to modify the solutions there to get an ansatz for the non-extreme solutions in the present case, $$\eqalign{ds^2=-e^{-2U}fdt^2+e^{2U}\left(f^{-1}dr^2+r^2d\Omega^2\right) \quad,&\quad e^{2U}=\sqrt{H_0d_{ABC}H^AH^BH^C}\cr f=1-{\mu\over r}\quad,\quad z^A=iH^AH_0e^{-2U}\quad,&\quad H^A=h^A\left(1+{\mu\over r}\sinh^2\gamma_A\right)\cr A^0_t={r\tilde H_0'\over h_0 H_0}\quad,\quad A^C_\varphi=r^2\cos\vartheta\ \tilde H^{C^\prime}\quad,&\quad\tilde H^A=h^A\left(1+{\mu\over r}\cosh\gamma_A\sinh\gamma_A\right)\cr H_0=h_0\left(1+{\mu\over r}\sinh^2\gamma_0\right)\quad ,&\quad \tilde H_0=h_0\left(1+{\mu \over r}\cosh\gamma_0\sinh\gamma_0\right),\cr }\eqno\nonextreme$$ where prime denotes $\partial_r$. Nonzero components of the gauge field strengths are The ansatz  reduces to the “singly” extreme solutions given in  when the limit $\mu\rightarrow 0, \gamma_\Lambda\rightarrow\infty$ is taken with $\mu\sinh^2\gamma_\Lambda\equiv k_\Lambda$ held fixed and further to the “doubly” extreme solutions, with constant moduli scalars, in  when all the $k_\Lambda$ are the same. It can also be shown that, if the solution   with $H_0=\tilde H_0=1$ satisfies the equations of motion, then the solution with more general $H_0$ and $\tilde H_0$, as given in , satisfies the equations of motion. This corresponds to a boost transformation in M-theory compactified on $S^1\times CY$. Henceforth, in checking the equations of motion, we will set $H_0=\tilde H_0=1$. It is straightforward to check that the ansatz  satisfies the gauge field equation of motion . Equation  for the curvature reduces to the condition and the scalar field equation  leads to In deriving these last two equations we have made use of the fact that the extreme solutions, with $f=1$ and $(H_0,H^A)=(\tilde H_0,\tilde H^A)$, satisfy the equations of motion. Note that both sides of equations  and  vanish identically in this case. 
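The singly-extreme limit quoted above can be checked numerically: as $\mu\rightarrow 0$ with $k\equiv\mu\sinh^2\gamma$ held fixed, one has $\mu\cosh\gamma\sinh\gamma=\sqrt{k(k+\mu)}\rightarrow k$, so both $H$ and $\tilde H$ tend to the harmonic form $h(1+k/r)$. A minimal Python sketch (function names and parameter values are illustrative choices of ours, not taken from the paper):

```python
import math

def H(r, h, mu, gamma):
    """Non-extreme harmonic function h*(1 + (mu/r)*sinh(gamma)**2)."""
    return h * (1.0 + (mu / r) * math.sinh(gamma) ** 2)

def H_tilde(r, h, mu, gamma):
    """Companion function h*(1 + (mu/r)*cosh(gamma)*sinh(gamma))."""
    return h * (1.0 + (mu / r) * math.cosh(gamma) * math.sinh(gamma))

h, k, r = 1.0, 2.0, 3.0
extreme = h * (1.0 + k / r)      # singly-extreme value h*(1 + k/r)

for mu in (1e-1, 1e-3, 1e-6):
    gamma = math.asinh(math.sqrt(k / mu))   # keeps mu*sinh(gamma)**2 = k
    # H equals its limit identically; H_tilde deviates by about h*mu/(2r)
    print(mu, H(r, h, mu, gamma) - extreme,
              H_tilde(r, h, mu, gamma) - extreme)
```

The deviation of $\tilde H$ from the extreme value is of order $\mu$, so the non-extreme ansatz smoothly interpolates to the extreme solution.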
We also note that, by virtue of , $\n{BC}=-iN_{BC}$ is a first order homogeneous function of $z^A$ and that, in particular, $z^A\partial_A\n{BC}=\n{BC}$. This property can be used to “contract" equation  with $z^A$ to obtain equation . Thus it is only necessary to show that the ansatz  (with $H_0=\tilde H_0=1$) satisfies . It is not difficult to see that, for an arbitrary choice of the constants $d_{ABC}$ in the prepotential, the condition  is not satisfied unless the parameters $\gamma_A$ are taken to be equal. This differs from the case of intersecting branes on a torus , for which the parameters $\gamma_A$ may be specified independently for each set of branes. We do not at present fully understand the significance of the restrictions placed by  on the parameters $\gamma_A$. Note that, if all the boost parameters, including $\gamma_0$, are set equal to some common value $\gamma$ in , then the scalars $z^A$ will be constant, having values where the asymptotic flatness condition, $h_0d_{ABC}h^Ah^Bh^C=1$, has been used. This case is then a non-extreme version of the “doubly” extreme black holes in . Taking $\gamma_0$ to be different, as may always be done, makes the scalars $z^A$ non-constant, but keeps their ratios constant. Clearly, if some, or all, of the $\gamma_A$’s may also be taken unequal, then there will be additional functional independence between the scalars. In the next section, we will explore some simple examples of prepotentials for which some, or all, of the $\gamma_A$’s may be specified independently. We list below some choices for the $d_{ABC}$ which allow some of the $\gamma_A$’s to be different from each other. It follows from  that a [*necessary*]{} condition for (at least) some of the $\gamma_A$’s to be independent is that the gauge coupling matrix $\n{AB}$ be block diagonal. In this case there turns out to be one independent parameter per block.
From this point of view, it seems consistent that $\gamma_0$ may always be specified independently of the $\gamma_A$, since $N_{0A}$ vanishes, as is evident from , and hence $N_{00}$ forms a $1\times 1$ block. Our first example is the $STU$ model , for which the only nonzero $d_{ABC}$ is $d_{123}$. In this case the coupling matrix $\n{BC}$ is diagonal and the three parameters $\gamma_1,\gamma_2,\gamma_3$ may all be specified independently. However, when quantum corrections are added to the $STU$ model , $d_{333}$ becomes nonzero. This makes the coupling matrix $\n{BC}$ completely nondiagonal, which in turn implies that the $\gamma_A$’s must be taken equal. As a second example, we can take only the constants $d_{1AB}$ to be nonzero, where $A,B\neq 1$ (a similar model is considered in ). The coupling matrix $\n{BC}$ in this case is block diagonal, having a $1\times 1$ block and an $(n_v-1)\times (n_v-1)$ block. It follows that $\gamma_1$ can be chosen independently of the $\gamma_A$ for $A\ne 1$, which must all be the same. A specialization of the previous example is to take only $d_{12B}$ nonzero with $B=3\dots n_v$. This makes $\n{BC}$ block diagonal with two $1\times 1$ blocks and one $(n_v-2)\times (n_v-2)$ block, and one can have three different $\gamma$’s: $\gamma_1$, $\gamma_2$ and a common value $\gamma_B$ for $B=3\dots n_v$. As a final example we consider a simple toy model where only $d_{112}$ and $d_{111}$ are nonzero. In this case $\n{BC}$ is diagonal if and only if $d_{111}=0$; [*i.e.*]{}, $\gamma_1=\gamma_2$ is required unless $d_{111}=0$. In each of these cases block diagonality of the gauge coupling matrix $\n{BC}$ appears to be both a necessary and a sufficient condition for taking independent $\gamma$’s, though we have not been able to show this generally. We now examine the physical properties of the non-extreme solutions.
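The diagonality claims above can be checked numerically at axion-free points $X=(1,i\lambda^A)$. In the sketch below (Python; all names are ours, and the standard special-geometry period matrix $N_{\Lambda\Sigma}=\bar F_{\Lambda\Sigma}+2i\,(\mathrm{Im}F\cdot X)_\Lambda(\mathrm{Im}F\cdot X)_\Sigma/(X\cdot\mathrm{Im}F\cdot X)$ is supplied by us, since the corresponding display is elided in this text) we compute $N_{\Lambda\Sigma}$ for the cubic prepotential $F=-d_{ABC}X^AX^BX^C/X^0$:

```python
import numpy as np

def sym_d(entries, n):
    """Totally symmetric tensor d_{ABC} (indices 1..n) from a dict
    {(A,B,C): coefficient of the monomial X^A X^B X^C}."""
    d = np.zeros((n + 1, n + 1, n + 1))
    for (A, B, C), val in entries.items():
        perms = {(A, B, C), (A, C, B), (B, A, C),
                 (B, C, A), (C, A, B), (C, B, A)}
        for p in perms:
            d[p] = val / len(perms)
    return d

def gauge_couplings(d, lams):
    """N_{Lambda Sigma} for F = -d_{ABC} X^A X^B X^C / X^0 at the
    axion-free point X = (1, i*lam^1, ..., i*lam^n)."""
    n = len(lams)
    X = np.concatenate(([1.0 + 0j], 1j * np.asarray(lams)))
    XA, d3 = X[1:], d[1:, 1:, 1:]
    F2 = np.zeros((n + 1, n + 1), dtype=complex)   # Hessian F_{Lambda Sigma}
    F2[0, 0] = -2 * np.einsum('abc,a,b,c', d3, XA, XA, XA)
    F2[0, 1:] = F2[1:, 0] = 3 * np.einsum('abc,b,c', d3, XA, XA)
    F2[1:, 1:] = -6 * np.einsum('abc,c', d3, XA)
    v = np.imag(F2) @ X          # (ImF . X)_Lambda
    D = X @ np.imag(F2) @ X      # X . ImF . X
    return np.conj(F2) + 2j * np.outer(v, v) / D

lam = [0.7, 1.3, 2.1]
N_stu = gauge_couplings(sym_d({(1, 2, 3): 1.0}, 3), lam)   # STU model
print(np.round(N_stu, 6))    # purely imaginary and diagonal
```

For the $STU$ data one finds $N$ purely imaginary and diagonal, with $N_{00}=-i\lambda^1\lambda^2\lambda^3$ and $N_{AA}=-i\lambda^B\lambda^C/\lambda^A$; switching on $d_{333}$ produces a nonzero $N_{12}$, while $N_{0A}=0$ persists in both cases, in line with the observation that $\gamma_0$ is always independent.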
In particular, we want to check, given the restrictions on the $\gamma_A$’s, that the charges may still be specified arbitrarily, as they can in the extreme limit . We will first display all formulae as if the $\gamma_A$’s can be specified independently and then discuss the actual solutions, in which the $\gamma_A$’s are restricted. After imposing the asymptotic flatness condition, the set of independent parameters for the solutions can be taken to be $\{\mu,\gamma_0,h^A,\gamma_A\}$. These can be exchanged for the more physical set $\{E,q_0,p^A,\gamma_A\}$, where $E$ is the ADM mass, $q_0$ the electric charge for $F_{\mu\nu}^0$ and $p^A$ the magnetic charges for $F_{\mu\nu}^A$. The ADM energy is given by where ${\cal K}^C\equiv h^Ck_C$ and $k_\Lambda=\mu\sinh^2\gamma_\Lambda$ as above. The electric charge $q_0$ and magnetic charges $p^A$ are defined by We find The Hawking temperature is where $\lambda_0=h_0\cosh^2\gamma_0$ and $\lambda^A=h^A\cosh^2\gamma_A$, and the Bekenstein entropy is First, note that equation  implies that, even in the case that all boost parameters are set equal, the charges $q_0,p^A$ may still be chosen arbitrarily by virtue of the constants $h^A$ and the single boost parameter $\gamma$. As we observed above, the restrictions on the $\gamma_A$ should be regarded as restrictions on the functional independence of the scalars $z^A$ with respect to one another. Next, we note that, for all the examples discussed in the last section, the formulae for the temperature  and the entropy  simplify considerably. The square roots in  and  can be eliminated in these cases, because the $\lambda$ factors appearing in each term of the sums are identical. For example, in the $d_{1AB}$ model, the entropy  reduces to where $\gamma=\gamma_A$ for $A=2\dots n_v$. It remains an open question whether or not more general non-extreme solutions (static, axion-free and carrying only the charges $q_0$ and $p^A$) exist.
These might, for example, have independent boost parameters for each of the Calabi-Yau $4$-cycles. In the case of orthogonally intersecting branes on a torus , there are at most four independent parameters corresponding to a boost and three sets of branes. However, the most general black hole solutions in type II theory compactified to four dimensions on a torus are described by $28$ electric and $28$ magnetic charges (see [*e.g.*]{} ). The extreme solutions in this case arise via collections of branes intersecting [*non-orthogonally*]{} . It may be necessary to look at a non-extreme solution based on branes intersecting at angles to get the most general solution in the Calabi-Yau case as well. It would also be interesting to try to construct the solutions which we have found here using the available symmetry transformations, which in the present case include boosts in the time direction and symplectic transformations. Finally, it should also be possible to find non-extreme solutions in $N=2$ theories with prepotentials not of the Calabi-Yau form. We note that, since  and  are derived using the extreme solution and are not displayed in terms of the particular prepotential we have used in this paper, they are generally applicable to finding non-extreme black hole solutions for other prepotentials. In particular, the block diagonality of $\n{AB}$ is a [*necessary*]{} condition for the existence of more than one independent $\gamma_A$. We emphasize that the derivation of  and  does not depend on any specific expression for $e^{2U}$ and depends only on the fact that ${\rm Re}N=0$, $F^0_{\mu\nu}=0$, and $F^A_{\mu\nu}$ carries only magnetic charge.
--- abstract: 'We prove that the von Neumann algebras generated by $n$ $q$-Gaussian elements are factors for $n{\ensuremath{\geqslant}}2$.' author: - Éric Ricard nocite: '[@*]' title: 'Factoriality of $q$-Gaussian von Neumann algebras' --- Introduction ============ In the early 70’s, Frisch and Bourret considered operators satisfying the $q$-canonical commutation relations, for $-1<q<1$ : $$l(e)l^*(f)- q l^*(f)l(e)= (e,f) Id.$$ Nevertheless their existence was proved only 20 years later by Bożejko and Speicher in [@BS2]. Since then, many people have studied the von Neumann algebra ${\Gamma_q(\mathcal H_\R)}$ generated by the $q$-Gaussian random variables $\{l(e)+l^*(e) ; e\in{\mathcal H}_\R\}$, and some of their generalizations. It is well known that ${\Gamma_q(\mathcal H_\R)}$ is of type $II_1$. One of the interesting points is that these algebras realize a kind of interpolating scale between $\Gamma_1({\mathcal H})$, which is commutative, and $\Gamma_{-1}({\mathcal H})$, the hyperfinite $II_1$ factor. For $q=0$, we recover the algebra generated by Voiculescu’s semicircular elements, which is a central object in free probability theory. Among the known results, Bożejko and Speicher showed that ${\Gamma_q(\mathcal H_\R)}$ is noninjective under some condition on the dimension of ${\mathcal H}$, which was removed by Nou [@Nou]. Recently, Shlyakhtenko [@Sh] proved that they are solid for some values of $q$. The question of the factoriality of ${\Gamma_q(\mathcal H_\R)}$ was studied by Bożejko, Kümmerer and Speicher [@BKS], who showed that if ${\mathcal H}$ is infinite dimensional then ${\Gamma_q(\mathcal H_\R)}$ is a factor. This condition was partially relaxed by Śniady [@Sn], who showed that this is still true if the dimension of ${\mathcal H}$ is greater than a function of $q$. Preliminaries ============= In this paper, $-1<q<1$ is a fixed real number; we will use standard notation and refer to the papers [@BS; @BKS; @Nou] for general background.
Let ${\mathcal H}$ be the complexification of some real Hilbert space ${\mathcal H}_\R$. By ${\mathcal H}^{\tens_2n}$ ($n{\ensuremath{\geqslant}}1$), we denote the $n$-fold Hilbertian tensor product of ${\mathcal H}$ with itself; this space is equipped with a scalar product that we write $(.,.)$. Let $P_n: {\mathcal H}^{\tens_2n}\to {\mathcal H}^{\tens_2n}$ be given by $$P_n(e_1\tens ...\tens e_n)=\sum_{\sigma\in S_n} q^{|\sigma|} e_{\sigma(1)} \tens...\tens e_{\sigma(n)}=\sum_{\sigma\in S_n} q^{|\sigma|} \phi(\sigma) ( e_1\tens ...\tens e_n),$$ where $S_n$ is the symmetric group on $n$ elements, $|\sigma|$ is the number of inversions of $\sigma$, and $\phi$ is the natural action of $S_n$ on ${\mathcal H}^{\tens_2n}$. It was shown in [@BS] that this operator is bounded and strictly positive; therefore we denote by ${\mathcal H}^{\tens_n}$ the Hilbert space ${\mathcal H}^{\tens_2n}$ equipped with the new scalar product $\bl .,.\br$ given by $$\forall x,y\in {\mathcal H}^{\tens_n} \qquad \bl x,y\br = ( x , P_n(y)).$$ From now on, if $x\in{\mathcal H}^{\tens_n}$, $\|x\|$ is the norm of $x$ with respect to this new scalar product. For instance, if $e\in {\mathcal H}$ and $\|e\|=1$, then $$\|e^{\tens n}\|^2=[n]_q!,$$ where $[k]_q=\frac {1-q^k}{1-q}$ and $[n]_q!=[1]_q...[n]_q$. We will use as a key point that the sequence $([n]_q!)$ behaves like a geometric sequence. Moreover, it is known that the following algebraic relation holds: $$P_n= R_{n,k} ( P_{n-k}\tens P_k) \quad \textrm{with } R_{n,k}=\sum_{\sigma \in S_{n}/S_{n-k}\times S_{k}}q^{|\sigma|} \phi(\sigma^{-1}),$$ where the sum runs over the representatives of the right cosets of $S_{n-k}\times S_{k}$ in $S_{n}$ with minimal number of inversions.
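Since every $\phi(\sigma)$ fixes $e^{\tens n}$, the identity $\|e^{\tens n}\|^2=[n]_q!$ amounts to the classical fact $\sum_{\sigma\in S_n}q^{|\sigma|}=[n]_q!$, which is easy to verify by brute force. A small Python sketch (names are ours):

```python
import itertools

def inversions(sigma):
    """Number of inversions |sigma| of a permutation (as a tuple)."""
    return sum(1 for a in range(len(sigma))
                 for b in range(a + 1, len(sigma))
                 if sigma[a] > sigma[b])

def q_factorial(n, q):
    """[n]_q! = [1]_q [2]_q ... [n]_q with [k]_q = (1 - q^k)/(1 - q)."""
    out = 1.0
    for k in range(1, n + 1):
        out *= (1 - q ** k) / (1 - q)
    return out

q = 0.5
for n in range(1, 7):
    # every phi(sigma) fixes e^{tensor n}, so
    # ||e^{tensor n}||^2 = sum over sigma of q^{|sigma|}
    total = sum(q ** inversions(p) for p in itertools.permutations(range(n)))
    assert abs(total - q_factorial(n, q)) < 1e-12

# the "geometric sequence" behaviour: [n+1]_q!/[n]_q! = [n+1]_q -> 1/(1-q)
print(q_factorial(30, q) / q_factorial(29, q))  # close to 2 for q = 0.5
```

The final ratio illustrates the key point used below: successive quotients $[n+1]_q$ converge to $(1-q)^{-1}$, so $[n]_q!$ grows (or decays) geometrically.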
As a consequence, since $\|R_{n,k}\|_{B({\mathcal H}^{\tens_2n})}{\ensuremath{\leqslant}}C_q=\prod_{i{\ensuremath{\geqslant}}1} (1-|q|^i)^{-1}$, we get that the formal identity map $$Id : {\mathcal H}^{\tens n-k}\tens_2 {\mathcal H}^{\tens k} \to {\mathcal H}^{\tens n}$$ has norm bounded by $\sqrt{C_q}$. \[norm\] As an application, we get that, if $e_1,...e_n$ and $e$ are norm 1 vectors in ${\mathcal H}$, then $$\| e_1\tens...\tens e_n \tens e^{\tens m}\|_{{\mathcal H}^{\tens{n+m}}}{\ensuremath{\leqslant}}C_q^{n/2} \sqrt{[m]_q!}.$$ The $q$-deformed Fock space is the Hilbert space defined by $${\mathcal F_q(\mathcal H_\R)}= \C \Omega \oplus \oplus_{n{\ensuremath{\geqslant}}1} {\mathcal H}^{\tens_n},$$ where $\Omega$ is a unit vector, considered as the vacuum. Vectors in ${\mathcal H}$ will be called letters and an elementary tensor of letters in ${\mathcal H}^{\tens n}$ will be called a word of length $n$. For $e\in {\mathcal H}_\R$, we consider left and right creation operators on ${\mathcal F_q(\mathcal H_\R)}$, given by : $$\begin{aligned} l(e)(e_1\tens...\tens e_n)&=& e\tens e_1\tens...\tens e_n \\ l_r(e)(e_1\tens...\tens e_n)&=& e_1\tens...\tens e_n\tens e\end{aligned}$$ They are bounded endomorphisms of ${\mathcal F_q(\mathcal H_\R)}$; more precisely, if $\|e\|=1$ then $$\|l_r(e)\|=\| l(e)\| = \left\{\begin{array}{ll} 1 & \textrm{if } q{\ensuremath{\leqslant}}0 \\ \frac 1 {\sqrt{1-q}} & \textrm{if } q{\ensuremath{\geqslant}}0 \end{array}\right.$$ Their adjoints in $B({\mathcal F_q(\mathcal H_\R)})$ are the annihilation operators : $$\begin{aligned} l^*(e)(e_1\tens...\tens e_n)&=& \sum_{1{\ensuremath{\leqslant}}i{\ensuremath{\leqslant}}n} q^{i-1}(e,e_i)\, e_1\tens...\tens \hat{e_i}\tens ..\tens e_n \\ l_r^*(e)(e_1\tens...\tens e_n)&=&\sum_{1{\ensuremath{\leqslant}}i{\ensuremath{\leqslant}}n} q^{n-i}(e,e_i)\, e_1\tens...\tens \hat{e_i}\tens ..\tens e_n \end{aligned}$$ where $\hat{e_i}$ denotes a removed letter; if $n=0$, we put
$l^*(e)\Omega=l_r^*(e)\Omega=0$. With these conventions ($l$ creation, $l^*$ annihilation), the operators satisfy the $q$-commutation relations : $$l^*(e)l(f)- q l(f)l^*(e)= (e,f) Id$$ For $e\in {\mathcal H}_\R$, let $$W(e)=l(e)+l^*(e) \qquad \textrm{and}\qquad W_r(e)=l_r(e)+l_r^*(e).$$ So for $e\in {\mathcal H}_\R$, $W(e)$ is self-adjoint. ${\Gamma_q(\mathcal H_\R)}$ stands for the von Neumann algebra generated by $(W(e))_{e\in {\mathcal H}_\R}$ $${\Gamma_q(\mathcal H_\R)}= \{ \; W(e) \; ;\; e\in {\mathcal H}_\R \;\}''.$$ Similarly, ${\Gamma_{q,r}(\mathcal H_\R)}$ stands for the von Neumann algebra generated by $(W_r(e))_{e\in {\mathcal H}_\R}$ $${\Gamma_{q,r}(\mathcal H_\R)}= \{ \; W_r(e) \; ;\; e\in {\mathcal H}_\R \;\}''.$$ We recall some classical results on those algebras: - The commutant of ${\Gamma_q(\mathcal H_\R)}$ is ${\Gamma_q(\mathcal H_\R)}'={\Gamma_{q,r}(\mathcal H_\R)}$. - The vacuum vector $\Omega$ is separating and cyclic for both ${\Gamma_q(\mathcal H_\R)}$ and ${\Gamma_{q,r}(\mathcal H_\R)}$. - The vector state $\tau(x)=\bl x\Omega, \Omega\br$ is a trace for both ${\Gamma_q(\mathcal H_\R)}$ and ${\Gamma_{q,r}(\mathcal H_\R)}$. According to the second point, any $x\in {\Gamma_q(\mathcal H_\R)}$ is uniquely determined by $\ksi=x.\Omega\in {\mathcal F_q(\mathcal H_\R)}$, so we will write $x=W(\ksi)$ (and similarly for ${\Gamma_{q,r}(\mathcal H_\R)}$, $x=W_r(\ksi)$). This notation is consistent with the definition of $W(e)=l(e)+l^*(e)$. The subspace ${\Gamma_q(\mathcal H_\R)}. \Omega \subset {\mathcal F_q(\mathcal H_\R)}$ of all such $\ksi$ contains all tensors of finite rank, so it contains all words. If $e_1\tens...\tens e_n$ is a word in ${\mathcal F_q(\mathcal H_\R)}$, there is a nice description of $W( e_1\tens...\tens e_n)$ in terms of $l(e_i)$ called the Wick formula : $$W(e_{1}\tens ...e_{n})= \sum_{m=0}^n \sum_{\sigma \in S_n/S_{n-m}\times S_m} q^{|\sigma|} l(e_{{\sigma(1)}})...
l(e_{{\sigma(n-m)}})l^*(e_{{\sigma(n-m+1)}}) ...l^*(e_{{\sigma(n)}}),$$ where $\sigma$ runs over the representatives of the right cosets of $S_{n-m}\times S_{m}$ in $S_{n}$ with minimal number of inversions. There is a similar formula for $W_r$. Actually, the algebras ${\Gamma_q(\mathcal H_\R)}$ and ${\Gamma_{q,r}(\mathcal H_\R)}$ are in standard form in $B({\mathcal F_q(\mathcal H_\R)})$, but we will not use this. If we denote by $S$ the antilinear involution that reverses the order of words in ${\mathcal H}_\R$, then for any $\ksi\in {\Gamma_q(\mathcal H_\R)}. \Omega$ : $$W(\ksi)^*=W(S\ksi)\qquad \textrm{and} \qquad S.W(\ksi).S=W_r(S\ksi).$$ In particular $ {\Gamma_q(\mathcal H_\R)}. \Omega= {\Gamma_{q,r}(\mathcal H_\R)}. \Omega$. For $\ksi,\eta\in {\Gamma_q(\mathcal H_\R)}.\Omega$, we will frequently use $$W(\ksi)\eta=W(\ksi)W_r(\eta)\Omega=W_r(\eta)W(\ksi)\Omega=W_r(\eta)\ksi.$$ Let $T:{\mathcal H}_\R\to {\mathcal H}_\R$ be a $\R$-linear contraction; then there is a canonical $\C$-linear contraction ${\mathcal F}_q(T)$ on ${\mathcal F_q(\mathcal H_\R)}$ extending $T$, called the first quantization; formally $${\mathcal F}_q(T) = Id_{\C \Omega} \oplus \oplus_{n{\ensuremath{\geqslant}}1} \tilde T^{\tens n}$$ with $\tilde T$ the complexification of $T$ on ${\mathcal H}$.
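With the creation/annihilation conventions above, on the one-dimensional space $\R e$ one has $l(e)e^{\tens n}=e^{\tens n+1}$ and $l^*(e)e^{\tens n}=[n]_q e^{\tens n-1}$, so the $q$-relation reads $l^*(e)l(e)-q\,l(e)l^*(e)=Id$. This can be checked on a truncated Fock space (a Python sketch; the truncation level $N$ is an arbitrary choice of ours, and the relation only holds strictly below the cutoff):

```python
import numpy as np

q, N = 0.4, 12
qint = lambda n: (1 - q ** n) / (1 - q)      # [n]_q

# Matrices of the creation operator l(e) and its adjoint l^*(e) on
# span{e^{tensor n} : 0 <= n < N}, a truncated q-Fock space over a
# one-dimensional H.
L = np.zeros((N, N))    # l(e):   e^{tensor n} -> e^{tensor n+1}
A = np.zeros((N, N))    # l^*(e): e^{tensor n} -> [n]_q e^{tensor n-1}
for n in range(N - 1):
    L[n + 1, n] = 1.0
    A[n, n + 1] = qint(n + 1)

# q-commutation relation l^*(e) l(e) - q l(e) l^*(e) = Id; it holds on
# every level strictly below the cutoff (the top level feels the truncation).
R = A @ L - q * L @ A
assert np.allclose(R[:N - 1, :N - 1], np.eye(N - 1))
```

The same matrices also illustrate the norm statement of the preliminaries: the operator norm of $l(e)$ with respect to the $q$-inner product is $\sup_n\sqrt{[n]_q}$, which tends to $(1-q)^{-1/2}$ for $0<q<1$.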
The second quantization of $T$ is the unique unital completely positive map $\Gamma_q(T)$ on ${\Gamma_q(\mathcal H_\R)}$ satisfying, for $\ksi \in {\Gamma_q(\mathcal H_\R)}.\Omega$, $$\Gamma_q(T)(W(\ksi))=W({\mathcal F}_q(T)\ksi).$$ For instance, if ${\mathcal K}_\R\subset {\mathcal H}_\R$, the second quantization associated to the orthogonal projection $P_{{\mathcal K}_\R}$ on ${\mathcal K}_\R$ is a conditional expectation $$\Gamma_q(P_{{\mathcal K}_\R}): {\Gamma_q(\mathcal H_\R)}\to \Gamma_q({\mathcal K}_\R)= \{W(e) \,;\, e\in {\mathcal K}_\R\,\}''.$$ The main result =============== Let $e\in {\mathcal H}_\R$ be of norm one and denote by $E_e$ the closed subspace of ${\mathcal F_q(\mathcal H_\R)}$ spanned by the elements $\{e^{\tens_n} \,;\, n{\ensuremath{\geqslant}}0\}$, that is $E_e={\mathcal F}_q(\R e)$. It is easy to check that for any $x=W(\ksi)\in W(e)''$, we have $\ksi \in E_e$. Conversely, assume $x=W(\ksi)$ with $\ksi\in E_e$; then $x\in W(e)''$ : by the second quantization, we have a conditional expectation $\Gamma_q(P_{\R e}): {\Gamma_q(\mathcal H_\R)}\to W(e)''$, but then $$\Gamma_q(P_{\R e})(x).\Omega={\mathcal F}_q(P_{\R e}).\ksi=P_{E_e}.\ksi= \ksi=x.\Omega,$$ and as $\Omega$ is separating, $x=\Gamma_q(P_{\R e})(x)\in W(e)''$. Assume that $\dim {\mathcal H}{\ensuremath{\geqslant}}2$ and let $e\in {\mathcal H}_\R$, $\|e\|=1$; then $W(e)''$ is a maximal abelian subalgebra in ${\Gamma_q(\mathcal H_\R)}$. ${\Gamma_q(\mathcal H_\R)}$ is a factor as soon as $\dim {\mathcal H}{\ensuremath{\geqslant}}2$. Let $x\in {\Gamma_q(\mathcal H_\R)}\cap {\Gamma_q(\mathcal H_\R)}'$; then there is $\ksi\in {\mathcal F_q(\mathcal H_\R)}$ such that $x=W(\ksi)$. By the theorem, we must have $x\in W(e)''$ for every $e\in {\mathcal H}_\R$, but then $\ksi\in E_e$ for every such $e$, so necessarily $\ksi\in \C\Omega$ and $x$ is a scalar. Fix $(e_i)_{ i{\ensuremath{\geqslant}}0}$ an orthonormal basis in ${\mathcal H}_\R$, with $e_0=e$. Let $x=W(\ksi)\in {\Gamma_q(\mathcal H_\R)}\cap W(e)'$; we have to show that $\ksi\in E_e$.
For any $y=W(\eta)$ with $\eta \in E_e$, we have $$\begin{aligned} xy -yx =0 \\ (W(\ksi)W(\eta)-W(\eta)W(\ksi)).\Omega=0 \\ (W_r(\eta)-W(\eta)) \ksi=0\end{aligned}$$ So $\ksi\in \cap_{y=W(\eta)\in W(e)''} \ker (W_r(\eta)-W(\eta))$. By duality, we have to prove that $$\overline{{\mathrm{span}}} \{ \ran (W_r(\eta)-W(\eta)) \,;\, y=W(\eta)\in W(e)''\} \supset E_e^\bot.$$ $E_e^\bot$ is the closed linear span of the set of elementary tensors $$F=\{ e_{i_1}\tens...\tens e_{i_n} ; n{\ensuremath{\geqslant}}1, \textrm{ and } (i_1,...,i_n)\in \N^n\backslash \{(0,...,0)\}\}$$ Let $z=e_{i_1}\tens...\tens e_{i_n}$ be a word in $F$; it suffices to prove that $z$ is a weak limit of elements in ${\mathrm{span}}\{ \ran (W_r(\eta)-W(\eta)) \,;\, y=W(\eta)\in W(e)''\}$. The von Neumann algebra $W(e)''$ is commutative, diffuse and separably generated (see [@BKS]), so we can assume that $W(e)''= L_\infty([0,1],dm)$, where $dm$ is the Lebesgue measure. With this identification, the Rademacher functions $r_i$ belong to $W(e)''$, so we have $r_i=W(\eta_i)$ for some $\eta_i\in E_e$. Obviously $W(\eta_i)$ is a self-adjoint symmetry, so $W(\eta_i)^2=1$. Moreover, the sequence $(\eta_i)_{i{\ensuremath{\geqslant}}1}$ converges to 0 for the weak topology on ${\mathcal F_q(\mathcal H_\R)}$, since $(r_i)$ is an orthonormal sequence in $L_2([0,1],dm)$. Consider $$z_i=(W(\eta_i)-W_r(\eta_i))(W(\eta_i)(z)),$$ obviously $z_i \in {{\mathrm{span}}} \{ \ran (W_r(\eta)-W(\eta)) \,;\, y=W(\eta)\in W(e)''\}$ and a simple calculation gives $$z_i = W(\eta_i)^2 (z)- W_r(\eta_i)W(\eta_i)(z)=z- W_r(\eta_i)W(\eta_i)(z).$$ We will show that $y_i= W_r(\eta_i).W(\eta_i)(z)$ tends weakly to 0 in ${\mathcal F_q(\mathcal H_\R)}$. As $\|y_i\|{\ensuremath{\leqslant}}\|z\|$, it suffices to prove that for any word $t=e_{j_1}\tens...\tens e_{j_p}$, $\bl y_i, t \br\to 0$.
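The weak-null property of the Rademacher sequence invoked here can be made concrete: with the sign convention $r_i(t)=(-1)^{\lfloor 2^i t\rfloor}$ (our choice), the pairing of $r_i$ against the fixed $L_2$ function $f(t)=t$ equals $-2^{-i-1}$ exactly, hence tends to $0$. A short exact computation in Python:

```python
from fractions import Fraction

def rademacher_pairing_with_t(i):
    """Exact value of the integral of r_i(t)*t over [0,1], where
    r_i(t) = (-1)**floor(2**i * t) (our sign convention)."""
    n = 2 ** i
    total = Fraction(0)
    for j in range(n):               # r_i is constant on [j/n, (j+1)/n)
        a, b = Fraction(j, n), Fraction(j + 1, n)
        total += (-1) ** j * (b * b - a * a) / 2
    return total

# pairings against the fixed L2 function f(t) = t decay geometrically
for i in range(1, 8):
    assert rademacher_pairing_with_t(i) == -Fraction(1, 2 ** (i + 1))
```

The same decay holds for pairings against any fixed $L_2$ function, which is exactly the weak convergence $\eta_i\to 0$ used in the estimate of $I_i$ below.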
We have, $$\begin{aligned} \bl y_i,t\br&=& \bl W_r(\eta_i).W(\eta_i)(z), t \br\\ &=& \bl W_r(z)(\eta_i), W(t)(\eta_i)\br \end{aligned}$$ This is the point where we use the Wick formula : $$W(e_{j_1}\tens ...\tens e_{j_n})= \sum_{m=0}^n \sum_{\sigma \in S_n/S_{n-m}\times S_m} q^{|\sigma|} l(e_{j_{\sigma(1)}})... l(e_{j_{\sigma(n-m)}})l^*(e_{j_{\sigma(n-m+1)}}) ...l^*(e_{j_{\sigma(n)}})$$ and similarly for $z$. Since the number of terms appearing after expanding the sums is finite (it depends only on $n$ and $p$), we only need to show that $$I_i=\bl l_r(e_{i_1})...l_r(e_{i_m}) l_r^*(e_{i_{m+1}})...l_r^*(e_{i_n})(\eta_i), l(e_{j_1})...l(e_{j_r}) l^*(e_{j_{r+1}})...l^*(e_{j_p})(\eta_i)\br \to 0,$$ as soon as at least one of the $i_k$’s is nonzero. Let $v$ be the first $k$ such that $i_k\neq0$. Since the letters in $\eta_i$ are only $e$, we can suppose that $v{\ensuremath{\leqslant}}m$, otherwise $ l_r(e_{i_1})...l_r(e_{i_m}) l_r^*(e_{i_{m+1}})...l_r^*(e_{i_n})(\eta_i)=0$ (we would have to cancel some $e_{i_v}$ in $\eta_i$!). More generally, we can assume that $e_{i_{m+1}}=...=e_{i_{n}}=e_{j_{r+1}}=...=e_{j_{p}}=e$. Recall that $$l(e)^* e^{\tens n}=[n]_q e^{\tens n-1}.$$ Now, we write $\eta_i=\sum_{k{\ensuremath{\geqslant}}0} a_k^i e^{\tens k}$; interchanging the sums and making simplifications gives (with $a_{-n}=0$ if $n>0$; the $a^i_k$ are real since $r_i$ is self-adjoint) that $$\begin{array}{cc} I_i=&\bl l_r(e_{i_1})...l_r(e_{i_m}) l_r^*(e_{i_{m+1}})...l_r^*(e_{i_n})(\eta_i), l(e_{j_1})...l(e_{j_r}) l^*(e_{j_{r+1}})...l^*(e_{j_p})(\eta_i)\br\\[12pt] = & \displaystyle{\sum_{k{\ensuremath{\geqslant}}r,m}} a_{k+n-2m}^i a_{k+p-2r}^i [k+n-2m]_q!/[k-m]_q!\,[k+p-2r]_q!/[k-r]_q! \\ &. \bl l_r(e_{i_1})...l_r(e_{i_m}) e^{\tens k-m}, l(e_{j_1})...l(e_{j_r}) e^{\tens k-r}\br\\[12pt] =& \displaystyle{\sum_{k{\ensuremath{\geqslant}}r,m}} a_{k+n-2m}^i a_{k+p-2r}^i [k+n-2m]_q!/[k-m]_q!\,[k+p-2r]_q!/[k-r]_q! \\& .
\bl l_r(e_{i_{v+1}}) ...l_r(e_{i_m})e^{\tens k-m}, l_r^*(e_{i_v})...l_r^*(e_{i_1}) (e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r})\br \end{array}$$ Assume that $k$ is big (say $k>N>2(n+p)$); by the definition of $v$, we have that $e_{i_1}=...=e_{i_{v-1}}=e$, so $$l_r^*(e_{i_{v-1}})...l_r^*(e_{i_1}) (e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r})$$ is obtained by cancelling $(v-1)$ times the letter $e$ in the word $e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r}$ using some geometric weight $q^\alpha$, $$\sum_{1{\ensuremath{\leqslant}}{h_{v-1}} {\ensuremath{\leqslant}}k-v-2}... \sum_{1{\ensuremath{\leqslant}}{h_2} {\ensuremath{\leqslant}}k-1}\sum_{1{\ensuremath{\leqslant}}{h_1} {\ensuremath{\leqslant}}k} \delta_{h_1,...} q^{(\sum h_i)-v+1} (e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r})_{(h_1,...,h_{v-1})}$$ where $(e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r})_{(h_1,...,h_{v-1})}$ is obtained from $e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r}$ by removing the letter on the $h_1$-th position from the right, then the letter at the $h_2$-th position in the remaining word and so on, and where $\delta_{h_1,...}$ is one if all the removed letters are $e$ and $0$ otherwise. To have a nonzero term in $$l_r^*(e_{i_v}) (e_{j_1}\tens...\tens e_{j_r}\tens e^{\tens k-r})_{(h_1,...,h_{v-1})}$$ we have to cancel a letter that is not an $e$, so it can happen only for the terms coming from $e_{j_1}\tens...\tens e_{j_r}$ (if any are left!); as this word of length $k-v+1$ ends with at least $(k-r-v+1)$ letters $e$, we end up with a sum of at most $r$ words in front of which there is a factor less than $|q|^{k-r-v+1}$. Moreover, by the remark \[norm\], the norm of such a word is less than $C_q^{r/2}\sqrt{[k-r-v+1]_q!}$. If we sum up everything, we get that $$\|l_r^*(e_{i_v})...l_r^*(e_{i_1})(e_{j_1}\tens...\tens e_{j_r} \tens e^{\tens k-r})\| {\ensuremath{\leqslant}}C(n,m,v,q) |q|^k \sqrt{[k]_q!}$$ where $C(n,m,v,q)$ does not depend on $k$ (because $[k]_q{\ensuremath{\leqslant}}C_q$).
Now we can estimate $I_i$ by cutting the sum into two parts $A_i+B_i=\sum_{k{\ensuremath{\leqslant}}N}|.|+ \sum_{k{\ensuremath{\geqslant}}N}|.|$. Since $\eta_i\to 0$ weakly, each $a_{j}^i$ tends to 0, so $$A_i {\ensuremath{\leqslant}}{\sum_{N>k{\ensuremath{\geqslant}}r,m}} |a_{k+n-2m}^i| |a_{k+p-2r}^i| C(k,n,p) \mathop{\to}^{i\to \infty} 0$$ and as $\|\eta_i\|{\ensuremath{\leqslant}}1$, we have $|a^i_k|{\ensuremath{\leqslant}}1/\sqrt{[k]_q!}$, so $$\begin{aligned} B_i&{\ensuremath{\leqslant}}& \sum_{k{\ensuremath{\geqslant}}N} \frac {\sqrt{[k+n-2m]_q! [k+p-2r]_q!}} {[k-m]_q!\,[k-r]_q!} \|l_r^*(e_{i_v})...l_r^*(e_{i_1})(e_{j_1}\tens...\tens e_{j_r} \tens e^{\tens k-r})\|.\\ & & \| l_r(e_{i_{v+1}}) ...l_r(e_{i_m})e^{\tens k-m}\|\\ & {\ensuremath{\leqslant}}& \sum_{k{\ensuremath{\geqslant}}N} \frac {\sqrt{[k+n-2m]_q! [k+p-2r]_q!}} {[k-m]_q!\,[k-r]_q!} C |q|^k \sqrt{[k]_q!} C(q)^m \sqrt{[k-m]_q!}\\ & {\ensuremath{\leqslant}}& \sum_{k{\ensuremath{\geqslant}}N}C |q|^k {\ensuremath{\leqslant}}C |q|^N\end{aligned}$$ Consequently, we get that $\limsup |I_i| {\ensuremath{\leqslant}}C |q|^N$ for every $N$, so $I_i\to 0$. [1]{} Marek Bo[ż]{}ejko, Burkhard K[ü]{}mmerer, and Roland Speicher. $q$-[G]{}aussian processes: non-commutative and classical aspects. , 185(1):129–154, 1997. Marek Bo[ż]{}ejko and Roland Speicher. An example of a generalized [B]{}rownian motion. , 137(3):519–531, 1991. Marek Bo[ż]{}ejko and Roland Speicher. Completely positive maps on [C]{}oxeter groups, deformed commutation relations, and operator spaces. , 300(1):97–120, 1994. Alexandre Nou. Non injectivity of the $q$-deformed von [N]{}eumann algebras. Preprint. Dimitri Shlyakhtenko. Some estimates for non-microstates free entropy dimension, with applications to $q$-semicircular families. arXiv:math.OA/0308093. Piotr Śniady. Factoriality of [B]{}ożejko–[S]{}peicher von [N]{}eumann algebras. arXiv:math.OA/0307201.
[ Éric Ricard]{}\ Département de Mathématiques de Besan[ç]{}on\ Université de Franche-Comté\ 25030 Besançon cedex\ [*e-mail*]{}: [eric.ricard@univ-fcomte.fr]{}
--- abstract: 'Molecular motors in biological systems are expected to use ambient fluctuation. In a recent Letter \[Phys. Rev. Lett. [**80**]{}, 5251 (1998)\], it was shown that the following question remained unsolved: “Can thermal noise facilitate energy conversion by a ratchet system?” We consider it using stochastic energetics, and show that there exist systems where thermal noise helps the energy conversion.' address: | Department of Physics, Tohoku University\ Sendai 980-8578, Japan author: - 'Fumiko Takagi[^1] and Tsuyoshi Hondou[^2]' date: Received 26 March 1999 title: Thermal noise can facilitate energy conversion by a ratchet system --- Molecular motors in biological systems are known to operate efficiently[@Yanagida_ea-Slidi; @Ishijima-Multi; @Uyeda_ea-Myosi; @Yasuda_ea]. They convert molecular scale chemical energy into macroscopic mechanical work with high efficiency in water at room temperature, where the effect of thermal fluctuation is unavoidable. These experimental facts lead us to expect the existence of systems where thermal noise helps the operation. Understanding the mechanism of these motors is interesting not only for biology but also for statistical and thermal physics. Recently, inspired by observations of the molecular motors, many studies have been performed from the viewpoint of statistical physics. Ratchet models[@Vale_ea-Prote; @AHuxley; @Julicher-Model] have been studied extensively to understand how directed motion appears from non-equilibrium fluctuation. One of the best known works among these ratchet models was by Magnasco[@Magnasco-Force]. He studied the “forced thermal ratchet,” and claimed that “there is a region of the operating regime where the efficiency is optimized at finite temperatures.” His claim is interesting because thermal noise is usually known to disturb the operation of machines.
However, it was recently revealed that this claim was made incorrectly[@Kamegawa_ea-Energ], because it was not based on an analysis of the energetic efficiency but only on that of the probability current, as most studies of ratchet systems were. The insufficient analysis was attributed to the lack of a systematic method of energetics in systems described by a Langevin equation. Recently, a method called stochastic energetics was formalized, in which the heat is described quantitatively in the framework of the Langevin equation[@Sekimoto-Energ]. Using this method, some attempts to discuss the energetics of these systems[@Sekimoto_ea-Compl; @Matsuo-Stoch; @Hondou_ea-Irrev; @Sekimoto_ea-Molec] have been made. The energetic formulation of the forced thermal ratchet[@Kamegawa_ea-Energ] using stochastic energetics showed the following: The behavior of the probability current is qualitatively different from that of the energetic efficiency. Thermal noise does [*not*]{} help the energy conversion by the ratchet, at least under the conditions where the claim was made. Therefore it was revealed that the following question had not yet been solved: “Can thermal noise facilitate operation of the ratchet?” In this Letter, we will show that thermal noise certainly can facilitate the operation of the ratchet. Let us consider an overdamped particle in an “oscillating ratchet”, where the amplitude of the 1-D ratchet potential is constant but the degree of the symmetry breaking oscillates at frequency $\omega$ (Fig. \[Fig:OscillatingPotential\]). The Langevin equation is as follows: $$\begin{aligned} \frac{d x}{d t} &=& -\frac{\partial V(x,t)}{\partial x} + \xi(t), \label{LangevinEq} \\ V(x,t) &=& V_{p}(x,t)+ \ell x, \label{OR:potential}\end{aligned}$$ where $x$, $\ell$ and $V_{p}(x,t)$ represent the state of the system, the load and the ratchet potential, respectively (Fig. \[Fig:OR:loaded\]).
The white Gaussian random force $\xi(t)$ satisfies $\left\langle \xi(t) \right\rangle=0$ and $\left\langle \xi(t)\xi(t') \right\rangle=2 \epsilon \delta(t-t')$, where the angular bracket $\left\langle \cdot \right\rangle$ denotes the ensemble average. We use units in which $m=\gamma=1$. We assume that the potential $V(x,t)$ always has basins, so that a particle cannot move over the potential peak without thermal noise. The ratchet $V_{p}(x,t)$ is assumed to satisfy the temporally and spatially periodic conditions, $$\begin{aligned} V_{p}(x,t+T) &=& V_{p}(x,t), \label{OR:temp_p}\\ V_{p}(x+L,t) &=& V_{p}(x,t), \label{OR:spac_p}\end{aligned}$$ where $L$ is the spatial period of the ratchet potential, and $T \left(\equiv\frac{2 \pi}{\omega}\right)$ is the temporal period of the potential modulation. By the potential modulation, energy is introduced into the system, and the system converts it into work against the load[@fn:chemical]. The Fokker-Planck equation[@Risken_FPbook] corresponding to Eq. (\[LangevinEq\]) is written $$\begin{aligned} \frac{\partial P(x,t)}{\partial t} &=& - \frac{\partial J(x,t)}{\partial x},\nonumber\\ &=& - \frac{\partial}{\partial x} \left( -\frac{\partial V(x,t)}{\partial x} P(x,t) \right) + \epsilon \frac{\partial^2 P(x,t)}{\partial x^2}, \label{FPeq}\end{aligned}$$ where $P(x,t)$ and $J(x,t)$ are the probability density and the probability current, respectively. We apply the periodic boundary conditions on $P(x,t)$ and $J(x,t)$, $$\begin{aligned} P(x+L,t) &=& P(x,t), \label{P:spac_p}\\ J(x+L,t) &=& J(x,t), \label{J:spac_p}\end{aligned}$$ where $P(x,t)$ is normalized in the spatial period $L$.
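A trajectory of the Langevin equation above can be sampled with an Euler-Maruyama scheme. The sketch below (Python) uses the explicit oscillating-ratchet potential given later in the text; the parameter values are illustrative choices of our own, not taken from the paper, and the periodicity conditions are verified numerically:

```python
import numpy as np

# Euler-Maruyama sampling of dx/dt = -dV(x,t)/dx + xi(t), with
# <xi xi'> = 2*eps*delta.  V_p is the explicit ratchet potential used
# later in the text; all parameter values here are illustrative.
L_per, V0, C1, C2, C3 = 1.0, 1.0, 0.4, 0.7, 0.3
omega, load, eps = 1.0, 0.05, 0.1
T = 2 * np.pi / omega

def V_p(x, t):
    A = C2 + C3 * np.sin(omega * t)      # oscillating asymmetry
    u = 2 * np.pi * x / L_per
    return 0.5 * V0 * (np.sin(u + A * np.sin(u + C1 * np.sin(u))) + 1.0)

def V(x, t):
    return V_p(x, t) + load * x          # tilted by the load

def simulate(x0=0.0, dt=1e-3, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    h = 1e-6
    for _ in range(steps):
        force = -(V(x + h, t) - V(x - h, t)) / (2 * h)   # -dV/dx
        x += force * dt + np.sqrt(2 * eps * dt) * rng.standard_normal()
        t += dt
    return x

# check the periodicity conditions V_p(x+L,t) = V_p(x,t) = V_p(x,t+T)
xs = np.linspace(0.0, 1.0, 7)
assert np.allclose(V_p(xs + L_per, 0.3), V_p(xs, 0.3))
assert np.allclose(V_p(xs, 0.3 + T), V_p(xs, 0.3))
print(simulate())
```

Averaging such trajectories over realizations of the noise gives the same ensemble quantities that the Fokker-Planck treatment computes directly from $P(x,t)$ and $J(x,t)$.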
Except for a transient time, $P(x,t)$ and $J(x,t)$ satisfy the temporally periodic conditions $$\begin{aligned} P(x,t+T) &=& P(x,t),\label{P:temp_p}\\ J(x,t+T) &=& J(x,t).\label{J:temp_p}\end{aligned}$$ According to the stochastic energetics [@Sekimoto-Energ], the heat $\widetilde{Q}$ released to the heat bath during the period $T$ is given as $$\widetilde{Q} = \int_{x(0)}^{x(T)} \left\{-\left(-\frac{dx(t)}{dt}+\xi(t)\right)\right\} dx(t). \label{Def:til_Q}$$ Inserting Eq. (\[LangevinEq\]) into Eq. (\[Def:til\_Q\]), we obtain the energy balance equation, $$\widetilde{Q} = \int_{0}^{T} \frac{\partial V(x(t),t)}{\partial t} dt - \int_{V(0)}^{V(T)} d V(x(t),t). \label{EnergyBalance_1}$$ The first term on the RHS is the energy $\widetilde{E_{in}}$ that the system obtains through the potential modulation, and the second term, $\int_{V(0)}^{V(T)} d V(x(t),t)$, is the work $\widetilde{W}$ that the system extracts from the input energy $\widetilde{E_{in}}$ during the period $T$. The ensemble average of $\widetilde{W}$ is given, using Eqs. (\[OR:potential\]), (\[OR:temp\_p\]) and (\[P:temp\_p\]), as $$\begin{aligned} \left\langle \widetilde{W} \right\rangle &=& \left\langle \int_{V(0)}^{V(T)} d V(x(t),t) \right\rangle \\ \nonumber &=& \ell \int_{0}^{T}dt \int^L_0 dx J(x,t) \equiv W, \label{OR:work}\end{aligned}$$ where one finds that $W$ represents the work against the load. Similarly, using Eqs. (\[OR:potential\]), (\[FPeq\]) and the periodic conditions (Eqs. (\[OR:temp\_p\]), (\[OR:spac\_p\]), (\[J:spac\_p\]) and (\[P:temp\_p\])), the ensemble average of $\widetilde{E_{in}}$ is given as $$\begin{aligned} \left\langle \widetilde{E_{in}} \right\rangle &=& \left\langle\int_{0}^{T} \frac{\partial V(x(t),t)}{\partial t} dt \right\rangle \\ \nonumber &=& \int_{0}^{T} dt\int^L_0 dx \left( -\frac{\partial V_{p}(x,t)}{\partial x} \right) J(x,t) \equiv E_{in}. \label{OR:inputenergy}\end{aligned}$$ Taking an ensemble average, Eq.
(\[EnergyBalance\_1\]) yields $$\begin{aligned} Q &=& E_{in} - W, \label{EnergyBalance3}\\ &=& \int_{0}^{T} dt \int^L_0 dx \left( - \frac{\partial V_{p}(x,t)}{\partial x} \right) J(x,t)\\ && - \ell \int_{0}^{T} dt \int^L_0 dx J(x,t), \label{EnergyBalance2}\end{aligned}$$ where $Q\equiv \left\langle \widetilde{Q} \right\rangle$. Therefore we obtain the efficiency $\eta$ of the energy conversion from the input energy $E_{in}$ into the work $W$ as follows: $$\eta = \frac{W}{E_{in}} = \frac{\ell \int_{0}^{T} dt \int^L_0 dx J(x,t)} {\int_{0}^{T}dt \int^L_0 dx \left( -\frac{\partial V_{p}(x,t)}{\partial x} \right)J(x,t)}. \label{OR:efficiency}$$ This expression can be evaluated simply by solving the Fokker-Planck equation (Eq. (\[FPeq\])). We solve Eq. (\[FPeq\]) numerically with the following ratchet potential as an example. It satisfies Eqs. (\[OR:temp\_p\]), (\[OR:spac\_p\]) and the condition that the degree of asymmetry oscillates while the amplitude of the ratchet is constant. It will turn out that the results do not depend on the detailed shape of the potential. The ratchet potential is $$V_{p}(x,t)=\frac{1}{2} V_0 \left( \sin \left(\frac{2 \pi x}{L} + A(t) \sin \left(\frac{2 \pi x}{L} + C_1\sin\left(\frac{2 \pi x}{L}\right) \right)\right) + 1 \right), \label{OR:sim:potential}$$ where $A(t) = C_{2}+C_{3} \sin (\omega t)$, and $V_0$, $C_1$, $C_2$, $C_3$ are constants. The results are shown in Fig. \[Fig:OR:sim\_eff\]. We find that the efficiency is maximized at a finite intensity of thermal noise (Fig. \[Fig:OR:sim\_eff\](a)). This shows that thermal noise can certainly facilitate the energy conversion. What is the reason for this behavior of the efficiency $\eta$? Let us examine the work $W$ and the input energy $E_{in}$ as functions of the intensity of thermal noise. The work $W$, the numerator of Eq. (\[OR:efficiency\]), has a peak at a finite intensity of thermal noise (Fig.
\[Fig:OR:sim\_W\](b)), because of the behavior of the flow during the period $T$, $\bar{J}\equiv\int_{0}^{T}dt \int_{0}^{L}dx J$. In the absence of thermal noise ($\epsilon=0$), the particle cannot move over the potential peak (which results in $\bar{J}=0$). As the intensity of thermal noise increases, a nonequilibrium effect emerges and induces a finite asymmetric flow against the load through the asymmetry of the ratchet. When thermal noise is large enough ($\epsilon\rightarrow\infty$), the flow against the load is no longer positive, because the effect of the ratchet disappears in this limit. Therefore the flow, and also the work, behave as in Fig. \[Fig:OR:sim\_W\](b) as functions of the thermal noise intensity. The input energy $E_{in}$, the denominator of Eq. (\[OR:efficiency\]), remains finite in the limit $\epsilon\rightarrow0$ (Fig. \[Fig:OR:sim\_E\](c)), where all the input energy dissipates because the oscillation of the local potential minimum induces a finite local current even in the absence of thermal noise. Therefore the efficiency starts from $\eta=0$ at $\epsilon=0$, grows as the intensity of thermal noise increases, and then vanishes as $\epsilon\rightarrow\infty$. The efficiency has its peak at finite $\epsilon$. As stated above, the noise-induced flow and the finite dissipation in the absence of thermal noise are the causes of the noise-induced energy conversion. Thus our finding will not depend on the detailed shape of $V_{p}(x,t)$. We expect that thermal noise can facilitate the energy conversion in a variety of ratchet systems. Finally, we discuss the forced thermal ratchet[@Magnasco-Force]. The forced thermal ratchet is a system in which a dissipative particle in a ratchet is subjected both to a zero-mean external force and to thermal noise. The previous Letter[@Kamegawa_ea-Energ] was the first attempt to discuss the energetics of this ratchet.
For the analytical estimate, the discussion in that Letter was restricted to the quasi-static limit, where the change of the external force is slow enough. In that case, thermal noise cannot facilitate operation of the ratchet: the energetic efficiency is a monotonically decreasing function of the thermal noise intensity, in contrast to the oscillating ratchet discussed above. However, one notices that the external force of the forced thermal ratchet can also be written as an oscillatory modulation of the potential when the external force is periodic, as in the literature[@Magnasco-Force; @Kamegawa_ea-Energ]. It is likely that the difference between the two cases, the oscillating ratchet and the forced thermal ratchet discussed in that Letter[@Kamegawa_ea-Energ], is attributable to the condition of the system, namely, whether or not it is quasi-static. We suppose that thermal noise may facilitate the energy conversion in the forced thermal ratchet when the ratchet is [*not*]{} quasi-static. The Langevin equation of the forced thermal ratchet is the same as Eq. (\[LangevinEq\]), except for the potential $V$. In this case, the potential is $$V(x,t)=V_{p}(x) + \ell x - F_{ex}(t) x, \label{FTR:potential}$$ where $V_{p}(x)$, $\ell$ and $F_{ex}$ represent the ratchet potential, the load and the external force, respectively. The periodic external force $F_{ex}(t)$ satisfies $F_{ex}(t+T)=F_{ex}(t)$ and $\int_{0}^{T}dt F_{ex}(t)=0$[@fn:Fex]. The work $W$ is the same as in Eq. (\[OR:work\]), and the input energy $E_{in}$ is $$E_{in} = \int_{0}^{T}dt\int_{0}^{L}dx F_{ex}(t) J(x,t).$$ In the quasi-static limit [@Kamegawa_ea-Energ], the probability current $J$ does not depend on the coordinate $x$. Thus, when the current over the potential peak (which produces $W$) vanishes, the local current vanishes everywhere ($J(x,t)=J(t)=0$). However, if the system is not quasi-static, the behavior changes qualitatively.
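The two conditions on $F_{ex}(t)$, periodicity and zero time average, are easy to satisfy; a square wave is perhaps the simplest choice. A minimal sketch, in which the amplitude $F_0$ and period $T$ are illustrative values:

```python
def F_ex(t, F0=0.5, T=1.0):
    """Zero-mean periodic external force: +F0 on the first half-period,
    -F0 on the second, so F_ex(t+T) = F_ex(t) and its integral over T vanishes."""
    return F0 if (t % T) < T / 2 else -F0

# Midpoint-rule check of the zero-mean condition over one period (T = 1).
n = 1000
mean = sum(F_ex((k + 0.5) / n) for k in range(n)) / n
```

With this $F_{ex}$ the tilt $-F_{ex}(t)x$ in Eq. (\[FTR:potential\]) rocks the ratchet back and forth without biasing it on average.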
In this case, even when the current over the potential peak vanishes at $\epsilon=0$, the local current around the local potential minimum still remains finite. Thus there exists finite energy dissipation even in the limit $\epsilon\rightarrow0$, which means that the input energy $E_{in}$ remains finite in this limit (Fig. \[Fig:FTR:sim\_eff\](c)). Therefore, the efficiency is found to be zero at $\epsilon=0$, and has a peak at finite $\epsilon$ (Fig. \[Fig:FTR:sim\_eff\](a)). The result is the same as that of the oscillating ratchet. It must be noted that the energetics can distinguish the behavior of the efficiency in the non-quasi-static case from that in the quasi-static case, although the dependence of the flow $\bar{J}$ is the same in the two cases. We have discussed the energetics of the ratchet system using the method of stochastic energetics, and estimated the efficiency of the energy conversion. We found that thermal noise [*can*]{} facilitate the operation of the ratchet system. The mechanism is briefly summarized as follows: through the ratchet, potential modulation causes a noise-induced flow against the load that results in the work. On the other hand, potential modulation with finite speed causes a local current around the local potential minimum that produces finite dissipation even in the absence of thermal noise. Thus the efficiency is maximized at a finite intensity of thermal noise. The result must be robust and independent of the details of the potential, because only two factors are essential for the energy conversion activated by thermal noise: one is the noise-induced flow, and the other is the finite dissipation in the absence of thermal noise. Also, in the two-state model[@Julicher-Model], another type of ratchet system, it was quite recently reported that the efficiency can be maximized at finite temperature[@Prost]. We expect experiments to examine whether and how real molecular motors use thermal noise.
We would like to thank K. Sekimoto, J. Prost, A. Parmeggiani, F. Jülicher, S. Sasa, T. Fujieda and T. Tsuzuki for helpful comments. This work is supported by the Japanese Grant-in-Aid for Science Research Fund from the Ministry of Education, Science and Culture (No. 09740301) and the Inoue Foundation for Science. T. Yanagida, T. Arata, and F. Oosawa, Nature [**316**]{}, 366 (1985). A. Ishijima, H. Kojima, H. Higuchi, Y. Harada, T. Funatsu, and T. Yanagida, Biophys. J. [**70**]{}, 383 (1996). T. Q. P. Uyeda, S. J. Kron, and J. A. Spudich, J. Mol. Biol. [**214**]{}, 699 (1990). R. Yasuda, H. Noji, K. Kinosita, F. Motojima, and M. Yoshida, J. Bioenerg. Biomembr. [**29**]{}, 207 (1997). R. D. Vale and F. Oosawa, Adv. Biophys. [**26**]{}, 97 (1990). A. F. Huxley and R. M. Simmons, Nature [**233**]{}, 533 (1971). See, e.g., F. Jülicher, A. Ajdari and J. Prost, Rev. Mod. Phys. [**69**]{}, 1269 (1997), and references therein. M. O. Magnasco, Phys. Rev. Lett. [**71**]{}, 1477 (1993). H. Kamegawa, T. Hondou and F. Takagi, Phys. Rev. Lett. [**80**]{}, 5251 (1998). K. Sekimoto, J. Phys. Soc. Jpn. [**66**]{}, 1234 (1997). K. Sekimoto and S. Sasa, J. Phys. Soc. Jpn. [**66**]{}, 3326 (1997). M. Matsuo and S. Sasa, Physica A (to be published). T. Hondou and F. Takagi, J. Phys. Soc. Jpn. [**67**]{}, 2974 (1998). K. Sekimoto, F. Takagi and T. Hondou, cond-mat/9904322. In this paper, we discuss systems that convert mechanical energy into mechanical work, while real molecular motors in biological systems convert chemical energy into mechanical work. Recently, an experiment suggested that a protein can store the chemical energy from ATP hydrolysis[@Ishijima_ea-Simul]; this energy may be stored in a mechanical way, for example, by a conformational change of the protein. Some models have been proposed to explain this kind of energy storage[@Nakagawa_et-prepri]. H. Risken, [*The Fokker-Planck Equation*]{}, 2nd ed. (Springer-Verlag, Berlin, 1989).
We consider the low-amplitude regime[@Kamegawa_ea-Energ], where the amplitude of $F_{ex}(t)$ is small. In this case, a particle cannot move over the potential peak without thermal noise, as in the case of the oscillating ratchet. A. Parmeggiani, F. Jülicher, A. Ajdari and J. Prost, cond-mat/9904153. A. Ishijima, H. Kojima, T. Funatsu, M. Tokunaga, H. Higuchi, H. Tanaka, and T. Yanagida, Cell [**92**]{}, 161 (1998). For example, N. Nakagawa and K. Kaneko, chao-dyn/9903005. R. P. Feynman, R. B. Leighton and M. Sands, [*The Feynman Lectures on Physics*]{} (Addison-Wesley Publ. Co., Reading, Massachusetts, 1966), Vol. I. [^1]: Electronic address: fumiko@cmpt01.phys.tohoku.ac.jp [^2]: Electronic address: hondou@cmpt01.phys.tohoku.ac.jp
--- abstract: 'We construct a model in which the tree property holds at $\aleph_{\omega + 1}$ and is destructible under $\operatorname{Col}(\omega, \omega_1)$. On the other hand, we discuss some cases in which the tree property is indestructible under small or closed forcings.' author: - Yair Hayut - Menachem Magidor title: Destructibility of the tree property at aleph omega plus 1 --- Introduction ============ A partial order $\langle T, \leq_T\rangle$ is called a *tree* if it has a minimal element and for every $t\in T$, the set $\{s\in T\mid s\leq_T t\}$ is well ordered by $\leq_T$. The order type of the chain of elements that lie below $t$ in the tree order is called the *level* of $t$ and denoted by $\operatorname{Lev}_T (t)$. For a cardinal $\kappa$, $T$ is called a $\kappa$-tree if $\sup_{t\in T} (\operatorname{Lev}_T(t) + 1) = \kappa$ and the cardinality of each level of $T$ is strictly below $\kappa$. By a theorem of Kőnig, every $\omega$-tree has a cofinal branch (namely, a cofinal chain). On the other hand, a theorem of Aronszajn states that there is an $\omega_1$-tree with no cofinal branches. Such a tree is called an *Aronszajn tree*. For any larger successor cardinal $\kappa > \omega_1$, it is independent of [[ZFC]{}]{} whether there is a $\kappa$-tree with no cofinal branches. This question is related to other combinatorial topics, and in order to get the consistency of the non-existence of a $\kappa$-Aronszajn tree, one must assume the consistency of some large cardinals. If every $\kappa$-tree has a cofinal branch, we say that $\kappa$ has the *tree property*. By a theorem of Silver, if an uncountable cardinal $\kappa$ has the tree property then $\kappa$ is a weakly compact cardinal in $L$.
On the other hand, Mitchell proved that if $\kappa$ is a weakly compact cardinal and $\mu < \kappa$ is regular then there is a generic extension in which $\kappa = \mu^{++}$ and the tree property holds at $\kappa$, thus showing that the tree property at the double successor of a regular cardinal is equiconsistent with the existence of a weakly compact cardinal. When $\kappa$ is the successor of a singular cardinal, the situation is more complicated. In [@MagidorShelah1996], Magidor and Shelah showed that it is consistent, relative to some large cardinals, that the tree property holds at $\aleph_{\omega+1}$. The large cardinal assumption was later reduced by Sinapova and Neeman to the existence of an $\omega$-sequence of supercompact cardinals (see, e.g., [@Neeman2014] for the Prikry-free version). In both constructions, $\aleph_1$ plays a special role: it reflects, in some sense, the properties of $\aleph_{\omega+1}$. In section \[sec:destructible\] we will show that it is consistent to have a model in which the tree property holds at $\aleph_{\omega+1}$, but after collapsing $\aleph_1$ it fails. This extends work by Cummings, Foreman and the second author [@CummingsForemanMagidor2001 Theorem 14], in which they show that it is possible that a weak square is added by a small forcing. Our arguments are very similar to the arguments there. In [@Rinot2009], Rinot shows that it is consistent that there is no special Aronszajn tree on $\aleph_{\omega_1 + 1}$ and a $\sigma$-closed $\aleph_2$-Knaster forcing of cardinality $\aleph_3$ introduces one. We note that we do not know how to apply a similar argument for this case. In section \[sec:indestructible\] we discuss three cases in which the tree property at a successor of a singular cardinal is somewhat indestructible. In \[subsec: indestructible omega\^2\] we will show that it is consistent that the tree property holds at $\aleph_{\omega^2+1}$ and is indestructible under any forcing of cardinality $<\aleph_{\omega^2}$.
In \[subsec: indestructible omega closed\] and \[subsec: closed\] we will show that the tree property at $\aleph_{\omega + 1}$ can be made indestructible under small $\sigma$-closed forcings or arbitrarily large $\aleph_{\omega+1}$-closed forcings. Preliminaries ============= The following notation, due to Magidor and Shelah [@MagidorShelah1996], plays an important role in the investigation of the tree property at successors of singular cardinals. Let $\lambda$ be a regular cardinal. A *system* is a triplet $\mathcal{S} = \langle I, \kappa, \mathcal{R}\rangle$ such that: 1. $I\subseteq \lambda$ is unbounded and $\kappa < \lambda$. 2. $\mathcal{R}$ is a collection of partial order relations on $I\times\kappa$. 3. Each $R\in\mathcal{R}$ is a tree-like partial order. $R$ respects the lexicographic order on $I\times \kappa$. Namely, $\langle \alpha, \zeta\rangle R \langle \beta, \xi\rangle$ implies $\alpha \leq \beta$ and if $\alpha = \beta$ then $\zeta = \xi$. Moreover, if $\langle \beta, \xi\rangle, \langle \gamma, \rho\rangle R \langle \alpha, \zeta\rangle$ and $\beta \leq \gamma$ then $\langle \beta, \xi\rangle R \langle \gamma, \rho\rangle$. 4. For every $\alpha < \beta$ in $I$ there are $\zeta, \xi < \kappa$ and $R\in\mathcal{R}$ such that $\langle \alpha, \zeta\rangle R \langle \beta, \xi\rangle$. A *branch* through $\mathcal{S}$ is a set of elements of $I\times \kappa$ which is a chain relative to some $R\in\mathcal{R}$. We say that a branch $b$ meets the $\alpha$-th level of $\mathcal{S}$ if $b\cap (\{\alpha\}\times \kappa) \neq \emptyset$. A branch is *cofinal* if it meets cofinally many levels. A system $\mathcal{S}$ is *narrow* if $\kappa^+, |\mathcal{R}|^+ < \lambda$. Let $\lambda$ be a regular cardinal. We say that the *narrow system property* holds at $\lambda$ if every narrow system of height $\lambda$ has a cofinal branch. Unlike the tree property, the narrow system property is indestructible under any small forcing.
Let $\mathbb{P}$ be a forcing notion with $|\mathbb{P}|^+ < \lambda$ and let $\dot{\mathcal{S}}$ be a name for a narrow system. Let $\dot{\mathcal{R}}$ be the collection of names of relations in $\mathcal{S}$ and let $I$ be the set of all ordinals that can be levels of $\dot{\mathcal{S}}$. Let us define the narrow system $\hat{\mathcal{S}}$ in the natural way: the relations of $\hat{\mathcal{S}}$ are indexed by $\mathbb{P}\times\dot{\mathcal{R}}$, and $\langle \alpha, \beta \rangle (p, R) \langle \gamma, \delta\rangle$ iff $p\Vdash \langle \alpha,\beta\rangle R \langle \gamma ,\delta\rangle$ for $R\in\dot{\mathcal{R}}$. A branch in the system $\hat{\mathcal{S}}$ corresponds to a condition $p\in\mathbb{P}$ and a set of elements of $\mathcal{S}$ which $p$ forces to be a branch in the generic extension. Destructible tree property {#sec:destructible} ========================== \[thm: destructible tree property\] Let $\kappa = \kappa_0 < \kappa_1 < \cdots$ be an $\omega$-sequence of supercompact cardinals. Then there is a forcing extension in which the tree property holds at $\aleph_{\omega + 1}$ and the forcing $\operatorname{Col}(\omega, \omega_1)$ adds a special Aronszajn tree. We will prove something slightly stronger. We will define a forcing poset that forces that in the generic extension there is a partial weak square on $\aleph_{\omega + 1}$ whose domain contains all ordinals of cofinality above $\omega_1$, while the tree property holds at $\aleph_{\omega+1}$. If we further extend the universe and collapse $\omega_1$ to be countable, then we can complete all the missing ordinals in this square sequence by just adding $\omega$-sequences. By a theorem of Shelah and Ben-David [@ShelahBenDavid1986], without violating the continuum hypothesis at $\aleph_\omega$, we cannot hope to have this kind of partial square with only one club at each ordinal. Our partial square will be quite wide. Let $\mu = \sup \kappa_n$ and let $\lambda = \mu^+$.
We begin with some definitions: A partial square on a set $S\subseteq \lambda$ of width $<\eta$ is a sequence $\mathcal{C} = \langle \mathcal{C}_\alpha \mid \alpha < \lambda\rangle$ such that: 1. For every $\alpha < \lambda$, $\mathcal{C}_\alpha$ is a set of cardinality $<\eta$. If $\alpha \in S$ then $\mathcal{C}_\alpha \neq \emptyset$. 2. Every $D\in\mathcal{C}_\alpha$ is a closed and unbounded subset of $\alpha$ and $\operatorname{otp}D < \alpha$. 3. If $\beta \in \operatorname{acc}D$, $D\in \mathcal{C}_\alpha$ then $D\cap \beta \in \mathcal{C}_\beta$. When $\lambda = \mu^+$, we may assume that $\operatorname{otp}D \leq \mu$ for every $D\in\mathcal{C}_\alpha$. Since successor ordinals are never accumulation points of a club, the values of the square sequence at successor points are irrelevant. For consistency, we will assume that $\mathcal{C}_{\alpha+ 1} = \{\alpha\}$ for every $\alpha$. We want to force a partial square for the set $S^\lambda_{\geq\kappa}$ of width $<\mu$. Let $\mathbb{S}$ be the following forcing notion. A condition $s\in \mathbb{S}$ is a sequence $s = \langle c_i \mid i \leq \gamma\rangle$ for some ordinal $\gamma < \lambda$ such that all three requirements for the partial square sequence hold for every $\alpha \leq \gamma$. Namely, 1. $\forall \alpha \leq \gamma$, $c_\alpha$ is a set of fewer than $\mu$ sets. If $\operatorname{cf}\alpha \geq \kappa$, then $c_\alpha \neq \emptyset$. 2. For every $D\in c_\alpha$, $\operatorname{otp}D \leq \mu$ and $D$ is a closed and unbounded subset of $\alpha$. 3. If $\beta\in\operatorname{acc}D$, $D\in c_\alpha$ then $D\cap \beta\in c_\beta$. We order $\mathbb{S}$ by end extension. We will think of the conditions $s\in\mathbb{S}$ as functions, so for $s = \langle c_i \mid i \leq \gamma\rangle$ we will write $\operatorname{dom}s = \gamma + 1$ and $s(i) = c_i$ for $i\in\operatorname{dom}s$. $\mathbb{S}$ is $\kappa$-directed closed.
Given a partial square $\mathcal{C}$, we will define a threading forcing, $\mathbb{T}_\eta$. This forcing will add a club in $\lambda$ of order type $\eta$ all of whose initial segments are from $\mathcal{C}$. Let $\mathbb{T}_\eta = \{D \mid \exists \alpha,\,D\in\mathcal{C}_\alpha,\,1 < \operatorname{otp}D < \eta\}$, ordered by end extension. The following lemma is standard: Let $\mathbb{S}, \mathbb{T}_\eta$ be as above. Then: 1. $\mathbb{S}$ is $\lambda$-distributive. 2. Let $\mathcal{C}$ be the generic partial square added by $\mathbb{S}$, and let $\eta$ be a regular cardinal. $\mathbb{S}\ast\mathbb{T}_{\eta}$ is equivalent to an $\eta$-closed forcing. Moreover, for every $\rho < \mu$, $\mathbb{S}\ast\mathbb{T}_{\eta}^\rho$ (where we use the full support power in $V^{\mathbb{S}}$) contains an $\eta$-closed dense subset. Let us show that $\mathbb{S}$ is $\lambda$-distributive. We will show that it is $\eta$-strategically closed for every regular $\eta < \lambda$. We will do this by showing the second part of the lemma – that $\mathbb{S}\ast\mathbb{T}_\eta$ contains an $\eta$-closed dense set. Let us observe first that the set of conditions $\langle s, \check{t}\rangle \in \mathbb{S}\ast\mathbb{T}_\eta$ with $\operatorname{dom}(s) = \gamma + 1$, $t\in s(\gamma)$ is dense. For every condition $\langle s, \dot{t}\rangle$, the condition $s$ forces that $\dot{t}$ belongs to some $\mathcal{C}_\alpha$ of the square sequence, and therefore $\dot{t}$ is forced to be an element of the ground model. Thus, there is an extension $s^\prime$ of $s$ which decides the value of $\dot{t}$ to be an element of $V$, which we will denote by $t$. Now, $t$ might have no extension in $s^\prime(\max \operatorname{dom}s^\prime)$, but we can extend $s^\prime$ to $s^{\prime\prime}$ with $\operatorname{dom}s^{\prime\prime} = \operatorname{dom}s^\prime + \omega + 1$ such that $t$ has an extension in the top element of $s^{\prime\prime}$. Let us call this extension $t^\prime$.
Thus we have a condition $\langle s^{\prime\prime}, t^{\prime}\rangle \leq \langle s, t\rangle$, and $\langle s^{\prime\prime}, t^{\prime}\rangle$ has the desired form. The set $D = \{\langle s, \check{t}\rangle \in \mathbb{S}\ast\mathbb{T}_\eta \mid \max t = \max \operatorname{dom}s\}$ is $\eta$-closed. Let $\rho < \eta$ and let $\{\langle s_i, \check{t_i}\rangle \mid i < \rho\} \subset D$ be a decreasing sequence, and assume that $\sup \operatorname{dom}s_i$ is a limit ordinal (otherwise, the sequence is eventually constant). The condition $\langle s_\star, t_\star\rangle$, where $t_\star = \bigcup t_i$ and $s_\star = (\bigcup s_i)^\smallfrown \langle t_\star\rangle$, is a condition in $D$ stronger than $\langle s_i, \check{t_i}\rangle$ for all $i$. The stronger statement, that $\mathbb{S}\ast\mathbb{T}_\eta^{\rho}$ contains an $\eta$-closed dense subset (for all $\rho < \mu$), is proved by the same method. Let us now move toward the proof of \[thm: destructible tree property\]. Let $\kappa_0 < \kappa_1 < \cdots < \kappa_n < \cdots$ be supercompact cardinals. By using Laver’s preparation, we may assume that they are Laver-indestructible, i.e. that for every $n < \omega$ and every $\kappa_n$-directed closed forcing $\mathbb{P}$, $\Vdash_{\mathbb{P}} \check{\kappa_n}$ is supercompact. Let $\mathbb{M} = \prod_{i<\omega} \operatorname{Col}(\kappa_i, <\kappa_{i+1})$ be a full support product of Levy collapses. \[lem: nsp after partial square\] After forcing with $\mathbb{S}\times\mathbb{M}$, the narrow system property holds at $\lambda$. Let $H_S \subseteq {\mathbb{S}}$, $H_M\subseteq {\mathbb{M}}$ be generic filters. Let $G = H_S \times H_M$. Let $H_i\subseteq \operatorname{Col}(\kappa_i, <\kappa_{i+1})$ be the $i$-th coordinate of the generic filter $H_M$. Let $H^i$ be the generic filter for all the parts of ${\mathbb{M}}$ except the $i$-th coordinate, namely $H^i = \langle H_m \mid m \neq i\rangle$. Let $\mathcal{S}\in V[G]$ be a narrow system on $I \times \eta$, with relations $\mathcal{R}$.
Let us assume, towards a contradiction, that $\mathcal{S}$ has no cofinal branch in $V[G]$. Since the set $I$ will play no role later in the proof, we will restrict ourselves to the notationally simpler case in which $I = \lambda$. Let $n$ be large enough so that $\kappa_{n - 2} \geq |\eta \times \mathcal{R}|^+$ in $V^{\mathbb{S}\times \mathbb{M}}$. Let $W_n = V[H_S][H^n]$. Let us force over $W_n$ with ${\mathbb{T}}_{\kappa_n}^{\kappa_{n-2}}$. Let $K = \langle K_i \mid i < \kappa_{n-2}\rangle$ be the sequence of pairwise mutually generic filters. We stress that the product, ${\mathbb{T}}_{\kappa_n}^{\kappa_{n-2}}$, is taken over $V[G]$ and not over $W_n$. Fix $\xi < \kappa_{n-2}$. $W_n[K_\xi]\models \kappa_n$ is supercompact, since: 1. ${\mathbb{S}}\ast {\mathbb{T}}_{\kappa_n}^{\kappa_{n-2}}$ is $\kappa_n$-directed closed, 2. $\prod_{n\leq i < \omega} \operatorname{Col}(\kappa_i, <\kappa_{i+1})$ is $\kappa_n$-directed closed, 3. $\prod_{i < n - 1} \operatorname{Col}(\kappa_i, <\kappa_{i+1})$ has cardinality $<\kappa_n$, where we use the indestructibility for the first two forcings and the Levy-Solovay theorem for the last one. Let $j\colon W_n[K_\xi] \to M$ be a $\lambda$-supercompact embedding with critical point $\kappa_n$. Since $\operatorname{Col}(\kappa_{n-1}, <\kappa_n)$ is $\kappa_n$-c.c., after forcing with $$\operatorname{Col}(\kappa_{n-1}, j(\kappa_n)) = \operatorname{Col}(\kappa_{n-1}, \kappa_n)\times\operatorname{Col}(\kappa_{n-1}, [\kappa_n, j(\kappa_n)))$$ we may extend the elementary embedding $j$ to a $\lambda$-supercompact elementary embedding $\tilde{j}\colon W_n[H_n][K_\xi] \to M[\tilde{j}(H_n)]$. Since $W_n[H_n] = V[G]$, $\mathcal{S}\in W_n[H_n]$, so $\tilde{j}(\mathcal{S})$ is defined. Let $L = \langle L_i \mid i < \kappa_{n-2}\rangle$ be a generic filter for $\operatorname{Col}(\kappa_{n-1}, [\kappa_n, j(\kappa_n)))^{\kappa_{n-2}}$. Note that the forcing that adds $L$ is $\kappa_{n-1}$-closed over $V$.
Let $\delta = \sup \tilde{j}^{\prime\prime} \lambda < \tilde{j}(\lambda)$. Let $\leq_i\in \mathcal{R}$ and let $$b_{i,\epsilon} = \{\langle \alpha, \beta\rangle \mid \tilde{j}(\mathcal{S})\models {\left\langle j(\alpha), \beta\right\rangle} \leq_{i} {\left\langle\delta,\epsilon\right\rangle}\}.$$ Since $|\mathcal{R}|, \eta < \kappa_n = \operatorname{crit}\tilde{j}$, for some $i, \epsilon$ the set $b_{i,\epsilon}$ is a cofinal branch, and moreover $\bigcup_{i\in I, \epsilon < \eta} \{\alpha \mid \exists \beta,\,{\left\langle\alpha,\beta\right\rangle}\in b_{i,\epsilon}\} = \lambda$. We say that forcing with $\operatorname{Col}(\kappa_{n-1}, [\kappa_n, j(\kappa_n))) \times \mathbb{T}_{\kappa_n}$ adds a *system of branches* for $\mathcal{S}$. In particular, the forcing $\operatorname{Col}(\kappa_{n-1}, [\kappa_n, j(\kappa_n)))^{\kappa_{n-2}} \times {\mathbb{T}}_{\kappa_n}^{\kappa_{n-2}}$ introduces $\kappa_{n-2}$ many distinct realizations of the system of branches $\{\dot{b_i} \mid i \in I\}$. Note that in order to claim that no two of these systems of branches are equal, we used only the pairwise mutual genericity. We conclude that in $V[G][K][L]$ there are $\kappa_{n - 2}$ different systems of branches. But in this model $\kappa_{n-2} \geq |\eta\times I|^+$ is still regular and $\operatorname{cf}\lambda \geq \kappa_{n-1}$. Since for every two realizations and every relation $\leq_i\in \mathcal{R}$, the branches $b^\alpha_i, b^\beta_i$ split at some point below $\lambda$, and since there are only $\kappa_{n-2}$ realizations and only $|\mathcal{R}|$ relations in $\mathcal{R}$, there is $\rho_\star < \lambda$ such that for every $\xi \geq \rho_\star$ and for every $\alpha, \beta$, $b^\alpha_i(\xi) \neq b^\beta_i(\xi)$ (where it is possible that only one of them is defined).
By the pigeonhole principle there are $\alpha, \beta < \kappa_{n-2}$ such that $\langle \rho_\star, \xi\rangle \in b^\alpha_i, b^\beta_i$ for the same $\xi, i$, because there are only $|\mathcal{R}|\times\eta$ many possibilities for this pair. This is a contradiction to the choice of $\rho_\star$. We conclude that it is impossible that there was no cofinal branch through $\mathcal{S}$ in the ground model, as wanted. Let $W = V^{\mathbb{S}\times\mathbb{M}}$. \[thm: choice of rho\] There is $\rho < \kappa$ such that forcing with $\operatorname{Col}(\omega, \rho^{+\omega})\times \operatorname{Col}(\rho^{+\omega + 1}, < \kappa)$ over $W$ forces the tree property at $\aleph_{\omega + 1}$. Further collapsing the new $\aleph_1$ introduces a weak square at $\aleph_{\omega+1}$. Assume otherwise. Let $\mathbb{L}_{\rho} = \operatorname{Col}(\omega, \rho^{+\omega})\times \operatorname{Col}(\rho^{+\omega + 1}, < \kappa)$. For every $\rho < \kappa$, let $\dot{T}_\rho$ be an $\mathbb{L}_{\rho}$-name for an Aronszajn tree on $\lambda$. Since $\kappa$ is supercompact, there is $j\colon W \to M$ such that $^{\lambda}M \subseteq M$. By our assumption, $M\models j(\mathbb{L})_{\kappa} \Vdash j(\dot{T})_\kappa$ is an Aronszajn tree. Let $\delta = \sup j^{\prime\prime} \lambda < j(\lambda)$, and let $t = {\left\langle\delta, 0\right\rangle}$. Work in $M$. For every $\alpha < \lambda$, pick a condition $p_\alpha = \langle c_\alpha, q_\alpha\rangle$ such that $$\exists \zeta < j(\kappa^{+\omega}),\, p_\alpha \Vdash_{j(\mathbb{L}_\kappa)} \langle j(\alpha), \zeta\rangle \leq_{j(\dot{T})_\kappa} \check{t}$$ Let us denote this $\zeta$ by $\zeta_\alpha$. We may pick the conditions $p_\alpha$ in such a way that $\langle q_\alpha \mid \alpha < \lambda\rangle$ is a decreasing sequence.
Since $\lambda$ is regular and $|\operatorname{Col}(\omega, \kappa^{+\omega})| = \kappa^{+\omega} < \lambda$, there are a cofinal set $I\subseteq \lambda$, $n < \omega$ and $c_\star\in \operatorname{Col}(\omega, \kappa^{+\omega})$ such that for every $\alpha \in I$, $c_\alpha = c_\star$ and $\zeta_\alpha < j(\kappa^{+n})$. By elementarity, for every $\alpha, \beta\in I$, there are $\gamma, \gamma^\prime < \kappa^{+n}$, $\rho < \kappa$ and $p\in \mathbb{L}_\rho$ such that $p\Vdash_{\mathbb{L}_\rho} \langle \alpha, \gamma\rangle \leq_{\dot{T}_\rho}\langle \beta, \gamma^\prime\rangle$. This defines a narrow system in $W$: the domain of the system is $I \times \kappa^{+n}$; the index set is $\bigcup_{\rho < \kappa} \mathbb{L}_\rho \times \{\rho\}$; and $\langle \alpha, \xi\rangle \leq_{p, \rho} \langle \beta, \zeta\rangle$ iff $p\Vdash_{\mathbb{L}_\rho} \langle \alpha, \xi\rangle \leq_{\dot{T}_\rho} \langle \beta, \zeta\rangle$. By the narrow system property there is a cofinal branch in $W$. Namely, there are $\rho < \kappa$, $p\in\mathbb{L}_\rho$ and $\gamma < \kappa^{+n}$ such that for every $\alpha, \beta\in I$, $p\Vdash_{\mathbb{L}_\rho} \langle \alpha, \gamma\rangle \leq \langle \beta, \gamma\rangle$. This proves that the tree property holds at $\aleph_{\omega+1}$ in the generic extension. For the last claim, note that after collapsing $\aleph_1$, for every $\gamma < \aleph_{\omega+1}$ either $\operatorname{cf}\gamma = \omega$ or $\mathcal{C}_\gamma\neq\emptyset$. Thus, one can complete the partial square to a full $\square_{\aleph_{\omega}, <\aleph_{\omega}}$ by adding cofinal $\omega$-sequences. Indestructible tree property {#sec:indestructible} ============================ In this section we will build three models in which the tree property at a successor of a singular cardinal is indestructible under certain classes of forcing notions.
We start by building a model in which the tree property holds at $\aleph_{\omega^2 + 1}$ and is indestructible under any forcing $\mathbb{P}$ of cardinality less than $\aleph_{\omega^2}$. Similarly, we will construct a model for the tree property at $\aleph_{\omega + 1}$ in which the tree property still holds after any $\sigma$-closed forcing of cardinality $<\aleph_{\omega}$. In the last subsection we will show that it is possible to force the tree property at $\aleph_{\omega+1}$ to be indestructible under any $\aleph_{\omega+1}$-closed forcing notion. Indestructible Tree Property for aleph omega2 + 1 {#subsec: indestructible omega^2} ------------------------------------------------- In this subsection, we will show that in Sinapova’s model for the tree property at $\aleph_{\omega^2+1}$ [@Sinapova2012], the tree property is indestructible under small forcings. We start with some simple observations: Let $\lambda$ be a cardinal such that the tree property holds at $\lambda^+$ and is indestructible by any forcing of the form $\operatorname{Col}(\omega, \mu)$ for $\mu < \lambda$. Then the tree property at $\lambda^+$ is indestructible by any forcing of size $<\lambda$. Moreover, it is enough to assume that for every $\mu < \lambda$ there is $\mu\leq \mu^\prime < \lambda$ such that $\operatorname{Col}(\omega,\mu^\prime)$ forces the tree property at $\lambda^+$. Let $\mathbb{P}$ be a forcing notion of cardinality $<\lambda$. Let $\mu = |\mathbb{P}|$. $\operatorname{Col}(\omega, \mu)$ adds a generic filter for $\mathbb{P}$. Let $G\subseteq \mathbb{P}$ be a generic filter. The quotient forcing $\operatorname{Col}(\omega, \mu)/G$ has cardinality at most $\mu$ and therefore it does not add a cofinal branch to any $\lambda^+$-Aronszajn tree. Since the tree property holds after forcing with $\operatorname{Col}(\omega,\mu)$ and the forcing $\operatorname{Col}(\omega, \mu)/G$ does not add a branch to any Aronszajn tree, the tree property holds in $V[G]$ as well.
Let $\kappa = \kappa_0 < \kappa_1 < \cdots$ be a sequence of $\omega$ supercompact cardinals. Let $\mu = \sup \kappa_n$ and $\lambda = \mu^+$. There is a generic extension in which $\kappa = \aleph_{\omega^2}$, $\lambda = \aleph_{\omega^2 +1}$ and for every $\rho < \mu$, the tree property holds after forcing with $\operatorname{Col}(\omega, \rho)$. In order to prove this theorem, we will work with Sinapova’s model for the tree property at $\aleph_{\omega^2+1}$ from [@Sinapova2012]. We will not need to violate SCH at this point, so the proof is somewhat simpler. The main idea behind the indestructibility is that one can define a projection $f\colon {\mathbb{P}}\to{\mathbb{P}}$ that shifts the Prikry sequence by one step to the left. This way, we can analyze the sets added by a forcing of the form $\operatorname{Col}(\omega, \mu)$ simply by shifting the first element of the Prikry sequence to be above $\mu$. We will show that the quotient forcing between the model $\mathbb{P}\ast \operatorname{Col}(\omega, \mu)$ and the model $\mathbb{P}$ is $\lambda$-centred and therefore cannot add a branch to a $\lambda^+$-Aronszajn tree. We start with a well known fact: \[lem: nsp with collapses\] Let ${\mathbb{M}}= \prod_{n < \omega} \operatorname{Col}(\kappa_n, < \kappa_{n+1})$ - a full support product of Levy collapses. In $V^{{\mathbb{M}}}$ the narrow system property holds at $\lambda$. The proof is similar to the proof of Lemma \[lem: nsp after partial square\]. Work in $V^{\mathbb{M}}$. $\kappa_0 = \kappa$ is still supercompact, by the Laver indestructibility, so we may pick a normal measure $\mathcal{U}$ on $P_\kappa \lambda$. Let $\mathcal{U}_n$ be the projection of $\mathcal{U}$ to $P_\kappa \kappa_n$ for $n < \omega$. Let $j_n\colon W\to N_n \cong \operatorname{Ult}(W, \mathcal{U}_n)$ be the elementary embedding derived from $\mathcal{U}_n$.
Let us construct an $N_n$-generic filter $H_n$ for the forcing $\operatorname{Col}(\kappa^{+\omega + 2}, < j(\kappa))^{N_n}$. This is possible by the standard arguments: the forcing notion $\operatorname{Col}(\kappa^{+\omega + 2}, < j(\kappa))^{N_n}$ is $\kappa^{+n+1}$-closed in $W$ and has only $\kappa^{+n+1}$ many dense subsets in $N_n$. Let us define the forcing $\mathbb{P}$: A condition $p\in \mathbb{P}$ has the following form $$p = \langle d_0, a_0, c_0, \dots, a_{n-1}, c_{n-1}, A_n, C_n, \dots\rangle$$ where: 1. $a_i\in P_\kappa \kappa^{+i}$ and $A_i \in \mathcal{U}_i$. Let $\rho_i = a_i\cap \kappa$ if $i < n$ and $\rho_i = \kappa$ otherwise. 2. $d_0 \in \operatorname{Col}(\omega, \rho_0^{+\omega})$ if $\rho_0 < \kappa$ and otherwise $d_0\in \operatorname{Col}(\omega, \kappa)$. 3. $c_i \in \operatorname{Col}(\rho_i^{+\omega + 2}, \rho_{i+1})$. 4. $C_i\colon A_i \to W$ such that $C_i(a) \in \operatorname{Col}((a\cap \kappa)^{+\omega + 2}, < \kappa)$ for every $a\in A_i$ and $[C_i]_{\mathcal{U}_i}\in H_i$. $n$ is called the length of $p$ and we denote $\operatorname{len}(p)=n$. A condition $p$ is stronger than $q$ ($p \leq q$) if: 1. $\operatorname{len}(p) \geq \operatorname{len}(q)$. 2. $d^p_0 \leq d^q_0$. 3. $a_i^p = a_i^q$ and $c^p_i \leq c^q_i$ for every $i < \operatorname{len}(q)$. 4. $a_i^p \in A^q_i$ and $c_i^p \leq C_i^q(a_i^p)$ for $\operatorname{len}(q)\leq i < \operatorname{len}(p)$. 5. $A_i^p \subseteq A_i^q$ for $i\geq\operatorname{len}(p)$. 6. $C^p_i(a) \leq C^q_i(a)$ for every $a\in A^p_i$. \[thm: tree property at aleph omega\^2 + 1\] $\mathbb{P}$ forces that $\lambda = \aleph_{\omega^2 + 1}$ and that the tree property holds at $\lambda$. We will give a sketch of the proof. Let $p\in\mathbb{P}$ be a condition and let $\dot{T}$ be a name for a $\lambda$-Aronszajn tree. Let $n$ be the length of $p$.
Let $j\colon V\to M$ be a $\lambda$-supercompact embedding, with critical point $\kappa$, which is compatible with $\mathcal{U}_n$ (namely, $\mathcal{U}_n$ is the $P_\kappa \kappa^{+n}$ measure derived from $j$). In $M$, let us look at the forcing $j(\mathbb{P})$ below the condition $p^\smallfrown \langle j^{\prime\prime}\kappa^{+n}\rangle$ (the maximal extension of $p$ that forces the $(n+1)$-th element of the diagonal Prikry sequence to be $j^{\prime\prime}\kappa^{+n}$). This forcing preserves $\lambda$ as a regular cardinal and realizes $j(\dot{T})$ as a $j(\lambda)$-Aronszajn tree. Let us denote $\delta = \sup j^{\prime\prime}\lambda < j(\lambda)$ and let us look at the name of a partial branch $\{\langle j(\alpha), \zeta_\alpha\rangle \mid M^{j(\mathbb{P})} \models \langle j(\alpha), \zeta_\alpha\rangle \leq_{j(\dot{T})} \langle \delta, 0\rangle\}$. Using the Prikry property, we may find a direct extension $q$ of $j(p)$ such that for every $\alpha < \lambda$ the value of $k < \omega$ such that $\zeta_\alpha < j(\kappa^{+k})$ is determined by $q$ up to forcing with the first $m$ lower parts of $\mathbb{P}$ ($m < \omega$). Since there are less than $\lambda$ many possible values for the first $m$ coordinates of the conditions below $q$, there is a cofinal subset $I$ of $\lambda$, a natural number $n_\star < \omega$ large enough and a fixed lower part $a_\star$ of length $n_\star$ such that $$I = \{\alpha < \lambda\mid \exists r \leq j(p), \operatorname{stem}(r) = a_\star, r\Vdash \exists \zeta < j(\kappa^{+n_\star}), \langle \alpha ,\zeta\rangle \leq \langle\delta, 0\rangle\}.$$ In particular, for every $\alpha, \beta\in I$, $M$ thinks that there is a direct extension $q$ of $j(p)$ and ordinals $\zeta, \zeta^\prime < j(\kappa^{+n_\star})$ such that $q\Vdash \langle j(\alpha), \zeta\rangle \leq_{j(\dot{T})} \langle j(\beta), \zeta^\prime\rangle$.
Reflecting this to $V$, we conclude that for every $\alpha, \beta\in I$ there is a condition $q\leq p$ with stem of length $n$ and $\zeta, \zeta^\prime < \kappa^{+n_\star}$ such that $q\Vdash \langle \alpha, \zeta\rangle \leq_{\dot{T}} \langle \beta, \zeta^\prime\rangle$. This defines a narrow system on $I\times \kappa^{+n_\star}$, indexed by the stems of length $n$ which are stronger than the stem of $p$. By the narrow system property, there is a cofinal branch. So there is $I^\prime \subseteq I$, a stem $s_\star$ and an ordinal $\zeta_\star < \kappa^{+n_\star}$ such that for every $\alpha < \beta$ in $I^\prime$ there is a condition $q$ with stem $s_\star$ forcing $\langle \alpha, \zeta_\star\rangle \leq_{\dot{T}} \langle \beta, \zeta_\star\rangle$. Next we will build inductively a sequence of conditions $\langle p_\alpha \mid \alpha \in I^\prime \setminus \rho\rangle$ (for some $\rho < \lambda$), such that for every $\alpha < \beta$, $$p_\alpha \wedge p_\beta \Vdash \langle \alpha, \zeta_\star\rangle \leq_{\dot{T}} \langle \beta, \zeta_\star\rangle$$ The construction is done by induction on $m < \omega$, where at each step we define $p_\alpha \restriction m$ in such a way that for all $\alpha, \beta$ (except for a bounded segment) there is a condition $q$ with $q\restriction m = p_\alpha \restriction m \wedge p_\beta \restriction m$ such that $$q \Vdash \langle \alpha, \zeta_\star\rangle \leq_{\dot{T}} \langle \beta, \zeta_\star\rangle.$$ Extending $p_\alpha\restriction m$ to $p_\alpha\restriction (m+1)$ is done by defining a narrow system corresponding to the possible extensions and using the branch in order to define the relevant value for all $\alpha\in I^\prime$ above the first level that the branch meets. Eventually, we obtain a sequence of conditions $\{p_\alpha \mid \alpha \in I^\prime\setminus \rho\}$, for some $\rho < \lambda$, with $p_\alpha \leq p$.
Using the chain condition of the forcing $\mathbb{P}$, we conclude that there is an extension of $p$ forcing that for unboundedly many ordinals $\alpha < \lambda$, $p_\alpha$ is in the generic filter. But then $\{\langle \alpha, \zeta_\star\rangle \mid p_\alpha \in G\}$ is a cofinal branch in $\dot{T}$ (where $G$ is the generic filter for $\mathbb{P}$). We will introduce a shifted version of the above forcing. For $s < \omega$, we define the forcing $\mathbb{P}_s$. A condition $p\in \mathbb{P}_s$ has the following form $$p = \langle d_0, a_0, c_0, \dots, a_{n-1}, c_{n-1}, A_n, C_n, \dots\rangle$$ where: 1. $a_i\in P_\kappa \kappa^{+ i + s}$ and $A_i \in \mathcal{U}_{i + s}$. Let $\rho_i = a_i\cap \kappa$ if $i < n$ and $\rho_i = \kappa$ otherwise. 2. $d_0 \in \operatorname{Col}(\omega, \rho_0^{+\omega})$ if $\rho_0 < \kappa$ and otherwise $d_0\in \operatorname{Col}(\omega, \kappa)$. 3. $c_i \in \operatorname{Col}(\rho_i^{+\omega + 2}, \rho_{i+1})$. 4. $C_i\colon A_i \to W$ such that $C_i(a) \in \operatorname{Col}((a\cap \kappa)^{+\omega + 2}, < \kappa)$ for every $a\in A_i$ and $[C_i]_{\mathcal{U}_{i + s}}\in H_{i + s}$. We order the conditions in the same way as we did for $\mathbb{P}$. The proof of Theorem \[thm: tree property at aleph omega\^2 + 1\] still works without change and we conclude that: For every $s < \omega$, $\mathbb{P}_s$ forces $\lambda = \aleph_{\omega^2+1}$ and the tree property holds at $\lambda$. In order to show the indestructibility, we need to show that there is a simple connection between the different shifts of the forcing: Let $p\in \mathbb{P}$, $\operatorname{len}(p) = n + 1$, $n\geq 1$. There is a condition $q$ of length one such that $\rho^q_0 = \rho^p_n$, and a projection $\pi\colon \mathbb{P}_n/q\to \mathbb{P}\times \operatorname{Col}(\omega, \rho_{n}^{+\omega})$. Let $\eta = (\rho^p_n)^{+\omega}$.
First, let us note that $\mathbb{P}/p$ is the product of $\mathbb{C} = \operatorname{Col}(\omega, \rho_0^{+\omega}) \times \prod_{i < n} \operatorname{Col}(\rho_i^{+\omega + 2}, <\rho_{i+1})$ and the forcing $\mathbb{P}^{\geq n}$, the set of $n$-upper parts of the conditions of $\mathbb{P}$. Namely, $p\in \mathbb{P}^{\geq n}$ if $p=\langle a_{n}, c_{n}, \dots a_{l-1}, c_{l-1}, A_l, C_l, \dots \rangle$, $l\geq n$ and $a_i, c_i, A_i, C_i$ are as above. Since $|\mathbb{C}| = \rho_{n} \leq \rho_{n}^{+\omega} = \eta$, the forcing $\operatorname{Col}(\omega, \eta)$ projects onto $\mathbb{C}\times \operatorname{Col}(\omega, \eta)$. Let $\pi_0$ be the projection. Let $r = \langle d_0, a_0, c_0 \dots\rangle\in \mathbb{P}_n$, $r\leq q$. So $d_0\in\operatorname{Col}(\omega, (\rho^q_0)^{+\omega})$, but $\rho^q_0 = \rho^p_n$, so $d_0\in \operatorname{Col}(\omega, \eta)$. Let $\pi(r)$ be $$\pi_0(r)^\smallfrown \langle a_0, c_0, a_1, c_1, \dots\rangle\in \mathbb{P}\times \operatorname{Col}(\omega, \eta)$$ Clearly, $\pi$ is a projection. \[lem: quotient between collapses and shift\] Let $G$ be a generic filter for $\mathbb{P}\times \operatorname{Col}(\omega,\eta)$ above $p$. The separative quotient of the quotient forcing $\{r\in \mathbb{P}_n/q \mid \pi(r)\in G\}$ is equivalent to a forcing of cardinality $\eta$. The lemma follows from the representation of $\mathbb{P}\restriction p$ and $\mathbb{P}_n\restriction q$ as products. $\mathbb{P}$ forces the tree property at $\aleph_{\omega^2+1}$ to be indestructible by any forcing of size $<\aleph_{\omega^2}$. It is enough to show that this is the case for $\operatorname{Col}(\omega, \aleph_{\omega\cdot n})$. Recall that $\aleph_{\omega\cdot n} = \rho_n^{+\omega}$, so we are in the situation of Lemma \[lem: quotient between collapses and shift\].
This means that after forcing with $\operatorname{Col}(\omega, \aleph_{\omega\cdot n})$ there is a further forcing $\mathbb{R}$ that restores the tree property (as the iteration is equivalent to forcing with $\mathbb{P}_n$ above some condition). But $|\mathbb{R}| = \aleph_{\omega \cdot n}$, so $\mathbb{R}$ is a small forcing and cannot add a branch to an Aronszajn tree. So every $\aleph_{\omega^2+1}$-tree in the universe after forcing with $\operatorname{Col}(\omega, \aleph_{\omega\cdot n})$ must have a branch. Indestructible Tree Property for aleph omega + 1 {#subsec: indestructible omega closed} ------------------------------------------------ Let us construct a model, very similar to the previous one, in which we have the tree property at $\aleph_{\omega + 1}$ and it is indestructible under any $\sigma$-closed forcing of cardinality $<\aleph_\omega$. The additional restriction on the forcing notions (namely, that we require the forcing to be $\sigma$-closed) implies that those forcing notions cannot collapse $\omega_1$. It is consistent, relative to the existence of $\omega$ many supercompact cardinals, that the tree property holds at $\aleph_{\omega + 1}$ and is indestructible under any $\sigma$-closed forcing of cardinality $<\aleph_{\omega}$. We will start with a model of the narrow system property at $\kappa^{+\omega+1}$ for $\kappa$ a supercompact cardinal. This can be obtained, for example, by forcing with the product of the Levy collapses between the supercompact cardinals as in Lemma \[lem: nsp with collapses\]. Let $\mathcal{U}_0$ be a normal ultrafilter on $\kappa$ derived from a $\kappa^{+\omega + 1}$-supercompact elementary embedding $j\colon V\to M$. Let us show that for every $n < \omega$, there is a large set $A_n\in \mathcal{U}_0$ such that for every $\rho \in A_n$, forcing with $\mathbb{L}_\rho = \operatorname{Col}(\omega, \rho^{+\omega})\times \operatorname{Col}(\rho^{+\omega + 1}, \kappa^{+n})$ forces the tree property at $\kappa^{+\omega + 1}$.
Assume that this is not the case and, for a fixed $n < \omega$, let $\dot{T}_\rho$ be a counterexample for every “bad choice” of $\rho$. Since the set of bad choices is in $\mathcal{U}_0$, $\kappa$ is a bad choice in $M$. Let us force with $j(\mathbb{L})_\kappa$, and let $M[H]$ be the generic extension. Let $T = j(\dot{T})_\kappa^H$ be an Aronszajn tree at $j(\kappa^{+\omega + 1})$. Let $\delta = \sup j^{\prime\prime}\kappa^{+\omega + 1}$ and for every $\alpha < \kappa^{+\omega + 1}$ let $\beta_\alpha < j(\kappa^{+\omega})$ be the element at level $j(\alpha)$ below $\langle \delta, 0\rangle$. Using the same arguments as in the proof of Theorem \[thm: choice of rho\], there is a cofinal set $I \subseteq \kappa^{+\omega + 1}$, a decreasing sequence of conditions $q_\alpha\in \operatorname{Col}(\kappa^{+\omega + 1}, j(\kappa)^{+n})$, a condition $p\in \operatorname{Col}(\omega, \kappa^{+\omega})$ and a natural number $N < \omega$ such that for every $\alpha \in I$ there is $\beta < j(\kappa^{+N})$ such that $(p,q_\alpha)\Vdash \langle j(\alpha), \beta\rangle \leq_{T} \langle \delta, 0\rangle$. Reflecting this back to $V$, we conclude that for every $\alpha, \alpha^\prime \in I$: $$\exists \beta, \beta^\prime < \kappa^{+N},\ \rho < \kappa,\, p\in \mathbb{L}_\rho\text{ such that }p\Vdash_{\mathbb{L}_\rho} \langle \alpha, \beta\rangle \leq_{\dot{T}_\rho} \langle \alpha^\prime, \beta^\prime\rangle.$$ This gives us a narrow system, similar to the one in the proof of Theorem \[thm: choice of rho\].
A branch through this system provides us with an ordinal $\rho$ which was a bad choice, a condition $r\in\mathbb{L}_\rho$, a cofinal set $J\subseteq I$ and, for all $\alpha\in J$, an ordinal $\beta_\alpha < \kappa^{+N}$ such that for all $\alpha, \alpha^\prime \in J$, $$r\Vdash_{\mathbb{L}_\rho} \langle \alpha, \beta_\alpha\rangle, \langle \alpha^\prime, \beta_{\alpha^\prime}\rangle \text{ are compatible.}$$ This is a contradiction to the fact that $\dot{T}_\rho$ was a name for a $\lambda$-Aronszajn tree. Let $A = \bigcap_{n < \omega} A_n$ and let $\rho\in A$. Forcing with $\operatorname{Col}(\omega, \rho^{+\omega})\times \operatorname{Col}(\rho^{+\omega + 1}, \kappa)$ forces the tree property. For every small $\sigma$-closed forcing notion $\mathbb{Q}$ there is $n$ such that $\operatorname{Col}(\rho^{+\omega + 1}, \kappa) \ast \mathbb{Q}$ is a regular subforcing of $\operatorname{Col}(\rho^{+\omega + 1}, \kappa^{+n})$; since the tree property holds after this forcing and since the quotient is small and thus cannot add branches to Aronszajn trees - we are done. Indestructible Tree Property at aleph omega plus one under closed forcings {#subsec: closed} -------------------------------------------------------------------------- Assume that $\langle \kappa_n \mid n < \omega\rangle$ is a sequence of supercompact cardinals. There is a generic extension in which the tree property holds at $\aleph_{\omega+1}$ and is indestructible under any $\aleph_{\omega+1}$-closed forcing. Let $\lambda = (\sup_{n<\omega} \kappa_n)^+$. We would like to force with the iteration $$\operatorname{Col}(\omega,\rho)\ast\operatorname{Col}(\rho^+,<\kappa_0)\ast\operatorname{Col}(\kappa_0,<\kappa_1) \ast \cdots \ast \operatorname{Col}(\lambda, \mu)$$ and show that for every $\mu\geq \lambda$ there is $\rho < \kappa_0$ such that this iteration forces the tree property at $\lambda$.
Then, we claim that for many values of $\mu$, the same $\rho$ works and conclude that in this case, the iteration $$\operatorname{Col}(\omega,\rho)\ast\operatorname{Col}(\rho^+,<\kappa_0)\ast\operatorname{Col}(\kappa_0,<\kappa_1) \ast \cdots \ast \operatorname{Col}(\kappa_m, <\kappa_{m+1})\ast \cdots$$ forces the tree property at $\aleph_{\omega+1}$ in an indestructible way. We know that the claim holds when we use a product instead of an iteration. In this case, the arguments of Neeman show that for all $\mu \geq \lambda$ there is $\rho < \kappa_0$ such that the corresponding forcing forces the tree property. Luckily, there is a natural projection from the product onto the iteration. The termspace forcing for each component in the iteration contains a dense subset in which every element is a partial function from the domain of the collapse into names of ordinals below the range of the collapse. Using the chain condition of the components, the number of names of ordinals is the same as the number of ordinals in each component. So, in order to pull all of this together, we need to know that the quotient forcing of the product modulo the iteration cannot add a branch to a $\lambda$-Aronszajn tree. The following lemma will show that this is indeed the case: Let $\lambda_n$, $n < \omega$, be an increasing sequence of regular cardinals, $\mu = \sup\lambda_n$, $\lambda = \mu^+$. Denote $\lambda_{-1} = \omega$. Let $\mathbb{P}_n$ be a $\mathbb{P}_0\ast \cdots \ast\mathbb{P}_{n-1}$-name for a $\lambda_{n-1}$-closed forcing notion of cardinality $\leq\lambda_n$. $\mathbb{P}_\omega$ is a $\mathbb{P}_0\ast \cdots \ast \mathbb{P}_n \ast \cdots$-name for a $\lambda$-closed forcing (the iteration is with full support). Let $\mathbb{Q}_n$ be a $\lambda_{n-1}$-closed forcing notion of cardinality $\leq\lambda_n$, and $\mathbb{Q}_{\omega}$ a $\lambda$-closed forcing notion.
Assume that for all $n < \omega$, there is a projection $\pi_n\colon \mathbb{Q}_n\to \mathbb{P}_n$ in $V^{\mathbb{P}_0\ast \cdots \ast \mathbb{P}_{n-1}}$ and that there is $\pi_\omega\colon \mathbb{Q}_\omega\to\mathbb{P}_\omega$ in the generic extension by $\mathbb{P}_0\ast\cdots \ast\mathbb{P}_n\ast \cdots$. Let $\mathbb{Q} = \prod_{n \leq \omega} \mathbb{Q}_n$ and let $\mathbb{P} = \mathbb{P}_0\ast \cdots \ast \mathbb{P}_n \ast \cdots \ast\mathbb{P}_\omega$, both with full support. Let $\pi\colon \mathbb{Q}\to \mathbb{P}$ be the corresponding projection and let $\mathbb{R}$ be the quotient forcing. Assume that $\pi$ is $\sigma$-continuous, namely that if $p_n \leq \pi(q_n)$ in $\mathbb{P}$, the $p_n, q_n$ are decreasing and there are limits $p_\omega \leq \bigwedge p_n$, $q_\omega \leq \bigwedge q_n$, then $p_\omega \leq \pi(q_\omega)$. Let $T$ be a $\lambda$-tree in $V^{\mathbb{P}}$. Then $\mathbb{R}$ does not add a branch to $T$. Assume otherwise, and let $\dot{b}$ be an $\mathbb{R}$-name for a new branch. Let us define by induction conditions $p_n\in \mathbb{P}, q_{\eta}\in\mathbb{Q}$ for all $n < \omega$, $\eta \in \bigcup_{n < \omega} \prod_{m < n} \lambda_m$ and ordinals $\zeta_\alpha < \lambda$ such that: 1. $p_{n} \Vdash q_\eta \in \mathbb{R}$ for all $\operatorname{len}(\eta) < n$. 2. $n < m$ implies $p_m \leq p_n$. 3. For $\alpha < \beta < \lambda_n$ and $\eta\in\prod_{m < n} \lambda_m$, $$\langle p_{n+1}, q_{\eta^\smallfrown\langle\alpha\rangle}\rangle \Vdash \dot{b}(\zeta_{\beta}) = \dot{x},\quad \langle p_{n+1}, q_{\eta^\smallfrown\langle\beta\rangle}\rangle \Vdash \dot{b}(\zeta_{\beta}) = \dot{y} \quad\text{ and }\quad p_{n+1} \Vdash \dot{x} \neq \dot{y}.$$ 4. For all $m \geq n$, $p_m\restriction n = p_{n} \restriction n$. For all $\eta \trianglelefteq \eta^\prime$, $q_{\eta} \geq q_{\eta^\prime}$ and $q_{\eta}\restriction (\operatorname{len}(\eta) - 2) = q_{\eta^\prime}\restriction (\operatorname{len}(\eta) - 2)$.
The induction works since, for every $n < \omega$, one can construct a decreasing sequence of conditions $\tilde{p}_\alpha \leq p_n$, $\alpha < \lambda_n$, with fixed first $n$ coordinates, such that for every $\alpha, \beta < \lambda_n$, every $\eta\in \prod_{m < n}\lambda_m$ and every possible extension $q_{\eta^\smallfrown\langle\alpha\rangle}, q_{\eta^\smallfrown\langle\beta\rangle}$ of the lower $n$-part of $\tilde{p}_\alpha$, one can find a further extension in which $q_{\eta^\smallfrown\langle\alpha\rangle}, q_{\eta^\smallfrown\langle\beta\rangle}$ force contradictory information on the branch. Using the closure of the forcing notions and the limitation on their sizes, this can be done. We take $p_{n+1}$ to be a lower bound of the $\tilde{p}_\alpha$ for all $\alpha < \lambda_n$. At the end of the process, we have a condition $p_\omega$ with $p_\omega \leq p_n$ for all $n < \omega$ (well defined, since all but maybe $\mathbb{P}_0$ are $\sigma$-closed and we fix the first coordinate). Using the closure of the upper coordinates of $\mathbb{Q}$, for all $\eta\in\prod_{n < \omega} \lambda_n$, there is $q_\eta\in\mathbb{Q}$ which is stronger than $q_{\eta\restriction n}$ for all $n < \omega$. Let us show that $p_\omega \Vdash q_\eta \in \mathbb{R}$. This is true by our assumption that $\pi$ is $\sigma$-continuous. Finally, the $q_\eta$, $\eta\in\prod_{n < \omega} \lambda_n$, define a set of $|\prod \lambda_n|$ many incompatible evaluations for $\dot{b}$ at a bounded level of the tree - a contradiction. Let us return to the proof of the theorem. Let $\mathbb{P}_n$ be the $n$-th collapse in the iteration and $\mathbb{Q}_n$ be the $n$-th collapse in the product. It is well known that there are projections $\pi_n\colon \mathbb{Q}_n\to\mathbb{P}_n$ as required (defined by identifying the collapse forcing in the product with the termspace forcing for the $n$-th step of the iteration).
Therefore, it is enough to show that for all $\mu$ there is $\rho$ such that the product $\operatorname{Col}(\omega,\rho)\times\operatorname{Col}(\rho^+,<\kappa_0)\times\prod \operatorname{Col}(\kappa_m, <\kappa_{m+1}) \times \operatorname{Col}(\lambda, \mu)$ forces the tree property. Let $\mu \geq \lambda$. There is $\rho < \kappa_0$ such that the forcing $$\mathbb{M} = \operatorname{Col}(\omega,\rho)\times\operatorname{Col}(\rho^+,<\kappa_0)\times \prod_m \operatorname{Col}(\kappa_m, <\kappa_{m+1}) \times \operatorname{Col}(\lambda,\mu)$$ forces the tree property at $\lambda$. For all $n$, $\kappa_n$ is indestructibly supercompact. Thus, we can start by forcing with $\operatorname{Col}(\lambda, \mu)$, and work in $W = V^{\operatorname{Col}(\lambda,\mu)}$, in which, for every $n < \omega$, the cardinal $\kappa_n$ is still indestructibly supercompact. In $W$, we apply Neeman’s proof and conclude that there is $\rho < \kappa_0$ such that $\operatorname{Col}(\omega,\rho)\times\operatorname{Col}(\rho^+,<\kappa_0)\times \prod_m \operatorname{Col}(\kappa_m, <\kappa_{m+1})$ forces the tree property at $\lambda$. Combining these two lemmas, for every $\mu \geq \lambda$ there is $\rho < \kappa_0$ such that forcing with $\mathbb{L}_\rho \ast \operatorname{Col}(\lambda,\mu)$, where $$\mathbb{L}_\rho = \operatorname{Col}(\omega,\rho)\ast \operatorname{Col}(\rho^+, <\kappa_0)\ast \operatorname{Col}(\kappa_0, <\kappa_1)\ast \cdots$$ forces the tree property at $\lambda$. For a proper class of values of $\mu$, the same $\rho$ works. Let us call the elements of this class the *good* cardinals. Let $\rho_\star$ be this value and let us force with $\mathbb{L}_{\rho_\star}$. Now, let $\mathbb{P}$ be a $\lambda$-closed forcing in $V^{\mathbb{L}_{\rho_\star}}$. There is a good $\mu$, large enough, such that there is a projection from $\operatorname{Col}(\lambda,\mu)$ onto $\mathbb{P}$.
Since the tree property holds at $\lambda$ in $V^{\mathbb{L}_{\rho_\star}\ast \operatorname{Col}(\lambda,\mu)}$ and the quotient forcing $\nicefrac{\operatorname{Col}(\lambda,\mu)}{\mathbb{P}}$ cannot add a branch to an Aronszajn tree, the tree property holds at $\lambda$ in $V^{\mathbb{L}_{\rho_\star}\ast \mathbb{P}}$ as well. Open questions ============== In Section \[subsec: indestructible omega\^2\] we proved that the tree property at $\aleph_{\omega^2+1}$ can be made indestructible under any small forcing poset. Is it consistent that the tree property at $\aleph_{\omega+1}$ is indestructible under any forcing of cardinality $<\aleph_{\omega}$? On the other hand, one can ask whether it is possible to extend the results of Theorem \[thm: destructible tree property\]. \[question: destructible with preserving cardinals\] Is it consistent that the tree property holds at $\aleph_{\omega+1}$ but there is a small forcing (of cardinality $<\aleph_{\omega}$) that does not collapse cardinals and adds an Aronszajn tree? Note that in all the currently known models for the tree property at $\aleph_{\omega+1}$, adding a single Cohen real does not add an Aronszajn tree at $\aleph_{\omega+1}$. So we ask the following stronger version of Question \[question: destructible with preserving cardinals\]: Is it consistent that the tree property holds at $\aleph_{\omega+1}$ but adding a Cohen real adds an Aronszajn tree? This question is particularly interesting when we assume that $\aleph_{\omega}$ is strong limit, since then adding a Cohen real cannot add a weak square for $\aleph_{\omega}$. Shai Ben-David and Saharon Shelah, *Souslin trees and successors of singular cardinals*, Ann. Pure Appl. Logic **30** (1986), no. 3, 207–217. James Cummings, Matthew Foreman, and Menachem Magidor, *Squares, scales and stationary reflection*, J. Math. Log. **1** (2001), no. 1, 35–98.
Menachem Magidor and Saharon Shelah, *The tree property at successors of singular cardinals*, Arch. Math. Logic **35** (1996), 385–404. Itay Neeman, *The tree property up to [$\aleph_{\omega+1}$]{}*, J. Symb. Log. **79** (2014), no. 2, 429–459. Assaf Rinot, *A cofinality-preserving small forcing may introduce a special [A]{}ronszajn tree*, Arch. Math. Logic **48** (2009), no. 8, 817–823. Dima Sinapova, *The tree property and the failure of the singular cardinal hypothesis at $\aleph_{\omega^2}$*, J. Symb. Log. **77** (2012), 729–1056.
--- abstract: 'Let $r(G,H)$ be the smallest integer $N$ such that for any $2$-coloring (say, red and blue) of the edges of $K_n$, $n{\geqslant}N$ there is either a red copy of $G$ or a blue copy of $H$. Let $K_n-K_{1,s}$ be the complete graph on $n$ vertices from which the edges of $K_{1,s}$ are dropped. In this note we present exact values for $r(K_m-K_{1,1},K_n-K_{1,s})$ and new upper bounds for $r(K_m,K_n-K_{1,s})$ in numerous cases. We also present some results for the Ramsey number of Wheels versus $K_n-K_{1,s}$.' address: 'Université Montpellier 2, Institut de Mathématiques et de Modélisation de Montpellier, Case Courrier 051, Place Eugène Bataillon, 34095 Montpellier Cedex 05, France.' author: - Jonathan Chappelon - Luis Pedro Montejano - Jorge Luis Ramírez Alfonsín date: 'October 14, 2013' title: 'Upper bounds and values for $r(K_m,K_n-K_{1,s})$ and $r(K_m-e,K_n-K_{1,s})$' --- Introduction ============ Let $G$ and $H$ be two graphs. Let $r(G,H)$ be the smallest integer $N$ such that for any $2$-coloring (say, red and blue) of the edges of $K_n$, $n{\geqslant}N$ there is either a red copy of $G$ or a blue copy of $H$. Let $K_n-K_{1,s}$ be the complete graph on $n$ vertices from which the edges of $K_{1,s}$ are dropped. We notice that $K_n-K_{1,1}=K_n-e$ (the complete graph on $n$ vertices from which an edge is dropped) and $K_n-K_{1,2}=K_n-P_3$ (the complete graph on $n$ vertices from which a path on three vertices is dropped). In this note we investigate $r(K_m-e,K_n-K_{1,s})$ and $r(K_m,K_n-K_{1,s})$ for a variety of integers $m,n$ and $s$. In the next section, we prove our main result (Theorem \[mainth\]). In Section \[sec:1\], we will present exact values for $r(K_m-e,K_n-K_{1,s})$ when $n=3$ or $4$ and some values of $m$ and $s$. In Section \[sec:2\], new upper bounds for $r(K_m,K_n-P_3)$ for several integers $m$ and $n$ are given. In Section \[sec:3\], we give new upper bounds for $r(K_m,K_n-K_{1,s})$ when $m,s{\geqslant}3$ and several values of $n$. 
In Section \[sec:4\], we present some equalities for $r(K_4,K_n-K_{1,s})$ extending the validity of some results given in [@BE89]. Finally, in Section \[sec:7\], we will present results concerning the Ramsey number of the Wheel $W_5$ versus $K_n-K_{1,s}$. We present exact values for $r(W_5,K_6-K_{1,s})$ when $s=3$ and $4$ and the equalities $r(W_5,K_n-K_{1,s})=r(W_5,K_{n-1})$ when $n=7$ and $8$ for some values of $s$. Some known values/bounds for specific $r(K_m,K_n)$ needed for this paper are given in the Appendix. Main result =========== Let $G$ be a graph and denote by $G^v$ the graph obtained from $G$ by adding a new vertex $v$ adjacent to all the vertices of $G$. Our main result is the following. \[mainth\] Let $n$ and $s$ be positive integers. Let $G_1$ be any graph and let $N$ be an integer such that $N{\geqslant}r(G_1^v,K_n)$. If $\left\lceil\frac{(s+1)(N-n)}{n}\right\rceil{\geqslant}r(G_1,K_{n+1}-K_{1,s})$ then $r(G_1^v,K_{n+1}-K_{1,s}){\leqslant}N$. Let $K_N$ be a complete graph on $N$ vertices and consider any 2-coloring of the edges of $K_N$ (say, red and blue). We shall show that there is either a red $G_1^v$ or a blue $K_{n+1}-K_{1,s}$. Since $N{\geqslant}r(G_1^v,K_n)$, $K_N$ has a red $G_1^v$ or a blue $K_n$. In the former case we are done, so let us suppose that $K_N$ admits a blue $K_n$, which we will denote by $H$. We have two cases. Case 1) There exists a vertex $u\in V(K_N\setminus H)$ such that $|N_H^r(u)|{\leqslant}s$, where $N_H^r(u)$ is the set of vertices in $H$ that are joined to $u$ by a red edge. In this case, we may construct the blue graph $G'=K_{n+1}-K_{1,|N_H^r(u)|}$; this is done by taking $H$ (containing $n$ vertices) and the vertex $u$ together with the blue edges between $u$ and the vertices of $H$. Now, since $|N_H^r(u)|{\leqslant}s$, the graph $K_{n+1}-K_{1,s}$ is contained in $G'$ (and thus we found a blue $K_{n+1}-K_{1,s}$). Case 2) $|N_H^r(u)|> s$ for every vertex $u\in V(K_N\setminus H)$.
Then we have that the number of red edges $\{x,y\}$ with $x\in V(H)$ and $y\in V(K_N\setminus H)$ is at least $(N-n)(s+1)$. So, by the pigeonhole principle, there exists at least one vertex $v\in V(H)$ such that $d_r(v){\geqslant}\left\lceil\frac{(s+1)(N-n)}{n}\right\rceil$, where $d_r(v)=|N_r(v)|$ and $N_r(v)$ denotes the set of vertices incident to $v$ with a red edge. But since $\left\lceil\frac{(s+1)(N-n)}{n}\right\rceil{\geqslant}r(G_1,K_{n+1}-K_{1,s})$, the graph induced by $N_r(v)$ has either a blue $K_{n+1}-K_{1,s}$ (and we are done) or a red $G_1$ to which we add the vertex $v$ to find a red $G_1^v$ as desired. Some exact values for $r(K_m-e,K_n-K_{1,s})$ {#sec:1} ============================================ Let $s{\geqslant}1$ be an integer. We clearly have that $$r(K_3-e,K_m){\leqslant}r(K_3-e,K_{m+1}-K_{1,s}).$$ Since $$r(K_3-e,K_{m+1}-K_{1,s}){\leqslant}r(K_3-e,K_{m+1}-e)$$ and (see [@Rad]) $$r(K_3-e,K_{m})= r(K_3-e,K_{m+1}-e)=2m-1$$ then $$\hbox{$r(K_3-e,K_{m+1}-K_{1,s})=2m-1$ for each $s=1,\dots, m-1$.}$$ Case $m=4$. ----------- \[exactvalues\] \(1) $r(K_4-e,K_5-K_{1,3})=11$. \(2) $r(K_4-e,K_6-K_{1,s})=16$ for any $3{\leqslant}s{\leqslant}4$. \(3) $r(K_4-e,K_7-K_{1,s})=21$ for any $4{\leqslant}s{\leqslant}5$. \(1) It is clear that $r(K_4-e,K_4){\leqslant}r(K_4-e,K_5-K_{1,3})$. Since $r(K_4-e,K_4)=11$ (see [@Rad]), we get $11{\leqslant}r(K_4-e,K_5-K_{1,3})$. We will now show that $r(K_4-e,K_5-K_{1,3}){\leqslant}11$. By taking $N=11$, $s=3$ and $n=4$, we have that $\lceil\frac{(s+1)(N-n)}{n}\rceil=\lceil\frac{4\times 7}{4}\rceil=7= r(K_3-e,K_5-K_{1,3})$ and so, by Theorem \[mainth\], we have $r(K_4-e,K_5-K_{1,3}){\leqslant}11$, and the result follows. The proofs for (2) and (3) are analogous. We just need to check that the conditions of Theorem \[mainth\] are satisfied by taking $N=r(K_4-e,K_5)=16$ for (2) and $N=r(K_4-e,K_6)=21$ for (3). We notice that Corollary \[exactvalues\] (1) is claimed in [@Hen] without a proof.
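The numerical hypothesis of Theorem \[mainth\] lends itself to a mechanical check. The following Python sketch is our own illustration (it is not part of the paper); the known Ramsey values it needs are supplied by hand from the text above.

```python
from math import ceil

def mainth_condition(N, n, s, r_small):
    """Hypothesis of Theorem [mainth]: given N >= r(G_1^v, K_n), the bound
    r(G_1^v, K_{n+1}-K_{1,s}) <= N follows once
    ceil((s+1)(N-n)/n) >= r(G_1, K_{n+1}-K_{1,s}) (passed in as r_small)."""
    return ceil((s + 1) * (N - n) / n) >= r_small

# Corollary (1): N=11, n=4, s=3, with r(K_3-e, K_5-K_{1,3}) = 7
print(mainth_condition(11, 4, 3, 7))    # True, so r(K_4-e, K_5-K_{1,3}) <= 11
# the later bound r(K_4-e, K_7-K_{1,3}) <= 22: N=22, n=6, s=3,
# with r(K_3-e, K_7-K_{1,3}) = 11
print(mainth_condition(22, 6, 3, 11))   # True
```

Note that $N=21$ would fail in the second check, since $\lceil 4\times 15/6\rceil=10<11$; this is why the corollary only yields the upper bound $22$.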
The best known upper bounds for $r(K_4-e,K_7-K_{1,3})$ and $r(K_4-e,K_7-P_3)$ are obtained by applying the following classical recursive formula: $$\label{recursive2} r(K_m-e,K_n-K_{1,s}){\leqslant}r(K_{m-1}-e,K_n-K_{1,s})+r(K_m-e,K_{n-1}-K_{1,s}).$$ Hence $$r(K_4-e,K_7-K_{1,3}){\leqslant}r(K_3-e,K_7-K_{1,3})+ r(K_4-e,K_6-K_{1,3})=11+16=27$$ and $$\begin{array}{ll} r(K_4-e,K_7-P_3) & {\leqslant}r(K_3-e,K_7-P_3)+ r(K_4-e,K_6-P_3)\\ & {\leqslant}11+r(K_4-e,K_6-e)=11+17=28. \end{array}$$ We are able to improve the above upper bounds. \[cort3\] In each of the following cases the previous best known upper bound is given in parentheses. \(1) $21{\leqslant}r(K_4-e,K_7-K_{1,3}){\leqslant}22$ (27). \(2) $21{\leqslant}r(K_4-e,K_7-P_3){\leqslant}27$ (28). \(1) It is clear that $r(K_4-e,K_6){\leqslant}r(K_4-e,K_7-K_{1,3})$. Since $r(K_4-e,K_6)=21$ (see [@Rad]), $21{\leqslant}r(K_4-e,K_7-K_{1,3})$. We will now show that $r(K_4-e,K_7-K_{1,3}){\leqslant}22$. By taking $N=22$, $s=3$ and $n=6$, we have that $\lceil\frac{(s+1)(N-n)}{n}\rceil=\lceil\frac{4\times 16}{6}\rceil=11= r(K_3-e,K_7-K_{1,3})$ and so, by Theorem \[mainth\], we have that $r(K_4-e,K_7-K_{1,3}){\leqslant}22$, and the result follows. The proof for (2) is similar. We just need to check that the conditions of Theorem \[mainth\] are satisfied by taking $N=27$. Case $m=5$. ----------- The following equality is claimed in [@Hen] without a proof. $r(K_5-e,K_5-K_{1,3})=19$. It is clear that $r(K_5-e,K_4){\leqslant}r(K_5-e,K_5-K_{1,3})$. It is known that $r(K_5-e,K_4)=19$ (see [@Rad]), hence $19{\leqslant}r(K_5-e,K_5-K_{1,3})$. We will now show that $r(K_5-e,K_5-K_{1,3}){\leqslant}19$. By Corollary \[exactvalues\], we have that $r(K_4-e,K_5-K_{1,3})=11$. Then, by taking $N=19$, $s=3$ and $n=4$, we have that $\lceil\frac{(s+1)(N-n)}{n}\rceil=\lceil\frac{4\times 15}{4}\rceil=15> r(K_4-e,K_5-K_{1,3})=11$ and so, by Theorem \[mainth\], we have $r(K_5-e,K_5-K_{1,3}){\leqslant}19$, and the result follows.
$r(K_5-e,K_6-K_{1,s})=r(K_5-e,K_5)$ for $s=3,4$. It is clear that $r(K_5-e,K_5){\leqslant}r(K_5-e,K_6-K_{1,s})$ for all $s{\geqslant}1$. Let us now prove that $r(K_5-e,K_5){\geqslant}r(K_5-e,K_6-K_{1,s})$ for $s=3,4$. Since $r(K_5-e,K_6-K_{1,4}){\leqslant}r(K_5-e,K_6-K_{1,3})$, it is sufficient to prove that $r(K_5-e,K_6-K_{1,3}){\leqslant}r(K_5-e,K_5)$. To this end, let $N=r(K_5-e,K_5){\geqslant}30$. Since $N{\geqslant}30$, taking $s=3$ and $n=5$ we obtain that $\lceil\frac{(s+1)(N-n)}{n}\rceil{\geqslant}\lceil\frac{4\times 25}{5}\rceil=20>17{\geqslant}r(K_4-e,K_6-K_{1,3})$ (see [@Rad] for the last inequality). So, by Theorem \[mainth\], we obtain that $r(K_5-e,K_6-K_{1,3}){\leqslant}N=r(K_5-e,K_5)$. We notice that in the case $s=2$, if $r(K_5-e,K_5){\geqslant}32$ then we may obtain that $r(K_5-e,K_6-K_{1,2})=r(K_5-e,K_5)$ (by using the same arguments as above). It is known that $r(K_5-e,K_5){\geqslant}30$. New upper bounds for $r(K_m,K_n-P_3)$ {#sec:2} ===================================== In this section we will apply our main result to give new upper bounds for $r(K_m,K_n-P_3)$ in numerous cases. The value of $r(K_m,K_n-P_3)$ has already been studied in some cases. In [@B11], it is proved that $r(K_5,K_5-P_3)=25$ and in [@Clan] it is shown that $r(K_4,K_5-P_3)=r(K_4,K_4)=18$. Let us first notice that, by taking $G_1=K_m$ in Theorem \[mainth\], we obtain \[mainc\] Let $N$ be an integer such that $N{\geqslant}r(K_{m+1},K_n)$. If $\left\lceil\frac{(s+1)(N-n)}{n}\right\rceil{\geqslant}r(K_m,K_{n+1}-K_{1,s})$ then $r(K_{m+1},K_{n+1}-K_{1,s}){\leqslant}N$.
The case when $m=3$ has already been studied in [@BBH98] where it is proved that $$\hbox{$r(K_3,K_{n+1}-K_{1,s})=r(K_3,K_n)$ if $n{\geqslant}s+1>(n-1)(n-2)/(r(3,n)-n)$.}$$ As a consequence, we have $$\begin{aligned} \label{eq1} \hbox{$r(K_3,K_{6}-P_3)=r(K_3,K_5)$ (with $n=5$ and $s=2$),}\nonumber\\ \hbox{$r(K_3,K_{7}-K_{1,3})=r(K_3,K_6)$ (with $n=6$ and $s=3$),}\\ \hbox{$r(K_3,K_{10}-K_{1,s})=r(K_3,K_9)$ (with $n=9$) for any $2{\leqslant}s{\leqslant}9$,}\nonumber\\ \hbox{$r(K_3,K_{11}-K_{1,s})=r(K_3,K_{10})$ (with $n=10$) for any $3{\leqslant}s{\leqslant}10$.}\nonumber\end{aligned}$$ Results on $r(K_m,K_5-P_3)$ --------------------------- In [@BE89 Theorem 4], it was shown that if $n{\geqslant}m{\geqslant}3$ and $m+n{\geqslant}8$, then $$\label{eqer} \hbox{$r(K_{m+1}-K_{1,m-p},K_{n+1}-K_{1,n-q})=r(K_m,K_n)$ where $p=\lceil\frac{m}{n-1}\rceil$ and $q=\lceil\frac{n}{m-1}\rceil$.}$$ This result implies the following \[erdos\] Let $n{\geqslant}m{\geqslant}3$ and $m+n{\geqslant}8$ and let $p=\lceil\frac{m}{n-1}\rceil$ and $q=\lceil\frac{n}{m-1}\rceil$. Then, $$r(K_{m},K_{n+1}-K_{1,n-q})=r(K_{m+1}-K_{1,m-p},K_n)=r(K_m,K_n).$$ We clearly have $$r(K_m,K_n){\leqslant}r(K_{m},K_{n+1}-K_{1,n-q}){\leqslant}r(K_{m+1}-K_{1,m-p},K_{n+1}-K_{1,n-q})\stackrel{\eqref{eqer}}{=}r(K_m,K_n)$$ and thus $r(K_{m},K_{n+1}-K_{1,n-q})=r(K_m,K_n)$ (the proof for $r(K_{m+1}-K_{1,m-p},K_n)=r(K_m,K_n)$ is similar). 
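The parameters $p$ and $q$ in Corollary \[erdos\] are simple ceilings, so the bookkeeping can be made explicit in a short Python helper (our own illustration; the function name is hypothetical).

```python
from math import ceil

def erdos_pq(m, n):
    """p, q from [@BE89, Theorem 4], valid for n >= m >= 3 and m+n >= 8:
    r(K_m, K_{n+1}-K_{1,n-q}) = r(K_{m+1}-K_{1,m-p}, K_n) = r(K_m, K_n),
    with p = ceil(m/(n-1)) and q = ceil(n/(m-1))."""
    assert n >= m >= 3 and m + n >= 8
    return ceil(m / (n - 1)), ceil(n / (m - 1))

print(erdos_pq(4, 4))  # (2, 2): r(K_4, K_5-K_{1,2}) = r(K_4, K_5-P_3) = r(K_4, K_4)
print(erdos_pq(3, 5))  # (1, 3): r(K_3, K_6-K_{1,2}) = r(K_3, K_6-P_3) = r(K_3, K_5)
```

The second call recovers the first equality of the list above ($n=5$, $s=2$), since $n-q=5-3=2$ and $K_6-K_{1,2}=K_6-P_3$.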
By taking $m=n=4$ (and thus $q=2$) in Corollary \[erdos\] we have that $$r(K_4,K_5-P_3)=r(K_4,K_4)=18.$$ It is also known [@B11] that $$r(K_5,K_5-P_3)=r(K_5,K_4)=25,$$ and, by Corollary \[erdos\], we have $$\begin{aligned} \label{eq1a} \hbox{$r(K_6,K_{4}-P_3)=r(K_6,K_3)=18$ (with $m=5$ and $n=3$),}\nonumber\\ \hbox{$r(K_7,K_{4}-P_3)=r(K_7,K_3)=23$ (with $m=6$ and $n=3$),}\nonumber\\ \hbox{$r(K_8,K_{4}-P_3)=r(K_8,K_3)=28$ (with $m=7$ and $n=3$),}\\ \hbox{$r(K_9,K_{4}-P_3)=r(K_9,K_3)=36$ (with $m=8$ and $n=3$),}\nonumber\\ \hbox{$r(K_{10},K_{4}-P_3)=r(K_{10},K_3){\leqslant}43$ (with $m=9$ and $n=3$).}\nonumber\end{aligned}$$ The best known upper bounds of $r(K_m,K_5-P_3)$ for $m{\geqslant}6$ are obtained by applying the following classical recursive formula: $$\label{recursive} r(K_m,K_n-K_{1,s}){\leqslant}r(K_{m-1},K_n-K_{1,s})+r(K_m,K_{n-1}-K_{1,s}).$$ By using \eqref{recursive}, we obtain $$r(K_6,K_5-P_3){\leqslant}r(K_5,K_5-P_3)+r(K_6,K_4-P_3)=25+r(K_6,K_3)=25+18=43$$ $$r(K_7,K_5-P_3){\leqslant}r(K_6,K_5-P_3)+r(K_7,K_4-P_3)= 43+23=66$$ $$\begin{array}{ll} r(K_8,K_5-P_3) &{\leqslant}r(K_7,K_5-P_3)+r(K_8,K_4-P_3)\\ & {\leqslant}r(K_6,K_5-P_3)+r(K_7,K_4-P_3)+28=43+23+28=94 \end{array}$$ $$r(K_9,K_5-P_3){\leqslant}r(K_8,K_5-P_3)+r(K_9,K_4-P_3)=94+36=130$$ $$\begin{array}{ll} r(K_{10},K_5-P_3) &{\leqslant}r(K_9,K_5-P_3)+r(K_{10},K_4-P_3)\\ & {\leqslant}r(K_8,K_5-P_3)+r(K_9,K_4-P_3)+43=94+36+43=173 \end{array}$$ We are able to improve all the above upper bounds. \[cor1\] - $r(K_6,K_5-P_3){\leqslant}41$. - $r(K_7,K_5-P_3){\leqslant}61$. - $r(K_8,K_{5}-P_3){\leqslant}85$. - $r(K_9,K_{5}-P_3){\leqslant}117$. - $r(K_{10},K_{5}-P_3){\leqslant}159$. \(1) It is known that $r(K_6,K_4){\leqslant}41$. Then, by taking $N=41$, $s=2$ and $n=4$, we have that $\lceil\frac{(s+1)(N-n)}{n}\rceil=\lceil\frac{3\times 37}{4}\rceil=28> r(K_5,K_5-P_3)=25$ and so, by Corollary \[mainc\], the result follows. The proofs for the rest of the cases are analogous.
We just need to check that the conditions are satisfied by taking $N=61{\geqslant}r(K_7,K_4)$ for (2), $N=85>84{\geqslant}r(K_8,K_4)$ for (3), $N=117>115{\geqslant}r(K_9,K_4)$ for (4) and $N=159>149{\geqslant}r(K_{10},K_4)$ for (5). By applying recursion to $r(K_{11},K_{5}-P_3)$ one may obtain that $r(K_{11},K_{5}-P_3){\leqslant}224$ if the old known values are used in the recursion, and it can be improved to $r(K_{11},K_{5}-P_3){\leqslant}210$ by using the new values given in Corollary \[cor1\]. The latter beats the upper bound $r(K_{11},K_{5}-P_3){\leqslant}215$ obtained via Corollary \[mainc\]. We can also use Corollary \[mainc\] to give the following equality. If $37{\leqslant}r(K_6,K_4)$ then $r(K_6,K_5-P_3)=r(K_6,K_4)$. It is clear that $r(K_6,K_4){\leqslant}r(K_6,K_5-P_3)$. We show that $r(K_6,K_5-P_3){\leqslant}r(K_6,K_4)$. Let $N=r(K_6,K_4){\geqslant}37$. Since $N{\geqslant}37$, by taking $s=2$ and $n=4$ we have $\lceil\frac{(s+1)(N-n)}{n}\rceil{\geqslant}\lceil\frac{3\times 33}{4}\rceil=25=r(K_5,K_5-P_3)$, and so, by Corollary \[mainc\], $r(K_6,K_5-P_3){\leqslant}N=r(K_6,K_4)$. It is known that $36{\leqslant}r(K_6,K_4)$. In the case when $r(K_6,K_4)=36$ the above result might not hold. Results on $r(K_m,K_6-P_3)$ --------------------------- Since $r(K_3,K_5)=14$, by \eqref{eq1} we have $r(K_3,K_6-P_3)=14$. So, by \eqref{recursive}, we have $$r(K_4,K_6-P_3){\leqslant}r(K_3,K_6-P_3)+r(K_4,K_5-P_3)=14+18=32.$$ Moreover, it is known that the upper bound is strict if both terms on the right side are even, which is our case, and so $r(K_4,K_6-P_3){\leqslant}31$. \[ccc1\] - $25{\leqslant}r(K_4,K_6-P_3){\leqslant}27$. - $r(K_5,K_6-P_3){\leqslant}49$. - $r(K_6,K_6-P_3){\leqslant}87$. \(1) We clearly have that $25=r(K_4,K_5){\leqslant}r(K_4,K_6-P_3)$. It is known that $r(K_4,K_5)=25$. We take $N=27>r(K_4,K_5)$, $s=2$ and $n=5$. So, $\lceil\frac{(s+1)(N-n)}{n}\rceil=\lceil\frac{3\times 22}{5}\rceil=14= r(K_3,K_6-P_3)$ and so, by Corollary \[mainc\], $r(K_4,K_6-P_3){\leqslant}27$.
The proofs for (2) and (3) are analogous. We just need to check that the conditions of Corollary \[mainc\] are satisfied by taking $N=49{\geqslant}r(K_5,K_5)$ for (2) and $N=87{\geqslant}r(K_6,K_5)$ for (3). The recursive formula now gives (using the new values above) $r(K_{7},K_{6}-P_3){\leqslant}148$ (before, by using the old values, it gave $158$). This new upper bound beats the upper bound $r(K_{7},K_{6}-P_3){\leqslant}149$ obtained by Corollary \[mainc\]. Results on $r(K_m,K_n-P_3)$ for a variety of $m$ and $n$ -------------------------------------------------------- For each $3{\leqslant}m{\leqslant}5$ and each $7{\leqslant}n{\leqslant}16$, we have that $r(K_m,K_n-P_3){\leqslant}u(m,n)$, where the value of $u(m,n)$ is given in the $(m,n)$ entry of the table below (the value in brackets is the best previously known upper bound). We just need to check that the conditions of Corollary \[mainc\] are satisfied by taking $N=41{\geqslant}r(K_4,K_6)$ for (1), $N=87{\geqslant}r(K_5,K_6)$ for (2), $N=61{\geqslant}r(K_4,K_7)$ for (3), $N=143{\geqslant}r(K_5,K_7)$ for (4), $N=222>216{\geqslant}r(K_5,K_8)$ for (5), $N=115{\geqslant}r(K_4,K_9)$ for (6), $N=47>42{\geqslant}r(K_3,K_{10})$ for (7), $N=154>149{\geqslant}r(K_4,K_{10})$ for (8), $N=52>51{\geqslant}r(K_3,K_{11})$ for (9), $N=199>191{\geqslant}r(K_4,K_{11})$ for (10), $N=61>59{\geqslant}r(K_3,K_{12})$ for (11), $N=253>238{\geqslant}r(K_4,K_{12})$ for (12), $N=70>69{\geqslant}r(K_3,K_{13})$ for (13), $N=313>291{\geqslant}r(K_4,K_{13})$ for (14), $N=80>78{\geqslant}r(K_3,K_{14})$ for (15), $N=383>349{\geqslant}r(K_4,K_{14})$ for (16), $N=91>88{\geqslant}r(K_3,K_{15})$ for (17), $N=466>417{\geqslant}r(K_4,K_{15})$ for (18).
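As a cross-check on the arithmetic, the recursive chain of bounds for $r(K_m,K_5-P_3)$ obtained from \eqref{recursive} can be reproduced in a few lines of Python; this sketch is our own illustration, with the seed values typed in by hand from the text.

```python
# seed values quoted in the text
r_K4P3 = {6: 18, 7: 23, 8: 28, 9: 36, 10: 43}  # r(K_m, K_4-P_3); 43 is an upper bound
r_K5P3 = {5: 25}                               # r(K_5, K_5-P_3) = 25

# the recursion: r(K_m, K_5-P_3) <= r(K_{m-1}, K_5-P_3) + r(K_m, K_4-P_3)
for m in range(6, 11):
    r_K5P3[m] = r_K5P3[m - 1] + r_K4P3[m]

print(r_K5P3)  # {5: 25, 6: 43, 7: 66, 8: 94, 9: 130, 10: 173}
```

These are exactly the purely recursive bounds $43, 66, 94, 130, 173$ that Corollary \[cor1\] then improves to $41, 61, 85, 117, 159$.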
Some bounds for $r(K_m,K_n-K_{1,s})$ when $s{\geqslant}3$ {#sec:3} ========================================================= Here, we will focus our attention on upper bounds for $r(K_m,K_n-K_{1,3})$, which yields upper bounds for $r(K_m,K_n-K_{1,s})$ when $s{\geqslant}4$ since $$\hbox{$r(K_m,K_n-K_{1,s}){\leqslant}r(K_m,K_n-K_{1,3})$ for all $s{\geqslant}4$.}$$ Results on $r(K_m,K_6-K_{1,3})$. -------------------------------- In [@BE89] it was proved that $r(K_5,K_6-K_{1,3})=r(K_5,K_5){\leqslant}49$. So by \eqref{recursive} we have $$r(K_6,K_6-K_{1,3}){\leqslant}r(K_5,K_6-K_{1,3})+r(K_6,K_5-K_{1,3})=49+41=90.$$ For each $6{\leqslant}m{\leqslant}15$, we have that $r(K_m,K_6-K_{1,3}){\leqslant}u(m)$, where the value of $u(m)$ is given in the table below (the value in brackets is the best previously known upper bound). It follows by Corollary \[mainc\] and by taking $N$ as the best known upper bound of $r(K_m,K_5)$ for each $m=6,\dots ,15$. We notice that in case (1), by using similar arguments as above, we could prove that $r(K_6,K_6-K_{1,3})=r(K_6,K_5)$ if $66{\leqslant}r(K_6,K_5)$. Results on $r(K_m,K_7-K_{1,3})$ ------------------------------- In [@BBH98] it was proved that $r(K_3,K_7-K_{1,3})=18$. Indeed, since $r(K_3,K_6)=18$, by \eqref{eq1} we have $r(K_3,K_7-K_{1,3})=18$. So, by \eqref{recursive}, we have $$r(K_4,K_7-K_{1,3}){\leqslant}r(K_3,K_7-K_{1,3})+r(K_4,K_6-K_{1,3})=18+25=43.$$ For each $4{\leqslant}m{\leqslant}11$, we have that $r(K_m,K_7-K_{1,3}){\leqslant}u(m)$, where the value of $u(m)$ is given in the table below (the value in brackets is the best previously known upper bound). It follows by Corollary \[mainc\], by taking $s=3$ and $N$ equal to the best known upper bound for $r(K_m,K_6)$ when $m=5,6,7,8,9,11$ and $N=1175>1171{\geqslant}r(K_{10},K_6)$ when $m=10$. For instance, for (1) we take $N=41{\geqslant}r(K_4,K_6)$, $s=3$ and $n=6$.
Then $\lceil\frac{(s+1)(N-n)}{n}\rceil=\lceil\frac{4\times 35}{6}\rceil=24> r(K_3,K_7-K_{1,3})$ and, by Corollary \[mainc\], $r(K_4,K_7-K_{1,3}){\leqslant}41$. More equalities {#sec:4} =============== From \eqref{eqer} we have that $r(K_4,K_{n+1}-K_{1,s})=r(K_4,K_n)$ if $s{\geqslant}n-\lceil\frac{n}{3}\rceil$. The latter yields the following equalities. $$\begin{array}{ll} \hbox{$r(K_4,K_7-K_{1,s})=r(K_4,K_6)$ \ \ if $s{\geqslant}4$,}& \hbox{$r(K_4,K_8-K_{1,s})=r(K_4,K_7)$ \ \ if $s{\geqslant}5$,}\\ \hbox{$r(K_4,K_9-K_{1,s})=r(K_4,K_8)$ \ \ if $s{\geqslant}5$,}& \hbox{$r(K_4,K_{10}-K_{1,s})=r(K_4,K_9)$ \ if $s{\geqslant}6$,}\\ \hbox{$r(K_4,K_{11}-K_{1,s})=r(K_4,K_{10})$ if $s{\geqslant}6$,}& \hbox{$r(K_4,K_{12}-K_{1,s})=r(K_4,K_{11})$ if $s{\geqslant}7$,}\\ \hbox{$r(K_4,K_{13}-K_{1,s})=r(K_4,K_{12})$ if $s{\geqslant}8$,}& \hbox{$r(K_4,K_{14}-K_{1,s})=r(K_4,K_{13})$ if $s{\geqslant}8$,}\\ \hbox{$r(K_4,K_{15}-K_{1,s})=r(K_4,K_{14})$ if $s{\geqslant}9$,}& \hbox{$r(K_4,K_{16}-K_{1,s})=r(K_4,K_{15})$ if $s{\geqslant}10$.}\\ \end{array}$$ We are able to extend all these equalities to further values of $s$. \[coo1\] $$\begin{array}{ll} \hbox{(1) $r(K_4,K_7-K_{1,s})=r(K_4,K_6)$ for $s=3$.} & \hbox{(2) $r(K_4,K_8-K_{1,s})=r(K_4,K_7)$ for $s=3,4$.}\\ \hbox{(3) $r(K_4,K_9-K_{1,s})=r(K_4,K_8)$ for $s=4$.}& \hbox{(4) $r(K_4,K_{10}-K_{1,s})=r(K_4,K_9)$ for $s=4,5$.}\\ \hbox{(5) $r(K_4,K_{11}-K_{1,s})=r(K_4,K_{10})$ for $s=5$.}& \hbox{(6) $r(K_4,K_{12}-K_{1,s})=r(K_4,K_{11})$ for $s=6$.}\\ \hbox{(7) $r(K_4,K_{13}-K_{1,s})=r(K_4,K_{12})$ for $s=6,7$.}& \hbox{(8) $r(K_4,K_{14}-K_{1,s})=r(K_4,K_{13})$ for $s=7$.}\\ \hbox{(9) $r(K_4,K_{15}-K_{1,s})=r(K_4,K_{14})$ for $s=8$.}& \hbox{(10) $r(K_4,K_{16}-K_{1,s})=r(K_4,K_{15})$ for $s=9$.}\\ \end{array}$$ \(1) Since $r(K_4,K_6){\geqslant}36$ it follows that $r(K_4,K_7-K_{1,3}){\geqslant}36$ and by \eqref{eq1}, we have $r(K_3,K_7-K_{1,3})=r(K_3,K_6)=18$. Let us take $N=r(K_4,K_6){\geqslant}36$, $s=3$ and $n=6$.
So, $\lceil\frac{(s+1)(N-n)}{n}\rceil{\geqslant}\lceil\frac{4\times 30}{6}\rceil=20>r(K_3,K_7-K_{1,3})=18$ and the result follows by Corollary \[mainc\]. The proofs for the rest of the cases are analogous. We just need to check that the conditions of Corollary \[mainc\] are satisfied by taking $N=r(K_4,K_7){\geqslant}49$ and checking that $r(K_3,K_8-K_{1,3})=r(K_3,K_7)=23$ for (2), $N=r(K_4,K_8){\geqslant}58$ and checking that $r(K_3,K_9-K_{1,4})=r(K_3,K_8)=28$ for (3) and so on. We notice that, by using the same arguments as above, we could improve cases (5) and (7) by showing that $r(K_4,K_{11}-K_{1,4})=r(K_4,K_{10})$ when $r(K_4,K_{10})\neq 92$ and $r(K_4,K_{13}-K_{1,5})=r(K_4,K_{12})$ when $r(K_4,K_{12})\neq 128$. In view of Corollary \[coo1\], we may pose the following question: Let $n{\geqslant}7$ be an integer. For which integers $s$ does the equality $r(K_4,K_n-K_{1,s})=r(K_4,K_{n-1})$ hold? Or, more ambitiously, in view of [@BE89 Theorem 4], we may pose the following: Let $m{\geqslant}4$ and $n{\geqslant}7$ be integers. For which integers $s{\leqslant}n-1$ does the equality $r(K_m,K_n-K_{1,s})=r(K_m,K_{n-1})$ hold? Wheels versus $K_n-K_{1,s}$ {#sec:7} =========================== In this section we obtain further related results by applying Theorem \[mainth\] to other graphs. Indeed, we may consider $G_1$ as the cycle on $n-1$ vertices $C_{n-1}$, and thus $G^v_1$ will be the wheel $W_n$ by taking the new vertex $v$ incident to all the vertices of $C_{n-1}$. \(1) $r(W_5,K_6-K_{1,s})=27$ for $s=3,4,5$. \(2) $r(W_5,K_7-K_{1,s})=r(W_5,K_6)$ for $s=4,5,6$. \(3) $r(W_5,K_8-K_{1,s})=r(W_5,K_7)$ for $s=4,5,6,7$. \(1) It is clear that $r(W_5,K_5){\leqslant}r(W_5,K_6-K_{1,s})$ for any $1{\leqslant}s{\leqslant}5$. Since $r(W_5,K_5)=27$ (see [@Rad]), $27{\leqslant}r(W_5,K_6-K_{1,s})$. We will now show that $r(W_5,K_6-K_{1,s}){\leqslant}27$ for $3{\leqslant}s{\leqslant}5$.
By taking $N=27$, $s{\geqslant}3$ and $n=5$, we have that $\lceil\frac{(s+1)(N-n)}{n}\rceil{\geqslant}\lceil\frac{4\times 22}{5}\rceil=18= r(C_4,K_6){\geqslant}r(C_4,K_6-K_{1,s})$ and so, by Theorem \[mainth\], we have $r(W_5,K_6-K_{1,s}){\leqslant}27$, and the result follows. The proofs for (2) and (3) are analogous. We just need to check that the conditions of Theorem \[mainth\] are satisfied by taking $N=r(W_5,K_6){\geqslant}33$ for (2) and $N=r(W_5,K_7){\geqslant}43$ for (3) (see [@Rad] for the lower bounds of $r(W_5,K_6)$ and $r(W_5,K_7)$). [99]{} L. Boza, The Ramsey Number $r(K_5-P_3,K_5)$, [*The Electronic Journal of Combinatorics*]{} http://www.combinatorics.org, P90, 18 (2011), 10 pages. S. Brandt, G. Brinkmann, T. Harmuth, All Ramsey numbers $r(K_3,G)$ for connected graphs of order $9$, [*The Electronic Journal of Combinatorics*]{} http://www.combinatorics.org, R7, 5 (1998), 20 pages. S.A. Burr, P. Erdős, R.J. Faudree, R.H. Schelp, On the Difference between Consecutive Ramsey Numbers, [*Utilitas Mathematica*]{} [**35**]{} (1989) 115–118. M. Clancy, Some Small Ramsey Numbers, [*Journal of Graph Theory*]{} [**1**]{} (1977) 89–91. G.R.T. Hendry, Ramsey numbers for graphs with five vertices, [*J. Graph Theory*]{} [**13**]{}(2) (1989) 245–248. S.P. Radziszowski, Small Ramsey numbers, [*Electron. J. Combin.*]{} [**1**]{} (1994), Dynamic Survey [**1**]{}, 30 pp (electronic). S.P. Radziszowski and D.L. Kreher, Upper Bounds for Some Ramsey Numbers $R(3,k)$, [*Journal of Combinatorial Mathematics and Combinatorial Computing*]{} [**4**]{} (1988) 207–212. Appendix ======== The following table was obtained from [@Rad].
--- author: - 'Andrea Boskovic[^1] ().' - 'Qinyi Chen[^2] ().' - 'Dominik Kufel[^3] ().' - 'Zijie Zhou[^4] ().' title: 'Online Learning and Matching for Resource Allocation Problems[^5]' --- [^1]: Department of Statistics, Amherst College, Amherst, MA 01002 [^2]: Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095 [^3]: Department of Physics, University College London, Gower St, Bloomsbury, London WC1E 6BT, United Kingdom [^4]: Department of Mathematics, Purdue University, 610 Purdue Mall, West Lafayette, IN 47907 [^5]: Submitted to the editors November 17, 2019. Completed under the guidance of Anna Ma, Department of Mathematics, University of California, Irvine () and Xinshang Wang, DAMO Academy, Alibaba US ().
--- abstract: 'The Kupershmidt operator is a key tool to extend a Leibniz algebra by its representation. In this paper, we investigate several structures related to Kupershmidt operators on Leibniz algebras and introduce (dual) KN-structures on a Leibniz algebra associated to a representation. It is proved that Kupershmidt operators and dual KN-structures can generate each other under certain conditions. It is also shown that a solution of the strong Maurer-Cartan equation on the twilled Leibniz algebra gives rise to a dual KN-structure. Finally, the notions of $r-n$ structures, RBN-structures and $\mathcal{B}N$-structures on Leibniz algebras are thoroughly studied and shown to bear interesting interrelations.' address: - 'Department of Mathematics, Zhejiang University of Science and Technology, Hangzhou, Zhejiang 310023, China' - 'Department of Mathematics, North Carolina State University, Raleigh, NC 27695, USA' author: - Qinxiu Sun - 'Naihuan Jing$^*$' title: Kupershmidt operators and related structures on Leibniz algebras --- [^1] Introduction ============ Leibniz algebras were introduced by Loday [@L; @LT] in the study of the periodicity in algebraic K-theory by forgetting the anti-symmetry in Lie algebras. Numerous works have been devoted to various aspects of Leibniz algebras in both mathematics and physics [@BW; @C; @DMS; @DW; @VKO; @KM; @KW]. First, the Kupershmidt operator (or $\mathcal{O}$-operator) was originally meant to generalize the well-known classical Yang-Baxter equation (YBE) on Lie algebras [@K] and provides a solution of the YBE on a larger Lie algebra [@B1]. For associative algebras, the Kupershmidt operators give rise to dendriform algebras, which have played an important role in bialgebra theory [@B2] and operads [@BBGN]. Recently, Sheng and Tang [@ST] introduced the notion of Leibniz-dendriform algebras in their study of the algebraic structure underlying the Kupershmidt operator and cohomologies of Kupershmidt operators on Leibniz algebras.
The Leibniz-dendriform algebra captures the essential algebraic structure underlying a Kupershmidt operator, which in turn leads to a Leibniz structure on itself and most importantly a solution of the classical Leibniz Yang-Baxter equation. Moreover, the twilled Leibniz algebras were also considered and Maurer-Cartan elements of the associated graded Lie algebra (gLa) were given therein. Secondly, Rota-Baxter operators were first introduced by Baxter in his study of fluctuation theory in probability [@Ba]. They have been found useful in many contexts, for example in the quantum analogue of Poisson geometry; see [@G; @ZBG] for more information. Thirdly, Nijenhuis operators on Lie algebras have been studied in [@D] and [@FF]. From the perspective of deformations of Lie algebras, Nijenhuis operators canonically give rise to trivial deformations [@NR]. Nijenhuis operators have also been studied on pre-Lie algebras [@WSBL], and Poisson-Nijenhuis structures appeared in completely integrable systems [@MM] and were further studied in [@KM; @KR]. The $r-n$ structure over a Lie algebra was studied in [@RAH]. Recently, Hu, Liu and Sheng [@HLS] studied the (dual) KN-structure as a generalization of the $r-n$ structure. The associative analogues of Poisson-Nijenhuis structures have also been considered in [@LBS; @U]. In the present work, we would like to study the algebraic structures related to Kupershmidt operators on a Leibniz algebra as well as their relations with other related structures such as the analogous Rota-Baxter operators and criteria in terms of YBEs. The paper is organized as follows. In Section 2, we give some elementary results on Leibniz algebras. The (dual-) Nijenhuis pair is studied in Section 3. Moreover, a Nijenhuis pair can generate a trivial deformation. In Section 4, we consider the notions of (dual) KN-structures. Their properties are also explored. In Section 5, the relationship between Nijenhuis and Kupershmidt operators is characterized.
More importantly, Kupershmidt operators and dual KN-structures are shown to generate each other under some conditions. In Section 6, we prove that a solution of the strong Maurer-Cartan equation on the twilled Leibniz algebra can generate a dual KN-structure. Finally, the notions of $r-n$ structures, RBN-structures and $\mathcal{B}N$-structures on Leibniz algebras and their relationships are studied. Preliminaries on Leibniz algebras ================================== We briefly review some elementary notions on Leibniz algebras [@L; @ST; @B; @FM]. Let $\mathfrak g$ be a complex [*Leibniz algebra*]{}, i.e., the vector space $\mathfrak g$ is equipped with a complex bilinear product $[\ , \ ]: \mathfrak g\times \mathfrak g\longrightarrow\mathfrak g$ satisfying the (left) Leibniz property: for $x_0, x_1, x_2\in\mathfrak g$ $$[x_0, [x_1, x_2]]=[[x_0, x_1], x_2]+[x_1,[x_0, x_2]].$$ To indicate the bilinear operation, $\mathfrak g$ is often written as $(\mathfrak g, [\ ,\ ])$. Let $V$ be a complex vector space; the endomorphism algebra $\mathrm{End}(V)$ becomes the Lie algebra $\mathfrak{gl}(V)$ under the usual bracket operation: $[f, g]=fg-gf$, $f, ~g\in\mathrm{End}(V)$. Let $\rho^L, \rho^R: \mathfrak g\longrightarrow \mathfrak{gl}(V)$ be two linear maps satisfying the following property: $$\begin{aligned} %\label{e:rep} \rho^{L}([x_0,x_1])&=[\rho^{L}(x_0),\rho^{L}(x_1)], \qquad \rho^{R}([x_0,x_1])=[\rho^{L}(x_0),\rho^{R}(x_1)], \\ \rho^{R}(x_1)\rho^{L}(x_0)&=-\rho^{R}(x_1)\rho^{R}(x_0),~~~\forall~x_0,~x_1\in \mathfrak g,\end{aligned}$$ then the triple $(V, \rho^L, \rho^R)$ is called a [*representation*]{} of the Leibniz algebra $\mathfrak g$. In essence, $V$ is a representation of the Leibniz algebra $\mathfrak g$ iff $V\oplus \mathfrak g$ is a Leibniz algebra under the bracket $[w_0+x_0, w_1+x_1]=[w_0, x_1]+[x_0, w_1]+[x_0, x_1]$, where $[w_0, x_1]=\rho^R(x_1)w_0, [x_0, w_1]=\rho^L(x_0)w_1$ and $[w_0, w_1]=0, \forall~ w_0, ~w_1\in V$.
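Since the bracket is bilinear, the left Leibniz identity only needs to be checked on basis triples. The following Python sketch does this for a toy two-dimensional Leibniz algebra of our own choosing (with $[e_0,e_0]=e_1$ the only nonzero product); it is an illustration, not an example taken from the paper.

```python
import itertools

# toy 2-dim Leibniz algebra: basis e0, e1, only nonzero product [e0, e0] = e1;
# vectors are coefficient pairs (a0, a1) with respect to (e0, e1)
def bracket(x, y):
    return (0.0, x[0] * y[0])

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

basis = [(1.0, 0.0), (0.0, 1.0)]

# left Leibniz identity: [x,[y,z]] = [[x,y],z] + [y,[x,z]]
for x, y, z in itertools.product(basis, repeat=3):
    lhs = bracket(x, bracket(y, z))
    rhs = add(bracket(bracket(x, y), z), bracket(y, bracket(x, z)))
    assert lhs == rhs
print("left Leibniz identity holds on all basis triples")
```

Note that this bracket is not antisymmetric ($[e_0,e_0]=e_1\neq 0$), so it is a genuine Leibniz algebra that is not a Lie algebra.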
Given a representation $(V, \rho^L, \rho^R)$ of the Leibniz algebra $\mathfrak g$, the triple $(V^{*} ,(\rho^{L})^{*} , -(\rho^{L})^{*}-(\rho^{R})^{*})$ is also a representation of $\mathfrak g$, called the [*dual representation*]{} associated with $V$. Here the dual map $\psi^*$ of a linear map $\psi:\mathfrak g\longrightarrow V$ is defined as usual: $\langle \psi^{*}(y)w^{*},w_0\rangle=\langle w^{*}, \psi(y)w_0\rangle$ for all $y\in\mathfrak g,~ w^*\in V^*,~ w_0\in V$. Let $L$ (resp. $R$) be the left (resp. right) multiplication operator associated to $\mathfrak g$, i.e. $$L(x_0)x_1 = R(x_1)x_0 = [x_0, x_1],~~ \forall ~x_0,x_1\in \mathfrak g.$$ Then $(\mathfrak g, L, R)$ is a priori a representation of $\mathfrak g$ called the [*regular representation*]{}. By the above remark, $(\mathfrak{g}^{*},L^{*},-L^{*}-R^{*})$ is the dual representation. To define a nontrivial Leibniz algebra structure on $V\oplus\mathfrak g$, where $V$ is a representation of the Leibniz algebra $\mathfrak g$, we need the notion of Kupershmidt operators [@K]. A linear map $K:V\longrightarrow \mathfrak g$ is called a [*Kupershmidt operator*]{} associated to the representation $(V,\rho^{L},\rho^{R})$ if for $u, v\in V$ $$\label{ks} [K(u), K(v)]=K(\rho^{L}(K(u))v+\rho^{R}(K(v))u).$$ Writing $u\lhd^{K} v=\rho^{L}(K(u))(v)~~ \hbox{and}~~u\rhd^{K} v=\rho^{R}(K(v))(u)$, the Kupershmidt condition becomes $[K(u), K(v)]=K(u\lhd^K v+u\rhd^K v)$, and $V$ becomes a Leibniz algebra with the bracket: $[u, v]^K:=u\rhd^K v+ u\lhd^K v$. Furthermore, the triple $(V,\rhd^{K},\lhd^{K})$ is actually a Leibniz-dendriform algebra (see [@ST] for details). In view of this, $V$ is referred to as the [*sub-adjacent Leibniz algebra*]{} of $(V,\rhd^{K},\lhd^{K})$ and denoted by $(V_{K}, [\ ,\ ]^{K})$.
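For the regular representation ($\rho^L=L$, $\rho^R=R$) the defining identity reads $[K(u),K(v)]=K([K(u),v]+[u,K(v)])$, and it can be tested numerically. The sketch below is again our own toy example (not from the paper): it reuses the two-dimensional Leibniz algebra with $[e_0,e_0]=e_1$ and takes $K(a_0,a_1)=(0,a_0+a_1)$, whose image lies in the span of $e_1$ and hence brackets trivially, which is what makes the identity hold.

```python
import itertools

# toy 2-dim Leibniz algebra: only nonzero product [e0, e0] = e1
def bracket(x, y):
    return (0.0, x[0] * y[0])

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

# candidate Kupershmidt operator on the regular representation:
# K(a0*e0 + a1*e1) = (a0 + a1)*e1, so im(K) is contained in span(e1)
def K(u):
    return (0.0, u[0] + u[1])

basis = [(1.0, 0.0), (0.0, 1.0)]
for u, v in itertools.product(basis, repeat=2):
    lhs = bracket(K(u), K(v))
    rhs = K(add(bracket(K(u), v), bracket(u, K(v))))
    assert lhs == rhs
print("K satisfies [K(u),K(v)] = K([K(u),v] + [u,K(v)])")
```

Both sides vanish identically here because the image of $K$ kills the bracket; the check is deliberately the simplest possible instance of \eqref{ks}.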
The Leibniz algebra $(V_{K}, [\ ,\ ]^{K})$ has a natural representation $(\varrho_{K}^{L},~\varrho_{K}^{R})$ over $\mathfrak g$ given by $$\varrho_{K}^{L}(v)x=[K(v),x]-K(\rho^{R}(x)v),~~\varrho_{K}^{R}(v)x=[x,K(v)]-K(\rho^{L}(x)v)$$ for any $x\in \mathfrak g,~v\in V$. Then $K$ is also a Kupershmidt operator for the regular representation. Given the Kupershmidt operator $K$, the space $\mathfrak g\oplus V$ becomes a Leibniz algebra with $$\{x_0+v_0,x_1+v_1\}_{K}=[x_0,x_1]+\varrho_{K}^{L}(v_0)x_1+\varrho_{K}^{R}(v_1)x_0+\rho^{L}(x_0)v_1+\rho^{R}(x_1)v_0+[v_0,v_1]^{K}$$ for any $x_0,~x_1\in \mathfrak g,~v_0,~v_1\in V$. Let $\mathfrak g$ be a Leibniz algebra. Following [@ST], an element $\pi=\sum_ia_i\otimes b_i\in\mathfrak g\otimes\mathfrak g$ is called a [*Leibniz r-matrix*]{} if it satisfies the classical Yang-Baxter equation (YBE): $$[\pi_{12}, \pi_{13}]+[\pi_{12}, \pi_{23}]+[\pi_{23}, \pi_{13}]=0,$$ where $\pi_{12}=\sum_ia_i\otimes b_i\otimes 1\in \mathfrak g^{\otimes 3}$ etc. It is known that if $\pi$ is a symmetric solution of the Leibniz YBE, then the map $\pi^{\sharp} : \mathfrak g^{*} \longrightarrow \mathfrak g$ given by $$\langle \pi^{\sharp}(\alpha_0),\alpha_1\rangle=\pi(\alpha_0,\alpha_1),~~\forall ~\alpha_0,~\alpha_1\in \mathfrak g^{*}$$ is a Kupershmidt operator on $(\mathfrak g^{*},L^{*},-L^{*}-R^{*})$. If the Leibniz algebra $\mathfrak g$ decomposes into a direct sum of two subspaces: ${\mathfrak g}=\mathfrak g_{1}\oplus \mathfrak g_{2}$ such that $\mathfrak g_1$ (resp. $\mathfrak g_2$) is a representation of $\mathfrak g_2$ (resp. $\mathfrak g_1$) through the bracket operation, then it is called a [*twilled Leibniz algebra*]{} and denoted by $\mathfrak g_{1}\bowtie \mathfrak g_{2}$. In such a case, we also call $(\mathfrak g_1, \mathfrak g_2, \rho_1^L, \rho_1^R, \rho_2^L, \rho_2^R)$ a [*matched pair*]{} of Leibniz algebras. One can also define the Leibniz cohomology of $\mathfrak g$ as follows.
Let $\hbox{Hom}^{n}(\mathfrak g,\mathfrak g) =\hbox{Hom}(\otimes^{n}\mathfrak g,\mathfrak g)$ and $C^{*}(\mathfrak g,\mathfrak g)=\oplus_{n\geq 1}\hbox{Hom}^{n}(\mathfrak g,\mathfrak g)$. It is known that $C^{*}(\mathfrak g,\mathfrak g)$ is a graded Lie algebra (gLa) with the Balavoine bracket $\{ \ ,\ \}^{B}$ as follows: $$\{\varphi_1,\varphi_2\}^{B}=\varphi_1\bar{\circ}\varphi_2-(-1)^{mn}\varphi_2\bar{\circ}\varphi_1$$ where $\varphi_1\bar{\circ}\varphi_2\in \hbox{Hom}^{m+n+1}(\mathfrak g,\mathfrak g)$ is given by $$\varphi_1\bar{\circ}\varphi_2=\sum_{k=1}^{m+1}(-1)^{(k-1)n}\varphi_1\circ_{k}\varphi_2,$$ and $\circ_{k}$ is as follows: $$\begin{aligned} &&\varphi_1\circ_{k}\varphi_2(x_{0},\cdots,x_{m+n})\\&=& \sum_{\sigma\in S_{(k-1,n)}}(-1)^{\sigma}\varphi_1(x_{\sigma(0)},\cdots,x_{\sigma(k-2)},\varphi_2(x_{\sigma(k-1)},\cdots,x_{\sigma(k+n-1)},x_{k+n}),x_{k+n+1} ,\cdots,x_{m+n}),\end{aligned}$$ for any $\varphi_1\in \hbox{Hom}^{m+1}(\mathfrak g,\mathfrak g),~\varphi_2\in \hbox{Hom}^{n+1}(\mathfrak g,\mathfrak g)$. Furthermore, $(C^{*}(\mathfrak g,\mathfrak g), [ \ , \ ]_{\mu},d)$ is a differential graded Lie algebra (dgLa), where $[ \varphi_1 , \varphi_2 ]_{\mu}=(-1)^{m}\{\{\mu,\varphi_1\}^{B},\varphi_2\}^{B}$ for $\varphi_1\in \mathrm{Hom}^{m+1}(\mathfrak g, \mathfrak g)$, and the differential $d$ is defined by $d(\varphi)=\{\mu,\varphi\}^{B}$, where $\mu$ is the Leibniz bracket of $\mathfrak g$. See [@B; @FM] for more details. Let $\mathfrak g_{1}\bowtie \mathfrak g_{2}$ be a twilled Leibniz algebra and denote the Leibniz bracket on $\mathfrak g_{2}\oplus\mathfrak g_{1}$ by $\mu_2$; then $(C^{*}(\mathfrak g_{2}\oplus\mathfrak g_1,\mathfrak g_{2}\oplus\mathfrak g_1),[ \ , \ ]_{\mu_2},d)$ is a dgLa with $d (\varphi)=\{\mu_2,\varphi\}^{B}$.
Obviously, $C^{*}(\mathfrak g_1,\mathfrak g_2)=\oplus_{n\geq 1}\hbox{Hom}^{n}(\mathfrak g_1,\mathfrak g_2)=\oplus_{n\geq 1}\hbox{Hom}(\otimes^{n}\mathfrak g_1,\mathfrak g_2)$ is a subalgebra of $C^{*}(\mathfrak g_{2}\oplus\mathfrak g_1,\mathfrak g_{2}\oplus\mathfrak g_1)$. Suppose $\Theta:\mathfrak g_{1}\longrightarrow \mathfrak g_{2}$ is a linear map. The equations $$d\Theta+\frac{1}{2}[\Theta,\Theta]_{\mu_2}=0,~~d \Theta=\frac{1}{2}[\Theta,\Theta]_{\mu_2}=0$$ are called the [*Maurer-Cartan equation*]{} and the [*strong Maurer-Cartan equation*]{}, respectively. Then $\Theta$ is a solution of the Maurer-Cartan equation iff $$\label{2.1} [\Theta(y),\Theta(z)]_{2}+\rho_{1}^{L}(y)\Theta(z)+\rho_{1}^{R}(z)\Theta(y) =\Theta(\rho_{2}^{L}\Theta(y)(z)+\rho_{2}^{R}\Theta(z)(y))+\Theta([y,z]_{1}),$$ for any $y,~z\in \mathfrak g_{1}$. Here $[\ , \ ]_i$ refers to the Leibniz bracket on $\mathfrak g_i~(i=1, 2)$. It is known [@ST] that $\Theta$ is a solution of the strong Maurer-Cartan equation if and only if \eqref{2.1} and the following extra condition hold: $$\label{2.2} \Theta([y,z]_{1})=\rho_{1}^{L}(y)\Theta(z)+\rho_{1}^{R}(z)\Theta(y).$$ The following result is a special case of the above discussion (cf. [@ST]) with $\mathfrak g_1=\mathfrak g$ and $\mathfrak g_2=V_K$. [**Theorem 2.1.**]{} [@ST] Let $K$ be a Kupershmidt operator on a Leibniz algebra $\mathfrak g$ with a representation $V$, and let $ \Theta: \mathfrak g \longrightarrow V$ be a linear map. Then $\Theta$ satisfies the strong Maurer-Cartan equation on the twilled algebra $\mathfrak g\bowtie V_{K}$ if and only if $$\begin{aligned} \label{2.3} \Theta([y,z])&=\rho^{L}(y)\Theta(z)+\rho^{R}(z)\Theta(y),\\ \label{2.4} [\Theta(y),\Theta(z)]^{K}&=\Theta(\varrho_{K}^{L}(\Theta(y))z+\varrho_{K}^{R}(\Theta(z))y).\end{aligned}$$ (Dual-)Nijenhuis pair ===================== Let $(V,\rho^{L},\rho^{R})$ be a representation of a Leibniz algebra $\mathfrak g$.
Suppose $\omega : \mathfrak g\otimes \mathfrak g\longrightarrow \mathfrak g$ is a bilinear map, and $\varpi^{L},\varpi^{R}: \mathfrak g \longrightarrow \mathfrak{gl}(V)$ are two linear maps. We can use these maps to deform the Leibniz bracket $[\ , \ ]$ and the action maps $\rho^L, \rho^R$ as follows. For real $t\geq 0$ and $x_0, x_1\in\mathfrak g$, put $$[x_0,x_1]_{t} = [x_0, x_1] + t\omega(x_0, x_1),$$ $$\rho^{R}_{t}(x_0) = \rho^{R}(x_0) + t\varpi^{R}(x_0),~~\rho^{L}_{t}(x_0) = \rho^{L}(x_0) + t\varpi^{L}(x_0).$$ The triple $(\omega, \varpi^L, \varpi^R)$ is said to [*generate an infinitesimal deformation*]{} of the representation $(V,\rho^{L},\rho^{R} )$ of $\mathfrak g$ if for each $t$, $(\mathfrak g, [\ ,\ ]_{t} )$ is a Leibniz algebra and $(V, \rho^{L}_{t},\rho^{R}_{t})$ is its representation. In particular, $V$ can be viewed as its own trivial deformation with $(\omega, \varpi^L, \varpi^R)=(0, 0, 0)$. The following notion generalizes the corresponding one for Lie algebras studied in [@HLS]. [**Definition 3.1.**]{} Two deformations $(V,\rho^{L}_{t},\rho^{R}_{t})$, $(V,\rho^{L'}_{t},\rho^{R'}_{t})$ of the representation $(V, \rho^L, \rho^R)$ of $\mathfrak g$ are [*equivalent*]{} if there is an isomorphism $(I_{\mathfrak g} + tN, I_{V} + tS)$ from $(V, \rho^{L'}_{t},\rho^{R'}_{t})$ to $(V,\rho^{L}_{t},\rho^{R}_{t})$, where $N\in\mathrm{End}(\mathfrak g)$ and $S\in\mathrm{End}(V)$, such that for any $x_0,~x_1\in \mathfrak g$, $$\begin{aligned} \label{deform1} (I_{\mathfrak g} + tN)([x_0,x_1]_{t}^{'} )& = [(I_{\mathfrak g} + tN)(x_0),(I_{\mathfrak g} + tN)(x_1)]_{t} , \\ \label{deform2} (I_{V} + tS)\rho^{L'}_{t}(x_0)& = \rho^{L}_{t}((I_{\mathfrak g} + tN)(x_0)) (I_{V} + tS),\\ \label{deform3} (I_{V} + tS)\rho^{R'}_{t}(x_0)& = \rho^{R}_{t}((I_{\mathfrak g} + tN)(x_0)) (I_{V} + tS).\end{aligned}$$ When $(V,\rho^{L}_{t},\rho^{R}_{t})$ is equivalent to $(V,\rho^{L},\rho^{R})$, the former is called a [*trivial deformation*]{} of the latter.
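To make the deformed bracket concrete, the sketch below deforms a toy two-dimensional left Leibniz algebra by $\omega(x,y)=[N(x),y]+[x,N(y)]-N([x,y])$ for one sample operator $N$ and confirms that $[\ ,\ ]_t$ still satisfies the left Leibniz identity for several values of $t$; the algebra, the convention and $N$ are assumptions chosen purely for illustration.

```python
# Toy 2-dim left Leibniz algebra (an assumption for illustration):
# [e1, e0] = [e1, e1] = e0 and [e0, -] = 0; vectors are coordinate pairs.
def bracket(x, y):
    return (x[1] * (y[0] + y[1]), 0.0)

def app(M, v):  # 2x2 matrix (tuple of rows) applied to a vector
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

# Sample operator N(e0) = 2*e0, N(e1) = -e0 + 3*e1.
N = ((2.0, -1.0), (0.0, 3.0))

def omega(x, y):  # omega(x, y) = [Nx, y] + [x, Ny] - N[x, y]
    a, b, c = bracket(app(N, x), y), bracket(x, app(N, y)), app(N, bracket(x, y))
    return (a[0] + b[0] - c[0], a[1] + b[1] - c[1])

def deformed(t):  # [x, y]_t = [x, y] + t * omega(x, y)
    def br(x, y):
        u, w = bracket(x, y), omega(x, y)
        return (u[0] + t * w[0], u[1] + t * w[1])
    return br

def left_leibniz(br, basis):
    # [x, [y, z]] = [[x, y], z] + [y, [x, z]] on all basis triples
    def eq(u, v):
        return abs(u[0] - v[0]) < 1e-9 and abs(u[1] - v[1]) < 1e-9
    return all(eq(br(x, br(y, z)),
                  tuple(p + q for p, q in zip(br(br(x, y), z), br(y, br(x, z)))))
               for x in basis for y in basis for z in basis)

BASIS = [(1.0, 0.0), (0.0, 1.0)]
print(all(left_leibniz(deformed(t), BASIS) for t in (0.0, 0.5, 1.0, 2.0)))  # True
```
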
By direct calculation, if $(V,\rho^{L}_{t},\rho^{R}_{t})$ is a trivial deformation, then \eqref{deform1} implies that $N$ is a so-called [*Nijenhuis operator*]{}: $$\label{Ni} [N(x_0), N(x_1)]=N\big([N(x_0), x_1]+[x_0, N(x_1)]-N([x_0, x_1])\big).$$ Denote $[x_0, x_1]_{N} =[N(x_0), x_1]+[x_0, N(x_1)]-N([x_0, x_1])$. Moreover, $N$ and $S$ also satisfy $$\begin{aligned} \label{3.1} \rho^{L}(N(y)) S(w) &=S(\rho^{L}(N(y)) w)+S(\rho^{L}(y)S(w))-S^{2}(\rho^{L}(y)w),\\ \label{3.2} \rho^{R}(N(y)) S(w) &=S(\rho^{R}(N(y)) w)+S(\rho^{R}(y)S(w))-S^{2}(\rho^{R}(y)w). \end{aligned}$$ Now we are ready to define (dual-)Nijenhuis pairs. [**Definition 3.2.**]{} Let $(V, \rho^{L},\rho^{R})$ be a representation of the Leibniz algebra $\mathfrak g$, and let $N \in\mathfrak{gl}(\mathfrak g)$ and $S\in \mathfrak{gl}(V)$. A pair $(N, S)$ is called a [*Nijenhuis pair*]{} associated to the representation if $N$ is a Nijenhuis operator and $(N, S)$ satisfies the identities \eqref{3.1}, \eqref{3.2}. [**Definition 3.3.**]{} Let $(V, \rho^{L},\rho^{R})$ be a representation of the Leibniz algebra $\mathfrak g$, and let $N \in\mathfrak{gl}(\mathfrak g)$ and $S\in \mathfrak{gl}(V)$. A pair $(N, S)$ is called a [*dual-Nijenhuis pair*]{} associated to the representation if $N$ is a Nijenhuis operator and $(N, S)$ satisfies: $$\begin{aligned} \label{3.3} \rho^{L}(N(y)) S(w) &=S(\rho^{L}(N(y)) w)+\rho^{L}(y)(S^{2}(w))-S(\rho^{L}(y)S(w)),\\ \label{3.4} \rho^{R}(N(y)) S(w) &=S(\rho^{R}(N(y)) w)+\rho^{R}(y)(S^{2}(w))-S(\rho^{R}(y)S(w)). \end{aligned}$$ Obviously, a trivial deformation gives rise to a Nijenhuis pair. Conversely, a Nijenhuis pair also determines a trivial deformation. More precisely, we have the following result. [**Theorem 3.4.**]{} Let $(N, S)$ be a Nijenhuis pair associated to a representation $(V, \rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$.
Then the following triple defines a trivial deformation of $(V, \rho^{L},\rho^{R})$: $$\omega(y_1,y_2)=[N(y_1), y_2]+[y_1, N(y_2)]-N[y_1, y_2],$$ $$\varpi^{L}(y)=\rho^{L}(N(y))+\rho^{L}(y)S-S\rho^{L}(y) ,$$ $$\varpi^{R}(y)=\rho^{R}(N(y))+\rho^{R}(y)S-S\rho^{R}(y) .$$ [**Theorem 3.5.**]{} Let $N\in\mathfrak{gl}(\mathfrak g)$, $S\in\mathfrak{gl}(V)$, and let $(V, \rho^L, \rho^R)$ be a representation of a Leibniz algebra $\mathfrak g$. The pair $(N, S^{*})$ is a dual-Nijenhuis pair with respect to the dual representation $(V^{*},(\rho^{L})^{*},-(\rho^{L})^{*}-(\rho^{R})^{*})$ if and only if $(N, S)$ is a Nijenhuis pair on the representation $(V,\rho^{L},\rho^{R})$. For any $y\in \mathfrak g,~w\in V,~\alpha\in V^{*}$, $$\begin{aligned} &&\langle \rho^{L}(N(y)) S(w) -S(\rho^{L}(N(y))w)-S(\rho^{L}(y)S(w))+S^{2}(\rho^{L}(y)w),\alpha\rangle \\&=&\langle w,S^{*}(\rho^{L})^{*}(N(y))(\alpha) -(\rho^{L})^{*}(N(y))(S^{*}(\alpha))-S^{*}(\rho^{L})^{*}(y)(S^{*}(\alpha)) +(\rho^{L})^{*}(y)((S^{*})^{2}(\alpha))\rangle,\end{aligned}$$ so \eqref{3.1} holds if and only if $$S^{*}(\rho^{L})^{*}(N(y))(\alpha) -(\rho^{L})^{*}(N(y))(S^{*}(\alpha))-S^{*}(\rho^{L})^{*}(y)(S^{*}(\alpha))+(\rho^{L})^{*}(y)((S^{*})^{2}(\alpha))=0.$$ Similarly, \eqref{3.2} holds if and only if $$S^{*}(\rho^{R})^{*}(N(y))(\alpha) -(\rho^{R})^{*}(N(y))(S^{*}(\alpha))-S^{*}(\rho^{R})^{*}(y)(S^{*}(\alpha))+(\rho^{R})^{*}(y)((S^{*})^{2}(\alpha))=0.$$ It follows that $$\begin{aligned} &&S^{*}((\rho^{R})^{*}+(\rho^{L})^{*})(N(y))(\alpha) -((\rho^{R})^{*}+(\rho^{L})^{*})(N(y))(S^{*}(\alpha))-S^{*}((\rho^{R})^{*}+(\rho^{L})^{*})(y)(S^{*}(\alpha)) \\&&+((\rho^{R})^{*}+(\rho^{L})^{*})(y)((S^{*})^{2}(\alpha))=0, \end{aligned}$$ which implies the conclusion. [**Definition 3.6.**]{} A [*perfect Nijenhuis pair*]{} is a Nijenhuis pair $(N, S)$ satisfying $$S^{2}(\rho^{L}(y)(w))+\rho^{L}(y)(S^{2}(w))=2S(\rho^{L}(y)S(w)),$$ $$S^{2}(\rho^{R}(y)(w))+\rho^{R}(y)(S^{2}(w))=2S(\rho^{R}(y)S(w)).$$ [**Proposition 3.7.**]{}
Suppose that $(\mathfrak g_{1},\mathfrak g_{2},\rho^{L}_{1},\rho^{R}_{1},\rho^{L}_{2},\rho^{R}_{2})$ is a matched pair of Leibniz algebras $\mathfrak g_i~(i=1,2)$. Let $(N,S)$ be a Nijenhuis pair on $\mathfrak g_{1}$ with respect to the representation $(\mathfrak g_{2},\rho^{L}_{2},\rho^{R}_{2})$, and $(S,N)$ a Nijenhuis pair on $\mathfrak g_{2}$ with respect to the representation $(\mathfrak g_{1},\rho^{L}_{1},\rho^{R}_{1})$. Then \(i) $N+S$ is a Nijenhuis operator on $\mathfrak g_{1}\bowtie \mathfrak g_{2}$. \(ii) If $(N, S)$ is a perfect Nijenhuis pair, then $N + S^{*}$ is a Nijenhuis operator on the corresponding Leibniz algebra $\mathfrak g_{1}\bowtie \mathfrak g_{2}^{*}$. We only check (i). For any $x_0,x_1\in \mathfrak g_{1},~a_0,a_1\in \mathfrak g_{2}$, $$\begin{aligned} \nonumber &&[(x_0+a_0),(x_1+a_1)]_{N+S}\\ \nonumber &=&[N(x_0)+S(a_0),x_1+a_1]+[x_0+a_0, N(x_1)+S(a_1)]-(N+S)[x_0+a_0,x_1+a_1]\\ \nonumber &=&[N(x_0), x_1]+\rho^{L}_{2}(S(a_0))(x_1)+\rho^{R}_{2}(a_1)(N(x_0))+[S(a_0),a_1]+\rho^{L}_{1}(N(x_0))(a_1) +\rho^{R}_{1}(x_1)(S(a_0))\\ \nonumber &&+[x_0, N(x_1)]+\rho^{L}_{2}(a_0)(N(x_1))+\rho^{R}_{2}(S(a_1))(x_0)+[a_0,S(a_1)]+\rho^{L}_{1}(x_0)(S(a_1))+\rho^{R}_{1}(N(x_1))(a_0) \\&&- N[x_0,x_1]-N(\rho^{L}_{2}(a_0)(x_1))-N(\rho^{R}_{2}(a_1)(x_0))-S[a_0,a_1]-S(\rho^{L}_{1}(x_0)a_1) -S(\rho^{R}_{1}(x_1)a_0). \label{3.5} \end{aligned}$$ At the same time, $$\begin{aligned} \nonumber &&[(N+S)(x_0+a_0), (N+S)(x_1+a_1)] \\ \nonumber &=&[N(x_0), N(x_1)]+\rho^{L}_{2}(S(a_0))(N(x_1))+\rho^{R}_{2}(S(a_1))(N(x_0))+[S(a_0),S(a_1)]\\ \label{3.6} &&+\rho^{L}_{1}(N(x_0))(S(a_1)) +\rho^{R}_{1}(N(x_1))(S(a_0)). \end{aligned}$$ Combining \eqref{3.5} and \eqref{3.6} with the Nijenhuis pair identities, we get $$[(N+S)(x_0+a_0),(N+S)(x_1+a_1)]=(N+S)\big([x_0+a_0,x_1+a_1]_{N+S}\big).$$ Hence $N+S$ is a Nijenhuis operator on $\mathfrak g_{1}\bowtie \mathfrak g_{2}$. The remaining part can be checked similarly.
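The Nijenhuis identity is straightforward to test mechanically. The sketch below verifies $[N(x),N(y)]=N([N(x),y]+[x,N(y)]-N([x,y]))$ on all basis pairs of a toy two-dimensional left Leibniz algebra, and shows that a generic operator fails it; the algebra, the convention and both operators are assumptions made only for this illustration.

```python
# Toy 2-dim left Leibniz algebra (an assumption for illustration):
# [e1, e0] = [e1, e1] = e0 and [e0, -] = 0; vectors are coordinate pairs.
def bracket(x, y):
    return (x[1] * (y[0] + y[1]), 0.0)

def app(M, v):  # 2x2 matrix (tuple of rows) applied to a vector
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

BASIS = [(1.0, 0.0), (0.0, 1.0)]

def is_nijenhuis(N):
    """[N x, N y] == N([N x, y] + [x, N y] - N[x, y]) on all basis pairs."""
    def eq(u, v):
        return abs(u[0] - v[0]) < 1e-9 and abs(u[1] - v[1]) < 1e-9
    ok = True
    for x in BASIS:
        for y in BASIS:
            lhs = bracket(app(N, x), app(N, y))
            a, b, c = bracket(app(N, x), y), bracket(x, app(N, y)), app(N, bracket(x, y))
            rhs = app(N, (a[0] + b[0] - c[0], a[1] + b[1] - c[1]))
            ok = ok and eq(lhs, rhs)
    return ok

# N(e0) = 2*e0, N(e1) = -e0 + 3*e1: a Nijenhuis operator on this algebra.
print(is_nijenhuis(((2.0, -1.0), (0.0, 3.0))))  # True
# The swap e0 <-> e1 is not.
print(is_nijenhuis(((0.0, 1.0), (1.0, 0.0))))   # False
```
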
Consider the linear maps $\hat{\rho^{L}},~\hat{\rho^{R}},~\tilde{\rho^{L}},~\tilde{\rho^{R}}:\mathfrak g\longrightarrow \mathfrak{gl}(V)$ defined by: $$\begin{aligned} &&\hat{\rho^{L}}(y)=\rho^{L}(N(y))+\rho^{L}(y)S-S\rho^{L}(y),\\ &&\hat{\rho^{R}}(y)=\rho^{R}(N(y))+\rho^{R}(y)S-S\rho^{R}(y),\\ &&\tilde{\rho^{L}}(y)=\rho^{L}(N(y))+S\rho^{L}(y)-\rho^{L}(y)S,\\ &&\tilde{\rho^{R}}(y)=\rho^{R}(N(y))+S\rho^{R}(y)-\rho^{R}(y)S.\end{aligned}$$ [**Corollary 3.8.**]{} Let $(N, S)$ be a (resp. dual-)Nijenhuis pair over a representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. Then $(V,\hat{\rho^{L}},\hat{\rho^{R}})$ (resp. $(V,\tilde{\rho^{L}},\tilde{\rho^{R}})$) becomes a representation of the Leibniz algebra $(\mathfrak g,[\ , \ ]_{N})$. \(i) For the Nijenhuis pair, according to \eqref{3.5}, $$[x_0+v_0,x_1+v_1]_{N+S}=[x_0,x_1]_{N}+\hat{\rho^{L}}(x_0)v_1+\hat{\rho^{R}}(x_1)v_0,$$ which indicates that $(V,\hat{\rho^{L}},\hat{\rho^{R}})$ is a representation of $(\mathfrak g,[\ , \ ]_{N})$. \(ii) If $(N,S)$ is a dual-Nijenhuis pair, then clearly the dual maps $\tilde{\rho^{L}}^{*},\tilde{\rho^{R}}^{*}$ of $\tilde{\rho^{L}},\tilde{\rho^{R}}$ are given by $$\tilde{\rho^{L}}^{*}(y)=(\rho^{L})^{*}(N(y))+(\rho^{L})^{*}(y)S^{*}-S^{*}(\rho^{L})^{*}(y),$$ and $$\tilde{\rho^{R}}^{*}(y)=(\rho^{R})^{*}(N(y))+(\rho^{R})^{*}(y)S^{*}-S^{*}(\rho^{R})^{*}(y).$$ By the same argument as in Theorem 3.5, $(N, S^{*})$ is a Nijenhuis pair over the representation $(V^{*},(\rho^{L})^{*},-(\rho^{L})^{*}-(\rho^{R})^{*})$. In view of (i), we get the conclusion. (dual) KN-structures ==================== Let $K$ be a Kupershmidt operator over the representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$.
On the sub-adjacent Leibniz algebra $(V_K, [\ , \ ]^K)$, let $S\in \mathfrak{gl}(V)$ and define the deformed operation $[\ , \ ]_{S}^{K}$ on $V$ by $$[w,u]_{S}^{K}=[S(w),u]^{K}+[w,S(u)]^{K}-S[w,u]^{K},~~\forall \, w,~u\in V.$$ Now let $(N, S)$ be a (dual-)Nijenhuis pair over the representation $(V,\rho^{L},\rho^{R})$. In view of Corollary 3.8, there are two representations $(V,\hat{\rho^{L}},\hat{\rho^{R}})$ and $(V,\tilde{\rho^{L}},\tilde{\rho^{R}})$ of the deformed Leibniz algebra $(\mathfrak g,[\ , \ ]_{N})$. We can define two operations $[\ , \ ]_{\hat{\rho}}^{K},~~[\ , \ ]_{\tilde{\rho}}^{K}:V\otimes V\longrightarrow V$ by $$\begin{aligned} [w,u]_{\hat{\rho}}^{K}&=\hat{\rho^{L}}(K(w))(u)+\hat{\rho^{R}}(K(u))(w),\\ [w,u]_{\tilde{\rho}}^{K}&=\tilde{\rho^{L}}(K(w))(u)+\tilde{\rho^{R}}(K(u))(w).\end{aligned}$$ In general, $[\ , \ ]_{S}^{K},[\ , \ ]_{\hat{\rho}}^{K}$ and $[\ , \ ]_{\tilde{\rho}}^{K}$ are not Leibniz brackets on $V$. Obviously, if $S$ is a Nijenhuis operator, then $[\ , \ ]_{S}^{K}$ is a Leibniz bracket. At the same time, $[\ , \ ]_{\hat{\rho}}^{K}$ and $[\ , \ ]_{\tilde{\rho}}^{K}$ become Leibniz brackets when $K$ is a Kupershmidt operator on $(V,\hat{\rho^{L}},~\hat{\rho^{R}})$ and $(V,\tilde{\rho^{L}},~\tilde{\rho^{R}})$ respectively. In the following, we study when this is the case. [**Definition 4.1.**]{} Let $K:V\longrightarrow \mathfrak g$ be a Kupershmidt operator and $(N, S)$ a (resp. dual-) Nijenhuis pair over the representation $(V,\rho^{L},\rho^{R})$ of $\mathfrak g$. Then the triple $(K, S, N)$ is called a [*(dual) KN-structure*]{} over the representation $(V,\rho^{L},\rho^{R})$ if the following holds: $$\label{4.1} NK=KS ~~\hbox{and }~~[w,u]^{NK}=[w,u]_{S}^{K}.
$$ [**Proposition 4.2.**]{} If $(K, S, N)$ is a (dual) KN-structure, then $$[w,u]_{S}^{K}=[w,u]_{\hat{\rho}}^{K}.$$ \(i) Suppose $(K, S, N)$ is a KN-structure. It follows from $NK=KS$ and the definitions of $[\ , \ ]_{S}^{K}$ and $[\ , \ ]_{\hat{\rho}}^{K}$ that $$[w,u]_{S}^{K}=[w,u]_{\hat{\rho}}^{K}.$$ \(ii) Using $NK=KS$ we see that $$[w,u]_{S}^{K}+[w,u]_{\tilde{\rho}}^{K}=2[w,u]^{NK}.$$ Due to $[w,u]^{NK}=[w,u]_{S}^{K}$, we also get $$[w,u]_{S}^{K}=[w,u]_{\tilde{\rho}}^{K}.$$ By the above, we see that the operations $[\ , \ ]_{S}^{K}, [\ , \ ]_{\tilde{\rho}}^{K},[\ , \ ]_{\hat{\rho}}^{K}$ and $[\ , \ ]^{NK}$ are the same when $(K, S, N)$ is a (dual) KN-structure. [**Theorem 4.3.**]{} If $(K, S, N)$ is a (dual) KN-structure, then $S$ is a Nijenhuis operator on $(V_{K},[\ , \ ]^{K})$. Moreover, the operations $[\ , \ ]_{S}^{K}, [\ , \ ]_{\tilde{\rho}}^{K}$ and $[\ , \ ]_{\hat{\rho}}^{K}$ all give rise to Leibniz algebras. When $(K, S, N)$ is a KN-structure, substituting $K(v)$ for $y$ in \eqref{3.1}, we have that $$\begin{aligned} &&\rho^{L}(NK(v))S(w)-S(\rho^{L}(NK(v))(w))-S(\rho^{L}(K(v))S(w))+S^{2}(\rho^{L}(K(v))w)\\&=& \rho^{L}(KS(v))S(w)-S(\rho^{L}(KS(v))(w))-S(\rho^{L}(K(v))S(w))+S^{2}(\rho^{L}(K(v))w)\\ &=&S(v)\lhd^{K}S(w)-S(S(v)\lhd ^{K}w)-S(v\lhd^{K} S(w))+S^{2}(v\lhd^{K} w)=0.\end{aligned}$$ The analogous identity with $\lhd^{K}$ replaced by $\rhd^{K}$ follows from \eqref{3.2}. But $[\ , \ ]^{K}=\lhd^{K}+\rhd^{K}$, so $$[S(v),S(w)]^{K}-S([S(v),w]^{K} )-S([v,S(w)]^{K}) +S^{2}([v,w]^{K} ) =0,$$ that is, $[S(v),S(w)]^{K}=S([S(v),w]^{K}+[v,S(w)]^{K}-S([v,w]^{K}))$. Thus $S$ is a Nijenhuis operator on $(V_{K}, [\ ,\ ]^{ K })$.
If $(K, S, N)$ is a dual KN-structure, using $[v,w]^{NK}=[v,w]_{S}^{K}$ it follows that $$\label{4.2} \rho^{R}(K(w))S(v)+\rho^{L}(K(v))S(w)-S(\rho^{R}(K(w))v)-S(\rho^{L}(K(v))w)=0.$$ Replacing $v$ by $S(v)$ in \eqref{4.2}, we have $$\label{4.3} \rho^{R}(K(w))S^{2}(v)+\rho^{L}(K(S(v)))S(w)-S(\rho^{R}(K(w))S(v))-S(\rho^{L}(K(S(v)))w)=0.$$ Applying $S$ to both sides of \eqref{4.2}, we get $$\label{4.4} S(\rho^{R}(K(w))S(v))+S(\rho^{L}(K(v))S(w))-S^{2}(\rho^{R}(K(w))v)-S^{2}(\rho^{L}(K(v))w)=0.$$ Replacing $y$ by $K(w)$ and $w$ by $v$ in the identities \eqref{3.3} and \eqref{3.4}, we get that $$\begin{aligned} \label{4.5} \rho^{L}(KS(w)) S(v) &=S(\rho^{L}(KS(w)) (v))+\rho^{L}(K(w))(S^{2}(v))-S(\rho^{L}(K(w))S(v)),\\ \label{4.6} \rho^{R}(KS(w)) S(v) &=S(\rho^{R}(KS(w)) (v))+\rho^{R}(K(w))(S^{2}(v))-S(\rho^{R}(K(w))S(v)).\end{aligned}$$ In view of \eqref{4.2}-\eqref{4.6}, $$\begin{aligned} &&[S(v),S(w)]^{K}-S([v,w]_{S}^{K}) \\&=&\rho^{L}(KS(v))S(w)+\rho^{R}(KS(w))S(v)-S(\rho^{L}(KS(v))w+\rho^{R}(K(w))S(v)+\rho^{L}(K(v))S(w)\\&& +\rho^{R}(KS(w))v)+S^{2}(\rho^{L}(K(v))w+\rho^{R}(K(w))v)\\ &=&\rho^{L}(KS(v))S(w)+\rho^{R}(KS(w))S(v)-\rho^{R}(K(w))S^{2}(v)-\rho^{L}(KS(v))S(w) -S(\rho^{L}(K(v))S(w)\\&& +\rho^{R}(KS(w))v)+ S(\rho^{R}(K(w))S(v))+S(\rho^{L}(K(v))S(w))=0.\end{aligned}$$ Thus the result follows. [**Proposition 4.4.**]{} Suppose that $(K, S, N)$ is a (dual) KN-structure. Then \(i) $K$ is a Kupershmidt operator associated to the representation $(V,\hat{\rho^{L}},\hat{\rho^{R}})$ (resp. $(V,\tilde{\rho^{L}},\tilde{\rho^{R}})$); \(ii) $NK$ is a Kupershmidt operator associated to the representation $(V,\rho^{L},\rho^{R})$. Use Proposition 4.2 and direct calculation. [**Theorem 4.5.**]{} If $(K, S, N)$ is a KN-structure with $K$ invertible, then $(K, S, N)$ is also a dual KN-structure. It is enough to show that $(N, S)$ is a dual-Nijenhuis pair.
In view of the KN-structure $(K, S, N)$, substituting $S(v)$ for $v$ in \eqref{4.2} we have that $$\label{4.7}\rho^{R}(K(w))S^{2}(v)+\rho^{L}(K(S(v)))S(w)=S(\rho^{L}(K(S(v)))w+\rho^{R}(K(w))S(v)).$$ Taking account of $[v,w]_{S}^{K}=[v,w]^{KS}$, $$S([v,w]^{KS})=S([v,w]_{S}^{K})=[S(v),S(w)]^{K},$$ that is, $$\label{4.8} S(\rho^{L}(KS(v))w+\rho^{R}(KS(w))v)=\rho^{L}(KS(v))S(w)+\rho^{R}(KS(w))S(v).$$ Combining \eqref{4.7} and \eqref{4.8}, we have $$\begin{aligned} \nonumber 0&=&\rho^{R}(K(w))S^{2}(v)+S(\rho^{R}(KS(w))v)-\rho^{R}(KS(w))S(v)-S(\rho^{R}(K(w))S(v))\\ \label{4.9} &=&\rho^{R}(K(w))S^{2}(v)+S(\rho^{R}(NK(w))v) -\rho^{R}(NK(w))S(v)-S(\rho^{R}(K(w))S(v)).\end{aligned}$$ Since $K$ is invertible, we get $$(\rho^{R}(x)S^{2}-\rho^{R}(N(x))S)(v)+S((\rho^{R}(N(x))-\rho^{R}(x)S)(v))=0.$$ Similarly, the same identity holds for $\rho^L$; thus $(N, S)$ is a dual-Nijenhuis pair. Compatible Kupershmidt operators and (dual) KN-structures ========================================================= [**Definition 5.1.**]{} Let $K_1, K_2 : V \longrightarrow \mathfrak g $ be two Kupershmidt operators associated to a representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. We say that $K_1$ and $K_2$ are [*compatible*]{} if $n_1K_1+n_2K_2$ is a Kupershmidt operator for all scalars $n_1,~ n_2$. By direct calculation we get the following. [**Proposition 5.2.**]{} Let $K_1, K_2 : V \longrightarrow \mathfrak g $ be two Kupershmidt operators on a representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. Then $K_1$ and $K_2$ are compatible iff $$\begin{aligned} [K_{1}(w), K_{2}(v)]+[K_{2}(w), K_{1}(v)]&=&K_{1}(\rho^{L}(K_{2}(w))(v)+\rho^{R}(K_{2}(v))(w))\\&&+K_{2}(\rho^{L}(K_{1}(w))(v)+\rho^{R}(K_{1}(v))(w)),~~\forall~ w,~v\in V.\end{aligned}$$ Compatible Kupershmidt operators can be constructed from a pair of Kupershmidt and Nijenhuis operators.
[**Proposition 5.3.**]{} Suppose that $N$ is a Nijenhuis operator and $K: V\longrightarrow \mathfrak g $ is a Kupershmidt operator associated to a representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. Then \(i) $NK$ is a Kupershmidt operator associated to the representation $(V,\rho^{L},\rho^{R})$ iff $$\begin{aligned} \nonumber &&N([NK(w), K(u)]+[K(w), NK(u)])\\ \label{5.1} &=&NK(\rho^{L}(NK(w))u+\rho^{R}(NK(u))w)+N^{2}K(\rho^{L}(K(w))u+\rho^{R}(K(u))w).\end{aligned}$$ (ii) If $NK$ is a Kupershmidt operator and $N$ is invertible, then $K$ and $NK$ are compatible. [**Proof**]{} (i) As $K$ is a Kupershmidt operator, $$[K(w), K(u)]=K(\rho^{L}(K(w))(u)+\rho^{R}(K(u))(w)).$$ Since $N$ is a Nijenhuis operator, $$\begin{aligned} \nonumber &&[NK(w), NK(u)]=N([NK(w), K(u)]+[K(w), NK(u)])-N^{2}([K(w), K(u)]) \\ \label{5.2} &=&N([NK(w), K(u)]+[K(w), NK(u)])-N^{2}K(\rho^{L}(K(w))u+\rho^{R}(K(u))w).\end{aligned}$$ By \eqref{5.2}, $NK$ is a Kupershmidt operator iff $$\begin{aligned} \nonumber && N([NK(w), K(u)]+[K(w), NK(u)])\\ \label{5.3} &=&NK(\rho^{L}(NK(w))u+\rho^{R}(NK(u))w)+N^{2}K(\rho^{L}(K(w))u+\rho^{R}(K(u))w).\end{aligned}$$ (ii) Applying $N^{-1}$ to both sides of \eqref{5.3}, we obtain that $$\begin{aligned} &&[NK(w), K(u)]+[K(w), NK(u)]\\&=&K(\rho^{L}(NK(w))u+\rho^{R}(NK(u))w)+NK(\rho^{L}(K(w))u+\rho^{R}(K(u))w).\end{aligned}$$ Hence, $K$ and $NK$ are compatible. [**Proposition 5.4.**]{} Let $K_1, K_2$ be compatible Kupershmidt operators associated to a representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$, and suppose $K_2$ is invertible. Then $N = K_{1}K_{2}^{-1}$ is a Nijenhuis operator. For any $x_1,~x_2\in \mathfrak g$, since $K_2$ is invertible, there are $w,~ u\in V$ such that $K_{2}(w)=x_1,~K_{2}(u)=x_2$.
As $K_{1}=NK_{2}$ is a Kupershmidt operator, $$\label{5.4} [NK_{2}(w), NK_{2}(u)]=NK_{2}(\rho^{L}(NK_{2}(w))(u)+\rho^{R}(NK_{2}(u))(w)).$$ Compatibility of $K_1=NK_{2}$ and $K_2$ implies that $$\begin{aligned} \nonumber &&[NK_{2}(w), K_{2}(u)]+[K_{2}(w), NK_{2}(u)]\\ \nonumber &=&NK_{2}(\rho^{L}(K_{2}(w))u+\rho^{R}(K_{2}(u))w)+K_{2}(\rho^{L}(NK_{2}(w))u+\rho^{R}(NK_{2}(u))w) \\ \label{5.5} &=&N([K_{2}(w), K_{2}(u)])+K_{2}(\rho^{L}(NK_{2}(w))u+\rho^{R}(NK_{2}(u))w).\end{aligned}$$ Combining \eqref{5.4} and \eqref{5.5}, we obtain $$\begin{aligned} &&N([NK_{2}(w), K_{2}(u)]+[K_{2}(w), NK_{2}(u)]) \\&=&N^{2}([K_{2}(w), K_{2}(u)])+NK_{2}(\rho^{L}(NK_{2}(w))(u)+\rho^{R}(NK_{2}(u))(w)) \\&=&N^{2}([K_{2}(w), K_{2}(u)])+[NK_{2}(w), NK_{2}(u)].\end{aligned}$$ So $$[N(x_1), N(x_2)]-N([N(x_1), x_2]+[x_1,N(x_2)]-N[x_1, x_2])=0.$$ [**Proposition 5.5.**]{} Suppose $K: V\rightarrow \mathfrak g$ is a Kupershmidt operator over the representation $V$ of the Leibniz algebra $\mathfrak g$ and $S\in \mathfrak{gl}(V)$. If $(K, S, N)$ is a (dual) KN-structure, then $K$ and $KS$ are compatible Kupershmidt operators. It is enough to check that $K+KS$ is a Kupershmidt operator. In fact, for every $w,~v\in V$, $$\begin{aligned} &&(K+KS)([w,v]^{K+KS})=(K+KS)([w,v]^{K} +[w,v]_{S}^{K} )\\ &=&K([w,v]^{K} )+KS([w,v]_{S}^{K} )+KS([w,v]^{K} )+K([w,v]_{S}^{K} ) \\&=&K([w,v]^{K} )+KS([w,v]^{KS} )+KS([w,v]^{K} )+K([S(w),v]^{K} \\&&+[w,S(v)]^{K}-S([w,v]^{K})) \\&=&[K(w), K(v)]+[KS(w), KS(v)]+[KS(w), K(v)]+[K(w), KS(v)] \\&=&[(K+KS)(w), (K+KS)(v)].\end{aligned}$$ Therefore, $K+KS$ is a Kupershmidt operator. The dual case can be shown analogously. Dual KN-structures can also be obtained from compatible Kupershmidt operators. [**Proposition 5.6.**]{} Let $K_1, K_2$ be Kupershmidt operators associated to a representation $(V,\rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. Suppose $K_1$ and $K_2$ are compatible and $K_1$ is invertible.
Then \(i) $(K_1,S=K_1^{-1}K_2,N=K_2K_1^{-1}) $ is a dual KN-structure; \(ii) $(K_2,S=K_1^{-1}K_2,N=K_2K_1^{-1})$ is a dual KN-structure. Proposition 5.4 shows that $N=K_2K_1^{-1}$ is a Nijenhuis operator. Applying Proposition 5.2 to $K_1$ and $K_2=K_1S$, we have that $$\begin{aligned} \nonumber [K_{1}(w), K_{1}S(u)]&+&[K_{1}S(w), K_{1}(u)]=K_{1}(\rho^{L}(K_{1}S(w))(u)+\rho^{R}(K_{1}S(u))(w))\\ \label{5.6} &+&K_{1}S(\rho^{L}(K_{1}(w))(u)+\rho^{R}(K_{1}(u))(w)).\end{aligned}$$ Since $K_1$ is a Kupershmidt operator, $$\begin{aligned} \nonumber &&[K_{1}(w), K_{1}S(u)]+[K_{1}S(w), K_{1}(u)]\\ \nonumber &=&K_{1}([w,S(u)]^{K_{1}} +[S(w),u]^{K_{1}} )\\ \label{5.7} &=&K_{1}(\rho^{L}(K_{1}(w))S(u)+\rho^{R}(K_{1}S(u))w)+K_{1}(\rho^{L}(K_{1}S(w))u+\rho^{R}(K_{1}(u))S(w)).\end{aligned}$$ Combining \eqref{5.6} and \eqref{5.7}, we get $$\label{5.8} K_{1}(\rho^{L}(K_{1}(w))S(u)+\rho^{R}(K_{1}(u))S(w))= K_{1}S(\rho^{L}(K_{1}(w))u+\rho^{R}(K_{1}(u))w).$$ Applying $K_1^{-1}$ to both sides of \eqref{5.8}, we obtain that $$\label{5.9} \rho^{L}(K_{1}(w))S(u)+\rho^{R}(K_{1}(u))S(w)- S(\rho^{L}(K_{1}(w))(u)+\rho^{R}(K_{1}(u))(w))=0,$$ which implies that $$[w,u]_{S}^{K_1}-[w,u]^{K_1S} =\rho^{R}(K_{1}(u))S(w)+\rho^{L}(K_{1}(w))S(u)-S(\rho^{L}(K_{1}(w))u+\rho^{R}(K_{1}(u))w)=0.$$ Clearly, $K_1S=NK_1$. Hence, $(K_1,S=K_1^{-1}K_2,N=K_2K_1^{-1})$ is a dual KN-structure. \(ii) Recall the definitions of $[\ , \ ]^{K_{2}}_{S}$ and $[\ , \ ]^{K_{2}S}$: $$\begin{aligned} &&[w,u]^{K_{2}}_{S}-[w,u]^{K_{2}S}\\&=& \rho^{R}(K_{2}(u))S(w)+\rho^{L}(K_{2}(w))S(u)-S(\rho^{L}(K_{2}(w))u+\rho^{R}(K_{2}(u))w) \\&=& \rho^{R}(K_{1}S(u))S(w)+\rho^{L}(K_{1}S(w))S(u)-S(\rho^{L}(K_{1}S(w))u+\rho^{R}(K_{1}S(u))w) \\&=&[S(w),S(u)]^{K_{1}} -S([w,u]^{K_{1}S} )=0.\end{aligned}$$ Therefore, $(K_2,S=K_1^{-1}K_2,N=K_2K_1^{-1})$ is a dual KN-structure.
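Compatibility is easy to probe numerically in the special case of the regular representation ($\rho^L=L$, $\rho^R=R$), where the Kupershmidt identity becomes the weight-zero Rota-Baxter identity $[K(v),K(w)]=K([K(v),w]+[v,K(w)])$. The sketch below takes a one-parameter family of such operators on a toy two-dimensional left Leibniz algebra and checks that linear combinations $n_1K_1+n_2K_2$ again satisfy the identity, i.e. that $K_1$ and $K_2$ are compatible; the algebra and the family are assumptions chosen for illustration.

```python
# Toy 2-dim left Leibniz algebra (an assumption for illustration):
# [e1, e0] = [e1, e1] = e0 and [e0, -] = 0; vectors are coordinate pairs.
def bracket(x, y):
    return (x[1] * (y[0] + y[1]), 0.0)

def app(M, v):  # 2x2 matrix (tuple of rows) applied to a vector
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

BASIS = [(1.0, 0.0), (0.0, 1.0)]

def is_kupershmidt_regular(K):
    """[K v, K w] == K([K v, w] + [v, K w]) on all basis pairs."""
    def eq(u, v):
        return abs(u[0] - v[0]) < 1e-9 and abs(u[1] - v[1]) < 1e-9
    ok = True
    for v in BASIS:
        for w in BASIS:
            lhs = bracket(app(K, v), app(K, w))
            s = tuple(p + q for p, q in zip(bracket(app(K, v), w), bracket(v, app(K, w))))
            ok = ok and eq(lhs, app(K, s))
    return ok

def K(a):  # K_a(e0) = 0, K_a(e1) = a*e0 - a*e1
    return ((0.0, a), (0.0, -a))

def lin_comb(n1, K1, n2, K2):
    return tuple(tuple(n1 * K1[i][j] + n2 * K2[i][j] for j in range(2)) for i in range(2))

K1, K2 = K(1.0), K(-2.0)
print(is_kupershmidt_regular(K1), is_kupershmidt_regular(K2))  # True True
print(all(is_kupershmidt_regular(lin_comb(n1, K1, n2, K2))
          for n1 in (-1.0, 0.5, 2.0) for n2 in (0.0, 1.0, 3.0)))  # True
```
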
Strong Maurer-Cartan equation and dual KN-structures ==================================================== According to Theorem 2.1, a solution $\Theta: \mathfrak g \longrightarrow V$ of the strong Maurer-Cartan equation on the twilled Leibniz algebra $\mathfrak g\bowtie V_{K}$ becomes a Kupershmidt operator on the representation $(\mathfrak g, \varrho_{K}^{L},\varrho_{K}^{R})$ of the Leibniz algebra $(V_{K}, [ \ , \ ]^K )$. The Kupershmidt operator $\Theta$ leads to a Leibniz algebra structure on $\mathfrak g$ with $[y,z]^{\Theta}=\varrho_{K}^{L}(\Theta(y))z+\varrho_{K}^{R}(\Theta(z))y~(\forall ~y,~z\in \mathfrak g)$. Denote this Leibniz algebra by $\mathfrak g_{\Theta}$. Similar to $V_{K}$, $\rho_{\Theta}^{L},~\rho_{\Theta}^{R}:\mathfrak g_{\Theta}\longrightarrow \mathfrak{gl}(V_{K})$ afford a representation of $\mathfrak g_{\Theta}$ on $V_{K}$ with $$\rho_{\Theta}^{L}(y)(w)=[\Theta(y),w]^{K}-\Theta(\varrho_{K}^{R}(w)(y)),~ \rho_{\Theta}^{R}(y)(w)=[w,\Theta(y)]^{K}-\Theta(\varrho_{K}^{L}(w)y)$$ for any $y\in \mathfrak g,~w\in V$. Then $(\mathfrak g\oplus V,\{ \ , \ \}_{K}^{\Theta})$ is a Leibniz algebra with $$\{w_0+x_0,w_1+x_1 \}_{K}^{\Theta}=[w_0,w_1]^{K}+\varrho_{K}^{L}(w_0)x_1+\varrho_{K}^{R} (w_1)x_0+[x_0,x_1]^{\Theta}+\rho_{\Theta}^{L}(x_0)w_1+\rho_{\Theta}^{R}(x_1)w_0.$$ [**Proposition 6.1.**]{} Suppose that $\Theta:\mathfrak g\longrightarrow V$ is a solution of the strong Maurer-Cartan equation on the twilled Leibniz algebra $\mathfrak g\bowtie V_{K}$. Then \(i) $(\mathfrak g\oplus V,\{ \ , \ \}_{K}^{\Theta})$ is a Leibniz algebra, denoted by $V_{K}\bowtie \mathfrak g_{\Theta}$. \(ii) $K$ is a solution of the strong Maurer-Cartan equation on $V_{K}\bowtie \mathfrak g_{\Theta}$. \(iii) $K$ is a Kupershmidt operator associated to the representation $(V_{K},\rho_{\Theta}^{L}, \rho_{\Theta}^{R})$. \(i) was established in the discussion above. Consider (ii).
Note that $K$ is also a Kupershmidt operator for the regular representation $\mathfrak g$ of the subadjacent Leibniz algebra $V_K$: $$\label{6.1} K[w_1,w_2]^{K}=\varrho_{K}^{L}(w_1)K(w_2)+\varrho_{K}^{R}(w_2)K(w_1).$$ Since $\Theta:\mathfrak g\longrightarrow V$ is a solution of the strong Maurer-Cartan equation on the twilled Leibniz algebra $\mathfrak g\bowtie V_{K}$, it follows from Theorem 2.1 that $$\label{6.2}[K(w_1),K(w_2)]^{\Theta}=[K\Theta K(w_1),K(w_2)]+[K(w_1),K\Theta K(w_2)]-K\Theta[K(w_1),K(w_2)].$$ Meanwhile, $\Theta$ is also a Kupershmidt operator on the representation $(\mathfrak g,\varrho^{L}_{K},\varrho^{R}_{K})$ of the Leibniz algebra $(V_{K},[ \ , \ ]^{K})$, so $$\begin{aligned} \nonumber &&K(\rho_{\Theta}^{L}(K(w_1))w_2+\rho_{\Theta}^{R}(K(w_2))w_1)\\ \nonumber &=&K\left([\Theta K(w_1),w_2]^{K}-\Theta(\varrho_{K}^{R}(w_2)K(w_1))+[w_1,\Theta K(w_2)]^{K}-\Theta(\varrho_{K}^{L}(w_1)K(w_2))\right) \\ \nonumber &=&[K\Theta K(w_1),K(w_2)]+[K(w_1),K\Theta K(w_2)]\\ \nonumber &&-K\Theta\big([K(w_1),K(w_2)]-K(\rho^{L}(K(w_1))w_2)+[K(w_1),K(w_2)]-K(\rho^{R}(K(w_2))w_1)\big)\\ \label{6.3} &=&[K\Theta K(w_1),K(w_2)]+[K(w_1),K\Theta K(w_2)]-K\Theta[K(w_1),K(w_2)].\end{aligned}$$ Combining \eqref{6.2} and \eqref{6.3}, we have that $$\label{6.4} [K(w_1),K(w_2)]^{\Theta}=K(\rho_{\Theta}^{L}(K(w_1))w_2+\rho_{\Theta}^{R}(K(w_2))w_1).$$ Then \eqref{6.1} and \eqref{6.4} imply (ii) in view of Theorem 2.1. (iii) follows directly from \eqref{6.4}. [**Theorem 6.2.**]{} Let $K:V\longrightarrow \mathfrak g$ be a Kupershmidt operator on a representation $(V, \rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$, and $\Theta:\mathfrak g\longrightarrow V$ a solution of the strong Maurer-Cartan equation on the twilled Leibniz algebra $\mathfrak g\bowtie V_{K}$. Then \(i) $(K, N, S)$ is a dual KN-structure on the representation $(V, \rho^{L},\rho^{R})$ of the Leibniz algebra $\mathfrak g$, where $N = K\Theta$ and $S = \Theta K$.
\(ii) $(\Theta, S, N)$ is a dual KN-structure over the representation $(\mathfrak g, \varrho_{K}^{L} ,\varrho_{K}^{R} )$ of the Leibniz algebra $(V_K , [ \ , \ ]^{K} )$, where $N = K\Theta$ and $S= \Theta K$. \(i) Note that $\Theta$ is a Kupershmidt operator on the representation $(\mathfrak g, \varrho_{K}^{L},\varrho_{K}^{R})$ of the Leibniz algebra $(V_{K}, [ \ , \ ]^K )$. Since $K$ is a Kupershmidt operator, it follows from \eqref{2.4} that $$\begin{aligned} [K\Theta(y),K\Theta(z)]&=&K[\Theta(y),\Theta(z)]^{K}\\&=&K\Theta(\varrho_{K}^{L}(\Theta(y))z+\varrho_{K}^{R}(\Theta(z))y) \\&=&K\Theta([K\Theta(y),z]-K(\rho^{R}(z)\Theta(y)))+K\Theta([y,K\Theta(z)]-K(\rho^{L}(y)\Theta(z))) \\&=&K\Theta([K\Theta(y),z]+[y,K\Theta(z)]-K\Theta[y,z]),\end{aligned}$$ which implies that $K\Theta$ is a Nijenhuis operator. We now check that \eqref{3.3}, \eqref{3.4} and \eqref{4.1} hold for $N = K\Theta$ and $S = \Theta K$. In fact, by \eqref{2.3} and $K$ being a Kupershmidt operator it follows that for every $w,~u\in V$ $$\begin{aligned} \nonumber \Theta K(\rho^{L}(K(w))u+\rho^{R}(K(u))w)&=& \Theta [K(w),K(u)] \\ \label{6.5} &=&\rho^{L}(K(w))\Theta K(u)+\rho^{R}(K(u))\Theta K(w).\end{aligned}$$ Replacing $y$ with $K(w)$ in \eqref{2.4} and using \eqref{2.3}, we have that $$\begin{aligned} &&[\Theta K(w),\Theta (z)]^{K}-\Theta(\varrho_{K}^{R}(\Theta (z))K(w)+\varrho_{K}^{L}(\Theta K(w))z) \\&=& \rho^{R}(K\Theta (z))\Theta K(w)+\rho^{L}(K\Theta K(w))\Theta (z)-\Theta ([K(w),K\Theta(z)]-K(\rho^{L}(K(w))\Theta(z)))\\&&- \Theta([K\Theta K(w),z]-K(\rho^{R}(z)\Theta K(w))) \\&=&\rho^{R}(K\Theta (z))\Theta K(w)+\rho^{L}(K\Theta K(w))\Theta (z)-\rho^{L}(K(w))\Theta K\Theta (z)-\rho^{R}(K\Theta (z))\Theta K(w) \\&&+\Theta K(\rho^{L}(K(w))\Theta (z)) -\rho^{L}(K \Theta K(w))\Theta (z)-\rho^{R}(z)\Theta K \Theta K(w) +\Theta K(\rho^{R}(z)\Theta K(w)) \\&=&-\rho^{L}(K(w))\Theta K\Theta (z)+\Theta K(\rho^{L}(K(w))\Theta (z))-\rho^{R}(z)\Theta K \Theta K(w) +\Theta K(\rho^{R}(z)\Theta K(w))=0. \end{aligned}$$ Therefore, $$\label{6.6a} -\rho^{L}(K(w))\Theta K\Theta (z)+\Theta K(\rho^{L}(K(w))\Theta
(z))=\rho^{R}(z)\Theta K \Theta K(w) -\Theta K(\rho^{R}(z)\Theta K(w)).$$ Combining \eqref{6.5} and \eqref{6.6a}, we obtain that $$\label{6.7} \rho^{R}(K\Theta(z))\Theta K(w)-\Theta K(\rho^{R}(K\Theta(z))w)=\rho^{R}(z)\Theta K\Theta K(w)-\Theta K(\rho^{R}(z)\Theta K(w)).$$ On the other hand, replacing $z$ with $K(u)$ in \eqref{2.4} and arguing as above, it follows that $$\label{6.8aa} -\rho^{R}(K(u))\Theta K\Theta(y)+\Theta K(\rho^{R}(K(u))\Theta(y))=\rho^{L}(y)\Theta K\Theta K(u)-\Theta K(\rho^{L}(y)\Theta K(u)).$$ Combining \eqref{6.5} and \eqref{6.8aa}, we find that $$\label{6.6} \rho^{L}(K\Theta(y))\Theta K(u)-\Theta K(\rho^{L}(K\Theta(y))u) =\rho^{L}(y)\Theta K\Theta K(u)-\Theta K(\rho^{L}(y)\Theta K(u)).$$ Thus, \eqref{6.6} and \eqref{6.7} yield \eqref{3.3} and \eqref{3.4}. At the same time, $$\begin{aligned} [w,u]^{K}_{S}-[w,u]^{KS}&=&\rho^{L}(K(w))S(u)+\rho^{R}(K(u))S(w)-S(\rho^{L}(K(w))u+\rho^{R}(K(u))w)\\&=& \rho^{L}(K(w))\Theta K(u)+\rho^{R}(K(u))\Theta K(w)-\Theta K(\rho^{L}(K(w))u+\rho^{R}(K(u))w)\\&=&0,\end{aligned}$$ where we have used \eqref{6.5} in the last equality. Similarly, (ii) can be verified. Let $(K, N, S)$ be a dual KN-structure over the representation $(V, \rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. Define the operations $\tilde{\varrho^{L}_{K}},~\tilde{\varrho^{R}_{K}}:V_{K}\longrightarrow \mathfrak{gl}(\mathfrak g)$ by $$\tilde{\varrho^{L}_{K}}(v):=\varrho^{L}_{K}(S(v))-[\varrho^{L}_{K}(v),N], ~~\tilde{\varrho^{R}_{K}}(v):=\varrho^{R}_{K}(S(v))-[\varrho^{R}_{K}(v),N].$$ [**Proposition 6.3.**]{} Let $(K, N, S)$ be a dual KN-structure on the representation $(V, \rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$. Then \(i) $K$ is a Kupershmidt operator with respect to the representation $(V,\tilde{\rho^{L}_{K}},\tilde{\rho^{R}_{K}})$ of the Leibniz algebra $(\mathfrak g, [ \ , \ ]_{N})$.
\(ii) $(\mathfrak g \oplus V, \{ \ , \ \}^{N}_{S} )$ is a Leibniz algebra with the bracket operation $$\label{6.7a} \{v_0+x_0,v_1+x_1\}^{N}_{S}:=[v_0,v_1]_{S}^{K}+\tilde{\varrho^{L}_{K}}(v_0)x_1+\tilde{\varrho^{R}_{K}}(v_1)x_0+[x_0,x_1]_{N} +\tilde{\rho^{L}}(x_0)v_1+\tilde{\rho^{R}}(x_1)v_0$$ for any $x_0,~x_1\in \mathfrak g,~v_0,~v_1\in V$. It is an easy consequence of Proposition 5.6 and Theorem 2.1. [**Theorem 6.4.**]{} Suppose that $(K, N, S)$ is a dual KN-structure over the representation $(V, \rho^{L},\rho^{R})$ of a Leibniz algebra $\mathfrak g$ and $K$ is invertible. Then $\Theta=K^{-1}N=SK^{-1}:\mathfrak g\longrightarrow V$ is a solution of the strong Maurer-Cartan equation on $\mathfrak g\bowtie V_{K}$. As $N = K \Theta$ is a Nijenhuis operator, $$\label{6.8a} [K\Theta(x_0),K\Theta(x_1)]=K\Theta([K\Theta(x_0),x_1]+[x_0,K\Theta(x_1)]-K\Theta[x_0,x_1]).$$ It follows from the definitions of $\varrho_{K}^{L}$ and $\varrho_{K}^{R}$ that $$\begin{aligned} \nonumber \varrho^{L}_{K}(\Theta(x_0))x_1+\varrho^{R}_{K}(\Theta(x_1))x_0&=&[K\Theta(x_0),x_1]-K(\rho^{R}(x_1)\Theta(x_0)) +[x_0,K\Theta(x_1)]\\ \nonumber &&-K(\rho^{L}(x_0)\Theta(x_1)) \\\label{6.9} &=& [K\Theta(x_0),x_1]+[x_0,K\Theta(x_1)]-K\Theta[x_0,x_1].
%(6.10)\end{aligned}$$ Note that $K$ is a Kupershmidt operator, and imply that $$K[\Theta(x_0),\Theta(x_1)]^{K}=[K\Theta(x_0),K\Theta(x_1)]=K\Theta(\varrho^{L}_{K}(\Theta(x_0))x_1+\varrho^{R}_{K}(\Theta(x_1))x_0).$$ As $K$ is invertible, $$\label{6.10} [\Theta(x_0),\Theta(x_1)]^{K}=\Theta(\varrho^{L}_{K}(\Theta(x_0))x_1+\varrho^{R}_{K}(\Theta(x_1))x_0).%\eqno(6.11)$$ Since $(K, N, S)$ is a dual KN-structure over the representation $(V, \rho^{L},\rho^{R})$ of the Leibniz algebra $\mathfrak g$, $$[w,u]_{S}^{K}=[w,u]^{KS}~~\hbox{with}~~S=\Theta K.$$ Hence, $$\Theta K(\rho^{L}(K(w))u+\rho^{R}(K(u))w)=\rho^{L}(K(w))\Theta K(u)+\rho^{R}(K(u))\Theta K(w).$$ Therefore, $$\label{6.11} \Theta[K(w),K(u)]=\Theta K(\rho^{L}(K(w))u+\rho^{R}(K(u))w)=\rho^{L}(K(w))\Theta K(u)+\rho^{R}(K(u))\Theta K(w).%\eqno(6.12)$$ Replacing $K(w),~K(u)$ by $x_0,~x_1$ respectively in , $$\label{6.13} \Theta[x_0,x_1]=\rho^{L}(x_0)\Theta(x_1)+\rho^{R}(x_1)\Theta(x_0).%\eqno(6.13)$$ Therefore $\Theta$ is a solution of the strong Maurer-Cartan equation by Theorem 2.1. In view of Theorem 4.4, Theorem 6.2 and Proposition 5.5, the following result is clear. [**Theorem 6.5.**]{} Suppose that $K : V\longrightarrow \mathfrak g$ is a Kupershmidt operator over the representation $(V, \rho^{L},\rho^{R})$ and $\Theta:\mathfrak g \longrightarrow V$ is a solution of the strong Maurer-Cartan equation on the twilled Leibniz algebra $\mathfrak g\bowtie V_{K}$. Then $K\Theta K$ is a Kupershmidt operator. Moreover, $K$ and $N K$ are compatible. Leibniz $r-n$ structures, RBN-structures and $\mathcal{B}N$-structures ======================================================================= [**Definition 7.1.**]{} Let $\pi$ be a classical Leibniz $r$-matrix, and $N: \mathfrak g\longrightarrow \mathfrak g$ a Nijenhuis operator.
We say that $(\pi,N)$ is a [*Leibniz $r-n$ structure*]{} of the Leibniz algebra $\mathfrak g$ if for every $\alpha,\beta\in \mathfrak g^{*}$ $$\begin{aligned} \label{7.1} N\pi^{\sharp}&=\pi^{\sharp}N^{*},\\ \label{7.2} [\alpha,\beta]^{N\pi^{\sharp}}&=[\alpha, \beta]_{N^{*}}^{\pi^{\sharp}}.\end{aligned}$$ We immediately have the following. [**Theorem 7.2.**]{} Suppose $(\pi,N)$ is a Leibniz $r-n$ structure of $\mathfrak g$, then $(\pi^{\sharp}, N^{*}, N)$ is a dual KN-structure on the representation $(\mathfrak g^{*},L^{*},-L^{*}-R^{*})$. [**Definition 7.3.**]{} Let $\mathcal{R}:\mathfrak g\longrightarrow \mathfrak g$ be a Rota-Baxter operator and $N: \mathfrak g\longrightarrow \mathfrak g$ a Nijenhuis operator. A pair $(\mathcal{R},N)$ is called an [*RBN-structure*]{} of the Leibniz algebra $\mathfrak g$ if $$\begin{aligned} \label{7.3} N\mathcal{R}&=\mathcal{R}N,\\ \label{7.4} [x, y]^{N\mathcal{R}}&=[x,y]_{N}^{\mathcal{R}}.\end{aligned}$$ [**Example 7.4.**]{} Let $\mathfrak g$ be the Leibniz algebra with basis $\{\varepsilon_{0},\varepsilon_{1}\}$ and multiplication given by $$[\varepsilon_{0}, \varepsilon_{0}]=[\varepsilon_{0}, \varepsilon_{1}]=0,~~[\varepsilon_{1}, \varepsilon_{0}]=[\varepsilon_{1}, \varepsilon_{1}]=\varepsilon_{0}.$$ Define the linear maps $\mathcal{R} $ and $N$ on $\mathfrak g$ as follows: $$\mathcal{R}(\varepsilon_{0}, \varepsilon_{1})= (\varepsilon_{0}, \varepsilon_{1})\left( \begin{array}{cc} 0& a \\ 0 & -a \\ \end{array} \right)$$ and $$N(\varepsilon_{0}, \varepsilon_{1})= (\varepsilon_{0}, \varepsilon_{1})\left( \begin{array}{cc} b_{11}& b_{11}-b_{22} \\ 0 & b_{22} \\ \end{array} \right).$$ Then $\mathcal{R}$ is a Rota-Baxter operator and $N: \mathfrak g\longrightarrow \mathfrak g$ is a Nijenhuis operator on $\mathfrak g$. By direct computation, $(\mathcal{R},N)$ is an RBN-structure.
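Since Example 7.4 is fully explicit, its claims can be checked numerically. The sketch below is illustrative only; it assumes a left Leibniz bracket with multiplication table $[\varepsilon_{1},\varepsilon_{0}]=[\varepsilon_{1},\varepsilon_{1}]=\varepsilon_{0}$ (all other products zero), and verifies the weight-zero Rota-Baxter identity for $\mathcal{R}$, the Nijenhuis identity for $N$, and the commutation $N\mathcal{R}=\mathcal{R}N$ required by an RBN-structure, for random values of $a, b_{11}, b_{22}$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b11, b22 = rng.standard_normal(3)

# Coordinates (c0, c1) w.r.t. the basis (eps0, eps1); the only nonzero products
# are [eps1, eps0] = [eps1, eps1] = eps0 (left Leibniz convention assumed).
def bracket(u, v):
    return np.array([u[1] * (v[0] + v[1]), 0.0])

# R and N act on coordinate columns by left multiplication with these matrices.
R = np.array([[0.0, a], [0.0, -a]])
N = np.array([[b11, b11 - b22], [0.0, b22]])

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for x in basis:
    for y in basis:
        # Rota-Baxter identity (weight 0): [Rx, Ry] = R([Rx, y] + [x, Ry])
        assert np.allclose(bracket(R @ x, R @ y),
                           R @ (bracket(R @ x, y) + bracket(x, R @ y)))
        # Nijenhuis identity: [Nx, Ny] = N([Nx, y] + [x, Ny] - N[x, y])
        assert np.allclose(bracket(N @ x, N @ y),
                           N @ (bracket(N @ x, y) + bracket(x, N @ y) - N @ bracket(x, y)))

assert np.allclose(N @ R, R @ N)  # the operators commute, as an RBN-structure requires
```

Both identities are bilinear in $x$ and $y$, so checking them on basis pairs suffices.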
[**Definition 7.5.**]{} [@ST] A [*quadratic Leibniz algebra*]{} is a Leibniz algebra with a nondegenerate skew-symmetric bilinear form $\mathfrak{q}\in \mathfrak g^{*}\otimes \mathfrak g^{*}$ satisfying the following invariant condition: $$\mathfrak{q}(x_0, [x_1, x_2]) = \mathfrak{q}([x_0, x_2] + [x_2, x_0], x_1),~~ \forall~ x_0, ~x_1, ~x_2 \in \mathfrak g.$$ Let $(\mathfrak g,[\ ,\ ],\mathfrak{q})$ be a quadratic Leibniz algebra. Then $({\mathfrak{q}}^{\sharp})^{-1}:\mathfrak g\longrightarrow \mathfrak g^{*}$ is an isomorphism from the regular representation $(\mathfrak g, L, R)$ to its dual representation $(\mathfrak g^{*},L^{*},-R^{*}-L^{*})$; moreover, $$\label{7.5} ({\mathfrak{q}}^{\sharp})^{-1}R=(-R^{*}-L^{*})({\mathfrak{q}}^{\sharp})^{-1}, ~~({\mathfrak{q}}^{\sharp})^{-1}L=L^{*}({\mathfrak{q}}^{\sharp})^{-1}.$$ [**Theorem 7.6.**]{} Let $(\mathfrak g,[\ ,\ ],\mathfrak{q})$ be a quadratic Leibniz algebra, $\mathcal{R}:\mathfrak g\longrightarrow \mathfrak g$ a linear map, and $N:\mathfrak g\longrightarrow \mathfrak g$ a Nijenhuis operator. Suppose that $\pi^{\sharp}=\mathcal{R}\mathfrak{q}^{\sharp}$ and $$\label{7.6} \mathfrak{q}^{\sharp}N^{*}=N\mathfrak{q}^{\sharp}.$$ Then $(\mathcal{R},N)$ is an RBN-structure on $(\mathfrak g,[\ ,\ ],\mathfrak{q})$ if and only if $(\pi,N)$ is a Leibniz $r-n$ structure. Since $\mathfrak{q}^{\sharp}$ is bijective, for any $\alpha_0,~\alpha_1\in \mathfrak g^{*}$ there are $x_0,~x_1\in \mathfrak g$ such that $$\alpha_0=(\mathfrak{q}^{\sharp})^{-1}(x_0),~\alpha_1=(\mathfrak{q}^{\sharp})^{-1}(x_1).$$ By [@ST Cor. 7.22], $\pi$ is a Leibniz $r$-matrix if and only if $\pi^{\sharp}(\mathfrak{q}^{\sharp})^{-1}$ is a Rota-Baxter operator on $(\mathfrak g,[\ ,\ ],\mathfrak{q})$.
$(\Longrightarrow)$ By and , we have $$N\pi^{\sharp}=\pi^{\sharp}N^{*}.$$ We now verify that $$[\alpha_0, \alpha_1]^{N\pi^{\sharp}} =[\alpha_0,\alpha_1] _{N^{*}}^{\pi^{\sharp}}.$$ By , we have $$\begin{aligned} \nonumber [\alpha_0, \alpha_1]^{\pi^{\sharp}} &=&L^{*}(\pi^{\sharp}(\alpha_0))\alpha_1-(L^{*}+R^{*})(\pi^{\sharp}(\alpha_1))\alpha_0\\ \nonumber &=&L^{*}(\pi^{\sharp}(\mathfrak{q}^{\sharp})^{-1}(x_0))(\mathfrak{q}^{\sharp})^{-1}(x_1) -(L^{*}+R^{*})(\pi^{\sharp}(\mathfrak{q}^{\sharp})^{-1}(x_1))(\mathfrak{q}^{\sharp})^{-1}(x_0)\\ \nonumber &=&L^{*}(\mathcal{R}(x_0))(\mathfrak{q}^{\sharp})^{-1}(x_1)-(L^{*}+R^{*})(\mathcal{R}(x_1))(\mathfrak{q}^{\sharp})^{-1}(x_0)\\ \nonumber &=&(\mathfrak{q}^{\sharp})^{-1}(L(\mathcal{R}(x_0))(x_1)+R(\mathcal{R}(x_1))(x_0))\\ \label{7.7} &=&(\mathfrak{q}^{\sharp})^{-1}([\mathcal{R}(x_0) ,x_1]+[x_0, \mathcal{R}(x_1)])=(\mathfrak{q}^{\sharp})^{-1}[x_0,x_1]^{\mathcal{R}}. %(7.7)\end{aligned}$$ According to and , $$\begin{aligned} &&[\alpha_0, \alpha_1]^{N\pi^{\sharp}} -[\alpha_0, \alpha_1]_{N^{*}}^{\pi^{\sharp}}=[\alpha_0, \alpha_1]^{N\pi^{\sharp}} -[N^{*}(\alpha_0), \alpha_1]^{\pi^{\sharp}}-[\alpha_0, N^{*}(\alpha_1)]^{\pi^{\sharp}}+N^{*}[\alpha_0,\alpha_1] ^{\pi^{\sharp}}\\ &=&(\mathfrak{q}^{\sharp})^{-1}([x_0,x_1]^{N\mathcal{R}} )-[N^{*}(\mathfrak{q}^{\sharp})^{-1}(x_0),(\mathfrak{q}^{\sharp})^{-1}(x_1)]^{\pi^{\sharp}}\\&&- [(\mathfrak{q}^{\sharp})^{-1}(x_0),N^{*}(\mathfrak{q}^{\sharp})^{-1}(x_1)]^{\pi^{\sharp}} + N^{*}[(\mathfrak{q}^{\sharp})^{-1}(x_0),(\mathfrak{q}^{\sharp})^{-1}(x_1)]^{\pi^{\sharp}} \\ &=&(\mathfrak{q}^{\sharp})^{-1}[x_0,x_1]^{N\mathcal{R}} -[(\mathfrak{q}^{\sharp})^{-1}N(x_0),(\mathfrak{q}^{\sharp})^{-1}(x_1)]^{\pi^{\sharp}} -[(\mathfrak{q}^{\sharp})^{-1}(x_0),(\mathfrak{q}^{\sharp})^{-1}N(x_1)]^{\pi^{\sharp}} \\&&+ N^{*} [(\mathfrak{q}^{\sharp})^{-1}(x_0),(\mathfrak{q}^{\sharp})^{-1}(x_1)]^{\pi^{\sharp}}\end{aligned}$$ $$\begin{aligned} &=&(\mathfrak{q}^{\sharp})^{-1}([x_0,x_1]^{N\mathcal{R}}
)-(\mathfrak{q}^{\sharp})^{-1}([N(x_0),x_1]^{\mathcal{R}} )-(\mathfrak{q}^{\sharp})^{-1}[x_0, N(x_1)]^{\mathcal{R}}+ N^{*} (\mathfrak{q}^{\sharp})^{-1}([x_0,x_1]^{\mathcal{R}}) \\ &=&(\mathfrak{q}^{\sharp})^{-1}[x_0,x_1]^{N\mathcal{R}} -(\mathfrak{q}^{\sharp})^{-1}[N(x_0),x_1]^{\mathcal{R}} -(\mathfrak{q}^{\sharp})^{-1}[x_0,N(x_1)]^{\mathcal{R}}+ (\mathfrak{q}^{\sharp})^{-1}N [x_0,x_1]^{\mathcal{R}} \\ &=&(\mathfrak{q}^{\sharp})^{-1}[x_0,x_1]^{N\mathcal{R}} -(\mathfrak{q}^{\sharp})^{-1}([N(x_0),x_1]^{\mathcal{R}} +[x_0, N(x_1)]^{\mathcal{R}} - N([x_0,x_1]^{\mathcal{R}})) \\ &=&(\mathfrak{q}^{\sharp})^{-1}([x_0,x_1]^{N\mathcal{R}} -[x_0,x_1]_{N}^{\mathcal{R}})=0.\end{aligned}$$ Therefore $$[\alpha_0,\alpha_1]^{N\pi^{\sharp}} =[\alpha_0,\alpha_1] _{N^{*}}^{\pi^{\sharp}}.$$ Hence $(\pi,N)$ is a Leibniz $r-n$ structure. $(\Longleftarrow)$ By and , we get $$N\mathcal{R}=\mathcal{R}N.$$ The remaining part is similar to the argument for the forward direction. A symmetric nondegenerate bilinear form $\mathcal{B}\in \mathfrak g^{*}\otimes \mathfrak g^{*}$ on a Leibniz algebra $\mathfrak g$ induces a linear map $\mathcal{B}^{\sharp}: \mathfrak g^{*}\longrightarrow \mathfrak g$ by $$\langle (\mathcal{B}^{\sharp})^{-1}(x_0),x_1\rangle=\mathcal{B}(x_0,x_1),~~\forall ~x_0,~x_1\in \mathfrak g.$$ Here $\mathcal{B}$ is nondegenerate if $\mathcal{B}^{\sharp}: \mathfrak g^{*}\longrightarrow \mathfrak g$ is an isomorphism. [**Definition 7.7.**]{} Let $\mathcal{B}$ be a symmetric nondegenerate bilinear form satisfying the closedness condition $$\label{7.8} \mathcal{B}(x_2,[x_0,x_1])=-\mathcal{B}(x_1,[x_0,x_2])+\mathcal{B}(x_0,[x_1,x_2])+\mathcal{B}(x_0,[x_2,x_1]),$$ and $N$ a Nijenhuis operator on a Leibniz algebra $\mathfrak g$.
We say that $(\mathcal{B}, N)$ is a [*$\mathcal{B}N$-structure*]{} on the Leibniz algebra if $$\label{7.9}\mathcal{B}(N(x_0),x_1)=\mathcal{B}(x_0,N(x_1)),$$ and the bilinear form $\mathcal{B}_{N}$ given by $\mathcal{B}_{N}(x_0,x_1)=\mathcal{B}(N(x_0),x_1)$ is also [*closed*]{}, that is, $$\label{7.10} \mathcal{B}(x_2,N[x_0,x_1])=-\mathcal{B}(x_1,N[x_0,x_2])+\mathcal{B}(x_0,N[x_1,x_2])+\mathcal{B}(x_0,N[x_2,x_1]).$$ [**Theorem 7.8.**]{} Let $N : \mathfrak g\longrightarrow \mathfrak g$ be a Nijenhuis operator and $\mathcal{B}\in \mathfrak g^{*}\otimes \mathfrak g^{*}$ a nondegenerate symmetric bilinear form over a Leibniz algebra $\mathfrak g$. If $(\mathcal{B}, N)$ is a $\mathcal{B}N$-structure, then $(\mathcal{B}^{\sharp}, S=N^{*},N)$ is a dual KN-structure on $(\mathfrak g^{*},L^{*}, -R^{*}-L^{*})$. In view of (7.9), $\mathcal{B}^{\sharp}N^{*}=N\mathcal{B}^{\sharp}.$ By [@ST Thm. 7.15], $\mathcal{B}^{\sharp} $ is a Kupershmidt operator. Now we claim that $$\label{7.11} [\alpha_0,\alpha_1]^{N^{*}\mathcal{B}^{\sharp}}=[\alpha_0,\alpha_1]_{N^{*}}^{\mathcal{B}^{\sharp}}.%\eqno (7.11)$$ Since $\mathcal{B}^{\sharp}$ is bijective, write $\alpha_0=(\mathcal{B}^{\sharp})^{-1}(x_0),~\alpha_1=(\mathcal{B}^{\sharp})^{-1}(x_1)$ for some $x_0,~x_1\in \mathfrak g$.
For any $x_2\in \mathfrak g$, in view of and , $$\begin{aligned} &&\langle[\alpha_0,\alpha_1]^{N^{*}\mathcal{B}^{\sharp}} -[\alpha_0,\alpha_1]_{N^{*}}^{\mathcal{B}^{\sharp}},x_2\rangle \\&=&\langle L^{*}(\mathcal{B}^{\sharp}(\alpha_0))N^{*}(\alpha_1)-L^{*}(\mathcal{B}^{\sharp}(\alpha_1))N^{*}(\alpha_0) -R^{*}(\mathcal{B}^{\sharp}(\alpha_1))N^{*}(\alpha_0) -N^{*}(L^{*}(\mathcal{B}^{\sharp}(\alpha_0))\alpha_1\\&&-L^{*}(\mathcal{B}^{\sharp}(\alpha_1))\alpha_0 -R^{*}(\mathcal{B}^{\sharp}(\alpha_1))\alpha_0),x_2\rangle \\&=&-\langle N^{*}(\alpha_1),L(\mathcal{B}^{\sharp}(\alpha_0))x_2\rangle+\langle N^{*}(\alpha_0),L(\mathcal{B}^{\sharp}(\alpha_1))x_2\rangle+ \langle N^{*}(\alpha_0),R(\mathcal{B}^{\sharp}(\alpha_1))x_2\rangle\\&&-\langle L^{*}(\mathcal{B}^{\sharp}(\alpha_0))\alpha_1,N(x_2)\rangle+\langle L^{*}(\mathcal{B}^{\sharp}(\alpha_1))\alpha_0,N(x_2)\rangle+\langle R^{*}(\mathcal{B}^{\sharp}(\alpha_1))\alpha_0,N(x_2)\rangle \\&=&-\langle \alpha_1,N[\mathcal{B}^{\sharp}(\alpha_0),x_2]\rangle +\langle \alpha_0,N[\mathcal{B}^{\sharp}(\alpha_1),x_2]\rangle +\langle \alpha_0,N[x_2,\mathcal{B}^{\sharp}(\alpha_1)]\rangle-\langle\alpha_1,[\mathcal{B}^{\sharp}(\alpha_0),N(x_2)]\rangle \\&&+\langle\alpha_0,[\mathcal{B}^{\sharp}(\alpha_1),N(x_2)]\rangle+\langle\alpha_0,[N(x_2),\mathcal{B}^{\sharp}(\alpha_1)]\rangle \\&=&-\mathcal{B}(x_1,N[x_0,x_2])+\mathcal{B}(x_0,N[x_1,x_2])+\mathcal{B}(x_0,N[x_2,x_1])-\mathcal{B}(x_1,[x_0,N(x_2)])\\&& +\mathcal{B}(x_0,[x_1,N(x_2)])+\mathcal{B}(x_0,[N(x_2),x_1]) \\&=&-\mathcal{B}(x_1,N[x_0,x_2])+\mathcal{B}(x_0,N[x_1,x_2])+\mathcal{B}(x_0,N[x_2,x_1])+\mathcal{B}(N(x_2),[x_0,x_1])=0,\end{aligned}$$ which implies that holds. The following result is a consequence of Proposition 5.6 and Theorem 7.8. [**Corollary 7.9.**]{} Let $N : \mathfrak g\longrightarrow \mathfrak g$ be a Nijenhuis operator and $\mathcal{B}\in \mathfrak g^{*}\otimes \mathfrak g^{*}$ be a nondegenerate symmetric bilinear form on a Leibniz algebra $\mathfrak g$.
If $(\mathcal{B}, N)$ is a $\mathcal{B}N$-structure, then $\mathcal{B}^{\sharp}$ and $N\mathcal{B}^{\sharp}$ are compatible Kupershmidt operators on $(\mathfrak g^{*},L^{*}, -R^{*}-L^{*})$. Acknowledgments {#acknowledgments .unnumbered} =============== The work is supported by the National Natural Science Foundation of China (grant no. 11401530), Simons Foundation (grant no. 523868) and the Natural Science Foundation of Zhejiang Province of China (grant no. LY19A010001). [ABCD]{} D. Balavoine, Deformation of algebras over a quadratic operad, Contemp. Math. AMS [202]{} (1997), 207–234. C. Bai, A unified algebraic approach to the classical Yang-Baxter equation, J. Phys. A [40]{} (36) (2007), 11073–11082. C. Bai, Double constructions of Frobenius algebras, Connes cocycles and their duality, J. Noncommut. Geom. [4]{} (4) (2010), 475–530. G. Baxter, An analytic problem whose solution follows from a simple algebraic identity, Pac. J. Math. 10 (1960), 731–742. C. Bai, O. Bellier, L. Guo, X. Ni, Splitting of operations, Manin products, and Rota-Baxter operators, Int. Math. Res. Not. 3 (2013), 485–524. M. Bordemann, F. Wagemann, Global integration of Leibniz algebras, J. Lie Theory 27 (2) (2017), 555–567. S. Covez, The local integration of Leibniz algebras, Ann. Inst. Fourier (Grenoble) 63 (2013), no. 1, 1–35. I. Ya. Dorfman, Dirac Structures and Integrability of Nonlinear Evolution Equations, John Wiley, 1993. I. Demir, K. C. Misra, E. Stitzinger, On some structures of Leibniz algebras, Recent advances in representation theory, quantum groups, algebraic geometry, and related topics, pp. 41-54, Contemp. Math. 623, Amer. Math. Soc., Providence, RI, 2014. B. Dherin, F. Wagemann, Deformation quantization of Leibniz algebras, Adv. Math. 270 (2015), 21–48. B. Fuchssteiner, A. S. Fokas, Symplectic structures, their Bäcklund transformations and hereditary symmetries, Physica D 4 (1) (1981), 47–66. A. Fialowski, A. Mandal, Leibniz algebra deformations of a Lie algebra, J. Math.
Phys. 49 (9) (2008), 093511. L. Guo, Introduction to Rota-Baxter algebras, Higher Education Press, Beijing, 2012. Y. W. Hu, J. F. Liu, Y. H. Sheng, Kupershmidt-(dual-)Nijenhuis structures on a Lie algebra with a representation, J. Math. Phys. 59 (2018), 081702, 14 pp. B. A. Kupershmidt, What a classical r-matrix really is, J. Nonlinear Math. Phys. 6 (4) (1999), 448–488. Y. Kosmann-Schwarzbach, F. Magri, Poisson-Nijenhuis structures, Ann. Inst. Henri Poincaré, Sect. A 53 (1990), 35–81. Y. Kosmann-Schwarzbach, V. Rubtsov, Compatible structures on Lie algebroids and Monge-Ampère operators, Acta Appl. Math. 109 (2010), 101–135. M. Kinyon, A. Weinstein, Leibniz algebras, Courant algebroids, and multiplications on reductive homogeneous spaces, Amer. J. Math. 123 (3) (2001), 525–550. J. L. Loday, Une version non commutative des algèbres de Lie: les algèbres de Leibniz, Enseign. Math. 39 (2) (1993), 269–293. J. Liu, C. Bai, Y. Sheng, Compatible $\mathcal{O}$-operators on bimodules over associative algebras, J. Algebra 532 (2019), 80–118. J. L. Loday, T. Pirashvili, Universal enveloping algebras of Leibniz algebras and (co)homology, Math. Ann. 296 (1993), 139–158. F. Magri, C. Morosi, A Geometrical Characterization of Integrable Hamiltonian Systems Through the Theory of Poisson-Nijenhuis Manifolds (Quaderno S/19, Milan, 1984), reissued Università di Milano Bicocca, Quaderno 3, 2008, (Quaderni di Dipartimento/2008-3). A. Nijenhuis, R. W. Richardson, Deformation of Lie algebra structures, J. Math. Mech. 17 (1967), 89–105. Z. Ravanpak, A. R. Aghdam, G. Haghighatdoost, Invariant Poisson-Nijenhuis structures on Lie groups and classification, Int. J. Geom. Methods Mod. Phys. 15 (4) (2018), 1850059. Y. Sheng, R. Tang, Deformation of Kupershmidt operators on Leibniz algebras and Leibniz bialgebras, arXiv:1902.03033v1. K. Uchino, Twisting on associative algebras and Rota-Baxter type operators, J. Noncommut. Geom. 4 (3) (2010), 349–379. S. Gomez-Vidal, A. Khudoyberdiyev, B. A.
Omirov, Some remarks on semisimple Leibniz algebras, J. Algebra 410 (2014), 526–540. Q. Wang, Y. Sheng, C. Bai, J. Liu, Nijenhuis operators on pre-Lie algebras, Commun. Contemp. Math. 21 (2019), 1850050. Y. Zhang, C. Bai, L. Guo, Totally compatible associative and Lie dialgebras, tridendriform algebras and PostLie algebras, Sci. China Math. 57 (2014), 259–273. [^1]: \*Corresponding author: Naihuan Jing
--- abstract: 'Batch Normalization (BN) improves both convergence and generalization in training neural networks. This work studies these phenomena theoretically. We analyze BN by using a basic block of neural networks, consisting of a kernel layer, a BN layer, and a nonlinear activation function. This basic network helps us understand the impacts of BN in three aspects. First, by viewing BN as an implicit regularizer, BN can be decomposed into population normalization (PN) and gamma decay as an explicit regularization. Second, the learning dynamics of BN and this regularization show that training converges with a large maximum and effective learning rate. Third, generalization of BN is explored by using statistical mechanics. Experiments demonstrate that BN in convolutional neural networks shares the same traits of regularization as the above analyses.' author: - | Ping Luo$^{1,3}$[^1]Xinjiang Wang$^{2\ast}$Wenqi Shao$^{1\ast}$Zhanglin Peng$^2$\ $^1$The Chinese University of Hong Kong$^2$SenseTime Research$^3$The University of Hong Kong\ title: | Towards Understanding Regularization in\ Batch Normalization --- Introduction {#sec:intro} ============ Batch Normalization (BN) is an indispensable component in many deep neural networks [@resnet; @densenet]. BN has been widely used in various areas such as machine vision, speech and natural language processing. Experimental studies [@BN] suggested that BN improves convergence and generalization by enabling a large learning rate and preventing overfitting when training deep networks. Understanding BN theoretically is a key question. This work investigates regularization of BN as well as its optimization and generalization in a single-layer perceptron, which is a building block of deep models, consisting of a kernel layer, a BN layer, and a nonlinear activation function such as ReLU.
The computation of BN is written as $$\label{eq:BN} \small y=g({{ \hat{h} }}),~~{{ \hat{h} }}=\gamma\frac{h-\mu_\mathcal{B}}{\sigma_\mathcal{B}}+\beta~~\mathrm{and}~~h={{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}.$$ This work denotes a scalar and a vector by using a lowercase letter ([*e.g.* ]{}$x$) and a bold lowercase letter ([*e.g.* ]{}$\textbf{x}$) respectively. In Eqn., $y$ is the output of a neuron, $g(\cdot)$ denotes an activation function, $h$ and ${{ \hat{h} }}$ are hidden values before and after batch normalization, and ${{ \mathbf{w} }}$ and ${{ \mathbf{x} }}$ are the kernel weight vector and the network input respectively. In BN, $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ represent the mean and standard deviation of $h$. They are estimated within a batch of samples for each neuron independently. $\gamma$ is a scale parameter and $\beta$ is a shift parameter. In what follows, Sec.\[sec:overview\] overviews assumptions and main results, and Sec.\[sec:related\] presents relationships with previous work. Overview of Results {#sec:overview} ------------------- We overview results in three aspects. $\bullet$ First, Sec.\[sec:view\] decomposes BN into population normalization (PN) and gamma decay. To better understand BN, we treat a single-layer perceptron with ReLU activation function as an illustrative case. Despite the simplicity of this case, it is a building block of deep networks and has been widely adopted in theoretical analyses such as proper initialization [@krogh_generalization_1992; @advani_high-dimensional_2017], dropout [@wager_dropout_2013], weight decay and data augmentation [@bos_statistical_1998]. The results in Sec.\[sec:view\] can be extended to deep neural networks as presented in Appendix \[app:deep-reg\]. Our analyses assume that neurons at the BN layer are independent, similar to [@WN; @l2BN; @BayeUnEs], as the mean and the variance of BN are estimated individually for each neuron of each layer.
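For concreteness, the BN computation of a single neuron can be sketched in a few lines of numpy; the function name, shapes, and parameter values below are illustrative, not part of the paper.

```python
import numpy as np

def bn_forward(X, w, gamma, beta, eps=1e-5):
    """BN for one neuron: y = relu(gamma * (h - mu_B) / sigma_B + beta),
    with h = w^T x and (mu_B, sigma_B) estimated over the batch X of shape (M, N)."""
    h = X @ w                         # pre-activations h^j, shape (M,)
    mu_B = h.mean()                   # batch mean
    sigma_B = np.sqrt(h.var() + eps)  # batch standard deviation
    h_hat = gamma * (h - mu_B) / sigma_B + beta
    return np.maximum(h_hat, 0.0)     # g = ReLU

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))     # a batch of M = 64 inputs with N = 10 features
w = rng.standard_normal(10)
y = bn_forward(X, w, gamma=1.0, beta=0.0)
```

In a layer with several neurons, the same per-neuron statistics are computed independently for each output channel, matching the assumption stated above.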
The form of regularization in this study does not rely on a Gaussian assumption on the network input and the weight vector, meaning our assumption is milder than those in [@WNdynamic; @LN; @WN]. Sec.\[sec:view\] tells us that BN has an explicit regularization form, gamma decay, where $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ have different impacts: (1) $\mu_\mathcal{B}$ [discourages]{} reliance on a single neuron and [encourages]{} different neurons to have equal magnitude, in the sense that corrupting an individual neuron does not harm generalization. This phenomenon was also found empirically in a recent work [@single], but has not been established analytically. (2) $\sigma_\mathcal{B}$ reduces kurtosis of the input distribution as well as [correlations]{} between neurons. (3) The regularization strengths of these statistics are [inversely proportional]{} to the batch size $M$, indicating that BN with a large batch would decrease generalization. (4) Removing either one of $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ could impede convergence and generalization. $\bullet$ Second, by using ordinary differential equations (ODEs), Sec.\[sec:dynamic\] shows that gamma decay enables the network trained with BN to converge with [large maximum learning rate and effective learning rate]{}, compared to the network trained without BN or trained with weight normalization (WN), which is a counterpart of BN. The maximum learning rate (LR) represents the largest LR value that allows training to converge to a fixed point without diverging, while the effective LR represents the actual LR in training. Larger maximum and effective LRs imply a faster convergence rate. $\bullet$ Third, Sec.\[sec:gen\] compares generalization errors of BN, WN, and vanilla SGD by using statistical mechanics. The “large-scale” regime is of interest, where the number of samples $P$ and the number of neurons $N$ are both large but their ratio $P/N$ is finite.
In this regime, the generalization errors are quantified both analytically and empirically. Numerical results in Sec.\[sec:exp\] show that BN in CNNs has the same traits of regularization as disclosed above. Related Work {#sec:related} ------------ **Neural Network Analysis**. Many studies conducted theoretical analyses of neural networks [@optimal-perceptron; @on-line-committe; @Dynamics-perceptron; @Geometry; @Electron-proton; @Globally; @Expressive; @landscape; @Yuandong]. For example, the SGD dynamics of a multilayer network with a linear activation function have been explored, and [@wopoor] showed that every local minimum of such a network is global. [@Yuandong] studied the critical points and convergence behaviors of a 2-layered network with ReLU units. [@Electron-proton] investigated a teacher-student model when the activation function is harmonic. In [@on-line-committe], the learning dynamics of a committee machine were discussed when the activation function is the error function $\mathrm{erf}(x)$. Unlike previous work, this work analyzes the regularization that emerges in BN and its impact on both learning and generalization, which have not been addressed in the literature. **Normalization**. Many normalization methods have been proposed recently. For example, BN [@BN] was introduced to stabilize the distribution of input data of each hidden layer. Weight normalization (WN) [@WN] decouples the lengths of the network parameter vectors from their directions, by normalizing the parameter vectors to unit length. The dynamics of WN were studied by using a single-layer network [@WNdynamic]. @Disharmony diagnosed the compatibility of BN and dropout [@dropout] by reducing the variance shift produced by them. Moreover, @l2BN showed that weight decay has no regularization effect when used together with BN or WN. @LN demonstrated that when BN or WN is employed, back-propagating gradients through a hidden layer is scale-invariant with respect to the network parameters.
@santurkar_how_2018 gave another perspective on the role of BN during training, other than reducing covariate shift. They argued that BN results in a smoother optimization landscape and that the Lipschitzness is strengthened in networks trained with BN. However, both analytical and empirical characterizations of the regularization in BN are still lacking. Our study explores regularization, optimization, and generalization of BN in the scenario of online learning. **Regularization**. @BN conjectured that BN [implicitly]{} regularizes training to prevent overfitting. @rethinking-generalization categorized BN as an implicit regularizer from experimental results. @szegedy_rethinking_2015 also conjectured that in the Inception network, BN behaves similarly to dropout to improve the generalization ability. @gitman_comparison_2017 experimentally compared BN and WN, and also confirmed the better generalization of BN. In the literature there are also [implicit regularization]{} schemes other than BN. For instance, random noise in the input layer for data augmentation has long been known to be equivalent to a weight decay method, in the sense that the inverse of the signal-to-noise ratio acts as the decay factor [@krogh_generalization_1992; @rifai_adding_2011]. Dropout [@dropout] was also proved to regularize training in the generalized linear model [@wager_dropout_2013]. A Probabilistic Interpretation of BN {#sec:view} ==================================== The notations in this work are summarized in Appendix Table \[tab:notation\] for reference. Training the above single-layer perceptron with BN in Eqn. typically involves minimizing a negative log-likelihood function with respect to a set of network parameters ${\theta}=\{{{ \mathbf{w} }},\gamma,\beta\}$.
Then the loss function is defined by $$\label{eq:loss} \small \frac{1}{P}\sum_{j=1}^P\ell(\hat{h}^j)=-\frac{1}{P}\sum_{j=1}^P\log p(y^j|\hat{h}^j;{\theta})+\zeta\|{\theta}\|_2^2,$$ where $p(y^j|\hat{h}^j;{\theta})$ represents the likelihood function of the network and $P$ is the number of training samples. As a Gaussian distribution is often employed as the prior distribution for the network parameters, we have a regularization term $\zeta\|{\theta}\|_2^2$ known as weight decay [@AlexNet], a popular technique in deep learning, where $\zeta$ is a coefficient. To derive regularization of BN, we treat $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ as random variables. Since one sample ${{ \mathbf{x} }}$ is seen many times in the entire training course, and at each time ${{ \mathbf{x} }}$ is presented with the other samples in a batch that is drawn randomly, $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ can be treated as injected random noise for ${{ \mathbf{x} }}$. **Prior of $\boldsymbol{\mu_{\mathcal{B}}},\boldsymbol{\sigma_{\mathcal{B}}}$.** By following [@BayeUnEs], we find that BN also induces Gaussian priors for $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$. We have $\mu_{\mathcal{B}}\sim\mathcal{N}(\mu_{\mathcal{P}},\frac{\sigma_{P}^{2}}{M})$ and $ \sigma_{\mathcal{B}}\sim\mathcal{N}(\sigma_{P},\frac{\rho+2}{4M})$, where $M$ is the batch size, $\mu_{\mathcal{P}}$ and $\sigma_{\mathcal{P}}$ are the population mean and standard deviation respectively, and $\rho$ is the kurtosis, which measures the peakedness of the distribution of $h$. These priors tell us that $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ would produce Gaussian noise in training. There is a tradeoff regarding this noise. For example, when $M$ is small, training could diverge because the noise is large. This is supported by the BN experiment in [@GN], where training diverges when $M=2$ on ImageNet [@imagenet12].
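The prior $\mu_{\mathcal{B}}\sim\mathcal{N}(\mu_{\mathcal{P}},\frac{\sigma_{P}^{2}}{M})$ above implies that the noise injected by the batch mean shrinks as $1/M$, which is easy to check by simulation. The sketch below is illustrative (Gaussian pre-activations, arbitrary batch sizes) and estimates the variance of $\mu_{\mathcal{B}}$ across random batches for two values of $M$.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_mean_variance(M, trials=20000):
    """Variance of the batch mean mu_B across random batches of size M,
    for pre-activations h ~ N(0, 1), i.e. sigma_P^2 = 1."""
    h = rng.standard_normal((trials, M))
    return h.mean(axis=1).var()

# mu_B ~ N(mu_P, sigma_P^2 / M) predicts a 16x variance drop from M=8 to M=128.
v_small, v_large = batch_mean_variance(8), batch_mean_variance(128)
print(v_small / v_large)  # close to 128 / 8 = 16
```

The same experiment with the batch standard deviation exhibits the $1/M$ scaling of its fluctuations as well.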
When $M$ is large, the noise is small because $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ get close to $\mu_{\mathcal{P}}$ and $\sigma_{\mathcal{P}}$. It is known that $M>30$ provides moderate noise, as the sample statistics converge in probability to the population statistics by the weak Law of Large Numbers. This is also supported by the experiment in [@BN], where BN with $M=32$ already works well on ImageNet. A Regularization Form --------------------- The loss function in Eqn.(\[eq:loss\]) can be written as an expected loss by integrating over the priors of $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$, that is, $\frac{1}{P}\sum_{j=1}^P\mathbb{E}_{\mu_\mathcal{B},\sigma_\mathcal{B}}[\ell(\hat{h}^j)]$ where $\mathbb{E}[\cdot]$ denotes expectation. We show that $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ impose regularization on the scale parameter $\gamma$ by decomposing BN into population normalization (PN) and gamma decay. To see this, we employ a single-layer perceptron and ReLU activation function as an illustrative example. A more rigorous description is provided in Appendix \[app:theorem\]. **Regularization of $\boldsymbol{\mu_\mathcal{B}},\boldsymbol{\sigma_\mathcal{B}}$.** Let $\ell(\hat{h})$ be the loss function defined in Eqn. and ReLU be the activation function. We have $$\small \frac{1}{P}\sum_{j=1}^P\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\ell(\hat{h}^j)\simeq \underbrace{\frac{1}{P}\sum_{j=1}^P\ell(\bar{h}^j)}_{\mathrm{PN}}~+\underbrace{\zeta(h)\gamma^2}_{\mathrm{gamma~decay}}, ~~\mathrm{and}~~\zeta(h)=\underbrace{\frac{\rho+2}{8M}\mathcal{I}(\gamma)}_{\mathrm{from~} \sigma_{\mathcal{B}}}+\underbrace{\frac{1}{2M}\frac{1}{P}\sum_{j=1}^P\sigma(\bar{h}^j)}_{\mathrm{from~} \mu_{\mathcal{B}}},\label{eq:theorem1}$$ where $\bar{h}^j=\gamma\frac{h^j-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}+\beta$ and $h^j={{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j$ represent the computations of PN.
$\zeta(h)\gamma^2$ represents gamma decay, where $\zeta(h)$ is an adaptive decay factor that depends on the hidden value $h$. Moreover, $\rho$ is the kurtosis of the distribution of $h$, $\mathcal{I}(\gamma)$ represents an estimation of the Fisher information of $\gamma$ and $\mathcal{I}(\gamma)=\frac{1}{P}\sum_{j=1}^P(\frac{\partial\ell(\hat{h}^j)}{\partial\gamma})^2$, and $\sigma(\cdot)$ is a sigmoid function. From Eqn., we have several observations that have both theoretical and practical values. $\bullet$ First, PN replaces the batch statistics $\mu_{\mathcal{B}},\sigma_{\mathcal{B}}$ in BN by the population statistics $\mu_{\mathcal{P}},\sigma_{\mathcal{P}}$. In gamma decay, the computation of $\zeta(h)$ is [data-dependent]{}, making it different from weight decay where the coefficient is determined manually. In fact, Eqn. recasts the randomness of BN in a deterministic manner, not only enabling us to apply methodologies such as ODEs and statistical mechanics to analyze BN, but also inspiring us to imitate BN’s performance by WN without computing batch statistics in the empirical study. $\bullet$ Second, PN is closely connected to WN, which is independent of the sample mean and variance. WN [@WN] is defined by $\upsilon\frac{{{{ \mathbf{w} }}}^{T}{{{ \mathbf{x} }}}}{{||{{ \mathbf{w} }}||_2}}$, which normalizes the weight vector ${{ \mathbf{w} }}$ to have unit length, where $\upsilon$ is a learnable parameter. Let each diagonal element of the covariance matrix of ${{ \mathbf{x} }}$ be $a$ and all the off-diagonal elements be zeros. $\bar{h}^j$ in Eqn. can be rewritten as $$\label{eq:h} \small \bar{h}^j=\gamma\frac{\mathbf{w}^{T}\mathbf{x}^j-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}+\beta=\upsilon\frac{{{{ \mathbf{w} }}}^{T}{{{ \mathbf{x} }}}^j}{{||{{ \mathbf{w} }}||_2}}+b,$$ where $\upsilon=\frac{\gamma}{a}$ and $b=-\frac{\gamma\mu_{\mathcal{P}}}{a{||{{ \mathbf{w} }}||_2}}+\beta$. Eqn.(\[eq:h\]) removes the estimations of statistics and eases our analyses of regularization for BN.
$\bullet$ Third, $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ produce different strengths in $\zeta(h)$. As shown in Eqn.(\[eq:theorem1\]), the strength from $\mu_\mathcal{B}$ depends on the expectation of $\sigma(\bar{h}^j)\in[0,1]$, which represents excitation or inhibition of a neuron, meaning that [a neuron with larger output may be exposed to larger regularization]{}, encouraging different neurons to have equal magnitude. This is consistent with the empirical result of [@single], which prevents reliance on a single neuron to improve generalization. The strength from $\sigma_\mathcal{B}$ complements that from $\mu_\mathcal{B}$. For a single neuron, $\mathcal{I}(\gamma)$ represents the norm of the gradient, implying that BN penalizes a large gradient norm. For multiple neurons, $\mathcal{I}(\gamma)$ is the Fisher information matrix of $\gamma$, meaning that BN penalizes correlations among neurons. Both $\sigma_\mathcal{B}$ and $\mu_\mathcal{B}$ are important; removing either of them would impede performance. **Extensions to Deep Networks.** The above results can be extended to deep networks, as shown in Appendix \[app:deep-reg\], by decomposing the expected loss at a certain hidden layer. We also demonstrate the results empirically in Sec.\[sec:exp\], where we observe that CNNs trained with BN share similar traits of regularization as discussed above. Optimization with Regularization {#sec:dynamic} ================================ Now we show that BN converges with a large maximum and effective learning rate (LR), where the former is the largest LR at which training converges, while the latter is the actual LR during training. With BN, we find that both LRs are larger than in a network trained without BN. Our result explains why BN enables the large learning rates used in practice [@BN]. Our analyses are conducted in three stages. First, we establish dynamical equations of a teacher-student model in the thermodynamic limit and acquire the fixed point.
Second, we investigate the eigenvalues of the corresponding Jacobian matrix at this fixed point. Finally, we calculate the maximum and the effective LR. **Teacher-Student Model**. We first introduce useful techniques from statistical mechanics (SM). In SM, a student network learns the relationship between a Gaussian input and an output by using a weight vector ${{ \mathbf{w} }}$ as parameters. It is useful to characterize the behavior of the student by using a teacher network with ${{ \mathbf{w} }}^\ast$ as a ground-truth parameter vector. We treat a single-layer perceptron as the student, which is optimized by minimizing the Euclidean distance between its output and the supervision provided by a teacher without BN. The student and the teacher have identical activation functions. **Loss Function.** We define a loss function of the above teacher-student model by $\frac{1}{P}\sum_{j=1}^P\ell({{ \mathbf{x} }}^j)=\frac{1}{P}\sum_{j=1}^P\big[g({{{ \mathbf{w} }}^\ast}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j)- g(\sqrt{N}\gamma\frac{{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j }{\|{{ \mathbf{w} }}\|_2})\big]^2+\zeta\gamma^2$, where $g({{{ \mathbf{w} }}^\ast}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j)$ represents the supervision from the teacher, while $g(\sqrt{N}\gamma\frac{{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j }{\|{{ \mathbf{w} }}\|_2})$ is the output of the student trained to mimic the teacher. This student follows Eqn.(\[eq:h\]) with $\upsilon=\sqrt{N}\gamma$, and the bias term is absorbed into ${{ \mathbf{w} }}$. The above loss function represents BN by WN with gamma decay, and it is sufficient for studying the learning rates of the different approaches. Let $\theta=\{{{{ \mathbf{w} }}},\gamma\}$ be the set of parameters updated by SGD, [*i.e.* ]{}$\theta^{j+1}=\theta^{j}-\eta\frac{\partial\ell({{ \mathbf{x} }}^j)}{\partial\theta^j}$, where $\eta$ denotes the learning rate.
The update rules for ${{ \mathbf{w} }}$ and $\gamma$ are $$\small {{{ \mathbf{w} }}^{j+1}} -{{ \mathbf{w} }}^{j}={ \eta\delta^j(\frac{\gamma^j\sqrt{N}}{\|{{ \mathbf{w} }}^j\|_2}{{ \mathbf{x} }}^j- \frac{{{{ {\tilde{{{ \mathbf{w} }}}^j} }}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j}{\|{{ \mathbf{w} }}^j\|_2^2}{{ \mathbf{w} }}^j)}~~~\mathrm{and}~~~ {\gamma^{j+1}} -\gamma^{j}=\eta(\frac{\delta^j\sqrt{N}{{{ \mathbf{w} }}^j}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j}{\|{{ \mathbf{w} }}^j\|_2} -\zeta\gamma^j), \label{eq:w}$$ where ${{ \tilde{{{ \mathbf{w} }}} }}^j$ denotes the normalized weight vector of the student, that is, ${{ \tilde{{{ \mathbf{w} }}} }}^j=\sqrt{N}\gamma^j\frac{{{ \mathbf{w} }}^j}{\|{{ \mathbf{w} }}^j\|_2}$, and $\delta^j=g'({{ {\tilde{{{ \mathbf{w} }}}^j} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j)[g({{{ \mathbf{w} }}^\ast}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j)- g({{ {\tilde{{{ \mathbf{w} }}}^j} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j)]$ represents the gradient[^2], introduced for clarity of notation. **Order Parameters**. As we are interested in the “large-scale” regime where both $N$ and $P$ are large and their ratio $P/N$ is finite, it is difficult to examine a student with high-dimensional parameters directly. Therefore, we transform the weight vectors into order parameters that fully characterize the interactions between the student and the teacher network. The parameter vector can then be reparameterized by a vector of three elements: $\gamma$, $R$, and $L$. In particular, $\gamma$ measures the length of the normalized weight vector ${{ \tilde{{{ \mathbf{w} }}} }}$, that is, ${{{ \tilde{{{ \mathbf{w} }}} }}}{{^{\mkern-1.5mu\mathsf{T}}}}{{{ \tilde{{{ \mathbf{w} }}} }}}=N\gamma^2\frac{{{{ \mathbf{w} }}}{{^{\mkern-1.5mu\mathsf{T}}}}{{{ \mathbf{w} }}}}{\|{{ \mathbf{w} }}\|_2^2}=N\gamma^2$. The parameter $R$ measures the angle (overlapping ratio) between the weight vectors of the student and the teacher.
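As a concrete illustration, the update rules in Eqn.(\[eq:w\]) can be sketched in a few lines of NumPy. This is a hedged sketch rather than the code used in the paper: the function name and toy dimensions are ours, and `g`, `g_prime` stand for the common teacher/student activation and its derivative.

```python
import numpy as np

def sgd_step(w, gamma, x, w_star, eta, zeta, g, g_prime):
    """One SGD step of the normalized student (a sketch of Eqn. (eq:w))."""
    N = w.shape[0]
    norm = np.linalg.norm(w)
    w_tilde = np.sqrt(N) * gamma * w / norm                  # normalized weights
    delta = g_prime(w_tilde @ x) * (g(w_star @ x) - g(w_tilde @ x))
    # w-update: a term along x minus a term along w, as in Eqn. (eq:w)
    w_new = w + eta * delta * (gamma * np.sqrt(N) / norm * x
                               - (w_tilde @ x) / norm**2 * w)
    # gamma-update: gradient term plus the gamma-decay pull -zeta*gamma
    gamma_new = gamma + eta * (delta * np.sqrt(N) * (w @ x) / norm - zeta * gamma)
    return w_new, gamma_new
```

Setting `zeta > 0` pulls `gamma` toward zero at rate `eta * zeta` per step, which is exactly the gamma-decay term; with `zeta = 0` the update reduces to plain WN training.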
We have $R=\frac{{{ \tilde{{{ \mathbf{w} }}} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{w} }}^\ast}{\|{{ \tilde{{{ \mathbf{w} }}} }}\|\|{{ \mathbf{w} }}^\ast\|}=\frac{1}{N\gamma}{{ \tilde{{{ \mathbf{w} }}} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{w} }}^\ast$, where the norm of the ground-truth vector is $\frac{1}{N}{{ \mathbf{w} }}^\ast{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{w} }}^\ast=1$. Moreover, $L$ represents the length of the original weight vector ${{ \mathbf{w} }}$ and $L^2=\frac{1}{N}{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{w} }}$. **Learning Dynamics.** The update equations can be transformed into a set of ordinary differential equations (ODEs) by using the above order parameters. This is achieved by treating the update step $j$ as a continuous time variable $t=\frac{j}{N}$; the transformation is valid because the discrete step $\Delta t=\frac{1}{N}$ approaches zero in the thermodynamic limit $N\rightarrow\infty$. We obtain a dynamical system of three order parameters $$\frac{d \gamma}{d t}=\eta\frac{I_{1}}{\gamma}-\eta\zeta \gamma,\label{eq:Q}~~~ \frac{d R}{dt}=\eta\frac{\gamma}{{L}^2}I_{3}-\eta\frac{R}{{L}^2}I_{1} -\eta^{2}\frac{\gamma^{2}R}{2{L}^{4}}I_{2},~~~\mathrm{and}~~~ \frac{d {L}}{dt}=\eta^{2}\frac{\gamma^{2}}{2{L}^{3}}I_{2},$$ where $I_1=\mathds{E}_{{ \mathbf{x} }}[\delta{{{ \tilde{{{ \mathbf{w} }}} }}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}]$, $I_2=\mathds{E}_{{ \mathbf{x} }}[\delta^2{{ \mathbf{x} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}]$, and $I_3=\mathds{E}_{{ \mathbf{x} }}[\delta{{{ \mathbf{w} }}^\ast}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}]$ are defined to simplify notation. The derivation of these equations can be found in [Appendix]{} \[app:dyn\].
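Since ${{ \tilde{{{ \mathbf{w} }}} }}$ depends only on $\gamma$ and $R$, the expectations $I_1$, $I_2$, and $I_3$ can be estimated by plain Monte-Carlo sampling. The sketch below is our own illustration, not the paper's code: it constructs synthetic weight vectors consistent with the given order parameters and draws inputs as $x\sim\mathcal{N}(0,I_N)$, a toy scaling chosen for simplicity.

```python
import numpy as np

def mc_integrals(gamma, R, g, g_prime, N=200, n_samples=20000, seed=0):
    """Monte-Carlo estimates of I1 = E[delta w~^T x], I2 = E[delta^2 x^T x],
    I3 = E[delta w*^T x] for given order parameters (gamma, R)."""
    rng = np.random.default_rng(seed)
    e1, e2 = np.zeros(N), np.zeros(N)
    e1[0], e2[1] = 1.0, 1.0
    w_star = np.sqrt(N) * e1                       # ground truth, w*^T w* = N
    w_hat = R * e1 + np.sqrt(1.0 - R**2) * e2      # unit vector with overlap R
    w_tilde = np.sqrt(N) * gamma * w_hat           # normalized student weights
    x = rng.standard_normal((n_samples, N))        # inputs x ~ N(0, I_N)
    s_t, s_s = x @ w_star, x @ w_tilde             # teacher/student pre-activations
    delta = g_prime(s_s) * (g(s_t) - g(s_s))
    I1 = float(np.mean(delta * s_s))
    I2 = float(np.mean(delta**2 * np.sum(x * x, axis=1)))
    I3 = float(np.mean(delta * s_t))
    return I1, I2, I3
```

For the identity activation these averages have closed forms under this toy scaling, e.g. $I_1=N\gamma(R-\gamma)$ and $I_3=N(1-\gamma R)$, which the estimator reproduces up to sampling noise.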
Fixed Point of the Dynamical System {#sec:fix-suff-data} -----------------------------------

|      | $(\gamma_0,R_0,L_0)$ | $\eta_{\max}$ ($R$) | $\eta_{{ \mathrm{eff} }}$ ($R$) |
|------|----------------------|---------------------|---------------------------------|
| BN   | $(\gamma_0,1,L_0)$ | $\big(\frac{\partial(\gamma_0 I_3-I_1)}{\gamma_0 \partial R}-\zeta\gamma_0\big)/\frac{\partial I_2}{2\partial R}$ | $\frac{\eta\gamma_0}{L_0^2}$ |
| WN   | $(1,1,L_0)$ | $\frac{\partial(I_3-I_1)}{\partial R}/\frac{\partial I_2}{2\partial R}$ | $\frac{\eta}{L^2_0}$ |
| SGD  | $(1,1,1)$ | $\frac{\partial(I_3-I_1)}{\partial R}/\frac{\partial I_2}{2\partial R}$ | $\eta$ |

: Fixed points, maximum and effective learning rates of BN, WN, and vanilla SGD.[]{data-label="tab:lr"}

To find the fixed points of (\[eq:Q\]), we set $d\gamma/dt=dR/dt=d{L}/dt=0$. The fixed points of BN, WN, and vanilla SGD (without BN and WN) are given in Table \[tab:lr\]. In the thermodynamic limit, the optimum, denoted $(\gamma_0,R_0,L_0)$, would be $(1,1,1)$. Our main interest is the overlapping ratio $R_0$ between the student and the teacher, because it captures the direction of the weight vector regardless of its length. We see that $R_0$ attains the optimum ‘1’ for all three approaches. Intuitively, in BN and WN this optimal solution does not depend on the value of $L_0$ because their weight vectors are normalized. In other words, WN and BN are easier to optimize than vanilla SGD, where both $R_0$ and $L_0$ have to be optimized to ‘1’. Furthermore, $\gamma_0$ in BN depends on the activation function. For ReLU, we have $\gamma_0^{{ {b\hspace{-1pt}n} }}=\frac{1}{2\zeta+1}$ (see Proposition \[prop:fixp\] in Appendix \[app:dyn\]), meaning that the norm of the normalized weight vector depends on the decay factor $\zeta$. In WN, we have $\gamma_0^{{ {w\hspace{-1pt}n} }}=1$ as WN imposes no regularization on $\gamma$. Maximum and Effective Learning Rates ------------------------------------ With the above fixed points, we derive the maximum and the effective LR.
Specifically, we analyze the eigenvalues and eigenvectors of the Jacobian matrix of the dynamical system in Eqn.(\[eq:Q\]). We are interested in the LR at which $R$ approaches $R_0$; the convergence toward this optimum depends only on the corresponding eigenvalue, denoted $\lambda_R$. We have $\lambda_R=\frac{\partial I_2}{\partial R}\frac{\eta{\gamma_0}}{2L_0^2} (\eta_{\max}-\eta_{{{ \mathrm{eff} }}})$, where $\eta_{\max}$ and $\eta_{{{ \mathrm{eff} }}}$ represent the maximum and effective LR (Proposition \[prop:eigen\] in Appendix \[app:dyn\]), which are given in Table \[tab:lr\]. We demonstrate that $\lambda_R<0$ if and only if $\eta_{\max}>\eta_{{{ \mathrm{eff} }}}$, such that the fixed point $R_0$ is stable for all approaches (Proposition \[prop:constraint\] in Appendix \[app:lr\]). Moreover, one can also show that $\eta_{\max}$ of BN ($\eta_{\max}^{{ {b\hspace{-1pt}n} }}$) is larger than that of WN and SGD, enabling $R$ to converge with a larger learning rate. For ReLU, for example, we find that $\eta_{\max}^{{ {b\hspace{-1pt}n} }}\geq\eta_{\max}^{\{{{ {w\hspace{-1pt}n} }},{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}\}}+2\zeta$ (Proposition \[prop:maxeta\] in Appendix \[app:maxlr\]). The larger maximum LR enables the network to be trained more stably and has the potential to be combined with other stabilization techniques [@Robust] during optimization. The effective LRs shown in Table \[tab:lr\] are consistent with previous work [@l2BN]. Generalization Analysis {#sec:gen} ======================= Here we investigate the generalization of BN by using a teacher-student model that minimizes the loss function $\frac{1}{P}\sum_{j=1}^{P}((y^\ast)^{j}-y^{j})^{2}$, where ${y^\ast}$ represents the teacher’s output and $y$ is the student’s output. We compare BN with WN+gamma decay and vanilla SGD.
All of them share the same teacher network whose output is a noise-corrupted linear function $y^\ast={{{ \mathbf{w} }}^{\ast}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}+s$, where ${{ \mathbf{x} }}$ is drawn from $\mathcal{N}(0,\frac{1}{N})$ and $s$ is an unobserved Gaussian noise. We are interested in how the above methods resist this noise, using student networks with both identity (linear) and ReLU activation functions. For **vanilla SGD**, the student is computed by $y=g({{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }})$ with $g(\cdot)$ being either identity or ReLU, and ${{ \mathbf{w} }}$ being the weight vector to optimize, where ${{ \mathbf{w} }}$ has the same dimension as ${{ \mathbf{w} }}^\ast$. The loss function of vanilla SGD is $\ell^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}=\frac{1}{P}\sum_{j=1}^{P}\big((y^\ast)^j-g({{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j)\big)^{2}$. For **BN**, the student is defined as $y=\gamma\frac{{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}-\mu_{\mathcal{B}}}{\sigma_{\mathcal{B}}}+\beta$. As our main interest is the weight vector, we freeze the bias by setting $\beta=0$. The batch average term $\mu_\mathcal{B}$ is then also dropped to avoid additional parameters, and the loss function is written as $\ell^{{ {b\hspace{-1pt}n} }}=\frac{1}{P}\sum_{j=1}^{P}\big((y^\ast)^j-\gamma{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j/\sigma_{\mathcal{B}}\big)^{2}$. For **WN+gamma decay**, the student is computed similarly to Eqn.(\[eq:h\]) by using $y=\sqrt{N}\gamma\frac{{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}}{\|{{ \mathbf{w} }}\|_2}$. Then the loss function is defined by $\ell^{{ {w\hspace{-1pt}n} }}=\frac{1}{P}\sum_{j=1}^P\big((y^\ast)^j-\sqrt{N}\gamma\frac{{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j }{\|{{ \mathbf{w} }}\|_2}\big)^2+\zeta\|\gamma\|^2_2$.
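For concreteness, the three per-sample objectives can be written out directly. The snippet below is a sketch with our own variable names, assuming a linear student; as in the text, the BN student drops $\mu_\mathcal{B}$ and sets $\beta=0$.

```python
import numpy as np

def loss_sgd(w, x, y_star, g=lambda z: z):
    """Vanilla SGD student: plain squared error."""
    return (y_star - g(x @ w))**2

def loss_bn(w, gamma, x, y_star, X_batch):
    """BN student: pre-activation rescaled by the batch std (mean term and beta dropped)."""
    sigma_b = np.std(X_batch @ w)        # batch statistic, recomputed per batch
    return (y_star - gamma * (x @ w) / sigma_b)**2

def loss_wn(w, gamma, x, y_star, zeta):
    """WN + gamma decay student: weight-normalized output plus zeta * gamma^2."""
    N = w.shape[0]
    y = np.sqrt(N) * gamma * (x @ w) / np.linalg.norm(w)
    return (y_star - y)**2 + zeta * gamma**2
```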
With the above definitions, the three approaches are studied under the same teacher-student framework, where their generalization errors can be strictly compared with the other factors ruled out. Generalization Errors --------------------- ![image](student-teacher_with_relu.pdf){width="35.00000%"} We provide closed-form solutions of the generalization errors (see Appendix \[sub:gen\]) for vanilla SGD with both linear and ReLU student networks. The theoretical solution of WN+gamma decay can also be obtained for the linear student, but remains difficult for the ReLU student, for which numerical verification is provided instead. Both vanilla SGD and WN+gamma decay are compared with numerical solutions of BN. **vanilla SGD.** For the identity (linear) student, the generalization error depends on the rank of the correlation matrix $\mathbf{\Sigma}={{ \mathbf{x} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}$. Here we define an effective load $\alpha=P/N$, the ratio between the number of samples $P$ and the number of input neurons $N$ (the number of learnable parameters). The generalization error of the identity student is denoted as $\epsilon_{\mathrm{id}}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}$, which can be obtained from the distribution of eigenvalues of $\mathbf{\Sigma}$ following [@advani_high-dimensional_2017]. If $\alpha<1$, $\epsilon_{\mathrm{id}}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}=1-\alpha+{\alpha S}{/(1-\alpha)}$; otherwise, $\epsilon_{\mathrm{id}}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}={S}{/(\alpha-1)}$, where $S$ is the variance of the noise injected into the teacher network. The values of $\epsilon_{\mathrm{id}}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}$ with respect to $\alpha$ are plotted as the blue curve in Fig.\[fig:st\_loss\](a): the error first decreases and then increases as $\alpha$ grows from 0 to 1, diverges at $\alpha=1$, and decreases again when $\alpha>1$.
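The piecewise solution above is simple enough to state as code. The helper below is our own sketch of the two branches; it is undefined at the divergence point $\alpha=1$.

```python
def eps_id_sgd(alpha, S):
    """Generalization error of the linear student trained by vanilla SGD.

    alpha = P/N is the effective load and S the variance of the teacher noise.
    The error diverges as alpha -> 1 from either side.
    """
    if alpha < 1.0:
        return 1.0 - alpha + alpha * S / (1.0 - alpha)
    return S / (alpha - 1.0)
```

With $S=0.25$, the value used in Fig.\[fig:st\_loss\], the error first falls, blows up near $\alpha=1$, and decays again for $\alpha>1$.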
For the ReLU student, the nonlinear activation makes the theoretical solution difficult to derive. Here we utilize statistical mechanics and calculate that $\epsilon_{\mathrm{relu}}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}= 1-\alpha/4+\frac{\alpha S}{2(2-\alpha)}$ for $\alpha<2$ (see Appendix \[sub:equi\_order\]). When compared to the lower bound (trained without noisy supervision) shown as the red curve in Fig.\[fig:st\_loss\](b), we see that $\epsilon_{\mathrm{relu}}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}$ (blue curve) diverges at $\alpha=2$. This is because the student overfits the noise in the teacher’s output. The numerical solution is also plotted as a dashed line in Fig.\[fig:st\_loss\](b) and captures the diverging trend well. It should be noted that obtaining the theoretical curve empirically requires an infinitely long time of training and an infinitely small learning rate; this unreachable limit explains the discrepancies between the theoretical and the numerical solutions. **WN+gamma decay.** For the linear student, the gamma decay term turns the correlation matrix into $\mathbf{\Sigma}=\left(\mathbf{x}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x}+\zeta\mathbf{I}\right)$, which is positive definite. Following statistical mechanics [@krogh_generalization_1992], the generalization error is $ \epsilon_{\mathrm{id}}^{{ {w\hspace{-1pt}n} }}=\delta^{2}\frac{\partial\left(\zeta G\right)}{\partial\zeta}-\zeta^{2}\frac{\partial G}{\partial\zeta}$ [where]{} $G={1-\alpha-\zeta+{\big(\zeta+(1+\sqrt{\alpha})^{2}\big)^{\frac{1}{2}} \big(\zeta+(1-\sqrt{\alpha})^{2}\big)^{\frac{1}{2}}}}\big/{2\zeta}. $ Hence $\epsilon_{\mathrm{id}}^{{ {w\hspace{-1pt}n} }}$ can be computed quantitatively given the values of $\zeta$ and $\alpha$. Let the variance of the noise injected into the teacher be $0.25$. Fig.\[fig:st\_loss\](a) shows that no other curve outperforms the red curve with $\zeta=0.25$, a value equal to the noise magnitude.
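Both closed forms above are straightforward to evaluate. In the sketch below (ours, not the paper's code), $G$ is implemented directly, the $\zeta$-derivatives are taken by central differences, and $\delta^2$ is interpreted as the variance of the teacher noise; the ReLU vanilla-SGD error is included for comparison.

```python
import numpy as np

def G(zeta, alpha):
    """Response function entering the WN + gamma-decay error (linear student)."""
    root = (np.sqrt(zeta + (1 + np.sqrt(alpha))**2)
            * np.sqrt(zeta + (1 - np.sqrt(alpha))**2))
    return (1 - alpha - zeta + root) / (2 * zeta)

def eps_id_wn(zeta, alpha, noise_var, h=1e-6):
    """eps = noise_var * d(zeta*G)/dzeta - zeta^2 * dG/dzeta, via central differences."""
    dG = (G(zeta + h, alpha) - G(zeta - h, alpha)) / (2 * h)
    dzG = ((zeta + h) * G(zeta + h, alpha)
           - (zeta - h) * G(zeta - h, alpha)) / (2 * h)
    return noise_var * dzG - zeta**2 * dG

def eps_relu_sgd(alpha, S):
    """ReLU student under vanilla SGD; valid for alpha < 2, diverging at alpha = 2."""
    return 1 - alpha / 4 + alpha * S / (2 * (2 - alpha))
```

A small consistency check of the formulas: when $\zeta$ equals the noise variance, the derivative terms cancel ($\mathrm{noise\_var}\cdot\zeta G' - \zeta^2 G' = 0$) and the error reduces to $\zeta\,G(\zeta,\alpha)$, consistent with $\zeta=0.25$ giving the best curve in the figure.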
Values of $\zeta$ smaller than $0.25$ (green curve, $\zeta=\frac{1}{2M}$ with $M=32$) exhibit overtraining around $\alpha=1$, but still perform significantly better than vanilla SGD. For the ReLU student in Fig.\[fig:st\_loss\](b), a direct solution of the generalization error $\epsilon_{\mathrm{relu}}^{{ {w\hspace{-1pt}n} }}$ remains an open problem. Therefore, the numerical results of ‘WN+gamma decay’ (green curve) are computed at each $\alpha$ value; the method effectively reduces over-fitting compared to vanilla SGD. **Numerical Solutions of BN.** For the linear student, we employ SGD with $M=32$ to find solutions of ${{ \mathbf{w} }}$ for BN. The number of input neurons is 4096 and the number of training samples is varied to change $\alpha$. The results are marked as black squares in Fig.\[fig:st\_loss\](a). By the analysis for the linear student (Appendix \[app:theorem-id\]), BN is equivalent to ‘WN+gamma decay’ when $\zeta=\frac{1}{2M}$ (green curve). BN indeed lines up with the curve ‘$\zeta=1/2M$’ ($M=32$), which quantitatively validates our derivations. For the ReLU student, the setting is mostly the same as the linear case, except that we employ a smaller batch size $M=16$. The results are shown as black squares in Fig.\[fig:st\_loss\](b). For ReLU units, the equivalent $\zeta$ of gamma decay is $\zeta=\frac{1}{4M}$. Comparing the generalization error of BN with ‘WN+gamma decay’ (green curve), a clear correspondence is found, which also validates the derivations for the ReLU activation function. Experiments in CNNs {#sec:exp} =================== This section shows that BN in CNNs exhibits regularization traits similar to those in the above analyses. To compare different methods, the CNN architectures are fixed while only the normalization layers are changed. We adopt CIFAR10 [@cifar], which contains 60k images of 10 categories (50k images for training and 10k images for test).
All models are trained by using SGD with momentum, while the initial learning rates are scaled proportionally [@goyal_accurate_2017] when different batch sizes are presented. More empirical settings can be found in Appendix \[app:exp\]. **Evaluation of PN+Gamma Decay**. This work shows that BN can be decomposed into PN and gamma decay. We empirically compare ‘PN+gamma decay’ with BN by using ResNet18 [@resnet]. For ‘PN+gamma decay’, the population statistics of PN and the decay factor of gamma decay are estimated by using a sufficient amount of training samples. For BN, a model trained with a normal batch size $M=128$ is treated as the baseline, as shown in Fig.\[fig:FIGURE2\](a&b). We see that when the batch size increases, BN degrades both loss and accuracy. For example, when increasing $M$ to $1024$, performance decreases because the regularization from the batch statistics weakens for large batches, resulting in overtraining (see the gap between train and validation loss in (a) when $M=1024$). In comparison, we train PN by using 10k training samples to estimate the population statistics. Note that this further reduces regularization. We see that this loss of regularization can be compensated by gamma decay, making PN outperform BN. This empirical result verifies our derivation of the regularization of BN. A similar trend is observed in experiments on a down-sampled version of ImageNet (see Appendix \[app:imagenet\]). We would like to point out that ‘PN+gamma decay’ is of interest for theoretical analyses, but it is computationally demanding in practice because evaluating $\mu_\mathcal{P}$, $\sigma_\mathcal{P}$ and $\zeta(h)$ may require a sufficiently large number of samples. **Comparisons of Regularization.** We study the regularization strengths of vanilla SGD, BN, WN, WN+mean-only BN, and WN+variance-only BN.
First, the strengths of the regularization terms from $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ are compared by using a simpler network with 4 convolutional and 2 fully connected layers, as used in [@WN]. Fig.\[fig:FIGURE2\](c&d) compares their training and validation losses. We see that the generalization error of BN is much lower than that of WN and vanilla SGD. The reason has been disclosed in this work: the stochastic behaviors of $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ in BN improve generalization. To investigate $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ individually, we decompose their contributions by running a WN with mean-only BN as well as a WN with variance-only BN, to simulate their respective regularization. As shown in Fig.\[fig:FIGURE2\](c&d), the improvements of the mean-only and the variance-only BN over WN verify our conclusion that the noises from $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ have different regularization strengths. Both of them are essential to produce good results. **Regularization and parameter norm.** We further demonstrate the impact of BN on the norm of the parameters. We compare BN with vanilla SGD. A network is first trained by BN in order to converge to a local minimum where the parameters do not change much. At this minimum, the weight vector is frozen and denoted as $\mathbf{w}^{{{ {b\hspace{-1pt}n} }}}$. Then this network is finetuned by using vanilla SGD with a small learning rate $10^{-3}$ and its kernel parameters are initialized by $\mathbf{w}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}=\gamma\frac{\mathbf{w}^{{{ {b\hspace{-1pt}n} }}}}{{\sigma}}$, where ${\sigma}$ is the moving average of $\sigma_{\mathcal{B}}$. Fig.\[fig:cifar\_finetune\] in Appendix \[app:paramnorm\] visualizes the results.
As $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ are removed in vanilla SGD, the training loss decreases while the validation loss increases, implying that the reduction in regularization makes the network converge to a sharper local minimum that generalizes less well. The magnitudes of the kernel parameters ${{ \mathbf{w} }}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}$ at different layers are also observed to increase after freezing BN, due to the release of regularization on these parameters. ![[]{data-label="fig:FIGURE2"}](FIGURE2.pdf){width="\textwidth"} **Batch size.** To study BN with different batch sizes, we train different networks but add BN at only one layer at a time. The regularization on the $\gamma$ parameter is compared in Fig.\[fig:FIGURE2\](e) when BN is located at different layers. The values of $\gamma^2$ increase along with the batch size $M$ due to the weaker regularization for the larger batches. The increase of $\gamma^2$ also increases all validation losses, as shown in Fig.\[fig:FIGURE2\](f). **BN and WN trained with dropout**. As PN and gamma decay require estimating the population statistics, which increases computation, we utilize dropout as an alternative to improve the regularization of BN. We add a dropout after each BN layer. Fig.\[fig:FIGURE2\](g&h) plot the classification results using ResNet18. The generalization of BN deteriorates significantly when $M$ increases from 128 to 1024. This is observed in the much higher validation loss (Fig.\[fig:FIGURE2\](g)) and lower validation accuracy (Fig.\[fig:FIGURE2\](h)) when $M=1024$. If a dropout layer with ratio $0.1$ is added *after* each residual block for $M=1024$ in ResNet18, the validation loss is suppressed and the accuracy increases by a large margin. This superficially contradicts the original claim that BN reduces the need for dropout [@BN].
As discussed in Appendix \[app:bn-wn-dropout\], we find two differences between our study and previous work [@BN]. Fig.\[fig:FIGURE2\](g&h) also show that WN can be regularized by dropout. We apply dropout after each WN layer with ratio 0.2, at the same layers as for BN. The improvement in both validation accuracy and loss is substantial: the accuracy increases from 0.90 to 0.93, close to the results of BN. Nevertheless, this additional regularization still cannot put WN on par with the performance of BN. In deep neural networks the distribution after each layer would be far from a Gaussian distribution, in which case WN is not a good substitute for PN. Conclusions =========== This work investigated an explicit regularization form of BN, which was decomposed into PN and gamma decay, where the regularization strengths from $\mu_\mathcal{B}$ and $\sigma_\mathcal{B}$ were explored. Moreover, the optimization and generalization of BN with regularization were derived and compared with vanilla SGD, WN, and WN+gamma decay, showing that BN enables training to converge with a large maximum and effective learning rate, and leads to better generalization. Our analytical results explain many existing empirical phenomena. Experiments in CNNs showed that BN in deep networks shares the same traits of regularization. In future work, we are interested in analyzing the optimization and generalization of BN in deep networks, which is still an open problem. Moreover, investigating other normalizers such as instance normalization (IN) [@IN] and layer normalization (LN) [@LN] is also important. Understanding the characteristics of these normalizers should be the first step toward analyzing some recent best practices such as whitening [@GWNN; @EigenNet], switchable normalization [@SN; @SN2; @SSN], and switchable whitening [@SW].
Appendices {#appendices .unnumbered} ========== Notations --------- ------------------------------------------ ------------------------------------------------------------------------------------------------ $\mu_\mathcal{B},\sigma_\mathcal{B}^2$ batch mean, batch variance $\mu_\mathcal{P},\sigma_\mathcal{P}^2$ population mean, population variance ${{ \mathbf{x} }},y$ input of a network, output of a network $y^\ast$ ground truth of an output $h,\hat{h}$ hidden value before and after BN $\bar{h}$ hidden value after population normalization $\gamma,\beta$ scale parameter, shift parameter $g(\cdot)$ activation function ${{ \mathbf{w} }},{{ \mathbf{w} }}^\ast$ weight vector, ground truth weight vector ${{ \tilde{{{ \mathbf{w} }}} }}$ normalized weight vector $M,N,P$ batch size, number of neurons, sample size $\alpha$ an effective load value $\alpha=P/N$ $\zeta$ regularization strength (coefficient) $\rho$ Kurtosis of a distribution $\delta$ gradient of the activation function $\eta_{{ \mathrm{eff} }},\eta_{\max}$ effective, maximum learning rate $R$ overlapping ratio (angle) between ${{ \tilde{{{ \mathbf{w} }}} }}$ and ${{ \mathbf{w} }}^\ast$ $L$ norm (length) of ${{ \mathbf{w} }}$ $\lambda_{\max},\lambda_{\min}$ maximum, minimum eigenvalue $\epsilon_{\mathrm{gen}}$ generalization error ------------------------------------------ ------------------------------------------------------------------------------------------------ : [Several notations are summarized for reference. 
]{}[]{data-label="tab:notation"} More Empirical Settings and Results {#app:exp} ----------------------------------- All experiments in Sec.\[sec:exp\] are conducted on CIFAR10 by using ResNet18 and a CNN architecture similar to [@WN], summarized as ‘[conv]{}(3,32)-conv(3,32)-conv(3,64)-conv(3,64)-pool(2,2)-fc(512)-fc(10)’, where ‘conv(3,32)’ represents a convolution with kernel size 3 and 32 channels, ‘pool(2,2)’ is max-pooling with kernel size 2 and stride 2, and ‘fc’ indicates a full connection. We train by using SGD with a momentum value of 0.9 and continuously decay the learning rate by a factor of $10^{-4}$ each step. For different batch sizes, the initial learning rate is scaled proportionally with the batch size to maintain similar learning dynamics [@goyal_accurate_2017]. ### Results in downsampled ImageNet {#app:imagenet} Besides CIFAR10, we also evaluate ‘PN+gamma decay’ on a downsampled version of ImageNet [@SGDR], which contains the same 1.2 million images and 1k categories as the original ImageNet, but with each image scaled to 32$\times$32. We train ResNet18 on downsampled ImageNet by following the training protocol used in [@resnet]. In particular, ResNet18 is trained by using SGD with momentum of 0.9 and an initial learning rate of 0.1, which is then decayed by a factor of 10 after 30, 60, and 90 training epochs. On downsampled ImageNet, we observe trends similar to those presented on CIFAR10. For example, we see that BN degrades both loss and accuracy when the batch size increases. When increasing $M$ to $1024$, as shown in Fig.\[fig:PN\_imagenet\], both the loss and the validation accuracy deteriorate because the regularization from the random batch statistics weakens at large batch size, resulting in overtraining. This can be seen from the gap between the training and the validation loss.
Nevertheless, we see that the reduction of regularization can be compensated when PN is trained with adaptive gamma decay, which makes PN perform comparably to BN on downsampled ImageNet. ### Impact of BN on the Norm of Parameters {#app:paramnorm} We demonstrate the impact of BN on the norm of the parameters. We compare BN with vanilla SGD, where a network is first trained by BN in order to converge to a local minimum where the parameters do not change much. At this minimum, the weight vector is frozen and denoted as $\mathbf{w}^{{{ {b\hspace{-1pt}n} }}}$. Then this network is finetuned by using vanilla SGD with a small learning rate $10^{-3}$, with the kernel parameters initialized by $\mathbf{w}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}=\gamma\frac{\mathbf{w}^{{{ {b\hspace{-1pt}n} }}}}{{\sigma}}$, where ${\sigma}$ is the moving average of $\sigma_{\mathcal{B}}$. Fig.\[fig:cifar\_finetune\] below visualizes the results. As $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ are removed in vanilla SGD, it is found from the last two figures that the training loss decreases while the validation loss increases, meaning that the reduction in regularization makes the network converge to a sharper local minimum that generalizes less well. The magnitudes of the kernel parameters ${{ \mathbf{w} }}^{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}$ at different layers are also displayed in the first four figures. All of them increase after freezing BN, due to the release of regularization on these parameters. ![[]{data-label="fig:cifar_finetune"}](finetune2.pdf){width="100.00000%"} ### BN and WN with dropout {#app:bn-wn-dropout} **BN+dropout**. Despite the better generalization of BN with smaller batch sizes, large-batch training is more efficient in real cases. Therefore, improving the generalization of BN with large batches is more desirable. However, gamma decay requires estimating the population statistics, which increases computation.
We also found that treating the decay factor as a constant hardly improves generalization for large batches. Therefore, we utilize dropout as an alternative to compensate for the insufficient regularization. Dropout has also been analytically viewed as a regularizer [@wager_dropout_2013]. We add a dropout after each BN layer to impose regularization. Fig.\[fig:FIGURE2\](g&h) in the main paper plot the classification results using ResNet18. The generalization of BN deteriorates significantly when $M$ increases from 128 to 1024. This is observed in the much higher validation loss (Fig.\[fig:FIGURE2\](g)) and lower validation accuracy (Fig.\[fig:FIGURE2\](h)) when $M=1024$. If a dropout layer with ratio $0.1$ is added *after* each residual block for $M=1024$ in ResNet18, the validation loss is suppressed and the accuracy increases by a large margin. This superficially contradicts the original claim that BN reduces the need for dropout [@BN]. We find that there are two differences between our study and [@BN]. First, in the previous study the batch size was fixed at a quite small value ([*e.g.* ]{}32), at which the regularization was already quite strong. Therefore, an additional dropout could not further improve regularization; on the contrary, it increased training instability and yielded lower accuracy. However, our study explores relatively large batches, which degrade the regularization of BN, and thus dropout with a small ratio can complement it. Second, usual trials put dropout before BN, causing BN to have different variances during training and test. In contrast, dropout follows BN in this study and the distance between two dropout layers is large (they are separated by a residual block), so the problem is alleviated. The improvement from applying dropout after BN has also been observed in a recent work [@Disharmony]. **WN+dropout**.
Since BN can be treated as WN trained with regularization, as shown in this study, combining WN with regularization should be able to match the performance of BN. As WN runs faster than BN (no batch statistics to compute) and suits RNNs better, improving its generalization is also of great importance. Fig.\[fig:FIGURE2\](g&h) also show that WN can be regularized by dropout. We apply dropout with ratio 0.2 after each WN layer, at the same layers as for BN. The improvement in both validation accuracy and loss is striking. The accuracy increases from 0.90 to 0.93, close to the results of BN. Nevertheless, additional regularization still cannot put WN on par with the performance of BN. In deep neural networks the distribution after each layer can be far from Gaussian, in which case WN is not a good substitute for PN. A potential substitute for BN would require designing better estimates of the distribution to improve the training speed and performance of deep networks.

Proof of Results
----------------

### Proof of Eqn. {#app:theorem}

\[Regularization of $\mu_\mathcal{B},\sigma_\mathcal{B}$\]\[theorem:reg\_app\] Let a single-layer perceptron with BN and ReLU activation function be defined by $y=\max(0,{{ \hat{h} }}),~{{ \hat{h} }}=\gamma\frac{h-\mu_\mathcal{B}}{\sigma_\mathcal{B}}+\beta~\mathrm{and}~h={{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}$, where ${{ \mathbf{x} }}$ and $y$ are the network input and output respectively, $h$ and $\hat{h}$ are the hidden values before and after batch normalization, and ${{ \mathbf{w} }}$ is the weight vector. Let $\ell(\hat{h})$ be the loss function.
Then $$\begin{aligned} \frac{1}{P}\sum_{j=1}^P\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\ell(\hat{h}^j)\simeq \frac{1}{P}\sum_{j=1}^P\ell(\bar{h}^j)+\zeta(h)\gamma^2 ~~\mathrm{and}~~\zeta(h)={\frac{\rho+2}{8M}\mathcal{I}(\gamma)}+{\frac{1}{2M}\frac{1}{P}\sum_{j=1}^P\sigma(\bar{h}^j)},\nonumber\end{aligned}$$ where $\bar{h}^j=\gamma\frac{{{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}+\beta$ represents population normalization (PN), $\zeta(h)\gamma^2$ represents gamma decay and $\zeta(h)$ is a data-dependent decay factor. $\rho$ is the kurtosis of the distribution of $h$, $\mathcal{I}(\gamma)$ is an estimation of the Fisher information of $\gamma$ and $\mathcal{I}(\gamma)=\frac{1}{P}\sum_{j=1}^P(\frac{\partial\ell(\hat{h}^j)}{\partial\gamma})^2$, and $\sigma(\cdot)$ is a sigmoid function. We have $\hat{h}^j=\gamma\frac{\mathbf{w}^{T}\mathbf{x}^j-\mu_{\mathcal{B}}}{\sigma_{\mathcal{B}}}+\beta$ and $\bar{h}^j=\gamma\frac{\mathbf{w}^{T}\mathbf{x}^j-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}+\beta$. We prove theorem \[theorem:reg\_app\] by performing a Taylor expansion on a function $A(\hat{h}^j)$ at $\bar{h}^j$, where $A(\hat{h}^j)$ is a function of $\hat{h}^j$ defined according to a particular activation function. The negative log likelihood function of the above single-layer perceptron can be generally defined as $-\log p(y^j|\hat{h}^j)=A(\hat{h}^j)-y^j\hat{h}^j$, which is similar to the loss function of the generalized linear models with different activation functions. 
Therefore, we have $$\begin{aligned} \frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}[l(\hat{h}^{j})] & =\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[A(\hat{h}^{j})-y^{j}\hat{h}^{j}\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}(A(\bar{h}^j)-y^{j}\bar{h}^j)+\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[-y^{j}(\hat{h}^{j}-\bar{h}^j)+A(\hat{h}^{j})-A(\bar{h}^j)\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}l(\bar{h}^j)+\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(A^{\prime}(\bar{h}^j)-y^{j})(\hat{h}^{j}-\bar{h}^j)\right]\\ & +\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[\frac{A^{\prime\prime}(\bar{h}^j)}{2}(\hat{h}^{j}-\bar{h}^j)^{2}\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}l(\bar{h}^j)+R^{f}+R^{q},\end{aligned}$$ where $A^{\prime}(\cdot)$ and $A^{\prime\prime}(\cdot)$ denote the first and second derivatives of function $A(\cdot)$. The first and second order terms in the expansion are represented by $R^{f}$ and $R^{q}$ respectively. 
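The size of $R^{f}+R^{q}$, i.e. the gap between the loss averaged over batch statistics and the population-normalized loss, can be probed numerically for the identity-activation case. Below is a sketch under the assumption of Gaussian inputs and a linear teacher (all sizes are illustrative); it asserts only the qualitative behavior that the gap is positive and shrinks as $M$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, gamma, beta = 512, 16, 1.0, 0.0
X = rng.normal(size=(P, N)) / np.sqrt(N)      # x ~ N(0, I/N)
w, w_star = rng.normal(size=N), rng.normal(size=N)
y = X @ w_star                                 # linear teacher targets
h = X @ w
mu_p, sigma_p = h.mean(), h.std()
loss_pn = 0.5 * np.mean((y - (gamma * (h - mu_p) / sigma_p + beta)) ** 2)

def expected_bn_loss(M, draws=8000):
    # average the full-data loss over random draws of the batch statistics
    total = 0.0
    for _ in range(draws):
        batch = rng.choice(h, size=M, replace=False)
        h_bn = gamma * (h - batch.mean()) / batch.std() + beta
        total += 0.5 * np.mean((y - h_bn) ** 2)
    return total / draws

gap16 = expected_bn_loss(16) - loss_pn
gap64 = expected_bn_loss(64) - loss_pn
assert gap16 > gap64 > 0    # noisier batch statistics, stronger regularization
```

This matches the $O(1/M)$ dependence of the decay factor derived below.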
To derive the analytical forms of $R^{f}$ and $R^{q}$, we take a second-order Taylor expansion of $\frac{1}{\sigma_{\mathcal{B}}}$ and $\frac{1}{\sigma_{\mathcal{B}}^{2}}$ around $\sigma_{\mathcal{P}}$: $$\frac{1}{\sigma_{\mathcal{B}}}\approx\frac{1}{\sigma_{\mathcal{P}}}+(-\frac{1}{\sigma_{\mathcal{P}}^{2}})(\sigma_{\mathcal{B}}-\sigma_{\mathcal{P}})+\frac{1}{\sigma_{\mathcal{P}}^{3}}(\sigma_{\mathcal{B}}-\sigma_{\mathcal{P}})^{2}$$ and $$\frac{1}{\sigma_{\mathcal{B}}^{2}}\approx\frac{1}{\sigma_{\mathcal{P}}^{2}}+(-\frac{2}{\sigma_{\mathcal{P}}^{3}})(\sigma_{\mathcal{B}}-\sigma_{\mathcal{P}})+\frac{3}{\sigma_{\mathcal{P}}^{4}}(\sigma_{\mathcal{B}}-\sigma_{\mathcal{P}})^{2}.$$ By applying the distributions of $\mu_{\mathcal{B}}$ and $\sigma_{\mathcal{B}}$ introduced in section \[sec:view\], we have $\mu_{\mathcal{B}}\sim\mathcal{N}(\mu_{\mathcal{P}},\frac{\sigma_{\mathcal{P}}^{2}}{M})$ and $\sigma_{\mathcal{B}}\sim\mathcal{N}(\sigma_{\mathcal{P}},\frac{(\rho+2)\sigma_{\mathcal{P}}^{2}}{4M})$. Hence, $R^{f}$ can be derived as $$\begin{aligned} R^{f} & =\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(A^{\prime}(\bar{h}^j)-y^{j})(\hat{h}^{j}-\bar{h}^j)\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(A^{\prime}(\bar{h}^j)-y^{j})\left(\gamma\frac{\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{B}}}{\sigma_{\mathcal{B}}}-\gamma\frac{\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}\right)\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(A^{\prime}(\bar{h}^j)-y^{j})\left(\gamma\mathbf{w}^{T}\mathbf{x}^{j}\left(\frac{1}{\sigma_{\mathcal{B}}}-\frac{1}{\sigma_{\mathcal{P}}}\right)+\gamma\left(-\frac{\mu_{\mathcal{B}}}{\sigma_{\mathcal{B}}}+\frac{\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}\right)\right)\right]\\ &
=\frac{1}{P}\sum_{j=1}^{P}\gamma(A^{\prime}(\bar{h}^j)-y^{j})(\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{P}})\mathbb{E}_{\sigma_{\mathcal{B}}}\left[\frac{1}{\sigma_{\mathcal{B}}}-\frac{1}{\sigma_{\mathcal{P}}}\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}\frac{\rho+2}{4M}\gamma(A^{\prime}(\bar{h}^j)-y^{j})\frac{\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}.\end{aligned}$$ This $R^{f}$ term can be understood as follows. Let $h=\frac{\mathbf{w}^{T}\mathbf{x}-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}$ and the distribution of the population data be $p_{xy}$. We establish the following relationship: $$\begin{aligned} \mathbb{E}_{(x,y)\sim p_{xy}}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(A^{\prime}(\bar{h})-y)h\right]&=\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\mathbb{E}_{x\sim p_{x}}\mathbb{E}_{y|x\sim p_{y|x}}\left[(A^{\prime}(\bar{h})-y)h\right]\\ & =\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\mathbb{E}_{x\sim p_{x}}\left[(\mathbb{E}\left[y|x\right]-\mathbb{E}_{y|x\sim p_{y|x}}\left[y\right])h\right]\\ & =0.\end{aligned}$$ Since the sample mean converges in probability to the population mean by the Weak Law of Large Numbers, for every $\epsilon>0$ we have $p\left(\big|R^f-\mathbb{E}_{(x,y)\sim p_{xy}}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(A^{\prime}(\bar{h})-y)h\right]\big|\geq \frac{\rho+2}{4M}\epsilon\right)\rightarrow0$ as $P\rightarrow\infty$. This implies that $R^f$ is negligible with high probability given a moderately large number of data points $P$ (in practice $P>30$ suffices).
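The quality of the second-order expansion of $1/\sigma_{\mathcal{B}}$ can be checked by direct sampling; a sketch for Gaussian data ($M$ and the number of draws are arbitrary choices), comparing the Monte Carlo mean of $1/\sigma_{\mathcal{B}}$, its second-order Taylor estimate, and the exact Gamma-function value used later in the linear-regression case:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
M, sigma_p, draws = 64, 1.0, 100000

x = rng.normal(0.0, sigma_p, size=(draws, M))
sigma_b = x.std(axis=1)        # biased (divide-by-M) batch standard deviation

# exact: E[1/sigma_B] = sqrt(M/2)/sigma_P * Gamma((M-2)/2) / Gamma((M-1)/2)
exact = (math.sqrt(M / 2.0) / sigma_p
         * math.exp(math.lgamma((M - 2) / 2.0) - math.lgamma((M - 1) / 2.0)))

# second-order expansion of 1/sigma_B around sigma_P, taken in expectation:
# 1/sigma_P - E[sigma_B - sigma_P]/sigma_P^2 + E[(sigma_B - sigma_P)^2]/sigma_P^3
taylor = (1.0 / sigma_p
          - (sigma_b.mean() - sigma_p) / sigma_p**2
          + np.mean((sigma_b - sigma_p) ** 2) / sigma_p**3)

mc = np.mean(1.0 / sigma_b)
assert abs(mc - exact) / exact < 5e-3
assert abs(taylor - exact) / exact < 5e-3
```

For moderate $M$ the three values agree to a fraction of a percent, which supports treating the dropped terms as $O(1/M^{2})$.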
On the other hand, $R^{q}$ can be derived as $$\begin{aligned} R^{q} & =\frac{1}{P}\sum_{j=1}^{P}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[\frac{A^{\prime\prime}(\bar{h}^j)}{2}(\hat{h}^{j}-\bar{h}^j)^{2}\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}\frac{A^{\prime\prime}(\bar{h}^j)}{2}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(\gamma\frac{\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{B}}}{\sigma_{\mathcal{B}}}+\beta-\gamma\frac{\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}}-\beta)^{2}\right]\\ & =\frac{1}{P}\sum_{j=1}^{P}\frac{\gamma^{2}A^{\prime\prime}(\bar{h}^j)}{2}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(\mathbf{w}^{T}\mathbf{x}^{j})^{2}(\frac{1}{\sigma_{\mathcal{B}}}-\frac{1}{\sigma_{\mathcal{P}}})^{2}-2\mu_{\mathcal{P}}\mathbf{w}^{T}\mathbf{x}^{j}(\frac{1}{\sigma_{\mathcal{B}}}-\frac{1}{\sigma_{\mathcal{P}}})^{2}+(\frac{\mu_{\mathcal{B}}}{\sigma_{\mathcal{B}}}-\frac{\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}})^{2}\right]\\ & \simeq\frac{1}{P}\sum_{j=1}^{P}\frac{\gamma^{2}A^{\prime\prime}(\bar{h}^j)}{2}\left((\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{P}})^{2}\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[(\frac{1}{\sigma_{\mathcal{B}}}-\frac{1}{\sigma_{\mathcal{P}}})^{2}\right]+\mathbb{E}_{\mu_{\mathcal{B}},\sigma_{\mathcal{B}}}\left[\left(\frac{\mu_{\mathcal{B}}-\mu_{\mathcal{P}}}{\sigma_{\mathcal{B}}}\right)^{2}\right]\right)\\ & =\frac{1}{P}\sum_{j=1}^{P}\frac{\gamma^{2}A^{\prime\prime}(\bar{h}^j)}{2}\left((\frac{\mathbf{w}^{T}\mathbf{x}^{j}-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}})^{2}\frac{\rho+2}{4M}+\frac{1}{M}(1+\frac{3(\rho+2)}{4M})\right).\label{R^q:last}\end{aligned}$$ Since $\frac{\partial^2 l(\bar{h}^j)}{\partial\gamma^2}=A^{\prime\prime}(\bar{h}^j)(\frac{\mathbf{w}^{T}\mathbf{x}^j-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}})^{2}$, we have
that $\mathcal{I}(\gamma)=\frac{1}{P}\sum_{j=1}^{P}A^{\prime\prime}(\bar{h}^j)(\frac{\mathbf{w}^{T}\mathbf{x}^j-\mu_{\mathcal{P}}}{\sigma_{\mathcal{P}}})^{2}$ is an estimator of the Fisher information with respect to the scale parameter $\gamma$. Then, by neglecting the $O(1/M^{2})$ higher-order term in $R^q$, we get $$R^{q}\simeq\frac{\rho+2}{8M}\mathcal{I}(\gamma)\gamma^{2}+\frac{\mu_{d^{2}A}}{2M}\gamma^{2}, \label{R^q:two}$$ where $\mu_{d^{2}A}$ denotes the mean of the second derivative of $A(h)$. The results for both the ReLU activation function and the identity function are provided below.

### ReLU Activation Function {#app:theorem-relu}

For the ReLU non-linear activation function, $g(h)=\max(h,0)$, we use its continuous approximation, the softplus function $g(h)=\log(1+\exp(h))$, to derive the partition function $A(h)$. In this case, we have $\mu_{d^{2}A}=\frac{1}{P}\sum_{j=1}^{P}\sigma(\bar{h}^j)$. Therefore, we have $\zeta(h)=\frac{\rho+2}{8M}\mathcal{I}(\gamma)+\frac{1}{2M}\frac{1}{P}\sum_{j=1}^P\sigma(\bar{h}^j)$ as shown in Eqn.

### Linear Student Network with Identity Activation Function {#app:theorem-id}

For a loss function with identity (linear) units, $\frac{1}{P}\sum_{j=1}^{P}\big({{{ \mathbf{w} }}^{\ast}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j-\gamma({{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j-\mu_\mathcal{B})/\sigma_{\mathcal{B}}\big)^{2}$, we have $\mathcal{I}(\gamma)=2\lambda$ and $\rho=0$ for a Gaussian input distribution. The exact expression of Eqn. is also attainable for such a linear regression problem. Under the condition of Gaussian input ${{ \mathbf{x} }}\sim \mathcal{N}(0,1/N)$, $h={{ \mathbf{w} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}$ is also a random variable satisfying a normal distribution $\mathcal{N}(0,1)$.
It can be derived that $\mathbb{E}\left(\sigma_{\mathcal{B}}^{-1}\right) =\frac{\sqrt{M}}{\sqrt{2}\sigma_{\mathcal{P}}}\frac{\Gamma\left(\frac{M-2}{2}\right)}{\Gamma\left(\frac{M-1}{2}\right)}$ and $\mathbb{E}\left(\sigma_{\mathcal{B}}^{-2}\right) =\frac{M}{2\sigma_{\mathcal{P}}^{2}}\frac{\Gamma\left(\frac{M-3}{2}\right)}{\Gamma\left(\frac{M-1}{2}\right)}$. Therefore $$\begin{aligned} \zeta=\lambda \left(1+\frac{M\Gamma\big((M-3)/2\big)}{2\Gamma\big((M-1)/2\big)} - \sqrt{2M}\frac{\Gamma\big((M-2)/2\big)}{\Gamma\big((M-1)/2\big)}\right).\end{aligned}$$ Furthermore, the expression of $\zeta$ can be simplified as $\zeta=\frac{3}{4M}$. If the bias term is neglected in a simple linear regression, the contribution from $\mu_\mathcal{B}$ to the regularization term vanishes and thus $\zeta=\frac{1}{4M}$. Note that if one uses the mean square error without dividing it by 2 during linear regression, the values of $\zeta$ should be multiplied by 2 as well, giving $\zeta=\frac{1}{2M}$.

### BN Regularization in a Deep Network {#app:deep-reg}

The previous derivation is based on the single-layer perceptron. In deep neural networks, the forward computation inside one basic building block is written as $$\small z_i^{l}=g({{ \hat{h} }}_i),~~~{{ \hat{h} }}_i^l={\gamma}_i^l\frac{{h}_i^l-(\mu_\mathcal{B})_i^l}{(\sigma_\mathcal{B})_i^l}+\beta_i^l ~~~\mathrm{and}~~~h_i^l=(\mathbf{w}_i^l){{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{z}^{l-1},$$ where the superscript $l\in [1, L]$ is the index of a building block in a deep neural network, and $i\in[1,N^l]$ indexes each neuron inside a layer. $z^0$ and $z^{L}$ are synonyms of the input $x$ and output $y$, respectively. In order to analyze the regularization of BN from a specific layer, one needs to isolate its input and focus on the noise introduced by the BN layer in this block. Therefore, the loss function $\ell(\hat{h}^l)$ can also be expanded around $\bar{h}^l$.
In BN, the batch variance is calculated with regard to each neuron under the assumption of mutual independence of the neurons inside a layer. By following this assumption and the above derivation in Appendix \[app:theorem\], the loss function with BN in deep networks can be similarly decomposed. **Regularization of $\mu_\mathcal{B}^l,\sigma_\mathcal{B}^l$ in a deep network.** Let $\mathbf{\zeta}^l$ be the strength (coefficient) of the regularization at the $l$-[[th]{}]{} layer. Then $$\begin{aligned} &&\frac{1}{P}\sum_{j=1}^P\mathbb{E}_{\mu_{\mathcal{B}}^l,\sigma_{\mathcal{B}}^l}\ell\big((\hat{h}^l)^j\big)\simeq \frac{1}{P}\sum_{j=1}^P\ell\big((\bar{h}^l)^j\big)+\sum_i^{N^l}{\zeta_i^l \cdot (\mathbf{\gamma}_i^l)^2},\nonumber\\ &&~~ \mathrm{and}~~ {\zeta}_i^l =\frac{1}{P}\sum_{j=1}^{P}\frac{\mathrm{diag}\big(\mathcal{H}_{\ell}(\bar{h}^l)^j\big)_i}{2} \left(\frac{\rho_i^l+2}{4M}\bigg(\frac{(\mathbf{w}_i^l)^{T}(\mathbf{z}^{l-1})^{j}-(\mu_{\mathcal{P}})_i^l}{(\sigma_{\mathcal{P}})_i^l}\bigg)^{2}+\frac{1}{M}\right) +\mathcal{O}(1/M^2), \nonumber\end{aligned}$$ where $i$ is the index of a neuron in the layer, $(\bar{h}_i^l)^j=\gamma_i^l \frac{(\mathbf{w}_i^l)^{T}(\mathbf{z}^{l-1})^{j}-(\mu_{\mathcal{P}})_i^l}{(\sigma_{\mathcal{P}})_i^l}+\beta_i^l$ represents population normalization (PN), $\mathcal{H}_{\ell}(\bar{h}^l)$ is the Hessian matrix at $\bar{h}^l$ with respect to the loss $\ell$, and $\mathrm{diag}(\cdot)$ represents the diagonal vector of a matrix. The above equation is compatible with the results from the single-layer perceptron. The main difference of the regularization term in a deep model is that the Hessian matrix is not guaranteed to be positive semi-definite during training. However, this form of regularization also arises in other regularization methods such as noise injection [@rifai_adding_2011] and dropout [@wager_dropout_2013], and has long been recognized as a Tikhonov regularization term [@bishop_training_1995].
In fact, it has been reported that in common neural networks, where convex activation functions such as ReLU and convex loss functions such as the common cross entropy are adopted, the Hessian matrix $\mathcal{H}_\ell({\bar{h}^l})$ can be seen as ‘locally’ positive semi-definite [@santurkar_how_2018]. In particular, as training converges to its minimum training loss, the Hessian matrix of the loss can be viewed as positive semi-definite and thus the regularization term on $\gamma^l$ is positive.

### Dynamical Equations {#app:dyn}

Here we discuss the dynamical equations of BN. Let the length of the teacher’s weight vector be 1, that is, $\frac{1}{N}\mathbf{w^{\ast}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{w^{\ast}}=1$. We introduce a normalized weight vector of the student as $\mathbf{\widetilde{w}}=\sqrt{N}\gamma\frac{\mathbf{w}}{\left\Vert \mathbf{w}\right\Vert }$. Then the overlapping ratio between teacher and student, the length of the student’s normalized weight vector, and the length of the student’s raw weight vector are $\frac{1}{N}\mathbf{\widetilde{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{w^{\ast}}=QR=\gamma R$, $\frac{1}{N}\mathbf{\widetilde{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\widetilde{\mathbf{w}}=Q^{2}=\gamma^2$, and $\frac{1}{N}\mathbf{w}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{w}=L^{2}$ respectively, where $Q=\gamma$. We also have $\frac{1}{N}\mathbf{w}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{w^{\ast}}=LR$. We transform the update equations by using these order parameters. The update rule for the variable $Q^2$ is obtained as $\big(Q^2\big)^{j+1}-\big(Q^2\big)^{j}=\frac{1}{N}\big[2\eta{\delta^j{{ {\tilde{{{ \mathbf{w} }}}^j} }}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j -2\eta\zeta\big(Q^2\big)^j\big]$, following the update rule of $\gamma$.
Similarly, the update rules for the variables $RL$ and $L^2$ are calculated as follows: $$\label{eq:dRL} \begin{split} &\big(RL\big)^{j+1}-\big(RL\big)^{j}=\frac{1}{N }\big(\frac{\eta Q^j}{L^j}\delta^j{{{{ \mathbf{w} }}^\ast}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j-\frac{\eta R^j}{L^j}\delta^j{{ {\tilde{{{ \mathbf{w} }}}^j} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^j\big),\\ &\big({L}^2\big)^{j+1}-\big(L^2\big)^{j}=\frac{1}{N} \big[\frac{\eta^2(Q^2)^j}{(L^2)^j}{\delta^j}^2{{{ \mathbf{x} }}^j}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^{j} -\frac{\eta^2}{N(L^2)^{j}}{\delta^j}^2({{ {\tilde{{{ \mathbf{w} }}}^j} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}^{j})^{2}\big]. \end{split}$$ Let $t=\frac{j}{N}$ be a normalized sample index that can be treated as a continuous time variable. We have $\Delta t=\frac{1}{N}$, which approaches zero in the thermodynamic limit $N\rightarrow\infty$. In this way, the learning dynamics of $Q^2$, $RL$ and $L^2$ can be formulated as the following differential equations: $$\label{eq:cldy} \left\{ \begin{array}{lll} \frac{dQ^2}{dt}&=2\eta I_{1}-2\eta\zeta Q^2,\\ \frac{dRL}{dt}&=\eta\frac{Q}{L}I_{3}-\eta\frac{R}{L}I_{1},\\ \frac{dL^2}{dt}&=\eta^{2}\frac{Q^{2}}{L^{2}}I_{2}, \end{array} \right.$$ where $I_1=\langle\delta{{{ \tilde{{{ \mathbf{w} }}} }}}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}\rangle_{{ \mathbf{x} }}$, $I_2=\langle\delta^2{{ \mathbf{x} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}\rangle_{{ \mathbf{x} }}$, and $I_3=\langle\delta{{{ \mathbf{w} }}^\ast}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}\rangle_{{ \mathbf{x} }}$, which are the terms appearing in $\frac{dQ^2}{dt}$, $\frac{dRL}{dt}$, and $\frac{d{L}^2}{dt}$, and $\langle\cdot\rangle_{{ \mathbf{x} }}$ denotes expectation over the distribution of ${{ \mathbf{x} }}$. They are used to simplify notation.
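These differential equations can be integrated numerically; below is a minimal Euler sketch for the ReLU case (the hyper-parameters and initial conditions are illustrative), using the closed forms of $I_1$ and $I_3$ derived later in this appendix and dropping the $O(\eta^2)$ terms, so $L$ is effectively constant:

```python
import math

def I1(Q, R):
    # <delta * w_tilde^T x>_x for ReLU (closed form derived in the appendix)
    return Q * (math.pi * R + 2 * math.sqrt(1 - R * R)
                + 2 * R * math.asin(R)) / (4 * math.pi) - Q * Q / 2

def I3(Q, R):
    # <delta * w*^T x>_x for ReLU
    return (math.pi + 2 * R * math.sqrt(1 - R * R)
            + 2 * math.asin(R)) / (4 * math.pi) - Q * R / 2

eta, zeta, L = 0.5, 0.05, 1.0     # illustrative learning rate and decay factor
Q, R = 0.5, 0.3                   # arbitrary initial order parameters
dt = 0.02
for _ in range(50000):            # integrate up to t = 1000
    dQ = eta * (I1(Q, R) / Q - zeta * Q)
    dR = eta * (Q * I3(Q, R) - R * I1(Q, R)) / L ** 2
    Q, R = Q + dt * dQ, min(R + dt * dR, 1.0)

# fixed point derived below: Q0 = 1/(2*zeta+1), R0 = 1
assert abs(Q - 1 / (2 * zeta + 1)) < 1e-3 and R > 0.999
```

The trajectory converges to the fixed point $(Q_0,R_0)=(\frac{1}{2\zeta+1},1)$ obtained in the proposition that follows.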
Note that we neglect the last term of $dL^2/dt$ in Eqn.(\[eq:dRL\]) since $\frac{\eta^2}{N(L^2)}{\delta}^2({{ \tilde{{{ \mathbf{w} }}} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }})^{2}$ is negligible when $N$ approaches infinity. On the other hand, we have $dQ^2=2QdQ, dRL=RdL+LdR$ and $dL^2=2LdL$. Hence, Eqn.(\[eq:cldy\]) can be reduced to $$\label{eq:QRL} \left\{ \begin{array}{lll} \frac{dQ}{dt}&=\eta\frac{I_{1}}{Q}-\eta\zeta Q,\\ \frac{dR}{dt}&=\eta\frac{Q}{L^{2}}I_{3}-\eta\frac{R}{L^{2}}I_{1}-\eta^{2}\frac{Q^{2}R}{2L^{4}}I_{2},\\ \frac{dL}{dt}&=\eta^{2}\frac{Q^{2}}{2L^{3}}I_{2}. \end{array} \right.$$ \[prop:fixp\] Let $(Q_0,R_0,L_0)$ denote a fixed point with parameters $Q$, $R$ and $L$ of Eqn.(\[eq:QRL\]). Assume the learning rate $\eta$ is sufficiently small when training converges and $x\sim\mathcal{N}(0,\frac{1}{N}\mathbf{I})$. If the activation function $g$ is $\mathrm{ReLU}$, then we have $Q_0=\frac{1}{2\zeta+1},R_0=1$ and $L_0$ can be arbitrary. First, $L$ has no influence on the output of the student model since ${{ \mathbf{w} }}$ is normalized, which implies that if $(Q_0,R_0,L_0)$ is a fixed point of Eqn.(\[eq:QRL\]), $L_0$ can be arbitrary. Besides, we have $\eta\gg\eta^2$ because the learning rate $\eta$ is sufficiently small. Therefore, the terms in Eqn.(\[eq:QRL\]) proportional to $\eta^2$ can be neglected. If $(Q_0,R_0,L_0)$ is a fixed point, it suffices to have $$\begin{aligned} \eta\frac{I_{1}(Q_{0},R_{0})}{Q_{0}}-\eta\zeta Q_{0}&=0,\label{eq:fixQ}\\ \eta\frac{Q_{0}}{L_0^{2}}I_{3}(Q_{0},R_{0})-\eta\frac{R_{0}}{L_0^{2}}I_{1}(Q_{0},R_{0})&=0. \label{eq:fixR}\end{aligned}$$ To calculate $I_1$ and $I_3$, we define $s$ and $t$ as ${{ \tilde{{{ \mathbf{w} }}} }}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}$ and ${{{ \mathbf{w} }}^\ast}{{^{\mkern-1.5mu\mathsf{T}}}}{{ \mathbf{x} }}$, respectively (with a slight abuse of notation, this $t$ is not the time variable).
Since $\mathbf{x}\sim\mathcal{N}(0,\frac{1}{N}\mathbf{I})$, we can acquire $$\left[\begin{array}{c} s\\ t \end{array}\right]\sim \mathcal{N}\left(\left[\begin{array}{c} 0\\ 0 \end{array}\right],\left[\begin{array}{cc} Q^{2} & QR\\ QR & 1 \end{array}\right]\right),$$ so the probability measure of $[s,t]{{^{\mkern-1.5mu\mathsf{T}}}}$ can be written as $$DsDt=\frac{1}{2\pi Q\sqrt{1-R^{2}}}\exp\left\{ -\frac{1}{2}\left[\begin{array}{c} s\\ t \end{array}\right]^{T}\left[\begin{array}{cc} Q^{2} & QR\\ QR & 1 \end{array}\right]^{-1}\left[\begin{array}{c} s\\ t \end{array}\right]\right\}.$$ Then, $$\label{eq:intI1} \begin{split} I_{1} & =\left\langle g^{\prime}(\mathbf{\widetilde{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x})\left[g(\mathbf{w}^{\ast T}\mathbf{x})-g(\mathbf{\widetilde{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x})\right]\mathbf{\widetilde{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x}\right\rangle _{\mathbf{x}}\\ & =\iint[g'(s)\left(g(t)-g(s)\right)s]DsDt\\ & =\int_{0}^{+\infty}\int_{0}^{+\infty}stDsDt-\int_{0}^{+\infty}\int_{-\infty}^{+\infty}s^{2}DsDt\\ & =\frac{Q(\pi R+2\sqrt{1-R^{2}}+2R\arcsin(R))}{4\pi}-\frac{Q^{2}}{2} \end{split}$$ and $$\label{eq:intI3} \begin{split} I_{3} & =\iint[g'(s)\left(g(t)-g(s)\right)t]DsDt\\ & =\iint g'(s)g(t)tDsDt-\iint g'(s)g(s)tDsDt\\ & =\int_{0}^{+\infty}\int_{0}^{+\infty}t^{2}DsDt-\int_{0}^{+\infty}\int_{-\infty}^{+\infty}stDsDt\\ & =\frac{\pi+2R\sqrt{1-R^{2}}+2\arcsin(R)}{4\pi}-\frac{QR}{2}. \end{split}$$ By substituting Eqn.(\[eq:intI1\]) and (\[eq:intI3\]) into Eqn.(\[eq:fixQ\]) and (\[eq:fixR\]), we get $Q_{0}=\frac{1}{2\zeta+1}$ and $R_0=1$. \[prop:eigen\] Given the conditions in proposition \[prop:fixp\], let $\lambda_{Q}^{{{ {b\hspace{-1pt}n} }}}$, $\lambda_{R}^{{{ {b\hspace{-1pt}n} }}}$ be the eigenvalues of the Jacobian matrix at the fixed point $(Q_{0},R_0,L_{0})$ corresponding to the order parameters $Q$ and $R$, respectively, in BN.
Then $$\begin{cases} \lambda_{Q}^{{{ {b\hspace{-1pt}n} }}}=\frac{\eta}{Q_{0}}\frac{\partial I_{1}}{\partial Q}-2\eta\zeta,\\ \lambda_{R}^{{{ {b\hspace{-1pt}n} }}}=\frac{\partial I_{2}}{\partial R}\frac{\eta Q_{0}}{2L_{0}^{2}}(\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}-\eta_{\mathrm{eff}}^{{{ {b\hspace{-1pt}n} }}}), \end{cases}$$ where $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}$ and $\eta_{\mathrm{eff}}^{{{ {b\hspace{-1pt}n} }}}$ are the maximum and effective learning rates in BN, respectively. At the fixed point $(Q_{0},R_0,L_{0})=(\frac{1}{2\zeta+1},1,L_0)$ obtained in proposition \[prop:fixp\], the Jacobian of the dynamical equations of BN can be derived as $$J^{{{ {b\hspace{-1pt}n} }}}=\left[\begin{array}{ccc} \frac{\eta}{Q_{0}}\frac{\partial I_{1}}{\partial Q}-2\eta\zeta & \frac{\eta}{Q_{0}}\frac{\partial I_{1}}{\partial R} & 0\\ 0 & \frac{\eta}{L_{0}^{2}}\left(\frac{Q_{0}\partial I_{3}}{\partial R}-\frac{\partial I_{1}}{\partial R}-\zeta Q_{0}^{2}\right)-\frac{\eta^{2}Q_{0}^{2}}{2L_{0}^{4}}\frac{\partial I_{2}}{\partial R} & 0\\ 0 & \frac{\eta^{2}Q_{0}^{2}}{2L_{0}^{3}}\frac{\partial I_{2}}{\partial R} & 0 \end{array}\right],$$ and the eigenvalues of $J^{{{ {b\hspace{-1pt}n} }}}$ can be obtained by inspection: $$\begin{cases} \lambda_{Q}^{{{ {b\hspace{-1pt}n} }}}=\frac{\eta}{Q_{0}}\frac{\partial I_{1}}{\partial Q}-2\eta\zeta,\\ \lambda_{R}^{{{ {b\hspace{-1pt}n} }}}=\frac{\eta}{L_{0}^{2}}\left(\frac{Q_{0}\partial I_{3}}{\partial R}-\frac{\partial I_{1}}{\partial R}-\zeta Q_{0}^{2}\right)-\frac{\eta^{2}Q_{0}^{2}}{2L_{0}^{4}}\frac{\partial I_{2}}{\partial R}=\frac{\partial I_{2}}{\partial R}\frac{\eta Q_{0}}{2L_{0}^{2}}\left(\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}-\eta_{\mathrm{eff}}^{{{ {b\hspace{-1pt}n} }}}\right),\\ \lambda_{L}^{{{ {b\hspace{-1pt}n} }}}=0.
\end{cases}$$ Since $\gamma_{0}=Q_{0}$, we have $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}=(\frac{\partial(\gamma_{0}I_{3}-I_{1})}{\gamma_{0}\partial R}-\zeta\gamma_{0})/\frac{\partial I_{2}}{2\partial R}$ and $\eta_{\mathrm{eff}}^{{{ {b\hspace{-1pt}n} }}}=\frac{\eta\gamma_{0}}{L_{0}^{2}}$.

### Stable Fixed Points of BN {#app:lr}

\[prop:constraint\] Given the conditions in proposition \[prop:fixp\], when the activation function is ReLU, (i) $\lambda_{Q}^{{{ {b\hspace{-1pt}n} }}}<0$, and (ii) $\lambda_{R}^{{{ {b\hspace{-1pt}n} }}}<0$ iff $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}>\eta_{\mathrm{eff}}^{{{ {b\hspace{-1pt}n} }}}$. When the activation function is ReLU, we derive $I_{1}=\frac{Q(\pi R+2\sqrt{1-R^{2}}+2R\arcsin(R))}{4\pi}-\frac{Q^{2}}{2}$, which gives $$\frac{\partial I_{1}}{\partial Q}=-Q+\frac{\pi R+2\sqrt{1-R^{2}}+2R\arcsin(R)}{4\pi}.$$ Therefore, at the fixed point of BN, $(Q_{0},R_0,L_{0})=(\frac{1}{2\zeta+1},1,L_0)$, we have $$\lambda_{Q}^{{{ {b\hspace{-1pt}n} }}}=\eta\left(\frac{1}{Q_{0}}\frac{\partial I_{1}}{\partial Q}-2\zeta\right)=\eta\left(\frac{1}{Q_{0}}\left(\frac{1}{2}-Q_{0}\right)-2\zeta\right)=\eta\left(\frac{1}{2Q_{0}}-1-2\zeta\right)=-\eta\left(\zeta+\frac{1}{2}\right)<0.$$ Note that $\mathbf{x}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x}$ approximately equals 1. We get $$\label{eq:intI2} \begin{split} I_{2} & =\iint[g'(s)\left(g(t)-g(s)\right)]^{2}DsDt\\ & =\int_{0}^{+\infty}\int_{0}^{+\infty}t^{2}DsDt+\int_{0}^{+\infty}\int_{-\infty}^{+\infty}s^{2}DsDt -2\int_{0}^{+\infty}\int_{0}^{+\infty}stDsDt\\ & =\frac{Q^{2}}{2}+\frac{\pi+2R\sqrt{1-R^{2}}+2\arcsin(R)}{4\pi}-\frac{Q(\pi R+2\sqrt{1-R^{2}}+2R\arcsin(R))}{2\pi}. \end{split}$$ At the fixed point we have $\frac{\partial I_{2}}{\partial R}=-Q_{0}<0$. Therefore, we conclude that $\lambda_{R}^{{{ {b\hspace{-1pt}n} }}}<0$ iff $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}>\eta_{\mathrm{eff}}^{{{ {b\hspace{-1pt}n} }}}$.
### Maximum Learning Rate of BN {#app:maxlr}

\[prop:maxeta\] When the activation function is ReLU, $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}\geq\eta_{\mathrm{max}}^{\{{{ {w\hspace{-1pt}n} }},{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}\}}+2\zeta$, where $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}$ and $\eta_{\mathrm{max}}^{\{{{ {w\hspace{-1pt}n} }},{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}\}}$ indicate the maximum learning rates of BN and of WN and vanilla SGD, respectively. From the above results, we have $I_{1}=\frac{Q(\pi R+2\sqrt{1-R^{2}}+2R\arcsin(R))}{4\pi}-\frac{Q^{2}}{2}$, which gives $\partial I_{1}/\partial R\geq0$ at the fixed point of BN. It can also be derived that $\frac{\partial I_{2}}{\partial R}<0$ there. Furthermore, at the fixed point of BN, $Q_{0}=\gamma_{0}=\frac{1}{2\zeta+1}<1$, so we have $$\begin{aligned} \eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}} & =(\frac{\partial(\gamma_{0}I_{3}-I_{1})}{\gamma_{0}\partial R}-\zeta\gamma_{0})/\frac{\partial I_{2}}{2\partial R}\\ & =\frac{\partial(I_{3}-I_{1})}{\partial R}/\frac{\partial I_{2}}{2\partial R}+(1-\frac{1}{\gamma_{0}})\frac{\partial I_{1}}{\partial R}/\frac{\partial I_{2}}{2\partial R}-\zeta\gamma_{0}/\frac{\partial I_{2}}{2\partial R}\\ & \geq\frac{\partial(I_{3}-I_{1})}{\partial R}/\frac{\partial I_{2}}{2\partial R}+2\zeta\end{aligned}$$ where the inequality holds because $(1-\frac{1}{\gamma_{0}})\frac{\partial I_{1}}{\partial R}/\frac{\partial I_{2}}{2\partial R}$ is positive. Note that $\frac{\partial(I_{3}-I_{1})}{\partial R}/\frac{\partial I_{2}}{2\partial R}$ is also the maximum learning rate of WN and vanilla SGD as defined in [@WNdynamic]. Hence, we conclude that $\eta_{\mathrm{max}}^{{{ {b\hspace{-1pt}n} }}}\geq \eta_{\mathrm{max}}^{\{{{ {w\hspace{-1pt}n} }},{{ {s\hspace{-1pt}g\hspace{-1pt}d} }}\}}+2\zeta$.

### Statistical Mechanics Analysis {#app:sm_proof}

In this section, we build an analytical model for the generalization ability of a single-layer network.
The framework is based on the Teacher-Student model, where the teacher network output $y^{\ast}=g^{\ast}\left(\mathbf{w^{\ast}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x}+s\right)$ is learned by a student network. The weight parameter of the teacher network satisfies $\frac{1}{N}\left(\mathbf{w}^{*}\right){{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{w}^{\ast}=1$, and the bias term $s$ is a random variable $s\sim\mathcal{N}\left(0,S\right)$, fixed for each training example $\mathbf{x}$, representing static observation errors in the training data. In the generalization analysis, the input is assumed to be drawn from $\mathbf{x}\sim\mathcal{N}\left(0,\frac{1}{N}\mathbf{I}\right)$. The output of the student can be written in a similar form $y=g\left(\widetilde{\mathbf{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{x}\right)$, where the activation function $g\left(\cdot\right)$ can be either linear or ReLU in the analysis, and $\widetilde{\mathbf{w}}$ is a general weight parameter that applies to either WN or common linear perceptrons. Here we take WN as an example, since it has been derived in this study that BN can be decomposed into WN with a regularization term on $\gamma$. In WN, $\widetilde{\mathbf{w}}=\sqrt{N}\gamma\frac{\mathbf{w}}{\Vert\mathbf{w}\Vert_{2}}$, and we define the same order parameters as in the previous section: $\gamma^{2}=\frac{1}{N}\widetilde{\mathbf{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\widetilde{\mathbf{w}}$ and $\gamma R=\frac{1}{N}\widetilde{\mathbf{w}}{{^{\mkern-1.5mu\mathsf{T}}}}\mathbf{w}^{\ast}$.

### Generalization Error {#sub:gen}

Since the learning task is a regression problem with the teacher output corrupted by Gaussian noise, it is natural to use the average mean square error loss $\epsilon_{t}=\frac{1}{P}\sum_{j}\left(y_{j}^{*}-y_{j}\right)^{2}$ for the regression.
The generalization error is defined as the expectation over the distribution of the input $\mathbf{x}$ and is written as $$\epsilon_{\mathrm{gen}}(\widetilde{\mathbf{w}})=\left\langle \left(y^{*}-y\right)^{2}\right\rangle _{\mathbf{x}},$$ where $\left\langle \cdot\right\rangle _{\mathbf{x}}$ denotes an average over the distribution of $\mathbf{x}$. The generalization error is a function of the weight parameter and can be converted to a function of only the aforementioned order parameters; a detailed derivation can be found in [@bos_statistical_1998; @krogh_generalization_1992]: $$\epsilon_{\mathrm{gen}}(\gamma,R)=\iint Dh_{1}Dh_{2}\left[g^{\ast}(h_{1})-g(\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2})\right]^{2},$$ where $h_{1}$ and $h_{2}$ are variables drawn from the standard Gaussian distribution and $Dh_{1}:=\mathcal{N}\left(0,1\right)dh_{1}$. When both the teacher network and the student network have a linear activation function, the above integration can be easily solved and $$\epsilon_{\mathrm{gen}}(\gamma,R)=1+\gamma^{2}-2\gamma R.$$ As for the case where the teacher network is linear and the student network has a ReLU activation, it can still be solved by first decomposing the loss function: $$\begin{aligned}\epsilon_{\mathrm{gen}}(\gamma,R) & =\iint Dh_{1}Dh_{2}\left[h_{1}-g(\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2})\right]^{2}\\ & =\iint Dh_{1}Dh_{2}\left[h_{1}^{2}+g(\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2})^{2}-2h_{1}g(\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2})\right]\\ & =1+\frac{\gamma^{2}}{2}-2\iint Dh_{1}Dh_{2}\left[h_{1}g(\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2})\right] \end{aligned}$$ It should be noted that the last two terms should only be integrated over the half space $\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2}>0$, and therefore if we define the angle of this line with the $h_{2}$ axis, $\theta_{0}=\arccos\left(R\right)$, the integration is transformed to polar coordinates: $$\begin{aligned}\epsilon_{\mathrm{gen}}(\gamma,R) & =1+\frac{\gamma^{2}}{2}-2\iint
Dh_{1}Dh_{2}\,h_{1}g(\gamma Rh_{1}+\gamma\sqrt{1-R^{2}}h_{2})\\ & =1+\frac{\gamma^{2}}{2}-2\int_{-\theta_{0}}^{\pi-\theta_{0}}d\theta\int_{0}^{\infty}rdr\frac{1}{2\pi}\exp(-\frac{r^{2}}{2})\left(\gamma Rr^{2}\sin^{2}(\theta)+\gamma\sqrt{1-R^{2}}r^{2}\cos\left(\theta\right)\sin\left(\theta\right)\right)\\ & =1+\frac{\gamma^{2}}{2}-\gamma R \end{aligned}$$ ### Equilibrium order parameters {#sub:equi_order} Following studies in statistical mechanics, the learning process of a neural network resembles a Langevin process [@mandt_stochastic_2017], and at equilibrium the network parameters $\theta$ follow a Gibbs distribution. That is, a weight vector that yields a lower training error has a higher probability. We have $p(\theta)=Z^{-1}\exp\{-\beta\epsilon_{t}(\theta;{{ \mathbf{x} }})\}$, where $\beta=1/T$ and $T$ is the temperature, representing the variance of noise during training and implicitly controlling the learning process. $\epsilon_{t}(\theta;{{ \mathbf{x} }})$ is the energy term given by the training loss, $Z=\int d\mathcal{P}(\theta)\exp\{-\epsilon_{t}(\theta;{{ \mathbf{x} }})/T\}$ is the partition function, and $\mathcal{P}(\theta)$ is a prior distribution. Instead of directly minimizing the energy term above, statistical mechanics finds the minima of the free energy, $f$, which is a function of $T$, accounting for the fluctuations of $\theta$ at finite temperature. We have $-\beta f=\langle\ln Z\rangle_{{{ \mathbf{x} }}}$. By substituting the parameters that minimize $f$ back into the generalization errors calculated above, we obtain the averaged generalization error at a given temperature. The SM solution requires differentiating $f$ with respect to the order parameters.
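Both closed forms for $\epsilon_{\mathrm{gen}}$ derived above are easy to verify by Monte Carlo over $(h_{1},h_{2})$; the following check is our own sketch:

```python
import numpy as np

# Monte-Carlo check of eps_gen for a linear teacher g*(h) = h against
# (i) a linear student: 1 + gamma^2 - 2*gamma*R
# (ii) a ReLU student:  1 + gamma^2/2 - gamma*R
rng = np.random.default_rng(1)
gamma, R = 0.9, 0.6
h1, h2 = rng.standard_normal((2, 1_000_000))
u = gamma * R * h1 + gamma * np.sqrt(1 - R**2) * h2   # student pre-activation

eps_lin = np.mean((h1 - u) ** 2)
eps_relu = np.mean((h1 - np.maximum(u, 0.0)) ** 2)

print(eps_lin, 1 + gamma**2 - 2 * gamma * R)
print(eps_relu, 1 + gamma**2 / 2 - gamma * R)
```

With $10^6$ samples the empirical averages agree with the closed forms to about two decimal places.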
In general, the expression of the free energy under replica theory is[@seung_statistical_1992] $$\label{eq:free_complete} -\beta f=\frac{1}{2}\frac{(\gamma^{2}-\gamma^{2}R^{2})}{q^{2}-\gamma^{2}}+\frac{1}{2}\ln(q^{2}-\gamma^{2})+\alpha\iint Dh_{1}Dh_{2}\ln\left[\int Dh_{3}\exp\left(-\frac{\beta\left(g-g^{\ast}\right)^{2}}{2}\right)\right]$$ where $$\begin{aligned}g:=g\left(\gamma Rh_{1}+\sqrt{\gamma^{2}-\gamma^{2}R^{2}}h_{2}+\sqrt{q^{2}-\gamma^{2}}h_{3}\right)\\ g^{\ast}:=g^{\ast}(h_{1}+s) \end{aligned}$$ In the above expression, $h_{1},h_{2},h_{3}$ are three independent variables following the standard Gaussian distribution, $\alpha=P/N$ is the ratio of the number of training samples $P$ to the number of unknown parameters $N$ in the network, $R=\frac{1}{N}\frac{{{ \mathbf{w} }}}{\Vert{{ \mathbf{w} }}\Vert_2}\cdot{{ \mathbf{w} }}^\ast$, and $q$ is the prior value of $\gamma$. The above equation can be used for a general SM solution of a network. However, it is notoriously difficult to solve, and only a few linear settings of the student network admit closed-form solutions[@bos_statistical_1998]. Here we extend the previous analysis of linear activations to a non-linear one, though still under the condition $\beta\rightarrow\infty$, which means that the student network undergoes an exhaustive learning that minimizes the training error. In the current setting, the student network is a nonlinear ReLU network while the teacher is a noise-corrupted linear one.
\[prop:free\_energy\] Given a single-layer linear teacher $y^{\ast}={{ \mathbf{w} }}^{\ast}{{ \mathbf{x} }}+s$ and a WN student network $y=g(\gamma\frac{{{ \mathbf{w} }}}{\Vert{{ \mathbf{w} }}\Vert_{2}}{{ \mathbf{x} }})$ with $g$ being the ReLU activation function and ${{ \mathbf{x} }}\sim\mathcal{N}(0, \frac{\mathbf{I}}{N})$, the free energy $f$ satisfies, as $\beta\rightarrow\infty$, $$\begin{aligned}-\beta f & =\frac{1}{2}\frac{(\gamma^{2}-\gamma^{2}R^{2})}{q^{2}-\gamma^{2}}+\frac{1}{2}\ln(q^{2}-\gamma^{2})\\ & -\frac{\alpha}{4}\ln\left(1+\beta\left(q^{2}-\gamma^{2}\right)\right)-\frac{\alpha\beta\left(1-2\gamma R+\gamma^{2}+S\right)}{4\left(1+\beta\left(q^{2}-\gamma^{2}\right)\right)}-\frac{\alpha\beta}{4}-\frac{\alpha\beta}{4}S \end{aligned}$$ where $S$ is the variance of the Gaussian noise $s$ injected into the output of the teacher. The most difficult step in Eqn.\[eq:free\_complete\] is the inner integration over $h_{3}$. As $\beta\rightarrow\infty$, the function $\exp\left(-\beta x\right)$ is appreciably nonzero only in a narrow notch around $x=0$ and vanishes elsewhere. Therefore, the integral $\int Dh_{3}\exp\left(-\frac{\beta\left(g-g^{\ast}\right)^{2}}{2}\right)$ depends on the value of $g^{*}$. If $g^{*}<0$, no solution of $g-g^{\ast}=0$ exists since $g$ is a ReLU output, and the integral is then dominated by the maximum of the integrand in the limit $\beta\rightarrow\infty$. If $g^{*}>0$, the integral over the “notch” is equivalent to the integral over the full range.
That is, $$\int Dh_{3}\exp\left(-\frac{\beta\left(g-g^{\ast}\right)^{2}}{2}\right)=\begin{cases} \int Dh_{3}\exp\left(-\frac{\beta\left(g-g^{\ast}\right)^{2}}{2}\right) & h_{1}+s>0\\ \max_{h_{3}}\exp\left(-\frac{\beta\left(g-g^{\ast}\right)^{2}}{2}\right) & h_{1}+s\leq0 \end{cases}$$ The above equation can be readily integrated, and we obtain $$\begin{aligned}\ln\int Dh_{3}\exp\left(-\frac{\beta\left(g-g^{\ast}\right)^{2}}{2}\right) & =-\frac{1}{2}\ln\left(1+\beta\left(q^{2}-\gamma^{2}\right)\right)\\ & -\frac{\beta}{2}\frac{\left(\left(1-\gamma R\right)h_{1}-\sqrt{\gamma^{2}-\gamma^{2}R^{2}}h_{2}+s\right)^{2}}{1+\beta\left(q^{2}-\gamma^{2}\right)} \end{aligned}$$ Substituting this back into Eqn.\[eq:free\_complete\], the quadratic part of its third term reads $$\begin{aligned}\textrm{Term3} & =\alpha\iint_{h_{1}+s>0}Dh_{1}Dh_{2}\left[-\frac{\beta}{2}\frac{\left(\left(1-\gamma R\right)h_{1}-\sqrt{\gamma^{2}-\gamma^{2}R^{2}}h_{2}+s\right)^{2}}{1+\beta\left(q^{2}-\gamma^{2}\right)}\right]\\ & =-\frac{\alpha\beta}{2}\iint_{h_{1}+s>0}Dh_{1}Dh_{2}\,\frac{\left(1-\gamma R\right)^{2}h_{1}^{2}+\left(\gamma^{2}-\gamma^{2}R^{2}\right)h_{2}^{2}+s^{2}+2\left(1-\gamma R\right)h_{1}s-2\sqrt{\gamma^{2}-\gamma^{2}R^{2}}h_{2}\left(\left(1-\gamma R\right)h_{1}+s\right)}{1+\beta\left(q^{2}-\gamma^{2}\right)} \end{aligned}$$ To solve this integral, we first note that $s$ is a random variable corrupting the teacher output, so the integral should be averaged over $s$.
Given that $s\sim\mathcal{N}\left(0,S\right)$, it is easy to see that $$\left\langle \int_{h+s>0}s^{2}Dh\right\rangle _{s}=\frac{S}{2},\text{ }\left\langle \int_{h+s>0}Dh\right\rangle _{s}=\frac{1}{2},\text{ and }\left\langle \int_{h+s>0}hsDh\right\rangle _{s}=0$$ Through simple Gaussian integrations, we get $$\begin{aligned}\textrm{Term3} & =\alpha\left[-\frac{1}{4}\ln\left(1+\beta\left(q^{2}-\gamma^{2}\right)\right)-\frac{\beta\left(1-2\gamma R+\gamma^{2}+S\right)}{4\left(1+\beta\left(q^{2}-\gamma^{2}\right)\right)}-\frac{\beta}{4}-\frac{\beta}{4}S\right]\end{aligned}$$ Substituting Term3 back yields the stated free energy. Therefore, by locating the values that minimize $f$ in the above proposition, we obtain the equilibrium order parameters $$\gamma^{2}=\frac{\alpha}{2a}+\frac{\alpha S}{2a-\alpha}$$ and $$\gamma R=\frac{\alpha}{2a}$$ where $a$ is defined as $a=\frac{1+\beta\left(q^{2}-\gamma^{2}\right)}{\beta\left(q^{2}-\gamma^{2}\right)}$. Substituting the order parameters back into the generalization error, we have $$\epsilon_{\mathrm{gen}}=1-\frac{\alpha}{4a}+\frac{\alpha S}{2\left(2a-\alpha\right)}$$ When $\alpha<2$ and $\beta\rightarrow\infty$, we have $a=1$ and the generalization error becomes $$\label{eq:relu_gen} \epsilon_{\mathrm{gen}}=1-\frac{\alpha}{4}+\frac{\alpha S}{2\left(2-\alpha\right)}$$ [^1]: The first three authors contributed equally. Correspondence to pluo.lhi@gmail.com, {wangxinjiang, pengzhanglin}@sensetime.com, weqish@link.cuhk.edu.hk. [^2]: $g'(x)$ denotes the first derivative of $g(x)$.
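As a quick consistency check (our own, not from the paper), substituting the equilibrium order parameters at $a=1$ into $\epsilon_{\mathrm{gen}}(\gamma,R)=1+\gamma^{2}/2-\gamma R$ reproduces the final formula above:

```python
# Consistency check: plugging the equilibrium order parameters (at a = 1,
# i.e. beta -> infinity and alpha < 2) into eps_gen = 1 + gamma^2/2 - gamma*R
# must reproduce eps_gen = 1 - alpha/4 + alpha*S / (2*(2 - alpha)).
def eps_from_order_params(alpha, S, a=1.0):
    gamma2 = alpha / (2 * a) + alpha * S / (2 * a - alpha)   # gamma^2
    gammaR = alpha / (2 * a)                                 # gamma * R
    return 1 + gamma2 / 2 - gammaR

def eps_closed_form(alpha, S):
    return 1 - alpha / 4 + alpha * S / (2 * (2 - alpha))

for alpha in (0.3, 1.0, 1.7):
    for S in (0.0, 0.2, 1.0):
        print(alpha, S, eps_from_order_params(alpha, S), eps_closed_form(alpha, S))
```

The two expressions agree identically, which is a useful sanity check on the algebra.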
On orthogonal systems of shifts of scaling function on local fields of positive characteristic. Gleb Sergeevich BERDNIKOV, Iuliia Sergeevna KRUSS, Sergey Fedorovich LUKOMSKII Department of Mathematical Analysis, Saratov State University, Saratov, Russia. [**Abstract:**]{} We present a new method for constructing an orthogonal step scaling function on local fields of positive characteristic which generates a multiresolution analysis. [**Key words:**]{} Local field, scaling function, multiresolution analysis.\ 1. Introduction {#introduction .unnumbered} ================ H. Jiang, D. Li, and N. Jin introduced the notion of multiresolution analysis (MRA) on local fields in the article [@JLJ]. For the fields $ F^{(s)}$ of positive characteristic $p$ they proved some properties and gave an algorithm for constructing wavelets from a known scaling function. Using these results they constructed a “Haar MRA” and the corresponding “Haar wavelets”. The problem of constructing an orthogonal MRA on the field $ F^{(1)}$ is studied in detail in the works [@YuF1; @YuF3; @YuF2; @SL; @WP; @YuFWP]. In [@LJ] a necessary condition and sufficient conditions for wavelet frames on local fields are given. B. Behera and Q. Jahan [@BJ1] constructed the wavelet packets associated with an MRA on local fields of positive characteristic. In the article [@BJ2] necessary and sufficient conditions on a function $\varphi\in L^2( F^{(s)})$ under which it is a scaling function for an MRA are obtained. These conditions are the following: $$\label{eq0.1} \sum_{k\in \mathbb N_0}|\hat\varphi(\xi+u(k))|^2=1$$ for a.e.
$\xi$ in the unit ball ${\cal D}$, $$\label{eq0.2} \lim\limits_{j\to\infty}|\hat\varphi(\mathfrak p^j\xi)|=1 \ for\ a.e.\ \xi \in F^{(s)},$$ and there exists an integral periodic function $m_0 \in L^2(\cal D)$ such that $$\label{eq0.3} \hat\varphi(\xi)=m_0(\mathfrak p\xi)\hat\varphi(\mathfrak p\xi)\ for \ a.e.\ \xi \in F^{(s)}$$ where $\{u(k) \}$ is the set of shifts and $\mathfrak p$ is a prime element. B. Behera and Q. Jahan [@BJ3] also proved that if the translates of the scaling functions of two multiresolution analyses are biorthogonal, then the associated wavelet families are also biorthogonal. So, to construct an MRA on a local field $ F^{(s)}$ we need to construct an integral periodic mask $m_0$ satisfying conditions (\[eq0.1\])–(\[eq0.3\]). To solve this problem, the articles [@JLJ], [@BJ3; @BJ2; @BJ1; @LJ] used prime element methods developed in [@MT]. In these articles only Haar wavelets are obtained. In the article [@SLAV] another method to construct integral periodic masks and corresponding scaling step functions that generate non-Haar orthogonal MRAs is developed. However, in [@SLAV] only the simple case of an elementary mask $m_0$ is considered, i.e. $m_0(\chi)$ is constant on cosets of $(F^{(s)+}_{-1})^\bot$ and takes only the two values 0 and 1. In this article, we remove these restrictions and give a method for constructing the scaling function under the sole condition that $|\hat\varphi|$ is a step function. We reduce this problem to the study of a certain dynamical system and prove that it has a fixed point. 2. Basic concepts {#basic-concepts .unnumbered} ================= Let $p$ be a prime number, $s\in \mathbb N$, and $GF(p^s)$ the finite field of order $p^s$. The local field $F^{(s)}$ of positive characteristic $p$ is isomorphic (Kovalski-Pontryagin theorem [@GGP]) to the set of formal power series $$a=\sum_{i=k}^{\infty}{\bf a}_it^i,\ k\in \mathbb{Z},\ {\bf a}_i\in GF(p^s).$$ Addition and multiplication in the field $F^{(s)}$ are defined as the sum and product of such series, i.e.
if $$a=\sum_{i=k}^{\infty}{\bf a}_it^i,\ b=\sum_{i=k}^{\infty}{\bf b}_it^i,$$ then $$a\dot+b=\sum_{i=k}^{\infty}({\bf a}_i\dot+ {\bf b}_i)t^i,\ {\bf a}_i\dot+ {\bf b}_i=({\bf a}_i + {\bf b}_i){\rm \ mod} \ p,$$ $$ab=\sum_{l=2k}^{\infty}t^l\ \sum_{i,j:i+j=l}({\bf a}_i {\bf b}_j).$$ The topology in $F^{(s)}$ is defined by the base of neighborhoods of zero $$F^{(s)}_n=\{a=\sum_{j=n}^\infty {\bf a}_jt^j|{\bf a}_j\in GF(p^s)\}.$$ If $$a=\sum_{j=n}^\infty {\bf a}_jt^j,\ {\bf a}_n\ne {\bf 0},$$ then by definition $\|a\|=(\frac{1}{p^s})^n$, which implies $$F^{(s)}_n=\{x\in F^{(s)}:\| x\|\le (\frac{1}{p^s})^n \}.$$ Thus we may consider the local field $F^{(s)}$ of positive characteristic $p$ as the field of sequences infinite in both directions $$a=(\dots ,{\bf 0}_{n-1},{\bf a}_n,\dots,{\bf a}_0,{\bf a}_1,\dots),\ {\bf a}_j\in GF(p^s)$$ which have only a finite number of nonzero elements ${\bf a}_j$ with negative $j$, where the operations of addition and multiplication are defined by the equalities $$a\dot+b=(({\bf a}_i\dot+ {\bf b}_i))_{i\in \mathbb Z},$$ $$\label{eq1.1} ab= (\sum_{i,j:i+j=l}({\bf a}_i {\bf b}_j))_{l\in \mathbb Z},$$ where $"\dot+"$ and $"\cdot"$ are respectively addition and multiplication in $GF(p^s)$. Thus $$\|a\|=\|(\dots,{\bf 0}_{n-1},{\bf a}_n,{\bf a}_{n+1},\dots)\|=(\frac{1}{p^s})^n, \ \mbox{\rm if}\ {\bf a}_n\ne {\bf 0},$$ $$F^{(s)}_n=\{a=({\bf a}_j)_{j\in \mathbb Z}: {\bf a}_j\in GF(p^s);\ {\bf a}_j=0,\ \forall j<n \}.$$ Let us consider $F^{(s)+}$, the additive group of the field $F^{(s)}$. The neighborhoods $F^{(s)}_n$ are compact subgroups of the group $F^{(s)+}$; we will denote them by $F^{(s)+}_n$. They have the following properties: 1) $\dots\subset F^{(s)+}_1\subset F^{(s)+}_0\subset F^{(s)+}_{-1}\dots$ 2) $F^{(s)+}_n/ F^{(s)+}_{n+1}\cong GF(p^s)^+$ and $\sharp (F^{(s)+}_n/ F^{(s)+}_{n+1})=p^s$. This implies that if $s=1$ then $F^{(1)+}$ is a Vilenkin group with the stationary generating sequence $p_n=p$.
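For $s=1$ these operations are straightforward to prototype. The following sketch (our own illustration, not part of the paper) represents a finitely supported element of $F^{(1)}$ as a dictionary mapping an index $j$ to the digit ${\bf a}_j\in GF(p)$:

```python
p = 3  # characteristic; digits live in GF(3)

def add(a, b):
    # coordinate-wise addition mod p: the operation "+." of the text
    out = {j: (a.get(j, 0) + b.get(j, 0)) % p for j in set(a) | set(b)}
    return {j: c for j, c in out.items() if c}

def mul(a, b):
    # product of formal power series: convolution of coefficients mod p
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = (out.get(i + j, 0) + ai * bj) % p
    return {j: c for j, c in out.items() if c}

def norm(a):
    # ||a|| = (1/p^s)^n with n the least index of a nonzero digit (s = 1 here)
    return 0.0 if not a else float(p) ** (-min(a))

e = {0: 1}            # multiplicative identity
a = {-1: 2, 0: 1}     # a = 2*t^{-1} + 1, so ||a|| = p
```

Note that adding $t^{-1}$ to $a$ cancels the digit at $j=-1$ (since $2+1\equiv 0 \bmod 3$), illustrating how the norm can drop under addition.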
The converse is also true: one can define multiplication in any Vilenkin group $(\mathfrak G,\dot +)$ with stationary generating sequence $p_n=p$ using equality (\[eq1.1\]). Supplied with this operation, $(\mathfrak G,\dot+,\cdot)$ becomes a field isomorphic to $F^{(1)}$, where $e=(\dots,0,0_{-1},1_0,0_1,\dots)$ is the neutral element with respect to multiplication. It is noted in [@AVSL] that the field $F^{(s)}$ can be described as a linear space over $GF(p^s)$. Using this description one may define the multiplication of an element $a\in F^{(s)} $ by an element $\overline\lambda \in GF(p^s)$ coordinatewise, i.e. $\overline\lambda a =(\dots {\bf 0}_{n-1},\overline\lambda {\bf a}_n,\overline\lambda {\bf a}_{n+1},\dots)$, and the modulus of $\overline\lambda \in GF(p^s)$ can be defined as $$|\overline\lambda|=\left\{ \begin{array}{ll} 1,& \overline\lambda \ne {\bf 0},\\ 0,& \overline\lambda = {\bf 0}.\\ \end{array} \right.$$ It is also proved there that the system $g_k\in F_k^{(s)}\setminus F_{k+1}^{(s)}$ is a basis in $F^{(s)}$, i.e. any element $a\in F^{(s)}$ can be represented as $a=\sum\limits_{k\in\mathbb{Z}}\overline\lambda_kg_k,\ \overline\lambda_k\in GF(p^s)$.\ From now on we will consider $g_k=(...,{\bf 0}_{k-1},(1^{(0)},0^{(1)},...,0^{(s-1)})_k,{\bf 0}_{k+1},...)$. In this case $\overline\lambda_k={\bf a}_k$. Let us define the sets $$H_0^{(r)}=\{h\in F^{(s)}: h={\bf a}_{-1}g_{-1}\dot+{\bf a}_{-2}g_{-2}\dot+\dots \dot+ {\bf a}_{-r}g_{-r} \},\ r\in \mathbb N,$$ $$H_0=\{h\in F^{(s)}:\;h={\bf a}_{-1}g_{-1}\dot+{\bf a}_{-2}g_{-2}\dot+\dots\dot+{\bf a}_{-r}g_{-r},\;r\in\mathbb N\}.$$ The set $H_0$ is the set of shifts in $F^{(s)}$. It is an analogue of the set of nonnegative integers. We will denote the collection of all characters of $F^{(s)+}$ by $X$. The set $X$ forms a commutative group under the multiplication of characters $(\chi*\phi) (a)=\chi(a)\cdot \phi(a)$. The inverse element is defined by $\chi^{-1}(a)=\overline{\chi(a)}$, and the neutral element is $e(a)\equiv1$.
Following \[15\] we define the characters $r_n$ of the group $F^{(s)+}$ in the following way. Let $x=(\dots,{\bf 0}_{k-1},{\bf x}_k,$ ${\bf x}_{k+1},\dots)$, ${\bf x}_j=(x_j^{(0)},x_j^{(1)},\dots,x_j^{(s-1)})\in GF(p^s)$. The element ${\bf x}_j$ can be written in the form ${\bf x}_j=(x_{js+0},x_{js+1},\dots,x_{js+(s-1)})$. In this case $$x=(\dots,0,...,0,x_{ks+0},x_{ks+1},\dots,x_{ks+s-1},x_{(k+1)s+0},x_{(k+1)s+1},\dots,x_{(k+1)s+s-1},\dots)$$ and the collection of all such sequences $x$ is a Vilenkin group. Thus the equality $r_n(x)=r_{ks+l}(x)=e^{\frac{2\pi i}{p}(x_{ks+l})}$ defines a Rademacher function of $F^{(s)+}$, and every character $\chi\in X$ can be described in the following way: $$\label{eq1.2} \chi=\prod \limits_{n\in\mathbb Z}{r}_n^{ a_n},\quad a_n=\overline{0,p-1}.$$ The equality (\[eq1.2\]) can be rewritten as $$\label{eq1.3} \chi= \prod\limits_{k\in\mathbb Z} r_{ks+0}^{a_k^{(0)}}r_{ks+1}^{a_k^{(1)}}\dots r_{ks+s-1}^{a_k^{(s-1)}}$$ and if we define $$r_{ks+0}^{a_k^{(0)}}r_{ks+1}^{a_k^{(1)}}\dots r_{ks+s-1}^{a_k^{(s-1)}}={\bf r}_k^{{\bf a}_k},$$ where ${\bf a}_k=(a_k^{(0)},a_k^{(1)},\dots,a_k^{(s-1)})\in GF(p^s)$, then (\[eq1.3\]) takes the form $$\label{eq1.4} \chi =\prod_{k\in \mathbb Z}{\bf r}_k^{{\bf a}_k}.$$ We will refer to ${\bf r}_k^{(1,0,\dots,0)}={\bf r}_k$ as the Rademacher functions.
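For $s=1$ it is easy to check numerically that each $r_n$ is indeed a character of $F^{(1)+}$, i.e. $r_n(x\dot+y)=r_n(x)\,r_n(y)$. A small self-contained sketch (our own illustration):

```python
import cmath

p = 3

def add(x, y):
    # coordinate-wise addition mod p in F^(1)+ (digits stored in a dict)
    return {j: (x.get(j, 0) + y.get(j, 0)) % p for j in set(x) | set(y)}

def r(n, x):
    # Rademacher function r_n(x) = exp(2*pi*i * x_n / p) for s = 1
    return cmath.exp(2j * cmath.pi * x.get(n, 0) / p)

x = {-1: 2, 0: 1, 3: 2}
y = {0: 2, 1: 1}
# character property: r_n(x +. y) = r_n(x) * r_n(y)
for n in (-1, 0, 1, 3):
    print(n, r(n, add(x, y)), r(n, x) * r(n, y))
```

The property holds because reduction mod $p$ in the digit only changes the exponent by a multiple of $2\pi i$.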
By definition we set $$({\bf r}_k^{{\bf a}_k})^{{\bf b}_k}={\bf r}_k^{{\bf a}_k{{\bf b}_k}}, \quad \chi^{\bf b}=(\prod {\bf r}_k^{{\bf a}_k})^{\bf b}=\prod {\bf r}_k^{{\bf a}_k\bf b}, \quad {\bf a}_k, {\bf b}_k, {\bf b}\in GF(p^s).$$ The definition of the Rademacher functions implies that if ${\bf x}=((x_k^{(0)},x_k^{(1)},\dots x_k^{(s-1)}))_{k\in \mathbb Z}$ and ${\bf u}=(u^{(0)},u^{(1)},\dots, u^{(s-1)})\in GF(p^s)$ then $$({\bf r}_k^{{\bf u}},{\bf x})=\prod\limits_{l=0}^{s-1}e^{\frac{2\pi i}{p}u^{(l)}x_k^{(l)}}.$$ In \[15\] the following properties of characters are proved: 1\) ${\bf r}_k^{{{\bf u}\dot+{\bf v}}}={\bf r}_k^{\bf u}{\bf r}_k^{\bf v}$, ${\bf u}, {\bf v}\in GF(p^s)$. 2\) $({\bf r}_k^{\bf v},{\bf u}g_j)=1$, $\forall k\ne j$, ${\bf u}, {\bf v}\in GF(p^s)$. 3\) The set of characters of the field $F^{(s)}$ is a linear space $(X,\; *,\; \cdot^{GF(p^s)})$ over the finite field $GF(p^s)$, with multiplication being the inner operation and raising to a power ${\bf u}\in GF(p^s)$ being the outer operation. 4\) The sequence of Rademacher functions $({\bf r}_k)$ is a basis in the space $(X,\; *,\; \cdot^{GF(p^s)})$. 5\) Any sequence of characters $\chi_k\in (F_{k+1}^{(s)})^\bot\setminus(F_k^{(s)})^\bot$ is also a basis in the space $(X,\; *,\; \cdot^{GF(p^s)})$, where ${F^{(s)}_n}^{\bot}$ is the annihilator of $F^{(s)+}_n$. The dilation operator ${\cal A}$ in the local field $F^{(s)}$ is defined as ${\cal A}x:=\sum_{n=-\infty}^{+\infty}{\bf a}_ng_{n-1}$, where $x=\sum_{n=-\infty}^{+\infty}{\bf a}_ng_n\in F^{(s)}$. In the group of characters it is defined by $(\chi {\cal A},x)=(\chi, {\cal A}x)$. 3. Scaling function and MRA {#scaling-function-and-mra .unnumbered} =========================== We consider the case of a step scaling function $\varphi$ that generates an orthogonal MRA.
The set of step functions constant on cosets of a subgroup $F_M^{(s)}$ with support ${\rm supp}(\varphi)\subset F^{(s)}_{-N}$ will be denoted by $\mathfrak D_M(F^{(s)}_{-N})$, $M,N\in \mathbb N$. Similarly, $\mathfrak D_{-N}({F^{(s)}_{M}}^\bot)$ is the set of step functions constant on cosets of the subgroup ${F^{(s)}_{-N}}^{\bot}$ with support ${\rm supp}(\varphi)\subset {F^{(s)}_M}^\bot$. If $\varphi\in \mathfrak D_M(F^{(s)}_{-N})$ generates an orthogonal MRA, it satisfies the refinement equation $\varphi(x)=\sum_{h\in H_0^{(N+1)}}\beta_h\varphi({\cal A}x\dot-h)$ [@SLAV], which can be rewritten in the frequency form $$\label{eq2.1} \hat\varphi(\chi)=m_0(\chi)\hat\varphi(\chi{\cal A}^{-1}),$$ where $$\label{eq2.2} m_0(\chi)=\frac{1}{p}\sum_{h\in H_0^{(N+1)}}\beta_h\overline{(\chi{\cal A}^{-1},h)}$$ is the mask of equation (\[eq2.1\]). For step functions, condition (\[eq0.3\]) and the orthogonality condition (\[eq0.1\]) are rewritten in [@SLAV] in terms of the Rademacher functions: 1\) If $\hat\varphi(\chi)\in\mathfrak D_{-N}({F^{(s)}_M}^\bot)$ is a solution of the refinement equation (\[eq2.1\]) and the system of shifts $(\varphi(x\dot-h))_{h\in H_0}$ is orthonormal, then $\varphi$ generates an orthogonal MRA. 2\) If $\hat\varphi(\chi)\in\mathfrak D_{-N}({F^{(s)}_M}^\bot)$, then the system of shifts $(\varphi(x\dot-h))_{h\in H_0}$ is orthonormal iff for any ${\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_{-1}\in GF(p^s)$ $$\label{eq2.3} \sum_{{\bf a}_{0},{\bf a}_1,\dots,{\bf a}_{M-1}\in GF(p^s)}|\hat\varphi({F^{(s)}_{-N}}^\bot {\bf r}_{-N}^{{\bf a}_{-N}}\dots {\bf r}_0^{{\bf a}_0}\dots {\bf r}_{M-1}^{{\bf a}_{M-1}})|^2=1.$$ Thus to construct an orthogonal MRA one must construct a function $\hat\varphi(\chi)\in\mathfrak D_{-N}({F^{(s)}_M}^\bot)$ which is a solution of the refinement equation (\[eq2.1\]) and which satisfies condition (\[eq2.3\]). Satisfying both conditions simultaneously is the main difficulty of this problem.
As already mentioned in the introduction, a method for constructing a scaling function that generates a non-Haar orthogonal MRA is given in [@SLAV]. It is constructed by means of a certain tree and results in a function such that $|\varphi|$ takes only two values: 0 and 1. A more general case is presented in the next section. 4. Construction of orthogonal scaling function {#construction-of-orthogonal-scaling-function .unnumbered} ============================================== [**Definition 4.1.**]{} *Let $F^{(s)}$ be a local field of positive characteristic $p$ and $N$ a natural number. Then by an $N$-valid tree we mean a tree oriented from leaves to root and satisfying the following conditions:* 1) Every vertex is an element of $GF(p^s)$, i.e. has the form ${\bf a}_{i}=(a_i^{(0)},a_i^{(1)},\dots,a_i^{(s-1)})$, $a_i^{(j)}=\overline{0,p-1}$. 2) The root and all vertices of level $N-1$ are equal to the zero element of $GF(p^s)$: ${\bf 0}=(0^{(0)},0^{(1)},\dots,0^{(s-1)})$. 3) Any path $({\bf a}_k\to{\bf a}_{k+1}\to\dots\to{\bf a}_{k+N-1})$ of length $N-1$ appears in the tree exactly once. Let us choose an $N$-valid tree $T$ and construct a scaling function using it. 1\) We will use the tree $T$ to construct a new tree $\tilde T$. Every vertex of the tree $\tilde T$ is a vector of $N$ elements, each being an element of $GF(p^s)$: ${\bf A}=({\bf a}_N,{\bf a}_{N-1},\dots,{\bf a}_1)$. Such vertices are constructed in the following way: if the tree $T$ has a path of length $N-1$ starting from ${\bf a}_N$, $${\bf a}_N\rightarrow{\bf a}_{N-1}\rightarrow\dots\rightarrow{\bf a}_1,$$ then in $\tilde T$ we will have a vertex with the value equal to the array of $N$ elements $({\bf a}_N,{\bf a}_{N-1},\dots$ $\dots,{\bf a}_1)$. Due to condition 3) of $N$-validity of the tree $T$, each such array corresponds to a unique vertex of the new tree $\tilde T$. Thus, the root of $\tilde T$ is an $N$-dimensional vector with all elements equal to the zero of $GF(p^s)$: ${\bf O}=({\bf 0},{\bf 0},\dots,{\bf 0})$.
Vertices of level 1 in the tree $\tilde T$ are $N$-dimensional vectors which have all their elements, except the first one, equal to the zero of $GF(p^s)$: $({\bf a}_i,{\bf 0},\dots,{\bf 0})$, where ${\bf a}_i$ is some vertex of level $N$ in the tree $T$. Vertices of level 2 in the tree $\tilde T$ are $N$-dimensional vectors $({\bf a}_{i_2},{\bf a}_{i_1},{\bf 0},\dots,{\bf 0})$, where ${\bf a}_{i_2}$ and ${\bf a}_{i_1}$ are some connected vertices of levels $N+1$ and $N$ of the tree $T$, respectively. We should note that in this example ${\bf a}_{i_1}\ne{\bf 0}$, but ${\bf a}_{i_2}$ may be the zero element of $GF(p^s)$. Thus in $\tilde T$ connected vertices have the form $({\bf a}_{i_N},{\bf a}_{i_{N-1}},\dots,{\bf a}_{i_1})\rightarrow({\bf a}_{i_{N-1}}, \dots,{\bf a}_{i_1},{\bf a}_{i_0})$. However, not all vertices satisfying this condition will be connected; the arcs are taken from the original tree $T$. If we denote $height(T)=H,$ $height(\tilde T)=\tilde H$, then obviously $\tilde H=H-N+1$. 2\) Now we construct a directed graph $\Gamma$ using $\tilde T$. We connect each vertex ${\bf A}_N=({\bf a}_N,{\bf a}_{N-1},\dots,{\bf a}_1)$ of $\tilde T$ to each vertex of lesser level of the form $({\bf a}_{N-1},\dots,{\bf a}_1,{\bf a}_0)$, i.e. having its first $N-1$ elements equal to the last $N-1$ elements of the vertex ${\bf A}_N$. The vertices to which ${\bf A}_N$ is connected will be denoted by $({\bf a}_{N-1},\dots,{\bf a}_1,\tilde {\bf a}_0)$. That is, ${\bf a}_0\in \{\tilde {\bf a}_0\}$ iff the vertex ${\bf A}_N$ is connected to $({\bf a}_{N-1},\dots,{\bf a}_1,{\bf a}_0)$ in the digraph $\Gamma$. 3\) Let us denote $$\lambda_{{\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_{-1},{\bf a}_0}= |m_0({F^{(s)}}^\bot_{-N}{\bf r}_{-N}^{{\bf a}_{-N}}{\bf r}_{-N+1}^{{\bf a}_{-N+1}}\dots {\bf r}_{-1}^{{\bf a}_{-1}}{\bf r}_{0}^{{\bf a}_{0}})|^2,$$ i.e.
$\lambda_{{\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_{-1},{\bf a}_0} $ is an $(N+1)$-dimensional array enumerated by the elements of $GF(p^s).$ If the vertex $({\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_{1})$ of the graph $\Gamma$ is connected to the vertices $({\bf a}_{N-1},{\bf a}_{N-2}\dots,{\bf a}_{1},$ $\tilde{\bf a}_0)$, then we define the values of the mask so that they satisfy the condition $$\label{eq3.1} \sum\limits_{\tilde{\bf a}_0} \lambda_{{\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_{-1},\tilde{\bf a}_0}=1 \ \mbox{and}\ \lambda_{{\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_{-1},{\bf a}_0}=0\ \mbox{for any ${\bf a}_0\notin\{\tilde{\bf a}_0\}.$}$$ Also, let us define $m_0({F^{(s)}_{-N}}^\bot)=1,$ which implies $\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}=1$. To present the main result we will need some extra notation. Firstly, we note that the orthonormality condition (\[eq2.3\]) for the system of shifts of $\varphi(x)$ can be rewritten as: for any ${\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_{-1}\in GF(p^s)$ $$1=\sum_{{\bf a}_0,{\bf a}_1,\dots,{\bf a}_{M-1}\in GF(p^s)}|\hat\varphi( {F^{(s)}_{-N}}^\bot {\bf r}_{-N}^{{\bf a}_{-N}}\dots {\bf r}_{-1}^{{\bf a}_{-1}}{\bf r}_{0}^{{\bf a}_0}\dots {\bf r}_{M-1}^{{\bf a}_{M-1}})|^2=$$ $$=\sum_{{\bf a}_0\in GF(p^s)}\lambda_{{\bf a}_{-N},{\bf a}_{-N+1},\dots,{\bf a}_0} \sum_{{\bf a}_1\in GF(p^s)}\lambda_{{\bf a}_{-N+1},{\bf a}_{-N+2},\dots,{\bf a}_1}\dots$$ $$\dots\sum_{{\bf a}_{M-2}\in GF(p^s)}\lambda_{{\bf a}_{M-N-2},{\bf a}_{M-N-1},\dots,{\bf a}_{M-2}}$$ $$\label{eq3.2} \sum_{{\bf a}_{M-1}\in GF(p^s)}\lambda_{{\bf a}_{M-N-1},{\bf a}_{M-N},\dots,{\bf a}_{M-1}}\lambda_{{\bf a}_{M-N}, {\bf a}_{M-N+1}, \dots,{\bf a}_{M-1},{\bf 0}} \dots\lambda_{{\bf a}_{M-1},{\bf 0},\dots,{\bf 0}}.$$ Let us then define a sequence of $N$-dimensional arrays $A^{(n)}=(a^{(n)}_{{\bf i}_1, {\bf i}_2,\dots, {\bf i}_N})_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N\in GF(p^s)}$ recursively by the relations $$\label{eq3.3}
a^{(0)}_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N}=\lambda_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N,{\bf 0}}\lambda_{{\bf i}_2,{\bf i}_3,\dots,{\bf i}_N,{\bf 0},{\bf 0}} \dots\lambda_{{\bf i}_N,{\bf 0},\dots,{\bf 0}},$$ $$\label{eq3.4} a^{(n)}_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N}=\sum_{{\bf j}\in GF(p^s)}\lambda_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N,{\bf j}}a^{(n-1)}_{{\bf i}_2,{\bf i}_3,\dots,{\bf i}_N,{\bf j}}$$ We will say that the element $a^{(n)}_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N}$ corresponds to the vertex $({\bf i}_1,{\bf i}_2,\dots,{\bf i}_N)$. Using this notation, the orthonormality condition (\[eq3.2\]) can be reformulated in the following way: the system of shifts of the function $\varphi(x)\in \mathfrak D_{M}(F^{(s)}_{-N})$ is orthonormal if and only if for any ${\bf i}_1,{\bf i}_2,\dots,{\bf i}_N$: $a^{(M)}_{{\bf i}_1,{\bf i}_2,\dots,{\bf i}_N}=1$; in other words, iff the array $A^{(M)}$ has all its elements equal to 1. [**Lemma 4.1.**]{} [*The components of $A^{(0)}$ corresponding to vertices of level $l\leq N$ in the tree $\tilde T$ are equal to 1.*]{} [**Proof.**]{} Firstly, let us notice that any vertex of $\tilde T$ of level $l\leq N$ has the form $({\bf a}_l,{\bf a}_{l-1},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}),\quad {\bf a}_1\neq {\bf 0}$. Indeed, if a vertex has level $l$ in $\tilde T$, then the first element of the vector (a vertex of $T$) has level $l+N-1$ in $T$ and is the beginning of the following path directed to the root: $({\bf a}_l\rightarrow{\bf a}_{l-1}\rightarrow\dots\rightarrow{\bf a}_1\rightarrow {\bf 0}\rightarrow\dots\rightarrow {\bf 0})$, where ${\bf a}_1$ is a vertex of level $N$ and is nonzero by the $N$-validity condition. We will prove the lemma by induction on $l$. Let $l=0$. Thus, we consider the root of $\tilde T$, which has the form $({\bf 0},{\bf 0},\dots,{\bf 0})$. By construction $\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}=1$. Its corresponding element of the array $A^{(0)}$ is $a^{(0)}_{{\bf 0},{\bf 0},\dots,{\bf 0}}$.
Let us substitute ${\bf i}_1,{\bf i}_2,\dots,{\bf i}_N={\bf 0}$ into (\[eq3.3\]). We obtain $$a^{(0)}_{{\bf 0},{\bf 0},\dots,{\bf 0}}=\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}\dots\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}=1.$$ Now we prove that if every vertex of level $l= k-1< N$ satisfies the condition $a^{(0)}_{{\bf a}_{k-1},{\bf a}_{k-2},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}=1$, then this condition is also satisfied by every vertex of level $l=k\leq N$ of the tree $\tilde T$. Using (\[eq3.3\]) and substituting ${\bf i}_1={\bf a}_{k-1},{\bf i}_2={\bf a}_{k-2},\dots,{\bf i}_{k-1}={\bf a}_1\neq {\bf 0},{\bf i}_{k}={\bf 0},\dots,{\bf i}_N={\bf 0},$ we rewrite the induction hypothesis: $$a^{(0)}_{{\bf a}_{k-1},{\bf a}_{k-2},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}=\lambda_{{\bf a}_{k-1},{\bf a}_{k-2},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}} \lambda_{{\bf a}_{k-2},{\bf a}_{k-3},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}} \dots\lambda_{{\bf a}_1,{\bf 0},\dots,{\bf 0}}\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}\dots$$ $$\begin{aligned} \label{eq4} \dots\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}=\lambda_{{\bf a}_{k-1},{\bf a}_{k-2},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}\lambda_{{\bf a}_{k-2},{\bf a}_{k-3},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}} \dots\lambda_{{\bf a}_1,{\bf 0},\dots,{\bf 0}}=1\end{aligned}$$ Here we omit $\lambda_{{\bf 0},{\bf 0},\dots,{\bf 0}}=1$. Now, let $${\bf A}_k=({\bf a}_k,{\bf a}_{k-1},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}),\quad {\bf a}_1\neq {\bf 0}$$ be a vertex of level $k$ of $\tilde T$. Let this vertex be connected to the vertex ${\bf A}_{k-1}=({\bf a}_{k-1},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0})$ of level $k-1$ in $\tilde T$. Then it can be shown that the vertex ${\bf A}_k$ is connected only to the vertex ${\bf A}_{k-1}$ in the digraph $\Gamma$ as well. Firstly, let us prove that in the graph $\Gamma$ the vertex ${\bf A}_k$ is not connected to any other vertex of level $k-1$ in $\tilde T$. We will prove this by contradiction.
Assume that ${\bf B}_{k-1}=({\bf b}_{k-1},\dots,{\bf b}_1\neq{\bf 0},{\bf 0},\dots,{\bf 0})$ is another vertex of level $k-1$ in $\tilde T$ and that ${\bf A}_k$ is connected to ${\bf A}_{k-1}$ and ${\bf B}_{k-1}$ in the graph $\Gamma$. By construction, if ${\bf A}_k$ is connected to ${\bf B}_{k-1}$ then ${\bf a}_i={\bf b}_i$ for all $i=\overline{1,k-1}$, which implies that the vertices ${\bf A}_{k-1}$ and ${\bf B}_{k-1}$ are identical, contradicting the uniqueness of the vertices in $\tilde T$ and $\Gamma$. Thus, there is only one vertex of level $k-1$ in $\tilde T$ to which ${\bf A}_k$ is connected in the graph $\Gamma$. Secondly, we prove that in $\Gamma$ the vertex ${\bf A}_k$ is not connected to any vertex whose level is strictly less than $k-1$ in the tree $\tilde T$. Let $n>1$ and ${\bf B}_{k-n}=({\bf b}_{k-n},\dots,{\bf b}_1,{\bf 0},\dots,{\bf 0})$ be an arbitrary vertex of level $k-n$ in $\tilde T$. By the construction of $\Gamma$, for the vertex ${\bf A}_k$ to be connected to ${\bf B}_{k-n}$ it is necessary that ${\bf a}_1={\bf 0}$, which is impossible by the assumption ${\bf a}_1\neq{\bf 0}$. Thus, we have proved that the vertex ${\bf A}_k$ is connected only to ${\bf A}_{k-1}$ in $\Gamma$. By construction this means that $\lambda_{{\bf a}_k,\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}=1$. Thus, substituting ${\bf i}_1={\bf a}_k,{\bf i}_2={\bf a}_{k-1},\dots,{\bf i}_k={\bf a}_1,{\bf i}_{k+1}={\bf 0},\dots,{\bf i}_N={\bf 0}$ into (\[eq3.3\]) and using the induction hypothesis we obtain $$a^{(0)}_{{\bf a}_k,\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}=\lambda_{{\bf a}_k,\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}} \lambda_{{\bf a}_{k-1},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}\dots\lambda_{{\bf a}_1,{\bf 0},\dots,{\bf 0}}=$$ $$=\lambda_{{\bf a}_k,\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}a^{(0)}_{{\bf a}_{k-1},\dots,{\bf a}_1,{\bf 0},\dots,{\bf 0}}=1.$$ The lemma is proved.
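The recursion (\[eq3.3\])–(\[eq3.4\]) is easy to iterate numerically. The following toy instance is our own illustration, assuming $p=3$, $s=1$, $N=1$ and a hypothetical mask built from the chain tree ${\bf 0}\leftarrow{\bf 1}\leftarrow{\bf 2}$ (which is $1$-valid); the free parameter $t$ may be any value in $[0,1]$:

```python
import numpy as np

p, t = 3, 0.4   # t in [0, 1]: free mask parameter for vertex 2
# lam[i, j] = lambda_{i, j}, built from the digraph Gamma of the chain tree 0 <- 1 <- 2:
# each row sums to 1 over the allowed successors, and is 0 elsewhere (condition (3.1))
lam = np.array([
    [1.0,     0.0, 0.0],   # root:      lambda_{0,0} = 1
    [1.0,     0.0, 0.0],   # vertex 1:  connected only to the root
    [1.0 - t, t,   0.0],   # vertex 2:  connected to vertices 0 and 1
])

A = lam[:, 0].copy()        # initial point (3.3): a^{(0)}_i = lambda_{i, 0}
history = [A.copy()]
for n in range(3):          # iterate (3.4): a^{(n)}_i = sum_j lam[i, j] * a^{(n-1)}_j
    A = lam @ A
    history.append(A.copy())
print(history)
```

Already $A^{(1)}$ is the all-ones array, i.e. the trajectory reaches the stationary point after one step, matching the finite-step convergence that Lemma 4.2 guarantees for tree-based masks.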
[**Lemma 4.2.**]{} [*Consider an $N$-valid tree $T$ and the tree $\tilde T$ and digraph $\Gamma$ constructed using it. Let the values of $m_0(\chi)$ be defined as specified in equalities (\[eq2.1\]). Let also $(A^{(n)})_{n=0}^\infty$ be a sequence of arrays defined by equalities (\[eq3.3\]) and ($\ref{eq3.4}$). Then the elements of the array $A^{(n)}$ corresponding to the vertices of level $l\leq N+n$ in the tree $\tilde T$ are equal to 1.*]{} [**Proof.**]{} We prove the lemma by induction. The validity of the base case $n=0$ follows from the previous lemma. Now we prove that if the elements of $A^{(n-1)}$ corresponding to vertices of level less than or equal to $N+n-1$ are equal to one, then the elements of $A^{(n)}$ corresponding to vertices of level less than or equal to $N+n$ are equal to one. Let ${\bf A}_N=({\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_1)$ be a vertex of level $l\leq N+n$ in $\tilde T$. In the graph $\Gamma$ it is connected to all vertices of lower level, which we denote as $({\bf a}_{N-1},\dots,{\bf a}_1,\tilde{\bf a}_0)$; moreover, $\sum\limits_{\tilde{\bf a}_0} \lambda_{{\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_{1},\tilde{\bf a}_0}=1$ and $\lambda_{{\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_{1},{\bf a}_0}=0 \ \forall {\bf a}_0\notin\{\tilde{\bf a}_0\}.$ It should also be mentioned that since the vertices $({\bf a}_{N-1},\dots,{\bf a}_1,\tilde{\bf a}_0)$ of $\tilde T$ have level not higher than $l-1\leq N+n-1$, the induction hypothesis gives $a^{(n-1)}_{{\bf a}_{N-1},\dots,{\bf a}_1,\tilde{\bf a}_0}=1$, $\forall \tilde{\bf a}_0\in\{\tilde{\bf a}_0\}.$ Then $$a^{(n)}_{{\bf a}_{N},{\bf a}_{N-1}\dots,{\bf a}_1}=\sum_{{\bf a}_0\in GF(p^s)} \lambda_{{\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_1,{\bf a}_0}a^{(n-1)}_{{\bf a}_{N-1},\dots,{\bf a}_1,{\bf a}_0}=$$ $$=\sum_{\tilde{\bf a}_0\in\{\tilde{\bf a}_0\}}\lambda_{{\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_1,\tilde{\bf a}_0} a^{(n-1)}_{{\bf a}_{N-1},\dots,{\bf a}_1,\tilde{\bf a}_0}=\sum_{\tilde{\bf a}_0\in\{\tilde{\bf a}_0\}}\lambda_{{\bf a}_{N},{\bf a}_{N-1},\dots,{\bf a}_1,\tilde{\bf a}_0}=1,$$ which proves the lemma.\ These lemmas directly imply the following theorem. [**Theorem 4.3.**]{} [*Let the tree $\tilde T$ and the digraph $\Gamma$ be constructed using an $N$-valid tree $T$. Let the values of $m_0(\chi)$ be defined as specified by equalities (\[eq3.1\]). Let $\tilde H=height(\tilde T).$ Then the equality $$\hat\varphi(\chi)=\prod\limits_{k=0}^\infty m_0(\chi{\cal A}^{-k})\in \mathfrak D_{-N}({F^{(s)}_M}^\bot)$$ defines an orthogonal scaling function $\varphi(x)\in\mathfrak D_M({F^{(s)}_{-N}})$, with $M\leq\tilde H-N$.*]{} [**Remark.**]{} Let us denote the collection of functions $f_N:\{0,1,...,p-1\}^N\to [0,1]$ by $\Phi_N$ and choose a function $\Lambda\in \Phi_{N+1}$. The function $\Lambda$ may be viewed as an $(N+1)$-dimensional array $\Lambda= (\lambda_{i_1,i_2,...,i_N,i_{N+1}})$. Then the equalities (\[eq3.4\]) define a discrete dynamical system $\Lambda:\Phi_N\to\Phi_N$, and the equality (\[eq3.3\]) defines the initial point of the trajectory. Theorem 4.3 specifies a class of discrete systems $\Lambda$ whose trajectories, started from the initial point (\[eq3.3\]), contain a stationary point. Theorem 4.3 was proved for $s=1,\ N=1$ by Iu. Kruss, for $s=1,\ N\in \mathbb N$ by G. Berdnikov, and for arbitrary $s, N\in \mathbb N$ by Iu. Kruss. The idea to consider a local field of positive characteristic as a vector space was proposed by S. Lukomskii. The results were obtained within the framework of the state task of the Russian Ministry of Education and Science (project 1.1520.2014K).
--- abstract: 'The phenomenon of six degrees of separation is an old but attractive subject. A deep understanding of it has not been achieved yet; in particular, how the closed paths contained in a network affect six degrees of separation is an important open question. Some studies have addressed it [@Newm21], [@Aoyama]. Recently we have developed a formalism [@Toyota3],[@Toyota4] to explore the subject, based on the string formalism developed by Aoyama[@Aoyama]. The formalism can systematically investigate the effect of closed paths, in particular of the generalized clustering coefficient $C_{(p)}$ introduced in [@Toyota4], on six degrees of separation. In this article, we analyze general $q$-th degrees of separation by using the formalism developed by us. We find that the scale free network with exponent $\gamma=3$ displays exactly six degrees of separation. Furthermore we derive a phenomenological relation between the separation number $q$ and $C_{(p)}$, which carries crucial information on the circle structures in networks.' author: - Norihito Toyota - 'Hokkaido Information University, Ebetsu, Nisinopporo 59-2, Japan' - 'email :toyota@do-johodai.ac.jp' title: '$p$-th Clustering coefficients and $q$-$th$ degrees of separation based on String-Adjacent Formulation ' --- **keywords:** Six Degrees of Separation, String, Clustering Coefficient, Adjacent Matrix, Generalized Clustering Coefficient Introduction {#intro} ============ In 1967, Milgram made a great impact on the world by advocating “six degrees of separation” in a celebrated paper [@Milg] based on a social experiment. “Six degrees of separation” indicates that people are connected through a surprisingly narrow circle of acquaintances. A series of social experiments made by him and his collaborators [@Milg2],[@Milg3] made the suggestion that all people in the USA are connected through about 6 intermediate acquaintances more certain.
Two breakthroughs made at the end of the last century in network theory mark the start of “complex network theory”. One is the small world networks proposed by Watts and Strogatz[@Watt1],[@Watt2]. The other is the scale free networks proposed by Barabasi et al.[@Albe2], [@Albe3]. Many empirical networks exhibit the characteristic feature of being scale free [@Albe1],[@Newm],[@Doro1],[@Doro2]. These frameworks provided compelling evidence that the small-world phenomenon is pervasive in a range of networks arising in nature and technology, and a fundamental ingredient in the evolution of the World Wide Web. Furthermore, Watts and his coworkers continued to explore six degrees of separation[@Watt4],[@Watt3]. We, however, think that the phenomenon of six degrees of separation is not well understood from a theoretical point of view. In particular, what effect does the clustering coefficient proposed in [@Watt1] have on it? If the network of human relations had a tree structure without circles, a person following his or her acquaintances step by step would reach new persons in powers of the average degree. Six degrees of separation would then not be so amazing, if a person has a few hundred acquaintances. The point is that networks of actual human relations include circles. Such structures decrease the number of new persons connected with a given person as he or she follows acquaintances step by step. One of the indices characterizing circle structures is the clustering coefficient. Thus it is important to investigate the effect of the clustering coefficient on six degrees of separation. It is, however, difficult to investigate the influence of circle structures of general size, and there are in fact only a few studies focused on the effect of circle structures. We have studied it from a theoretical point of view with these motivations.
First we investigated it by imposing a homogeneity hypothesis on networks[@Toyota1]. As a result, we found that under this hypothesis the clustering coefficient does not have any decisive effect on the propagation of information on a network, so that information easily spreads to a lot of people even in networks with a relatively large clustering coefficient; a person only needs dozens of friends for six degrees of separation. Moreover we studied six degrees of separation numerically, based on some models proposed by Pool and Kochen [@Pool], by using a computer[@Toyota2]. In that article, we estimated the clustering coefficient along the method developed by us [@Toyota1] and improved our analysis of the subject by marrying Pool and Kochen’s models to the method introduced in [@Toyota1]. As a result, it appears difficult on the whole for six degrees of separation to be realized in the models proposed by Pool and Kochen[@Pool]. These studies were, however, made only under rather restricted conditions on the networks. Newman studied the influence of circle structures in general networks on the subject[@Newm21]. The study is stimulating, but only triangle and quadrilateral structures on networks were considered, and it seems difficult to generalize his framework to $p$-polygons, that is, circles of general size $p$. Recently Aoyama proposed the string formulation for the subject[@Aoyama]. The idea greatly inspired our study in this article. Although the formalism is applicable to general networks with any circles, he unfortunately tackled the subject only in a tree approximation of networks. Since he deals mainly with scale free networks with small clustering coefficient, the approximation is valid up to a certain point. We developed the string formalism by fusing it with an adjacency matrix formulation so that one can analyze six degrees of separation even in networks with circles of general size[@Toyota3],[@Toyota4].
In [@Toyota3], the formalism and its justification are mainly given, and the formalism and analyses of two degrees of separation were reported as preliminary results in [@Toyota4]. Although we also defined the general $p$-th clustering coefficient $C_{(p)}$ in [@Toyota4], we have not yet discussed any relation between six degrees of separation and $C_{(p)}$. In this article we pursue the relations between the separation number $q$ and $C_{(p)}$, as well as general $q$ degrees of separation (where $q\leq 6$), in the string formulation. We then show that a phenomenological relation holds. The result naturally reflects the effect of circle structures in networks on separation. The plan of this article is as follows. After the introduction, we briefly review the formalism developed in [@Toyota3],[@Toyota4] in the following section 2. Following the formalism, we introduce generalized $p$-th clustering coefficients as well as the usual global one. In section 3, $q$-th degrees of separation (where $q\leq 6$) are investigated in scale free networks [@Albe2], [@Albe3] with various values of the exponent, based on the Milgram condition proposed by Aoyama[@Aoyama]. Though the obtained result is a little different from Aoyama’s, it does not crucially contradict Aoyama’s conjecture. The justification of our result is provided by estimating the power $A^q$ of an adjacency matrix $A$. We discuss the relation between the separation number $q$ and $C_{(p)}$ in section 4, where we show that a phenomenological relation holds. The last section 5 is devoted to a summary. Review of the String Formulation and the Adjacency Matrix {#usage} ================================================= String Formalism ---------------- We review the formalism given in [@Toyota3],[@Toyota4], following the formulation introduced by Aoyama [@Aoyama]. We consider a string-like part of a graph consisting of $j$ connected vertices and call it a “j-string”.
$N$ is the number of vertices in the network under consideration and $S_j$ is the number of j-strings in the network. (Note that $S_j$ in this article is $N$ times larger than $S_{j-1}^{Aoyama}$ defined by Aoyama[@Aoyama].) By definition, $S_1=N$ and $S_2$ is the number of edges in the network. $\bar{S}_j$ is the number of non-degenerate j-strings, where a non-degenerate string is defined as a string without any multiple edges and/or circles as subgraphs, as seen in Fig.1. We do, however, count strings homeomorphic to a circle as non-degenerate. We call strings containing no circles, as subgraphs or as a whole, “open strings”, and strings homeomorphic as a whole to a circle “closed strings”. Thus we consider both closed strings and open strings in this article. Calculating $S_j$ and $\bar{S}_j$ is difficult in general; at present it would probably be impossible to calculate $S_j$ and $\bar{S}_j$ with $j>7$ explicitly. ![Two types of strings ](strings.eps) Generalized Clustering Coefficient ---------------------------------- By using the string formulation, we can define the usual clustering coefficient, which essentially counts the number of triangular structures in a network. Although there are several definitions of the clustering coefficient[@Watt1],[@Newm21], we adopt the usual global clustering coefficient $C_{(3)}$ [@Newm21] defined by $$C_{(3)}=\frac{6\times \;number \;of \;triangles }{number \;of \;connected \;triplets }=\frac{ 6\Delta_3 }{\bar{S}_3},$$ where $\Delta_q$ generally denotes the number of polygons with $q$ edges in a network. Some authors have extended the clustering coefficient from triangles to quadrilaterals; we find it difficult, however, to extend it further to circles of larger size. But we need to introduce certain indices in order to uncover the properties of general polygon structures in networks.
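A minimal numerical sketch of Eq. (1), not code from the paper, may make the counting concrete. The graph (a triangle 0-1-2 with a pendant vertex 3 attached to vertex 0) is purely illustrative; we assume here that connected triplets are counted in both orientations, so that the number of non-degenerate 3-strings is $\sum_i k_i(k_i-1)$ and $C_{(3)}$ reduces to the familiar global transitivity.

```python
# Brute-force evaluation of Eq. (1) on a hand-made graph:
#   C_(3) = 6 * (number of triangles) / (number of oriented 3-strings).
from itertools import combinations

edges = {(0, 1), (0, 2), (1, 2), (0, 3)}   # triangle + pendant vertex
n = 4

def adj(u, v):
    return (min(u, v), max(u, v)) in edges

# count triangles directly
triangles = sum(adj(u, v) and adj(v, w) and adj(u, w)
                for u, v, w in combinations(range(n), 3))

# oriented connected triplets: k_i (k_i - 1) centred on each vertex i
deg = [sum(adj(i, j) for j in range(n) if j != i) for i in range(n)]
S3_bar = sum(k * (k - 1) for k in deg)

C3 = 6 * triangles / S3_bar
print(triangles, S3_bar, C3)   # 1 triangle, 10 oriented 3-strings, C3 = 0.6
```

For this graph the single triangle and the hub of degree 3 give $C_{(3)}=6\cdot 1/10=0.6$, the same value the adjacency matrix expression of the later Eq. (13) yields.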
From the expression of Eq.(1), we can generalize it to the $p$-th clustering coefficient $C_{(p)}$ straightforwardly; $$C_{(p)}=\frac{2p\times \;number \;of \;polygons }{number \;of\; connected \;p\mbox{-}plets }=\frac{ 2p\Delta_p }{\bar{S}_p}.$$ Adjacency Matrix Formulation --------------------------- We reformulate $C_{(p)}$ introduced in Eq.(2) by utilizing the adjacency matrix $A=(a_{ij})$. Generally the powers $A^2$, $A^3$, $A^4$, $\cdots$ of the adjacency matrix $A$ give information on whether a vertex connects to other vertices through $2,3,4, \cdots$ intermediate edges, respectively. The information on the connectivity between two vertices, $i_0$ and $i_n$, in $A^n$ generally also contains multiplicities of edges. To resolve this degeneracy, we introduce a new series of matrices $R^n$ which give information on whether a vertex connects to other vertices through $n$ intermediate edges without multiplicity. We can find it by the following formula[@Toyota3]; $$[ R^n] _{i_0i_n}=\displaystyle \sum_{i_1,\cdots,i_{n-1}} a_{i_0i_1} a_{i_1i_2}\cdots a_{i_{n-1}i_{n}} \frac{\displaystyle\prod_{k-j>1}(1-\delta_{i_ki_j})}{(1-\delta_{i_0i_n})},$$ where the product of $(1-\delta_{i_ki_j})$ in the numerator plays the role of removing degeneracies from strings, while the factor $(1-\delta_{ i_0i_n})$ in the denominator is needed to retain closed strings. This expression contains $(n-1)$-fold nested loops in a computer program, so that it is almost impossible to calculate $R^n$ in a realistic time for large $N$. The expansion of Eq.(3) formally has $2^{n(n-1)/2}$ terms. This value is $32768$ for $n=6$, which is needed for the analysis of six degrees of separation, as will be discussed in a later section. Though many terms actually vanish, $R^6$ still has quite a complex expression.
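On a tiny graph, Eq. (3) can be checked directly by enumerating walks and discarding the degenerate ones; the result should match the closed forms $R^2=A^2-\mathrm{diag}(A^2)$ and $R^3=A^3-\{G,A\}+A$ of Eq. (4). The following pure-Python sketch (illustrative, not the paper's code; the test graph is again a triangle with a pendant vertex) does exactly that.

```python
# Brute-force Eq. (3) vs. the closed forms of Eq. (4) for n = 2, 3.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# adjacency matrix of a triangle 0-1-2 with pendant vertex 3 on vertex 0
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
n = len(A)
deg = [sum(row) for row in A]
A2, A3 = matmul(A, A), matmul(matmul(A, A), A)

# closed forms: R^2 = A^2 - diag(A^2),  R^3 = A^3 - {G, A} + A
R2 = [[0 if i == j else A2[i][j] for j in range(n)] for i in range(n)]
R3 = [[A3[i][j] - deg[i] * A[i][j] - A[i][j] * deg[j] + A[i][j]
       for j in range(n)] for i in range(n)]

def brute(L):
    """Count L-step walks with pairwise-distinct vertices, except that
    the two endpoints may coincide (closed strings) when L >= 3."""
    R = [[0] * n for _ in range(n)]
    def extend(path):
        if len(path) == L + 1:
            if path[0] == path[-1]:
                if L < 3:        # a closed 2-walk is a doubled edge
                    return
                core = path[:-1]
            else:
                core = path
            if len(set(core)) == len(core):
                R[path[0]][path[-1]] += 1
            return
        for v in range(n):
            if A[path[-1]][v]:
                extend(path + [v])
    for s in range(n):
        extend([s])
    return R

assert brute(2) == R2 and brute(3) == R3
print("Eq. (4) closed forms agree with brute-force Eq. (3)")
```

Note that the diagonal of the brute-force $R^3$ counts each triangle through a vertex twice (once per orientation), so $\mathrm{Tr}\,R^3=6\Delta_3$, as used below.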
We give the expressions of $R^1\sim R^6$; $$\begin{aligned} [R^2]_{if} &=[A^2]_{if}-[A^2]_{ii}\delta_{if}=[A^2]_{if}-G_{if}, \nonumber \\ [R^3]_{if} &=[A^3]_{if}-\{ G,A \}_{if} +a_{if}, \nonumber \\ [R^4]_{if} &=[A^4]_{if}-\{ G,A^2 \}_{if} +\bigl\{A, diag(A^3)\bigr\}_{if} +2[A^2]_{if} +[G^2-G-AGA]_{if}+3a_{if}[A^2]_{if} \nonumber \\ [R^5]_{if} &=[A^5]_{if} -\bigl\{A, diag(A^4)\bigr\}_{if}-\{ G,A^3 \}_{if} -\bigl\{A^2, diag(A^3)\bigr\}_{if} +3\bigl([A^2]_{if}\bigr)^2 [A]_{if} \nonumber \\ &+3[A^3]_{if}[A]_{if}+2\{ G^2,A \}_{if}+[GAG]_{if}- 6\{ G,A \}_{if} -\{ AGA,A \}_{if} +3[A^3]_{if}\nonumber \\ & +\bigl\{A, diag(AGA)\bigr\}_{if}+2[diag(A^3G)]_{if} -[A\cdot diag(A^3)\cdot A]_{if} \nonumber - [diag(A^3)]_{if} \nonumber \\ &+3\sum_{k} a_{ik}a_{kf} \Bigl( [A^2]_{kf} + [A^2]_{ik}-\delta_{if}[A^2]_{kf} \Bigr) +4a_{if}\bigl( 1-a_{if} \bigr),\end{aligned}$$ where suffixes are abbreviated in trivial cases and $\{\cdot, \cdot\}$ denotes the anticommutator; $\{A,B\}=AB+BA$. $diagA$ indicates the diagonal matrix whose elements are the diagonal elements of $A$, and $G$ is the diagonal matrix defined by $$\begin{aligned} G&=&\left[ \begin{array}{cccc} k_1&0&0&\cdots \\ 0&k_2&0 &\cdots\\ 0&0&k_3 &\cdots\\ \vdots &\vdots&\vdots&\ddots \\ \end{array} \right] , \end{aligned}$$ where $k_i$ is the degree of vertex $i$. $R^6$ is obtained after straightforward but long and tedious calculations. We divide it into the following four parts to improve the prospects of the calculations.
$$\begin{aligned} [R^6]_{if} =&\sum_{j,k,l,m,n} a_{ij} a_{jk} a_{kl} a_{lm} a_{mn} a_{nf} \Delta_{ik} \Delta_{jl} \Delta_{km} \Delta_{ln} \Delta_{mf} \Delta_{il} \Delta_{jm} \Delta_{kn}\Delta_{lf} \Delta_{im} \Delta_{jn} \Delta_{kf}\Delta_{in} \Delta_{jf} \nonumber \\ =&\sum_{j,k,l,m,n} a_{ij} a_{jk} a_{kl} a_{lm} a_{mn} a_{nf} \Delta_{ik} \Delta_{jl} \Delta_{km} \Delta_{ln} \Delta_{mf} \Delta_{il} \Delta_{jm} \Delta_{kn}\Delta_{lf} \Delta_{im} \Delta_{jn} \Delta_{kf} \nonumber \\ &-\sum_{k,l,m,n} a_{if} a_{fk} a_{kl} a_{lm} a_{mn} a_{nf} \Delta_{ik} \Delta_{fl} \Delta_{km} \Delta_{ln} \Delta_{mf} \Delta_{il} \Delta_{kn} \Delta_{im} \nonumber \\ &-\sum_{j,k,l,m} a_{ij} a_{jk} a_{kl} a_{lm} a_{mi} a_{if} \Delta_{ik} \Delta_{jl} \Delta_{km} \Delta_{li} \Delta_{mf} \Delta_{jm} \Delta_{lf} \Delta_{kf} \nonumber \\ &+\sum_{k,l,m} a_{if} a_{fk} a_{kl} a_{lm} a_{mi} \Delta_{ik} \Delta_{fl} \Delta_{km} \Delta_{li} \Delta_{mf}, \nonumber \\ \equiv & R^6[1]_{if} + R^6[2]_{if}+R^6[3]_{if}+ R^6[4]_{if},\end{aligned}$$ where $\Delta_{ik} = 1-\delta_{ik}$. Furthermore we divide $R^6[1]_{if}$ into the following four parts to improve the prospects of the calculation.
$$\begin{aligned} R^6[1]_{if} =&\sum_{j,k,l,m,n} a_{ij} a_{jk} a_{kl} a_{lm} a_{mn} a_{nf} \Delta_{ik} \Delta_{jl} \Delta_{km} \Delta_{ln} \Delta_{mf} \Delta_{il} \Delta_{jm} \Delta_{kn}\Delta_{lf} \Delta_{im} \Delta_{jn} \Delta_{kf} \nonumber \\ =&\sum_{j,k,l,m,n} a_{ij} a_{jk} a_{kl} a_{lm} a_{mn} a_{nf} \Delta_{ik} \Delta_{jl} \Delta_{km} \Delta_{ln} \Delta_{mf} \Delta_{il} \Delta_{jm} \Delta_{kn} \Delta_{lf} \Delta_{jn} \nonumber \\ -& \sum_{j,k,l,n} a_{ij} a_{jk} a_{kl} a_{li} a_{in} a_{nf} \Delta_{ik} \Delta_{jl} \Delta_{ln} \Delta_{if} \Delta_{kn} \Delta_{lf} \Delta_{jn} \Delta_{kf} \nonumber \\ -& \sum_{j,l,m,n} a_{ij} a_{jf} a_{fl} a_{lm} a_{mn} a_{nf} \Delta_{if} \Delta_{jl} \Delta_{fm} \Delta_{ln} \Delta_{il} \Delta_{jm} \Delta_{jn} \nonumber \\ +& \sum_{j,l,n} a_{ij} a_{jf} a_{fl} a_{li} a_{in} a_{nf} \Delta_{if} \Delta_{jl} \Delta_{ln} \Delta_{jn} \Delta_{mf}, \nonumber \\ \equiv & R^6[1,1]_{if} + R^6[1,2]_{if}+R^6[1,3]_{if}+ R^6[1,4]_{if}. \end{aligned}$$ The four terms are respectively expressed as follows; $$\begin{aligned} R^6[1,1]_{if} =&[A^6]_{if}+[A^4]_{if} \bigl(4-(k_i+k_f) \bigr) +[AGA]_{if}(k_i+k_p)-\{AGA,A^2 \}_{if}-[A^2GA^2]_{if} \nonumber \\ &+2[A(G^2-3G)A]_{if} +3\sum_{j,k} a_{ij}a_{jk}a_{kf}[A^2]_{jk}-\sum_j [A^3]_{jj} \bigl( a_{ij}[A^2]_{jp}+ [A^2]_{ij} a_{jf}\bigr) \nonumber \\ +&2\sum_j [A^2]_{ij} [A^2]_{jf} \bigl( a_{ij}+ a_{jf}\bigr) + [A^2]_{if} \bigl( k^2_i+k^2_f -3(k_i+k_f)+4\bigr) \nonumber \\ - &[A^3]_{if}\bigl( [A^3]_{ii}+ [A^3]_{ff} \bigr) + \bigl([A^3]_{if} \bigr)^2 + \sum_{j} a_{ij}a_{jf}\Bigl( \bigl( [A^3]_{ij} + [A^3]_{fj}\bigr)\nonumber \\ -& [A^4]_{jj} -2\bigl( [A^2]_{ij} + [A^2]_{fj}\bigr)+ [AGA]_{jj} + \bigl( ([A^2]_{ij})^2 + ([A^2]_{fj} )^2 \bigr) \Bigr) \nonumber \\ +&\Delta_{if} \Biggl( [A^2]_{if} \bigl( (k_i-1)(k_f-1) +1- [A^2]_{if} \bigr) -\bigl( [A^3]_{if} \bigr)^2 +\sum_j [A^2]_{ij} [A^2]_{jf} \bigl( a_{ij}+ a_{jf}\bigr) \nonumber \\ +& \sum_{j} a_{ij}a_{jf}\Bigl( \bigl( ([A^2]_{ij})^2 + ([A^2]_{fj} )^2 \bigr) 
-\bigl( [A^2]_{ij} + [A^2]_{fj}\bigr) \Bigr) \Biggr) \nonumber \\ + & a_{if} \Biggl( [A^3]_{ff} \bigl(2k_f+k_i-5\bigr) +[A^3]_{ii} \bigl(2k_i+k_f-5\bigr) +[A^2]_{if} \bigl(11-3k_i-3k_f\bigr) \nonumber \\ & -2 \sum_{j} a_{ij}a_{jf}\Bigl( \bigl( [A^2]_{ij} + [A^2]_{fj}\bigr) \Bigr) \Biggr), \nonumber \\ R^6[1,2]_{if} +&R^6[1,3]_{if} = -\Delta_{if} \Biggl( [A^2]_{if} \bigl( [A^4]_{ii}+ [A^4]_{ff} \bigr) +4[AGA]_{if} -\{ A^2, G^2-3G \}_{if} -\{ AGA, A^2 \}_{if} \nonumber \\ -4& [A^2]_{if} -\sum_{j} a_{ij} a_{jf} \biggl( \Bigl([A^2]_{if})^2+ ([A^2]_{if})^2\Bigr) +2\bigl( [A^3]_{ij} + [A^3]_{fj}\bigr) -\bigl( [A^2]_{ij} + [A^2]_{fj}\bigr) \biggr) \Biggr) \nonumber \\ +&a_{if} \Biggl( -2[A^2]_{if} [A^3]_{if} +2[A^2]_{if}(k_i+k_f-3) +2 \sum_{j} a_{ij}a_{jf} \bigl( [A^2]_{ij} + [A^2]_{fj}\bigr) \Biggr), \nonumber \\ R^6[1,4]_{if} =& [A^3]_{if}\Delta_{if} \Bigl( ([A^3]_{if})^2 - 3 [A^2]_{if} +2 \Bigr). \end{aligned}$$ $R^6[2]_{if} $, $R^6[3]_{if} $ and $R^6[4]_{if} $ are respectively given by the following expressions; $$\begin{aligned} R^6[2]_{if} +&R^6[3]_{if} = a_{if} \Biggl( 2[A^4]_{if} -( \bigl( [A^5]_{ii} + [A^5]_{ff}\bigr) -7\bigl( [A^3]_{ii} + [A^3]_{ff}\bigr) +22[A^2]_{ij} \nonumber \\ & +4[A^3]_{if}[A^2]_{if} +2\bigl( [A^3]_{ii}k_i + [A^3]_{ff}k_f\bigr)+\sum_{j} [A^3]_{jj} \bigl(a_{jf}+a_{ij}\bigr) \nonumber \\ &-4\sum_{j} a_{ij}a_{jf} \bigl( [A^2]_{ij} + [A^2]_{fj}\bigr) -6\{A^2,G \}_{if} -2[AGA]_{if} +\{A,AGA\}_{ii}+\{A,AGA\}_{ff} \Biggr), \nonumber \\ R^6[4]_{if} =& a_{if} \Biggl( [A^4]_{if} - [AGA]_{if} -\{A^2,G\} +5 [A^2]_{if} - \Bigl( [A^3]_{ii} + [A^3]_{ff} \Bigr) \Biggr). \end{aligned}$$ By unifying all the terms, we obtain the full expression of $R^6$. Lastly we give the expressions of Tr $R^n$ appearing in Eq. (7). 
$$\begin{aligned} Tr (R^2) &=0, \nonumber \\ Tr (R^3) &=Tr (A^3), \nonumber \\ Tr(R^4) &= Tr(A^4)-3 Tr(GA^2) +2Tr(A^2) + Tr(G^2-G), \nonumber \\ Tr(R^5) &= Tr(A^5)-3 Tr(GA^3) +6 Tr(A^3) -diag(A^3) Tr(A^2) +Ndiag(2A^3G-A^3), \nonumber \\ Tr(R^6)&= Tr(A^6) +6Tr(A^4)-5Tr(GA^4) -4Tr(A^3) +Tr(A^2G^2) -6Tr(A^2G)+4Tr(A^2) \nonumber \\ &+2Tr(AGAG) -\sum_i (a_{ii})^2 -\sum_{i,j} [A^3]_{jj}a_{ij}[A^2]_{ij} + 6 \sum_{i,j} a_{ij}[A^2]_{ij} +\sum_{i,j,k} a_{ij} a_{jk} a_{ki} [A^2]_{jk}. \end{aligned}$$ By using $R^n$, $\bar{S}_p$ and the generalized $p$-th clustering coefficient $C_{(p)}$ are given by $$\bar{S}_p=\sum_{i,j} (R^{p-1})_{ij}/2,$$ $$C_{(p)}=\frac{\mbox{Tr} R^p }{ \displaystyle \sum_{i,j}R^{p-1}},$$ where the denominator and the numerator indicate the contributions from open strings and closed strings, respectively. Thus the usual clustering coefficient $C_{(3)}$ becomes $$\displaystyle C_{(3)}=\frac{\mbox{Tr} R^3 }{ \displaystyle \sum_{i,j} (A^2)_{ij}-(A^2)_{ij} \delta_{ij} }=\frac{ \mbox{Tr} A^3 }{ ||A^2||-\mbox{Tr} A^2 },$$ where we introduced a new symbol $|| \cdots ||$ which denotes $ ||A|| \equiv \sum_{i,j}A_{ij} $. Application to Six Degrees of Separation ======================================== We analyze general $q$-th degrees of separation based on the formalism developed in section 2. Aoyama has proposed a condition, the so-called Milgram condition, for $q$-th degrees of separation[@Aoyama]; $$M_{q+1} \equiv \frac{\bar{S}_{q+1}}{N} \sim O(N).$$ For six degrees of separation, we obtain from Eq.(6) $$\bar{S}_7= \sum_{i,j} (R^6)_{ij}/2.$$ We investigate $q$-th degrees of separation by using Eqs.(4)-(10) and the Milgram condition. Here we place the focus on scale free networks, where the degree distribution is $P(k)\sim k^{-\gamma}$. The networks can be constructed based on the configuration model [@Bebe],[@Bend],[@Moll], which can systematically produce networks with an arbitrary degree distribution.
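The quantities of Eqs. (10)-(14) can be assembled end to end on a small graph. The following pure-Python sketch (illustrative only; the 4-cycle 0-1-2-3 with chord 1-3 is an arbitrary test graph) computes $\bar{S}_3$, $C_{(3)}$ both from $R$ matrices and from the $A$-only form of Eq. (13), and $M_3$.

```python
# Eqs. (10)-(14) on a 4-cycle with one chord, using the closed forms
# of Eq. (4) for R^2 and R^3.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

A = [[0, 1, 0, 1],
     [1, 0, 1, 1],
     [0, 1, 0, 1],
     [1, 1, 1, 0]]           # 4-cycle 0-1-2-3 plus chord 1-3
n = len(A)
deg = [sum(row) for row in A]
A2, A3 = matmul(A, A), matmul(matmul(A, A), A)

R2 = [[0 if i == j else A2[i][j] for j in range(n)] for i in range(n)]
R3 = [[A3[i][j] - deg[i] * A[i][j] - A[i][j] * deg[j] + A[i][j]
       for j in range(n)] for i in range(n)]

S3_bar = sum(map(sum, R2)) / 2                              # Eq. (11), p = 3
C3 = sum(R3[i][i] for i in range(n)) / sum(map(sum, R2))    # Eq. (12)
# Eq. (13): the same quantity through A alone
trA2 = sum(A2[i][i] for i in range(n))
C3_direct = sum(A3[i][i] for i in range(n)) / (sum(map(sum, A2)) - trA2)
M3 = S3_bar / n                                             # Eq. (14), q + 1 = 3
print(S3_bar, C3, C3_direct, M3)
```

For this graph there are two triangles, so $\mathrm{Tr}\,A^3=12$, $\bar{S}_3=8$, and both routes give $C_{(3)}=0.75$.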
But the networks produced by the configuration model are generally degenerate multigraphs. We modify it a little to produce networks without multiple edges. Since this is not essential in this article, we omit the technical details. Although Eq.(3) reduces to Eqs.(4)-(9), we cannot evaluate the Milgram condition in large scale networks because of the considerable computational complexity. We can see, however, that the results are stable and reliable when the estimations are carried out in small networks. Fig. 2 shows the relation between $\log_{10} M_q/N$ and $q$ for several $\gamma$’s, where the average degree $\langle k \rangle$ is four and the network size is $N=200$. $M_q/N$ increases linearly with $q$ for every $\gamma$. The interior of the rectangle in Fig.2 shows the region where the Milgram condition is satisfied. From Fig.2, while we see four degrees of separation in networks with $\gamma \leq 2.5$, we cannot recognize that vertices are linked together in networks with $\gamma \geq 3.5$ up to six degrees of separation. $\gamma=2.75$ shows five degrees of separation, and $\gamma=3.0$, a value of the exponent shared by many real-world networks, shows exactly six degrees of separation. Comparing these results with Aoyama’s [@Aoyama], for which we take the median of the quoted region, there is a small difference between the two, as shown in Table 1. In particular, Aoyama’s assertion that $\gamma=2$ is a critical point for two degrees of separation seems to conflict with our result. But Aoyama gives only a region in which the separation number lies for every $\gamma$, and so we take the medians of those regions in Table 1. Considering moreover that Aoyama’s calculations are based on a tree approximation, so that the separation number $q$ is only an estimate, the two results are not necessarily inconsistent. Furthermore, the estimations depend on how we build up the networks, even for networks with the same $\gamma$.
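Since the modified construction is only sketched in the text, the following is an illustrative stand-in, not the paper's code: degrees are drawn from $P(k)\sim k^{-\gamma}$, stubs are paired at random, and any pairing that would create a self-loop or a multi-edge is simply discarded (so realized degrees only approximate the drawn sequence). The ratio $r$ of vertex pairs connected within $q$ steps, used below for Fig. 3, is then read off by breadth-first search, which is equivalent to taking boolean powers of the adjacency matrix.

```python
# A simple-graph variant of the configuration model plus the ratio r(q).
import random
random.seed(7)

def powerlaw_degrees(n, gamma, kmin=2, kmax=30):
    ks = list(range(kmin, kmax + 1))
    weights = [k ** (-gamma) for k in ks]
    degs = random.choices(ks, weights=weights, k=n)
    if sum(degs) % 2:
        degs[0] += 1                     # total degree must be even
    return degs

def simple_configuration_graph(degs):
    stubs = [v for v, k in enumerate(degs) for _ in range(k)]
    random.shuffle(stubs)
    nbrs = {v: set() for v in range(len(degs))}
    while len(stubs) >= 2:
        u, v = stubs.pop(), stubs.pop()
        if u != v and v not in nbrs[u]:  # drop loops and multi-edges
            nbrs[u].add(v)
            nbrs[v].add(u)
    return nbrs

def ratio_connected(nbrs, q):
    """Fraction of ordered pairs (i, j), i != j, joined by a path of length <= q."""
    n = len(nbrs)
    hits = 0
    for i in range(n):
        seen, frontier = {i}, {i}
        for _ in range(q):
            frontier = {w for v in frontier for w in nbrs[v]} - seen
            seen |= frontier
        hits += len(seen) - 1
    return hits / (n * (n - 1))

nbrs = simple_configuration_graph(powerlaw_degrees(200, 3.0))
for q in range(1, 7):
    print(q, round(ratio_connected(nbrs, q), 3))
```

By construction $r(q)$ is non-decreasing in $q$, and the $q$ at which it crosses the $50\%\sim60\%$ borderline discussed below can be compared with the value read off from $M_q$.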
The fact that our result comes closer to Aoyama’s [@Aoyama] for smaller $N$ (we do not go into the details) is consistent with Aoyama’s remark[@Aoyama] that the accuracy of his calculations decreases for larger $\gamma$. ![ Separation number $q$ vs. $M_q$ for scale free networks with several $\gamma$](SeparationQ.eps){width="12cm" height="9cm"}

  $\gamma$           2   2.5   2.75   3   3.5
  ------------------ --- ----- ------ --- -----
  Our results        4   4     5      6   ×
  Aoyama’s results   2   3     4      4   ×

  : Comparison of our and Aoyama’s $q$ for diverse $\gamma$

We can demonstrate the validity of our results by directly evaluating the ratio $r$ of mutually connected vertex pairs to all pairs from the powers of the adjacency matrix, since the network size $N$ is small. Fig.3 shows the relation between $q$ and $r$ for every $\gamma$. When every node connects with $50\%\sim60\%$ of the vertices in a network, it may generally be claimed that the network is almost connected. Taking $r>50 \%\sim 60\%$ as the borderline, the $q$ values derived from it are consistent with those estimated from $M_q$ in our calculation. Thus the point where $M_q/N$ becomes $O(1)$ really indicates that a majority of the vertices in a network are connected to each other. ![ Separation number $q$ vs. $r$ for scale free networks ](Ajjacent6.eps){width="9cm" height="6cm"} ![ The sum of $C_{(p)}$ and $M_q$ for the scale free network with $\gamma=3.0$ ](CpMq.eps){width="8cm" height="6cm"} Milgram Condition and Generalized Clustering coefficient ======================================================== In this section we explore the relation between the Milgram condition and the generalized clustering coefficients. By doing so, we can analyze how the circle structures in a network are related to the separation number $q$. We define the following two quantities; $$\begin{aligned} X &\equiv& \sum_{p=3}^{q} C_{(p)},\\ \nonumber Y &\equiv& \log_{10} M_q.\end{aligned}$$ Fig.4 shows the relation between $X$ and $Y$ at $\gamma=3.0$ in the scale free network with $N=200$.
We can recognize that $Y$ increases linearly with $X$; $$Y =A X +B.$$ Such a relation holds commonly for $1.8 \leq \gamma <4.0$. That is to say, it becomes clear that there is an exponential relation between $M_q$ and the sum of the generalized clustering coefficients; $$M_q \sim \exp ( c \sum_{p=3}^{q} C_{(p)}),$$ where $c$ is a constant determined by $A$ and $B$. Thus the separation number $q$ depends greatly on the sum of $C_{(p)}$ ($p \leq q$), which represents the state of the circle structures up to size $q$ in a network. This indicates that the generalized clustering coefficient introduced in this article is an effective index for exploring $q$-th degrees of separation. ![ The sum of $C_{(p)}$ and $M_q$ for the scale free networks with $\gamma=2.0,2.25,2.5,2.75,3.0,3.5,4.0$ ](X-Yall.eps){width="8cm" height="6cm"} We observe that further relations hold for $q$ and $M_q/N$ by drawing a superposed diagram of the above-mentioned linear relation for diverse $\gamma$’s. Fig.5 is the superposed diagram for $2.0 \leq \gamma <4.0$. The lines for $2.0 \leq \gamma <4.0$ nearly join into a single line with an almost common gradient. This means that $q$ depends only on the generalized clustering coefficient and not directly on $\gamma$. Thus the exponent of a scale free network is not crucial for the separation number; rather, the state of the circle structures in the network is essential. The reason why these relations hold is an outstanding issue; at present they are only phenomenological. Summary ======= In this article, we first introduced the generalized clustering coefficient, which carries information on the state of circle structures in a network, based on the string formulation proposed in [@Aoyama] for analyzing networks. Fusing the adjacency matrix $A$ into the formalism, we reformulated the string formalism so as to define the generalized $q$-th clustering coefficient in a compact way[@Toyota3], [@Toyota4].
We then introduced the $R$ matrix in the formalism developed in this article instead of $A$. The powers of $R$ play the central role in the analysis of this article. The explicit representations of $R^n$ for $n=2\sim6$ were given after straightforward but tedious calculations. Next we applied the formulation to the subject of $q$-th, especially $q=6$, degrees of separation. We evaluated whether or not the Milgram condition proposed in Aoyama’s article holds for diverse exponents of scale free networks. We find that the larger the exponent $\gamma$, the more difficult it is for the Milgram condition to hold. Six degrees of separation is found exactly at $\gamma=3$, a value fairly universally observed in real-world networks. We also find that the result seems to differ a little from Aoyama’s[@Aoyama]. We think that this does not necessarily mean an inconsistency, considering that Aoyama’s evaluation is based on a tree approximation and furthermore that the ways of constructing the networks may differ (Aoyama does not explain his way of constructing networks, and the construction in this article includes an original way of avoiding multiedges). Our results are also supported by analyzing the number of zero components in $A^n$. Our construction is based on the configuration model[@Bebe],[@Bend],[@Moll] with average degree $\langle k \rangle=4$. According to some sociologists, the estimated average number of acquaintances of a person is 290 [@Bernard1],[@Bernard2],[@Bernard3]. Considering this estimate, the separation number would really take smaller values for every exponent. The following problems are left for the future: 1. Finding explicit expressions of $R^n$ for arbitrary $n$ by applying our formalism, and then finding a general formula for $R^n$. 2. Revealing relations between $q$-th degrees of separation and $N$, $\langle k \rangle$ or $\langle k^n \rangle$. More definitely, discovering the relations between $q$ and $N$, $\langle k \rangle$ or $\langle k^n \rangle$. 3.
The reason why relation (18) holds is an outstanding issue, so finding theoretical grounds for the phenomenological relations between the separation number and the various circle structures, especially $C_{(q)}$. 4. Testing whether this relation holds in other networks, especially small-world networks, in which the usual clustering coefficient can at least be controlled by construction. [99]{} M.E.J. Newman, “Ego-centered networks and the ripple effect, or why all your friends are weird”, Social Networks 25 (2003) p. 83; arXiv:cond-mat/0111070 H. Aoyama, “Six degrees of separation; some calculations”, SGC Library 65, “Introduction to Network Science”, (2008) in Japanese; H. Aoyama, Y. Fujiwara, H. Ietomi, Y. Ikeda and W. Soma, “EconoPhysics”, Kyouritu Shuppan (2008) S. Milgram, “The small world problem”, Psychology Today 2, 60-67 (1967) J. Travers and S. Milgram, “An Experimental Study of the Small World Problem”, Sociometry 32, 425 (1969) C. Korte and S. Milgram, “Acquaintance linking between White and Negro populations: Application of the small world method” I.S. Pool and M. Kochen, “Contacts and Influence”, Social Networks 1 (1978/1979) 5-51 (this paper was actually written in 1958) D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks”, Nature 393, 440-442 (1998) D. J. Watts, “Six Degrees: The Science of a Connected Age”, W.W. Norton and Company, New York (2003) A.-L. Barabasi and R. Albert, “Emergence of scaling in random networks”, Science 286, 509-512 (1999) A.-L. Barabasi, “Linked: The New Science of Networks”, Perseus Books Group (2002); “Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life”, Plume, ISBN 0452284392, reissue edition (2003/04/29) R. Albert and A.-L. Barabasi, “Statistical Mechanics of complex networks”, Rev. Mod. Phys. 74, 47-97 (2002) J.S. Kleinfeld, “The small world problem”, Society 39(2) pp. 61-66 (2002); “COULD IT BE A BIG WORLD?”, http://www.uaf.edu/northern/big_world.html M.E.J.
Newman, A.-L. Barabasi and D. J. Watts, “The Structure and Dynamics of Networks”, Princeton Univ. Press, 2006 S. N. Dorogovtsev, A.V. Goltsev and J.F.F. Mendes, “Pseudofractal scale-free web”, Phys. Rev. E 65, 066122 (2002) S. N. Dorogovtsev and J.F.F. Mendes, “Evolution of Networks”, Oxford Univ. Press, Oxford (2003) D. J. Watts et al., Small World Project, Columbia University. http://smallworld.columbia.edu/ P.S. Dodds, R. Muhamad and D.J. Watts, “An Experimental Study of Search in Global\ Social Networks”, Science 301, pp. 827-829:\ http://smallworld.columbia.edu/images/dodds2003pa.pdf (2003) N. Toyota, “Some Considerations on Six Degrees of Separation from A Theoretical Point of View”, arXiv:0803.2399 N. Toyota, “Comments on Six Degrees of Separation based on the legendary Pool and Kochen Models”, arXiv:0905.4804 N. Toyota, IEICE Technical Report, “String Formalism for $p$-Clustering Coefficient-Toward Six Degrees of Separations”, NLP2009-49 (2009) in Japanese. N. Toyota, “$p$-th Clustering coefficients $C_{p}$ and Adjacent Matrix for Networks: Formulation based on String”, arXiv:0912.2807 A. Békéssy, P. Békéssy and J. Komlós, Stud. Sci. Math. Hungar. 7, 343-353 (1972) E.A. Bender and E.R. Canfield, J. Comb. Theory A 24, 296-307 (1978) M. Molloy and B. Reed, Comb. Prob. and Comput. 6, 161-179 (1995); 7, 295-305 (1998) P. Erdos and A. Renyi, “On random graphs I”, Publicationes Mathematicae Debrecen 6, 290-297 (1959) P.D. Killworth, E.C. Johnsen, H.R. Bernard, G.A. Shelley and C. McCarty, “Estimating the size of personal networks”, Social Networks 12, 289-312 (1990) H.R. Bernard, E.C. Johnsen, P.D. Killworth and S.
Robinson, “Estimating the size of an average personal network and of an event population: Some empirical results”, Social Science Research 20, 109-121 (1991) H.R. Bernard, P.D. Killworth, E.C. Johnsen and C. McCarty, “Estimating the ripple effect of a disaster”, Connections 24(2), pp. 16-22 (2001) P.G. Lind, M.C. Gonzalez and H.J. Herrmann, “Cycles and clustering in bipartite networks”, Phys. Rev. E 72, 056127 (2005) P. Zhang, J. Wang, X. Li, M. Li, Z. Di and Y. Fan, “Clustering coefficient and community structure of bipartite networks”, Physica A 387, 6869-6875 (2008)
--- abstract: 'Learning suitable and well-performing dialogue behaviour in statistical spoken dialogue systems has been a focus of research for many years. While most work based on reinforcement learning employs an objective measure like task success to model the reward signal, we use a reward based on user satisfaction estimation. We propose a novel estimator and show that it outperforms all previous estimators while learning temporal dependencies implicitly. Furthermore, we apply this novel user satisfaction estimation model live in simulated experiments where the satisfaction estimation model is trained on one domain and applied in many other domains which cover a similar task. We show that applying this model results in higher estimated satisfaction, similar task success rates and a higher robustness to noise.' author: - | Stefan Ultes\ Daimler AG\ Sindelfingen, Germany\ `stefan.ultes@daimler.com` bibliography: - 'references.bib' title: | Improving Interaction Quality Estimation with BiLSTMs and the\ Impact on Dialogue Policy Learning --- Introduction ============ One prominent way of modelling the decision-making component of a spoken dialogue system (SDS) is to use (partially observable) Markov decision processes ((PO)MDPs) [@lemon2012; @young2013]. There, reinforcement learning (RL) [@sutton1998] is applied to find the optimal system behaviour represented by the policy $\pi$. Task-oriented dialogue systems model the reward $r$, used to guide the learning process, traditionally with task success as the principal reward component [@gasic2014gaussian; @lemon2007machine; @daubigney2012; @levin1997; @young2013; @su2015; @su2016acl]. An alternative approach proposes user satisfaction as the main reward component [@ultes2017domain]. However, the applied statistical user satisfaction estimator heavily relies on handcrafted temporal features. Furthermore, the impact of the estimation performance on the resulting dialogue policy remains unclear.
In this work, we propose a novel LSTM-based user satisfaction reward estimator that is able to learn the temporal dependencies implicitly and compare the performance of the resulting dialogue policy with the initially used estimator. Optimising the dialogue behaviour to increase user satisfaction instead of task success has multiple advantages: 1. The user satisfaction is more domain-independent as it can be linked to interaction phenomena independent of the underlying task [@ultes2017domain]. 2. User satisfaction is preferable to task success as it represents more accurately the user’s view and thus whether the user is likely to use the system again in the future. Task success has only been used as it has been shown to correlate well with user satisfaction [@williams2004characterizing]. Based on previous work by @ultes2017domain, the interaction quality (IQ)—a less subjective version of user satisfaction[^1]—will be used for estimating the reward. The estimation model is thus based on domain-independent, interaction-related features which do not have any information available about the goal of the dialogue. This allows the reward estimator to be applicable for learning in unseen domains. The originally applied IQ estimator heavily relies on handcrafted temporal features. In this work, we will present a deep learning-based IQ estimator that utilises the capabilities of recurrent neural networks to eliminate all handcrafted features that encode temporal effects. In this way, these temporal dependencies may be learned instead. The applied RL framework is shown in Figure \[fig:RLframework\]. Within this setup, both IQ estimators are used for learning dialogue policies in several domains to analyse their impact on general dialogue performance metrics.
![image](rl_framework){width="0.75\linewidth"} The remainder of the paper is organised as follows: in Section \[sec:related\_work\], related work is presented focusing on dialogue learning and the type of reward that is applied. In Section \[sec:iq\_reward\_estimation\], the interaction quality is presented along with how it is used in the reward model. The deep learning-based interaction quality estimator proposed in this work is then described in detail in Section \[sec:lstm\_estimator\] followed by the experiments and results both of the estimator itself and the resulting dialogue policies in Section \[sec:results\]. Relevant Related Work {#sec:related_work} ===================== Most previous work on dialogue policy learning focuses on employing task success as the main reward signal [@gasic2014gaussian; @gasic2014; @lemon2007machine; @daubigney2012; @levin1997; @young2013; @su2015; @su2016acl]. However, task success is usually only computable for predefined tasks, e.g., through interactions with simulated or recruited users, where the underlying goal is known in advance. To overcome this, the required information can be requested directly from users at the end of each dialogue [@gasic2013]. However, this can be intrusive, and users may not always cooperate. An alternative is to use a task success estimator [@elasri2014task; @su2015; @su2016acl]. With the right choice of features, these can also be applied to new and unseen domains [@vandyke2015]. However, these models still attempt to estimate completion of the underlying task, whereas our model evaluates the overall user experience. In this paper, we show that an interaction quality reward estimator trained on dialogues from a bus information system will result in well-performing dialogues both in terms of success rate and user satisfaction on five other domains, while only using interaction-related, domain-independent information, i.e., not knowing anything about the task of the domain.
Others have previously introduced user satisfaction into the reward [@walker1998learning; @walker2000; @rieser2008; @rieser08] by using the PARADISE framework [@walker1997]. However, PARADISE relies on the existence of explicit task success information which is usually hard to obtain. Furthermore, to derive user ratings within that framework, users have to answer a questionnaire which is usually not feasible in real world settings. To overcome this, PARADISE has been used in conjunction with expert judges instead [@elasri2012reward; @elasri2013reward] to enable unintrusive acquisition of dialogues. However, the problem of mapping the results of the questionnaire to a scalar reward value still exists. Therefore, we use interaction quality (Section \[sec:iq\_reward\_estimation\]) in this work because it uses scalar values applied by experts and only uses task-independent features that are easy to derive. Interaction Quality Reward Estimation {#sec:iq_reward_estimation} ===================================== In this work, the reward estimator is based on the interaction quality (IQ) [@schmitt2015] for learning information-seeking dialogue policies. IQ represents a less subjective variant of user satisfaction: instead of being acquired from users directly, experts annotate pre-recorded dialogues to avoid the large variance that is often encountered when users rate their dialogues directly [@schmitt2015]. IQ is defined on a five-point scale from five (satisfied) down to one (extremely unsatisfied). To derive a reward from this value, the equation $$R_{IQ} = T \cdot (-1) + (iq - 1) \cdot 5$$ is used where $R_{IQ}$ describes the final reward. It is applied to the final turn of the dialogue of length $T$ with a final IQ value of $iq$. A per-turn penalty of $-1$ is added to the dialogue outcome. This results in a reward range of 19 down to $-T$ which is consistent with related work [@gasic2014gaussian; @vandyke2015; @su2016acl e.g.] 
in which binary task success (TS) was used to define the reward as: $$R_{TS} = T \cdot (-1) + \mathbbm{1}_{TS} \cdot 20 \; ,$$ where $\mathbbm{1}_{TS} = 1$ only if the dialogue was successful, $\mathbbm{1}_{TS} = 0$ otherwise. $R_{TS}$ will be used as a baseline. \[par:iqestimator\] The problem of estimating IQ has been cast as a classification problem where the target classes are the distinct IQ values. The input consists of domain-independent variables called interaction parameters. These parameters incorporate information from the automatic speech recognition (ASR) output and the preceding system action. Most previous approaches used this information, which is available at every turn, to compute temporal features by taking sums, means or counts from the turn-based information for a window of the last 3 system-user-exchanges[^2] and the complete dialogue (see Fig. \[fig:parameterlevels\]). The baseline IQ estimation approach as applied by @ultes2017domain (and originating from @ultes2015b) used a feature set of 16 parameters as shown in Table \[tab:parameters\] with a support vector machine (SVM) [@vapnik1995; @chang2011]. ![Modelling of temporal information in the interaction parameters used as input to the IQ estimator. []{data-label="fig:parameterlevels"}](parameter-level-crop){width="0.9\linewidth"} The LEGO corpus [@schmitt2012a] provides data for training and testing and consists of 200 dialogues (4,885 turns) from the Let’s Go bus information system [@raux2006]. There, users with real needs were able to call the system to get information about the bus schedule. Each turn of these 200 dialogues has been annotated with IQ (representing the quality of the dialogue up to the current turn) by three experts. The final IQ label has been assigned using the median of the three individual labels. [crX]{} & Parameter & Description\ & ASRRecognitionStatus & ASR status: *success*, *no match*, *no input*\ & ASRConfidence & confidence of top ASR results\ & RePrompt? 
& is the system question the same as in the previous turn?\ & ActivityType & general type of system action: *statement*, *question*\ & Confirmation? & is system action confirm?\ & MeanASRConfidence & mean ASR confidence if ASR is success\ & \#Exchanges & number of exchanges (turns)\ & \#ASRSuccess & count of ASR status is success\ & %ASRSuccess & rate of ASR status is success\ & \#ASRRejections & count of ASR status is reject\ & %ASRRejections & rate of ASR status is reject\ & {Mean}ASRConfidence & mean ASR confidence if ASR is success\ & {\#}ASRSuccess & count of ASR is success\ & {\#}ASRRejections & count of ASR status is reject\ & {\#}RePrompts & count of times RePromt? is true\ & {\#}SystemQuestions & count of ActivityType is question\ \[tab:parameters\] Previous work has used the LEGO corpus with a full IQ feature set (which includes additional partly domain-related information) achieving an unweighted average recall[^3] (UAR) of 0.55 using ordinal regression [@elasri2014b], 0.53 using a two-level SVM approach [@ultes2013d], and 0.51 using a hybrid-HMM [@ultes2014b]. Human performance on the same task is 0.69 UAR [@schmitt2015]. A deep learning approach using only non-temporal features achieved an UAR of 0.55 [@rach2017interaction]. LSTM-based Interaction Quality Estimation {#sec:lstm_estimator} ========================================= ![The architecture of the proposed BiLSTM model with self attention. For each time $t$, the exchange level parameter of all exchanges $\mathbf{e}_i$ of the sub-dialogue $i \in \{1 \ldots t\}$ are encoded to their respective hidden representation $\mathbf{h}_i$ and are considered and weighted with the self attention mechanism to finally estimate the IQ value $\mathbf{y}_t$ at time $t$. []{data-label="fig:architecture"}](architecture-crop){width="0.9\linewidth"} The proposed IQ estimation model will be used as a reward estimator as depicted in Figure \[fig:RLframework\]. 
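As a concrete illustration, the two turn-penalty reward definitions $R_{IQ}$ and $R_{TS}$ from Section \[sec:iq\_reward\_estimation\] can be sketched as plain functions (the function names are ours):

```python
def reward_iq(T: int, iq: int) -> int:
    """Final-turn IQ reward: per-turn penalty of -1 plus scaled final IQ.

    T  -- dialogue length in turns
    iq -- final interaction quality on the 1 (worst) .. 5 (best) scale
    """
    return T * (-1) + (iq - 1) * 5


def reward_ts(T: int, success: bool) -> int:
    """Baseline task-success reward with the same per-turn penalty."""
    return T * (-1) + (20 if success else 0)


# A one-turn, fully satisfying dialogue reaches the maximum of 19;
# a failed dialogue of length T is penalised with -T.
assert reward_iq(1, 5) == 19
assert reward_ts(10, False) == -10
```

Both rewards share the same range, which keeps the IQ-based and task-success-based policies directly comparable.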
With parameters that are collected from the dialogue system modules for each time step $t$, the reward estimator derives the reward $r_t$ that is used for learning the dialogue policy $\pi$. The architecture of our proposed IQ estimation model is shown in Figure \[fig:architecture\]. It is based on the idea that the temporal information that has previously been explicitly encoded with the window and dialogue interaction parameter levels may be learned instead by using recurrent neural networks. Thus, only the exchange level parameters $\mathbf{e}_t$ are considered (see Table \[tab:parameters\]). Long Short-Term Memory (LSTM) cells are at the core of the model and have originally been proposed by @hochreiter1997 as a recurrent variant that remedies the vanishing gradient problem [@bengio1994learning]. As shown in Figure \[fig:architecture\], the exchange level parameters form the input vector $\mathbf{e}_t$ for each time step or turn $t$ to a bi-directional LSTM [@graves2013hybrid] layer. The input vector $\mathbf{e}_t$ encodes the nominal parameters ASRRecognitionStatus, ActivityType, and Confirmation? as 1-hot representations. In the BiLSTM layer, two hidden states are computed: $\vec{\mathbf{h}}_t$ constitutes the forward pass through the current sub-dialogue and $\cev{\mathbf{h}}_t$ the backwards pass: $$\begin{aligned} \vec{\mathbf{h}}_t &= \operatorname{LSTM}(\mathbf{e}_t,\vec{\mathbf{h}}_{t-1}) \\ \cev{\mathbf{h}}_t &= \operatorname{LSTM}(\mathbf{e}_t,\cev{\mathbf{h}}_{t+1})\end{aligned}$$ The final hidden layer is then computed by concatenating both hidden states: $$\mathbf{h}_t = [\vec{\mathbf{h}}_t , \cev{\mathbf{h}}_t] \; .$$ Even though information from all time steps may contribute to the final IQ value, not all time steps may be equally important. Thus, an attention mechanism [@vaswani2017attention] is used that evaluates the importance of each time step $t'$ for estimating the IQ value at time $t$ by calculating a weight vector $\alpha_{t,t'}$.
$$\begin{aligned} \mathbf{g}_{t,t'} &= \tanh(\mathbf{h}_t^T \mathbf{W}_t + \mathbf{h}_{t'}^T \mathbf{W}_{t'} + \mathbf{b}_t) \\ \bm{\alpha}_{t,t'} &= \operatorname{softmax}(\sigma(\mathbf{W}_a \mathbf{g}_{t,t'} + \mathbf{b}_a)) \\ \mathbf{l}_t &= \sum_{t'}\bm{\alpha}_{t,t'} \mathbf{h}_{t'} \end{aligned}$$ @zheng2018opentag describe this as follows: “The attention-focused hidden state representation $\mathbf{l}_t$ of an \[exchange\] at time step $t$ is given by the weighted summation of the hidden state representation $\mathbf{h}_{t'}$ of all \[exchanges\] at time steps $t'$, and their similarity $\bm{\alpha}_{t,t'}$ to the hidden state representation $\mathbf{h}_t$ of the current \[exchange\]. Essentially, $\mathbf{l}_t$ dictates how much to attend to an \[exchange\] at any time step conditioned on their neighbourhood context.” To calculate the final estimate $\mathbf{y}_t$ of the current IQ value at time $t$, a softmax layer is introduced: $$\mathbf{y}_t = \operatorname{softmax}(\mathbf{l}_t)$$ For estimating the interaction quality using a BiLSTM, the proposed architecture frames the task as a classification problem where each sequence is labelled with one IQ value. Thus, for each time step $t$, the IQ value needs to be estimated for the corresponding sub-dialogue consisting of all exchanges from the beginning up to $t$. Framing the problem like this is necessary to allow for the application of a BiLSTM-approach and still be able to only use information that would be present at the current time step $t$ in an ongoing dialogue interaction. 
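The attention computation above can be sketched for a single (final) time step $t$ in a few lines of numpy. This is a minimal illustration with random toy weights and dimensions, not the trained Keras model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, W_t, W_tp, b_t, W_a, b_a):
    """Attention-focused representation l_t over hidden states H.

    H : (T, d) array of BiLSTM hidden states h_1 .. h_T.
    Implements g_{t,t'} = tanh(h_t W_t + h_{t'} W_tp + b_t),
    alpha_{t,t'} = softmax(sigmoid(W_a g_{t,t'} + b_a)) and
    l_t = sum_{t'} alpha_{t,t'} h_{t'} for the current (last) step t.
    """
    h_t = H[-1]
    scores = np.array([sigmoid(np.tanh(h_t @ W_t + h_tp @ W_tp + b_t) @ W_a + b_a)
                       for h_tp in H])
    alpha = softmax(scores)
    return alpha @ H, alpha

rng = np.random.default_rng(0)
T, d = 4, 6  # toy sub-dialogue length and hidden size
H = rng.normal(size=(T, d))
l_t, alpha = attention_pool(H, rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                            rng.normal(size=d), rng.normal(size=d), 0.0)
```

The weights `alpha` sum to one over the exchanges of the sub-dialogue, so `l_t` is a convex combination of the hidden states.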
To analyse the influence of the BiLSTM, a model with a single forward-LSTM layer is also investigated where $$\mathbf{h}_t = \vec{\mathbf{h}}_t \; .$$ Similarly, a model without attention is also analysed where $$\mathbf{l}_t = \mathbf{h}_t \; .$$ Experiments and Results {#sec:results} ======================= The proposed BiLSTM IQ estimator is both trained and evaluated on the LEGO corpus and applied within the IQ reward estimation framework (Fig. \[fig:RLframework\]) on several domains within a simulated environment. Interaction Quality Estimation ------------------------------ *UAR* $\kappa$ $\rho$ *eA* *Ep.* ---------------------- ---------- ---------- ---------- ---------- ------- LSTM 0.78 0.85 0.91 **0.99** 101 BiLSTM **0.78** **0.85** **0.92** **0.99** 100 LSTM+att 0.74 0.82 0.91 **0.99** 101 BiLSTM+att 0.75 0.83 0.91 **0.99** 93 @rach2017interaction 0.55 0.68 0.83 0.94 - @ultes2015b 0.55 - - 0.89 - : Performance of the proposed LSTM-based variants with the traditional cross-validation setup. Due to overlapping sub-dialogues in the train and test sets, the LSTM-based models achieve unrealistically high performance. \[tab:xvalresults\] To evaluate the proposed BiLSTM model with attention (BiLSTM+att), it is compared with three of its own variants: a BiLSTM without attention (BiLSTM) as well as a single forward-LSTM layer with attention (LSTM+att) and without attention (LSTM). Additional baselines are defined by @rach2017interaction who already proposed an LSTM-based architecture that only uses non-temporal features, and the SVM-based estimation model as originally used for reward estimation by @ultes2015b. The deep neural net models have been implemented with Keras [@chollet2015keras] using the self-attention implementation as provided by @zheng2018opentag[^4]. All models were trained against cross-entropy loss using RmsProp [@tieleman2012rmsprop] optimisation with a learning rate of 0.001 and a mini-batch size of 16.
As evaluation measures, the unweighted average recall (UAR)—the arithmetic average of all class-wise recalls—, a linearly weighted version of Cohen’s $\kappa$, and Spearman’s $\rho$ are used. As missing the correct estimated IQ value by only one has little impact for modelling the reward, a measure we call the extended accuracy (eA) is used where neighbouring values are taken into account as well. *UAR* $\kappa$ $\rho$ *eA* *Ep.* ---------------------- ---------- ---------- ---------- ---------- ------- LSTM 0.51 0.63 0.78 0.93 8 BiLSTM 0.53 0.63 0.78 0.93 8 LSTM+att 0.52 0.63 0.79 0.92 40 BiLSTM+att **0.54** **0.65** **0.81** **0.94** 40 @rach2017interaction 0.45 0.58 0.79 0.88 82 @ultes2015b 0.44 0.53 0.69 0.86 - : Performance of the proposed LSTM-based variants with the dialogue-wise cross-validation setup. The models by @rach2017interaction and @ultes2015b have been re-implemented. The BiLSTM with attention mechanism performs best in all evaluation metrics. \[tab:callxvalresults\] All experiments were conducted with the LEGO corpus [@schmitt2012a] in a 10-fold cross-validation setup for a total of 100 epochs per fold. The results are presented in Table \[tab:xvalresults\]. Due to the way the task is framed (one label for each sub-dialogue), memorising effects may be observed with the traditional cross-validation setup that has been used in previous work. Hence, the results in Table \[tab:xvalresults\] show very high performance, which is likely to further increase with ongoing training. However, the corresponding models are likely to generalise poorly. To alleviate this, a dialogue-wise cross-validation setup has been employed also consisting of 10 folds of disjoint sets of dialogues. By that, it can be guaranteed that there are no overlapping sub-dialogues in the training and test sets. 
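The two headline evaluation measures, UAR and extended accuracy, reduce to a few lines of numpy; a sketch with made-up IQ labels on the 1..5 scale:

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall: arithmetic mean of the per-class recalls."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def extended_accuracy(y_true, y_pred):
    """eA: an estimate counts as correct if it misses the true IQ by at most 1."""
    diff = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return float(np.mean(diff <= 1))

y_true = [5, 4, 3, 2, 1, 1]   # made-up expert IQ labels
y_pred = [5, 3, 3, 2, 2, 3]   # made-up estimator output
print(uar(y_true, y_pred))               # 0.6
print(extended_accuracy(y_true, y_pred)) # ~0.83
```

Note how eA forgives the off-by-one errors that UAR penalises, which matches its motivation for reward modelling.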
All results of these experiments are presented in Table \[tab:callxvalresults\] with the absolute improvement of the two main measures UAR and eA over the SVM-based approach of @ultes2015b visualised in Figure \[fig:callimprovement\]. ![Absolute improvement of the IQ estimation models over the originally employed model by [@ultes2017domain] for IQ-based reward estimation with the dialogue-wise cross-validation setup. UAR and eA take values from 0 to 1[]{data-label="fig:callimprovement"}](improvement-crop){width="\linewidth"} The proposed BiLSTM+att model outperforms existing models and the baselines in all four performance measures by achieving a UAR of 0.54 and an eA of 0.94 after 40 epochs. Furthermore, both the BiLSTM and the attention mechanism by themselves improve the performance in terms of UAR. Based on these findings, the BiLSTM+att model is selected as reward estimator for the experiments in the dialogue policy learning setup as shown in Figure \[fig:RLframework\]. Dialogue Policy Learning ------------------------ ![image](success_graph-crop){width="0.95\linewidth"} To analyse the impact of the IQ reward estimator on the resulting dialogue policy, experiments are conducted comparing three different reward models. The two baselines are in accordance with @ultes2017domain: having the objective task success as principal reward component ($R_{TS}$) and having the interaction quality estimated by a support vector machine as principal reward component ($R_{IQ}^{s}$). TS can be computed by comparing the outcome of each dialogue with the pre-defined goal. Of course, this is only possible in simulation and when evaluating with paid subjects. This goal information is not available to the IQ estimators, nor is it required. Both baselines are compared to our proposed BiLSTM model to estimate the interaction quality used as principal reward component ($R_{IQ}^{bi}$).
For learning the dialogue behaviour, a policy model based on the GP-SARSA algorithm [@gasic2014gaussian] is used. This is a value-based method that uses a Gaussian process to approximate the state-value function. As it takes into account the uncertainty of the approximation, it is very sample efficient and may even be used to learn a policy directly through real human interaction [@gasic2013]. The decisions of the policy are based on a summary space representation of the dialogue state tracker. In this work, the focus tracker [@henderson2014second]—an effective rule-based tracker—is used. For each dialogue decision, the policy chooses exactly one summary action out of a set of summary actions which are based on general dialogue acts like *request*, *confirm* or *inform*. The exact number of system actions varies for the domains and ranges from 16 to 25. To measure the dialogue performance, the task success rate (TSR) and the average interaction quality (AIQ) are used: the TSR represents the ratio of dialogues for which the system was able to provide the correct result. AIQ is calculated based on the estimated IQ values of the respective model ($AIQ^{bi}$ for the BiLSTM and $AIQ^{s}$ for the SVM) at the end of each dialogue. As there are two IQ estimators, a distinction is made between $AIQ^{s}$ and $AIQ^{bi}$. Additionally, the average dialogue length (ADL) is reported. *Domain* *Code* *\# constraints* *\# DB items* ---------------- -------- ------------------ --------------- LetsGo 4 - CamRestaurants CR 3 110 CamHotels CH 5 33 SFRestaurants SR 6 271 SFHotels SH 6 182 Laptops L 6 126 : Statistics of the domains the IQ reward estimator is trained on (LetsGo) and applied to (rest). \[tab:domains\] For the simulation experiments, the performance of the trained policies on five different domains was evaluated: Cambridge Hotels and Restaurants, San Francisco Hotels and Restaurants, and Laptops.
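The three dialogue performance measures defined above reduce to simple averages over logged evaluation dialogues. An illustrative sketch; the record layout and numbers are ours, not PyDial's actual log format:

```python
# Hypothetical per-dialogue evaluation records: (task_success, final_iq, n_turns).
dialogues = [
    (True, 5, 4),
    (True, 4, 6),
    (False, 2, 9),
]

n = len(dialogues)
tsr = sum(s for s, _, _ in dialogues) / n    # task success rate (TSR)
aiq = sum(iq for _, iq, _ in dialogues) / n  # average estimated final IQ (AIQ)
adl = sum(t for _, _, t in dialogues) / n    # average dialogue length (ADL)
print(tsr, aiq, adl)
```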
The complexity of each domain is shown in Table \[tab:domains\] and compared to the LetsGo domain (the domain the estimators have been trained on). *Domain* / *SER* | *TSR*: $R_{TS}$ $R_{IQ}^{s}$ $R_{IQ}^{bi}$ | *AIQ$^{s}$*: $R_{TS}$ $R_{IQ}^{s}$ | *AIQ$^{bi}$*: $R_{TS}$ $R_{IQ}^{bi}$ | *ADL*: $R_{TS}$ $R_{IQ}^{s}$ $R_{IQ}^{bi}$ CR 0% **1.00**$^{2,3}$ 0.99$^{1}$ 0.99$^{1}$ 3.64$^{2}$ **3.90**$^{1}$ 3.68$^{3}$ **3.83**$^{1}$ 4.68 4.88 **4.59** 15% **0.97** 0.94 0.96 3.35$^{2}$ **3.65**$^{1}$ 3.45$^{3}$ **3.63**$^{1}$ 5.85$^{3}$ 5.33 **5.10**$^{1}$ 30% **0.94** 0.92 0.90 3.15$^{2}$ **3.34**$^{1}$ 3.22 **3.30** 6.34 6.30 **6.25** CH 0% 0.98 0.99 **0.99** 3.26$^{2}$ **3.62**$^{1}$ 3.33 **3.44** 5.71 5.61 **5.40** 15% **0.96**$^{2}$ 0.89$^{1,3}$ 0.93$^{2}$ **2.90** 2.88 **3.14** 3.14 **6.28**$^{2}$ 7.26$^{1,3}$ 6.31$^{2}$ 30% 0.86 **0.88** 0.87 2.38$^{2}$ **2.79**$^{1}$ 2.79$^{3}$ **3.02**$^{1}$ 7.94$^{3}$ 7.31 **6.99**$^{1}$ SR 0% **0.98** 0.97 0.98 3.04$^{2}$ **3.53**$^{1}$ 3.13$^{3}$ **3.37**$^{1}$ 6.26 6.03 **5.80** 15% **0.90**$^{3}$ 0.88 0.84$^{1}$ 2.40$^{2}$ **3.00**$^{1}$ 2.85$^{3}$ **3.01**$^{1}$ 7.99 7.55 **7.33** 30% 0.71 0.77 **0.78** 2.03$^{2}$ **2.52**$^{1}$ 2.46$^{3}$ **2.78**$^{1}$ 9.77$^{3}$ 9.41 **8.50**$^{1}$ SH 0% 0.97 **0.99** 0.98 3.15$^{2}$ **3.52**$^{1}$ 3.17$^{3}$ **3.36**$^{1}$ 5.99$^{2}$ **5.50**$^{1}$ 5.76 15% 0.88 0.88 **0.89** 2.63$^{2}$ **2.94**$^{1}$ 2.77$^{3}$ **3.17**$^{1}$ 7.98$^{3}$ 7.59$^{3}$ **6.63**$^{1,2}$ 30% **0.83**$^{2}$ 0.76$^{1}$ 0.80 2.50 **2.63**
2.70$^{3}$ **2.87**$^{1}$ 8.38 9.21 **8.37** L 0% 0.98 **0.99** **0.99** 3.26$^{2}$ **3.61**$^{1}$ 3.28 **3.41** 5.78 **5.44** 5.60 15% 0.89 0.88 **0.92** 2.58$^{2}$ **2.97**$^{1}$ 2.92$^{3}$ **3.17**$^{1}$ 7.19 7.34 **6.73** 30% **0.80** 0.74 0.77 2.43 **2.57** 2.79 **2.92** 8.22$^{2}$ 9.32$^{1,3}$ **7.97**$^{2}$ All 0% 0.98 0.98 **0.98** 3.23$^{2}$ **3.65**$^{1}$ 3.31 **3.48** 5.76 5.50 **5.47** 15% **0.92** 0.89 0.91 2.76$^{2}$ **3.10**$^{1}$ 3.02$^{2}$ **3.20**$^{1}$ 7.13 7.06 **6.52** 30% **0.83** 0.81 0.82 2.49 **2.80** 2.78 **2.97** 8.20$^{2}$ 8.23$^{1,3}$ **7.66**$^{2}$ \[tab:results\_simulation\] The dialogues were created using the publicly available spoken dialogue system toolkit PyDial [@ultes2017pydial][^5] which contains an implementation of the agenda-based user simulator [@schatzmann2009] with an additional error model. The error model simulates the required semantic error rate (SER) caused in the real system by the noisy speech channel. For each domain, all three reward models are compared on three SERs: 0%, 15%, and 30%. More specifically, the applied evaluation environments are based on Env. 1, Env. 3, and Env. 6, respectively, as defined by @casanueva2017benchmarking. Hence, for each domain and for each SER, policies have been trained using 1,000 dialogues followed by an evaluation step of 100 dialogues. The task success rates in Figure \[fig:result\_simulation\_TSR\] with exact numbers shown in Table \[tab:results\_simulation\] were computed based on the evaluation step averaged over three train/evaluation cycles with different random seeds.
As already known from the experiments conducted by @ultes2017domain, the SVM IQ reward estimator shows similar results in terms of TSR for $R_{IQ}^{s}$ and $R_{TS}$ in all domains for an SER of 0%. This finding is even stronger when comparing $R_{IQ}^{bi}$ and $R_{TS}$. These high TSRs are achieved while the dialogues of both IQ-based models result in higher AIQ values compared to $R_{TS}$ throughout the experiments. Of course, only the IQ-based model is aware of the IQ concept and indeed is trained to optimise it. For higher SERs, the TSRs degrade slightly for the IQ-based reward estimators. However, the TSR for $R_{IQ}^{bi}$ seems to be more robust against noise than for $R_{IQ}^{s}$ while still resulting in better AIQ values. Finally, even though the differences are mostly not significant, there is also a tendency for $R_{IQ}^{bi}$ to result in shorter dialogues compared to both $R_{IQ}^{s}$ and $R_{TS}$. Discussion ========== One of the major questions of this work addresses the impact of an IQ reward estimator on the resulting dialogues where the IQ estimator achieves better performance than previous ones. Analysing the results of the dialogue policy learning experiment leads to the conclusion that the policy learned with $R_{IQ}^{bi}$ performs similarly to or better than $R_{IQ}^{s}$ throughout all experiments while still achieving better average user satisfaction compared to $R_{TS}$. Especially for noisy environments, the improvement is relevant. The BiLSTM clearly performs better on the LEGO corpus while learning the temporal dependencies instead of using handcrafted ones. However, it entails the risk that these learned temporal dependencies are too specific to the original data so that the model does not generalise well anymore. This would mean that it would be less suitable to be applied to dialogue policy learning for different domains. Luckily, the experiments clearly show that this is not the case.
Obviously, the experiments have only been conducted in a simulated environment and not verified in a user study with real humans. However, the general framework of applying an IQ reward estimator for learning a dialogue policy has already been successfully validated with real user experiments by @ultes2017domain, and it seems rather unlikely that the changes we induce by changing the reward estimator lead to a fundamentally different result. Conclusion ========== In this work we proposed a novel model for interaction quality estimation based on BiLSTMs with an attention mechanism that clearly outperformed the baseline while learning all temporal dependencies implicitly. Furthermore, we analysed the impact of the performance increase on learned policies that use this interaction quality estimator as the principal reward component. The dialogues of the proposed interaction quality estimator show slightly higher robustness towards noise and shorter dialogue lengths while still yielding good performance in terms of both task success rate and (estimated) user satisfaction. This has been demonstrated by training the reward estimator on a bus information domain and applying it to learn dialogue policies in five different domains (Cambridge restaurants and hotels, San Francisco restaurants and hotels, Laptops) in a simulated experiment. For future work, we aim to extend the interaction quality estimator by incorporating domain-independent linguistic data to further improve the estimation performance. Furthermore, the effects of using a user-satisfaction-based reward estimator need to be studied on more complex tasks. [^1]: The relation of US and IQ has been closely investigated in [@schmitt2015; @ultes2013a]. [^2]: a system turn followed by a user turn [^3]: UAR is the arithmetic average of all class-wise recalls. [^4]: Code freely available at <https://github.com/CyberZHG/keras-self-attention> [^5]: Code freely available at <http://www.pydial.org>
--- abstract: 'Ground state cooling of massive mechanical objects remains a difficult task restricted by the unresolved mechanical sidebands. We propose an optomechanically-induced-transparency cooling scheme to achieve ground state cooling of mechanical motion without the resolved sideband condition in a pure optomechanical system with two mechanical modes coupled to the same optical cavity mode. We show that ground state cooling is achievable for sideband resolution $\omega_{\mathrm{m}}/\kappa$ as low as $\sim0.003$. This provides a new route for quantum manipulation of massive macroscopic devices and high-precision measurements.' author: - 'Yong-Chun Liu' - 'Yun-Feng Xiao' - Xingsheng Luan - Chee Wei Wong title: 'Optomechanically-induced-transparency cooling of massive mechanical resonators to the quantum ground state' --- Introduction ============ Cavity optomechanics provides a perfect platform not only for the fundamental study of quantum theory but also for broad applications in quantum information processing and high-precision metrology [@RevSci08; @RevRMP13; @RevMeys13]. For most applications it is highly desirable to cool the mechanical motion to the quantum ground state by suppressing thermal noise. In the past few years numerous efforts have made strides towards this goal through backaction cooling [@GSNat11; @GSNat11-2; @CooNat06; @CooNat06-2; @CooPRL06; @CooNatPhys08; @CooNatPhys09-1; @CooNatPhys09-2; @CooNatPhys09-3; @CooNat10]. However, the cooling limit is subject to quantum backaction, and ground state cooling is possible only in the resolved sideband (good-cavity) limit [@PRL07-1; @PRL07-2], which requires the resonance frequency of the mechanical motion ($\omega_{\mathrm{m}}$) to be larger than the cavity decay rate $\kappa$. This poses a major obstacle to the ground state preparation and quantum manipulation of macroscopic and mesoscopic mechanical resonators with typically low mechanical resonance frequencies.
Therefore, it is essential to overcome this limitation, so that ground state cooling can be achieved irrespective of mechanical resonance frequency and cavity damping. Some recent proposals [@RevCPB13] focus on circumventing the resolved sideband restriction by using a dissipative coupling mechanism [@DCPRL09; @DCPRL11], parameter modulations [@PulPRB09; @PulPRA11-1; @PulPRA11-2; @PulPRL11; @PulPRL12] and hybrid systems [@Atom09PRA; @AtomPRA13; @CQEDPRL14; @CoupledPRA13gxli; @CoupledCLEO13ycliu; @CoupledCLEO13ycliu-2]. Here we propose an unresolved-sideband ground-state cooling scheme in a generic optomechanical system that requires neither modified coupling mechanisms, specific modulation of the system parameters, nor additional components. We take advantage of the destructive quantum interference in a cavity optomechanical system with two mechanical modes coupled to the same optical cavity mode, where the optomechanically-induced transparency (OMIT) phenomenon [@OMITSCI10; @OMITNat11; @OMITPRA09; @EIAPRA13] occurs. We show that with the help of quantum interference, ground state cooling of the mechanical mode with $\omega_{\mathrm{m}}\ll\kappa$ can be achieved. Moreover, we examine multiple-input cascaded OMIT cooling, which further suppresses quantum backaction heating. This enables quantum optomechanics with low-optical-$Q$ cavities and low-frequency mechanical resonators. Model ===== ![(color online) (a) Sketch of a typical optomechanical system with two mechanical modes $b$ and $c$ coupled to the same optical cavity mode $a$. The cavity is driven by a cooling laser and a control laser. (b) Energy level diagram of the system. $\left\vert n,m_{\mathrm{c}},m\right\rangle $ denotes the state of $n$ photons, $m_{\mathrm{c}}$ $c$-mode phonons and $m$ $b$-mode phonons in the displaced frame. The red solid (dashed) arrow denotes the cooling (heating) process of mode $b$.
The blue dotted arrow denotes the control laser enhanced coupling between mechanical mode $c$ and the optical cavity mode $a$.[]{data-label="Fig1"}](Fig1.eps){width="\columnwidth"} In a generic optomechanical system, as shown in Fig. \[Fig1\](a), we consider an optical cavity mode $a$ coupled to two mechanical resonance modes $b$ and $c$, where $b$ is the mode to be cooled and $c$ is a control mode. The cavity is driven by a cooling laser and a control laser, with frequencies ${\omega}_{0}$ and ${\omega}_{1}$, respectively. In the frame rotating at the cavity resonance frequency ${\omega_{\mathrm{c}}}$, the system Hamiltonian reads as $$\begin{aligned} H & =H_{b}+H_{c},\nonumber\\ H_{b} & ={\omega_{\mathrm{m}}b^{\dag}b+ga^{\dag}a(b+b^{\dag})+(\Omega}_{0}^{\ast}{{a{e}^{i\Delta_{0}t}+\mathrm{H.c.})}},\nonumber\\ H_{c} & ={\omega_{\mathrm{mc}}c^{\dag}c+{g}_{\mathrm{c}}{a^{\dag }a{(c+c^{\dag})}+{{({\Omega}}}_{1}^{\ast}{ae}^{i\Delta_{1}t}}}+{\mathrm{H.c.})}.\end{aligned}$$ Here $H_{b}$ ($H_{c}$) describes the Hamiltonian related with mode $b$ ($c$); ${\omega_{\mathrm{m}}}$ (${\omega_{\mathrm{mc}}}$) is the resonance frequency of mode $b$ ($c$); ${g}$ and ${g}_{\mathrm{c}}$ denote the single-photon optomechanical coupling rates; ${\Omega}_{0}$ (${\Omega}_{1}$) represents the driving strength and $\Delta_{0}={\omega}_{0}-{\omega_{\mathrm{c}}}$ ($\Delta_{1}={\omega}_{1}-{\omega_{\mathrm{c}}}$) is the frequency detuning between the cooling (control) laser and the cavity mode. 
For strong driving, the linearized system Hamiltonian is given by $$\begin{aligned} H_{L} & =\omega_{\mathrm{m}}b_{1}^{\dag}b_{1}+[G^{\left( t\right) }a_{1}^{\dag}+G^{\left( t\right) \ast}a_{1}](b_{1}+b_{1}^{\dag})\nonumber\\ & +\omega_{\mathrm{mc}}c_{1}^{\dag}c_{1}+[G_{\mathrm{c}}^{\left( t\right) }a_{1}^{\dag}+G_{\mathrm{c}}^{\left( t\right) \ast}a_{1}](c_{1}+c_{1}^{\dag}). \label{HL}\end{aligned}$$ Here the operators $a_{1}$, $b_{1}$ and $c_{1}$ describe the quantum fluctuations around the corresponding classical mean fields after the linearization; $G^{\left( t\right) }=g(\alpha_{0}e^{-i\Delta_{0}^{\prime}t}+\alpha_{1}e^{-i\Delta_{1}^{\prime}t})$ and $G_{\mathrm{c}}^{\left( t\right) }=g_{\mathrm{c}}(\alpha_{0}e^{-i\Delta_{0}^{\prime}t}+\alpha_{1}e^{-i\Delta_{1}^{\prime}t})$ are the light-enhanced optomechanical coupling strengths, with modified detunings $\Delta_{0}^{\prime}=\Delta_{0}+\Delta_{\mathrm{om}}$, $\Delta_{1}^{\prime}=\Delta_{1}+\Delta_{\mathrm{om}}$ and $\Delta_{\mathrm{om}}=2(g^{2}/\omega_{\mathrm{m}}+g_{\mathrm{c}}^{2}/\omega_{\mathrm{mc}})(\left\vert \alpha_{0}\right\vert ^{2}+\left\vert \alpha_{1}\right\vert ^{2})$; $\alpha_{0}$ and $\alpha_{1}$ are the intracavity field amplitudes from the contributions of the cooling and control laser inputs; $\kappa$, $\gamma$ $(\equiv\omega_{\mathrm{m}}/Q_{\mathrm{m}})$ and $\gamma_{\mathrm{c}}$ $(\equiv\omega_{\mathrm{mc}}/Q_{\mathrm{mc}})$ are the energy decay rates of the modes $a$, $b$ and $c$, respectively. Quantum noise spectrum ====================== The optical force acting on mode $b$ takes the form $F=-[G^{\left( t\right) \ast}a_{1}+G^{\left( t\right) }a_{1}^{\dag}]/x_{\mathrm{ZPF}}$, where $x_{\mathrm{ZPF}}$ is the zero-point fluctuation.
The quantum noise spectrum of the optical force $S_{FF}(\omega)\equiv\int dte^{i\omega t}\left\langle F(t)F(0)\right\rangle $ is calculated to be $$\begin{aligned} S_{FF}(\omega) & =\sum\nolimits_{j=0}^{1}S_{FF}^{j}(\omega),\nonumber\\ S_{FF}^{j}(\omega) & =\frac{g^{2}}{x_{\mathrm{ZPF}}^{2}}\left\vert \alpha_{j}\tilde{\chi}_{j}\left( \omega\right) \right\vert ^{2}\nonumber\\ & \times\lbrack\kappa+g_{\mathrm{c}}^{2}\sum\nolimits_{k=0}^{1}\left\vert \alpha_{k}\right\vert ^{2}\tilde{\chi}_{\mathrm{mc}}\left( \omega+\Delta_{j}^{\prime}-\Delta_{k}^{\prime}\right) ],\label{SFF}\end{aligned}$$ where $\tilde{\chi}_{j}^{-1}\left( \omega\right) =\chi_{j}^{-1}\left( \omega\right) +g_{\mathrm{c}}^{2}\sum\nolimits_{k=0}^{1}\left\vert \alpha_{k}\right\vert ^{2}[\chi_{\mathrm{mc}}(\omega+\Delta_{j}^{\prime}-\Delta_{k}^{\prime})+\chi_{\mathrm{mc}}^{\ast}(-\omega-\Delta_{j}^{\prime}+\Delta_{k}^{\prime})]$, $\tilde{\chi}_{\mathrm{mc}}\left( \omega\right) =\gamma_{\mathrm{c}}(n_{\mathrm{c,th}}+1)|\chi_{\mathrm{mc}}(\omega)|^{2}+\gamma_{\mathrm{c}}n_{\mathrm{c,th}}|\chi_{\mathrm{mc}}\left( -\omega\right) |^{2}$, $\chi_{j}^{-1}(\omega)=-i(\omega+\Delta_{j}^{\prime})+\kappa/2$ and $\chi_{\mathrm{mc}}^{-1}(\omega)=-i(\omega-\omega_{\mathrm{mc}})+\gamma_{\mathrm{c}}/2$, with integers $j$ and $k$ being the summation indices. Here $\chi_{j}\left( \omega\right) $ represents the optical response to the input light and $\chi_{\mathrm{mc}}\left( \omega\right) $ is the response function of the control mechanical mode; $n_{\mathrm{th}}=1/[e^{\hbar\omega_{\mathrm{m}}/(k_{\mathrm{B}}T)}-1]$ and $n_{\mathrm{c,th}}=1/[e^{\hbar\omega_{\mathrm{mc}}/(k_{\mathrm{B}}T)}-1]$ are the thermal phonon numbers of modes $b$ and $c$ at the environmental temperature $T$. ![(color online) Quantum noise spectra (arbitrary units).
The solid curves denote (a) $S_{FF}^{0}({\omega})$, (b) $S_{FF}^{1}({\omega})$ and (c) $S_{FF}({\omega})$ for ${\omega_{\mathrm{m}}/\kappa=0.02}$, ${\omega_{\mathrm{mc}}/\kappa=2}$, ${\Delta}_{0}^{\prime}{=\omega_{\mathrm{m}}}$, $\Delta_{1}^{\prime}=-{\omega_{\mathrm{mc}}}$, $g/{\omega_{\mathrm{m}}=10}^{-3}$, ${{g}_{\mathrm{c}}}/{\omega_{\mathrm{mc}}=5\times10}^{-4}$, ${{\alpha}}_{0}={{\alpha}}_{1}=10^{3}$, ${Q}_{\mathrm{mc}}=10^{4}$ and $n_{\mathrm{th}}=10^{3}$. The black dotted curves in (a)-(c) correspond to the results without the control mode (${{g}_{\mathrm{c}}=0}$). In (a) and (b), the positions of the sharp peaks and dips are marked. (d) Zoom-in view of the shaded region in (a)-(c). $S_{FF}^{0}({\omega})$, blue dashed curve; $S_{FF}^{1}({\omega})$, green dashed-dotted curve; $S_{FF}({\omega})$, red solid curve. The black dotted curve corresponds to the result without the control mode and the control laser \[$S_{FF}^{0,0}({\omega})$\]. The gray vertical lines denote $\omega=\pm{\omega_{\mathrm{m}}}$.[]{data-label="Fig2"}](Fig2.eps){width="\columnwidth"} In the conventional single-mechanical-mode approach, the quantum noise spectrum exhibits a standard Lorentzian curve [@PRL07-2]. However, here, due to the interaction between the control mechanical mode and the optical cavity mode, the noise spectrum \[Eq. (\[SFF\])\] is modified to a non-Lorentzian lineshape. This originates from the quantum interference manifested by OMIT. As shown in Fig. \[Fig1\](b), the system here contains a series of three-level subsystems relevant to OMIT for heating suppression. In the presence of the control field, the transition amplitudes of the two pathways (red dashed arrow and blue dotted arrow) interfere destructively, leading to the suppression of the heating transition. For the unresolved sideband regime (${\omega_{\mathrm{m}}\ll\kappa}$), the quantum noise spectra are plotted in Fig.
\[Fig2\] with parameters ${\omega_{\mathrm{m}}/\kappa=0.02}$, ${\omega_{\mathrm{mc}}/\kappa=2}$, ${\Delta}_{0}^{\prime}={\omega_{\mathrm{m}}}$, $\Delta_{1}^{\prime}=-{\omega_{\mathrm{mc}}}$, $g/{\omega_{\mathrm{m}}=10}^{-3}$, ${{g}_{\mathrm{c}}}/{\omega_{\mathrm{mc}}=5\times10}^{-4}$, ${\alpha}_{0}=\alpha_{1}=10^{3}$, ${Q}_{\mathrm{mc}}=10^{4}$ and $n_{\mathrm{th}}=10^{3}$. Note that $S_{FF}^{0}({\omega})$ corresponds to the spectrum associated with the cooling laser (with detuning ${\Delta}_{0}^{\prime}$) and $S_{FF}^{1}({\omega})$ represents the spectrum related to the control laser (with detuning ${{{\Delta}_{1}^{\prime}}}$). Without the control mechanical mode (${{g}_{\mathrm{c}}=0}$), they reduce to $S_{FF}^{0,0}({\omega})=\kappa \left\vert {{\alpha}}_{0}\chi_{0}\left( {\omega}\right) \right\vert ^{2}g^{2}/x_{\mathrm{ZPF}}^{2}$ and $S_{FF}^{1,0}({\omega})=\kappa\left\vert {\alpha}_{1}\chi_{1}\left( {\omega}\right) \right\vert ^{2}g^{2}/x_{\mathrm{ZPF}}^{2}$, which are Lorentzians with the centers at ${\omega =-}\Delta_{0}^{\prime}$ and $-\Delta_{1}^{\prime}$ \[black dotted curves in Figs. \[Fig2\](a) and \[Fig2\](b)\], respectively. In contrast, in the presence of the control mode, a series of OMIT resonances appear in the spectra. For $S_{FF}^{0}({\omega})$, those dips/peaks are located at ${\omega=}\pm{\omega_{\mathrm{mc}}}$, ${-\delta}\pm{\omega_{\mathrm{mc}}}$ \[Fig. \[Fig2\](a)\], where $\delta\equiv\Delta_{0}^{\prime}-\Delta_{1}^{\prime}$ represents the two-photon detuning of the input lasers. Among these, the resonances at ${\omega=}\pm{\omega_{\mathrm{mc}}}$ originate from the interaction between the cooling laser and mode $c$, which changes the mode density for the cavity field to absorb/emit a phonon with energy $\hbar{\omega_{\mathrm{mc}}}$; the resonances at ${\omega=-\delta}\pm {\omega_{\mathrm{mc}}}$ stem from the interaction among the cooling laser, the control laser and mode $c$.
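The spectrum defined by Eq. (\[SFF\]) and the susceptibilities above can be evaluated directly. The following is an illustrative NumPy sketch (not the authors' code), in units $\kappa=1$ with the overall factor $g^{2}/x_{\mathrm{ZPF}}^{2}$ dropped and an assumed value for the control-mode thermal occupation; it reproduces the deep OMIT window at $\omega=-\omega_{\mathrm{m}}$:

```python
import numpy as np

# Parameters of Fig. 2, in units kappa = 1 (overall scale g^2/x_ZPF^2 dropped).
kappa, wm, wmc = 1.0, 0.02, 2.0        # cavity decay, mode-b and mode-c frequencies
Delta = np.array([wm, -wmc])           # Delta'_0 = w_m, Delta'_1 = -w_mc
g_c = 5e-4 * wmc                       # control-mode coupling
alpha = np.array([1e3, 1e3])           # intracavity amplitudes alpha_0, alpha_1
gamma_c = wmc / 1e4                    # gamma_c = w_mc / Q_mc
n_c = 9.5                              # assumed n_{c,th} (same bath as n_th = 10^3 for mode b)

chi = lambda w, Dj: 1.0 / (-1j * (w + Dj) + kappa / 2)      # chi_j(w)
chi_mc = lambda w: 1.0 / (-1j * (w - wmc) + gamma_c / 2)    # chi_mc(w)
chit_mc = lambda w: gamma_c * ((n_c + 1) * abs(chi_mc(w))**2 + n_c * abs(chi_mc(-w))**2)

def S_FF_j(w, j):
    """Contribution of laser j to the force noise spectrum, Eq. (SFF)."""
    shift = lambda k: w + Delta[j] - Delta[k]
    inv_chit_j = 1.0 / chi(w, Delta[j]) + g_c**2 * sum(
        abs(alpha[k])**2 * (chi_mc(shift(k)) + np.conj(chi_mc(-shift(k))))
        for k in range(2))
    bracket = kappa + g_c**2 * sum(abs(alpha[k])**2 * chit_mc(shift(k)) for k in range(2))
    return abs(alpha[j] / inv_chit_j)**2 * bracket

S_FF = lambda w: sum(S_FF_j(w, j) for j in range(2))
# Without the control mode (g_c = 0), the spectrum is just two Lorentzians:
S_FF_bare = lambda w: sum(kappa * abs(alpha[j] * chi(w, Delta[j]))**2 for j in range(2))

# Destructive interference strongly suppresses the heating rate A_+ ~ S_FF(-w_m):
assert S_FF(-wm) < 0.05 * S_FF_bare(-wm)
```

Scanning `S_FF` over a grid of $\omega$ reproduces the dips/peaks at $\pm\omega_{\mathrm{mc}}$ and $\mp\delta\pm\omega_{\mathrm{mc}}$ discussed above.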
Analogously, for $S_{FF}^{1}({\omega})$, the resonances are located at ${\omega=}\pm{\omega_{\mathrm{mc}}}$, ${\delta}\pm{\omega_{\mathrm{mc}}}$ \[Fig. \[Fig2\](b)\]. For cooling of mode $b$, the dips/peaks at ${\omega=}\pm{\omega_{\mathrm{m}}}$ are relevant, since $A_{\mp}\equiv S_{FF}\left( \pm{{\omega_{\mathrm{m}}}}\right) x_{\mathrm{ZPF}}^{2}$ are the rates for absorbing and emitting a $b$-mode phonon by the cavity field, corresponding to the cooling and heating of mode $b$, respectively. With the appropriate value of the two-photon detuning at $\delta={\omega_{\mathrm{mc}}+\omega_{\mathrm{m}}}$, the OMIT lineshapes in $S_{FF}({\omega})$ can be tuned to appear at ${\omega=}\pm{\omega_{\mathrm{m}}}$, as shown in Figs. \[Fig2\](c) and \[Fig2\](d). At ${\omega=}-{\omega_{\mathrm{m}}}$, the spectrum exhibits a deep OMIT window, which reveals the suppression of the heating process, originating from the destructive interference. Although a shallow dip also appears at ${\omega=}{\omega_{\mathrm{m}}}$, it only slightly decreases the mode density. The reason is that, with ${\left\vert \Delta_{1}^{\prime}\right\vert \gg\left\vert \Delta_{0}^{\prime}\right\vert }$, this dip is located far away from the center (${\omega=}-\Delta_{1}^{\prime}$) of the Lorentzian background in $S_{FF}^{1,0}({\omega})$, as shown in Fig. \[Fig2\](b). ![(color online) (a) Time evolution of the mean phonon number $n_{b}(t)$ with the control mode (red closed circles) for ${\omega_{\mathrm{m}}/\kappa=0.05}$, ${\omega_{\mathrm{mc}}/\kappa=2}$, ${\Delta}_{0}^{\prime}={\omega_{\mathrm{m}}}$, $\Delta_{1}^{\prime}=-{\omega_{\mathrm{mc}}}$, $g/{\omega_{\mathrm{m}}=10}^{-3}$, ${{g}_{\mathrm{c}}}/{\omega_{\mathrm{mc}}=5\times10}^{-4}$, ${\alpha}_{0}={{1200}}$, ${{\alpha}}_{1}=1600$, ${Q}_{\mathrm{mc}}=10^{4}$, ${Q}_{\mathrm{m}}=10^{5}$ and $n_{\mathrm{th}}=10^{3}$.
The results without the control mode and the control laser (${{g}_{\mathrm{c}}=0}$, ${{\alpha}}_{1}=0$) for ${\Delta}_{0}^{\prime}=-{\omega_{\mathrm{m}}}$ (blue open circle) and ${\Delta}_{0}^{\prime}={\omega_{\mathrm{m}}}$ (green triangles) are plotted for comparison. (b) Same as (a) except that ${\omega_{\mathrm{m}}/\kappa=0.02}$ and ${{\alpha}}_{0}={{\alpha}}_{1}=10^{3}$. The shaded regions in (a) and (b) denote $n_{b}<1$. (c) Cooling rates $\Gamma_{\mathrm{opt}}$ as functions of ${\omega_{\mathrm{m}}}$ in units of ${\kappa}$ with the presence (red solid curve) and absence (blue dashed curve) of the control mode and the control laser; the parameters are the same as Fig. \[Fig3\](b).[]{data-label="Fig3"}](Fig3.eps){width="\columnwidth"} Covariance approach =================== To verify the destructive quantum interference effect, we next solve the quantum master equation and use the covariance approach [@ycliuDC13; @ycliuDC13-2] to obtain exact numerical results. The master equation is given by $\dot{\rho}=i[\rho,H_{L}]+\kappa\mathcal{D}[a_{1}]\rho+\gamma(n_{\mathrm{th}}+1)\mathcal{D}[b_{1}]\rho+\gamma n_{\mathrm{th}}\mathcal{D}[{{b_{1}^{\dag}}}]\rho+\gamma_{\mathrm{c}}(n_{\mathrm{c,th}}+1)\mathcal{D}[c_{1}]\rho+\gamma_{\mathrm{c}}n_{\mathrm{c,th}}\mathcal{D}[{{c_{1}^{\dag}}}]\rho$, where $\mathcal{D}[\hat{o}]\rho=\hat{o}\rho\hat{o}^{\dag}{{-(\hat{o}^{\dag}\hat{o}\rho+\rho\hat{o}^{\dag}\hat{o})/2}}$ denotes the standard dissipator in Lindblad form. In Figs. \[Fig3\](a) and \[Fig3\](b) we plot the time evolution of the mean phonon number $n_{b}(t)$ for typical parameters. For the single-mechanical-mode case in the unresolved sideband regime, with a red-detuned input laser, the mechanical motion is only slowly cooled with a small cooling rate (net optical damping rate) $\Gamma_{\mathrm{opt}}\equiv A_{-}-A_{+}$, without reaching the ground state.
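In the simplest adiabatic picture, the competition between these rates obeys the textbook rate equation $\dot n_{b}=-(\Gamma_{\mathrm{opt}}+\gamma)n_{b}+A_{+}+\gamma n_{\mathrm{th}}$. The sketch below shows only this simplified picture with invented rates, not the covariance/master-equation simulation used in the text:

```python
import math

# Simplified adiabatic rate-equation picture of backaction cooling:
#   dn_b/dt = -(G_opt + gamma) n_b + A_plus + gamma * n_th.
# The rates below are invented for illustration (units kappa = 1).
gamma, n_th = 1e-5, 1e3            # mechanical damping and bath occupation
A_minus, A_plus = 2e-2, 1e-3       # cooling/heating rates set by S_FF(+/- w_m)
G_opt = A_minus - A_plus           # net optical damping rate

n_steady = (A_plus + gamma * n_th) / (G_opt + gamma)

def n_b(t, n0=n_th):
    """Closed-form solution of the rate equation, starting from n0."""
    decay = math.exp(-(G_opt + gamma) * t)
    return n0 * decay + n_steady * (1.0 - decay)

assert abs(n_b(1e6) - n_steady) < 1e-9   # relaxes to the steady state
assert n_steady < 1.0                    # ground-state regime for these rates
```

Enhancing $\Gamma_{\mathrm{opt}}$ while suppressing $A_{+}$, which is what the OMIT window accomplishes, is what pushes $n_{b}$ below one in this picture.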
However, in the presence of the control mode and the control laser, the cooling rate can be enhanced by more than two orders of magnitude \[Fig. \[Fig3\](c)\], and ground state cooling with mean phonon number $n_{b}<1$ is achievable, even for sideband resolution ${\omega_{\mathrm{m}}/\kappa}$ as small as $0.02$. It should be emphasized that in this case the cooling laser is blue detuned, which is quite different from the single-mechanical-mode approach. For the latter, blue detuning leads to amplification instead of cooling of the mechanical motion. Blue-detuning cooling is a unique property originating from the quantum interference, which modifies the noise spectrum of the optical force. The OMIT lineshape can be viewed as the inverse of the standard Lorentzian, thus the detunings for cooling are just opposite to those for the single-mechanical-mode case. Cascaded OMIT cooling ===================== To further suppress quantum backaction heating, we propose the use of additional coherent laser inputs, resulting in cascaded OMIT cooling. For $N$ inputs, the quantum noise spectrum of the optical force takes the same form as Eq. (\[SFF\]) except that the summation indices ($j,k$) run from $0$ to $N-1$. As displayed in Figs. \[Fig4\](a)-\[Fig4\](d), for two inputs, the suppression of heating for mode $b$ is the contribution of suppressed $A_{+}^{0}\equiv S_{FF}^{0}\left( -{{\omega_{\mathrm{m}}}}\right) x_{\mathrm{ZPF}}^{2}$, while $A_{-}^{1}\equiv S_{FF}^{1}\left( {{\omega_{\mathrm{m}}}}\right) x_{\mathrm{ZPF}}^{2}$ is also slightly suppressed. In the presence of the third input laser with detuning $\Delta_{2}^{\prime}=\Delta_{0}^{\prime}-2({\omega_{\mathrm{mc}}+\omega_{\mathrm{m}})}$, the interaction involved with the control mode results in the suppression of $A_{+}^{1}\equiv S_{FF}^{1}(-{{\omega_{\mathrm{m}}}})x_{\mathrm{ZPF}}^{2}$ and $A_{-}^{2}\equiv S_{FF}^{2}({{\omega_{\mathrm{m}}}})x_{\mathrm{ZPF}}^{2}$.
This results in the net optical damping rate $\Gamma_{\mathrm{opt}}\equiv \sum\nolimits_{k=0}^{2}(A_{-}^{k}-A_{+}^{k})\simeq A_{-}^{0}-A_{+}^{2}$ \[Fig. \[Fig4\](c) and (d)\]. More generally, for $N$ inputs with detuning $\Delta_{k}^{\prime}=\Delta_{0}^{\prime}-k({\omega_{\mathrm{mc}}+\omega_{\mathrm{m}})}$ ($k=1,2,\ldots,N-1$), we obtain $\Gamma_{\mathrm{opt}}\simeq A_{-}^{0}-A_{+}^{N-1}$. Note that the remaining heating rate $A_{+}^{N-1}$ is much smaller than the original heating rate due to the large detuning $\Delta_{N-1}^{\prime}$ for the $(N-1)$-th input. In Fig. \[Fig4\](e) we compare the cooling dynamics between two inputs and three inputs for typical parameters, which shows that the cascaded OMIT cooling enables a larger cooling rate and a lower cooling limit, with ground state cooling achievable even for ${\omega_{\mathrm{m}}/}\kappa=0.01$. ![(color online) (a)-(b): Quantum noise spectra (arbitrary units) for two inputs (a) and three inputs (b). $S_{FF}({\omega})$, red solid curve; $S_{FF}^{0}({\omega})$, blue dashed curve; $S_{FF}^{1}({\omega})$, green dotted curve; $S_{FF}^{2}({\omega})$, purple dashed-dotted curve. The gray vertical lines denote $\omega=\pm{\omega_{\mathrm{m}}}$. (c) Scattering interpretation of optomechanical interactions with three inputs. The shaded Lorentzian represents the mode density of the optical cavity. The green vertical arrows denote the input lasers. The solid (dashed) curved arrows denote the anti-Stokes (Stokes) scattering processes relevant with mode $b$ (red) and mode $c$ (blue). (d) Energy levels and interpretation of the suppression (denoted by the $\times$) of heating and cooling with three inputs. (e) Time evolution of the mean phonon number $n_{b}(t)$ with two inputs (blue open circle) and three inputs (red closed circle). The shaded region denotes $n_{b}<1$. Here ${{\alpha}}_{2}=10^{3}$, ${\omega_{\mathrm{mc}}/\kappa=2}$, ${\omega_{\mathrm{m}}/\kappa=0.01}$ and other parameters are the same as Fig.
\[Fig3\] (b).[]{data-label="Fig4"}](Fig4.eps){width="\columnwidth"} Cooling limits ============== In Fig. \[Fig5\] the fundamental cooling limits $n_{\min}$ as functions of the sideband resolution ${\omega_{\mathrm{m}}/}\kappa$ are plotted. The exact numerical results are obtained from master equation simulations. The black dotted curve shows the best result for the conventional single-mechanical-mode approach, given by $n_{\min}=\kappa/(4{{\omega_{\mathrm{m}}}})$, which is obtained when $\Delta_{0}^{\prime}=-\kappa/2$ [@PRL07-1]. It reveals the great advantage of OMIT cooling and cascaded OMIT cooling, with the possibility of ground state cooling even when ${\omega_{\mathrm{m}}/\kappa\sim3\times 10}^{-3}$, which goes beyond the resolved sideband limit by nearly $3$ orders of magnitude. Note that Fig. \[Fig5\] shows that the cooling limits increase as ${\omega_{\mathrm{m}}/}\kappa$ increases from $\sim0.02$ to a larger value. This is a result of blue-detuning-induced heating, which becomes significant when ${\omega_{\mathrm{m}}/}\kappa$ is large. Experimental feasibility ======================== It should be stressed that the OMIT cooling described here adds little complexity to the existing optomechanical system, which is crucial from the experimental point of view. Compared with the conventional backaction cooling approach, the additional requirement here is a control mechanical mode and one (or more) input laser. This is experimentally feasible for various optomechanical systems with current technology. On the one hand, many optomechanical systems possess abundant mechanical modes with different resonance frequencies, since the oscillations have different types and orders.
This situation can be found in optomechanical systems using whispering-gallery microcavities [@Torid05PRL; @Sphere09PRL], photonic crystal cavities [@OMcrystal09Nat; @ZhengWongAPL12], membranes [@Mem08Nat; @BuiWongAPL12], nanostrings [@Near09NatPhys] and nanorods [@ZhengWongOE12; @LiTangNat08] amongst others. Only one mechanical mode is used in most optomechanical experiments, while additional mechanical modes are often excited unintentionally. On the other hand, composite optomechanical systems, containing two independent mechanical resonators, are also conceivable. For example, in Fabry-Pérot cavities, the motion of one mirror acts as a control mechanical mode while the other mirror is to be cooled \[Fig. \[Fig1\](a)\]. In the near-field optomechanical system [@Near09NatPhys], to cool the nanostrings, the control mode can be selected from the vibration of the microtoroid. ![(color online) Fundamental cooling limits $n_{\min}$ as functions of ${{{{\omega_{\mathrm{m}}}}/\kappa}}$ for two inputs (red closed circles) and three inputs (blue open circles). The result for the single-mechanical-mode approach (black dotted curve) is plotted for comparison. The shaded region denotes $n_{\min}<1$. Here ${{\alpha}}_{0}={{\alpha}}_{1}={{\alpha}}_{2}=500$ and other parameters are the same as Fig. \[Fig4\].[]{data-label="Fig5"}](Fig5.eps){width="7cm"} Conclusion ========== In summary, we have presented the OMIT cooling scheme allowing ground state cooling of mechanical resonators beyond the resolved sideband limit. It is demonstrated that by employing the OMIT interference, quantum backaction heating can be largely suppressed, extending the fundamental limit of backaction cooling. The scheme is experimentally feasible, requiring only an additional control mechanical mode and multiple laser inputs.
Such an M-O-M system (M, mechanical mode; O, optical mode) studied here offers the potential for cooling enormous-mass-scale resonators [@LIGO09NJP; @LIGO09], which possess small resonance frequencies. Together with the recently examined multi-optical-mode [@CoupledCLEO13ycliu; @ST12NatComm; @ST13SCI; @ST13PRL; @MM14Nat] and multi-mechanical-mode [@LinNatPhoton10; @MM12NatCom; @MM12PRA; @MM13PRA; @MM13PRA-2] systems, it is shown that such interference effects in multi-mode cavity optomechanics provide unique advantages for both fundamental studies and broad applications. Recently we noticed a related work [@arXiv14], but here we use the covariance approach to examine the fundamental cooling limits and present a detailed analysis of cascaded OMIT cooling. This paves the way for the manipulation of macroscopic mechanical resonators in the quantum regime. Y.-C.L. and Y.-F.X. were supported by the National Basic Research Program of China (No. 2013CB328704, No. 2013CB921904), National Natural Science Foundation of China (Nos. 11474011, 11222440, and 61435001), and Research Fund for the Doctoral Program of Higher Education of China (No. 20120001110068). X.L. and C.W.W. were supported by the Optical Radiation Cooling and Heating in Integrated Devices program of Defense Advanced Research Projects Agency (contract number C11L10831). [99]{} Kippenberg T J, Vahala K J. Cavity optomechanics: Back-action at the mesoscale. , 2008, 321: 1172 Aspelmeyer M, Kippenberg T J, Marquardt F. Cavity optomechanics. arXiv:1303.0733, 2013 Meystre P. A short walk through quantum optomechanics. , 2013, 525: 215 Teufel J D, Donner T, Li D, Harlow J W, Allman M S, Cicak K, Sirois A J, Whittaker J D, Lehnert K W, Simmonds R W. Sideband cooling of micromechanical motion to the quantum ground state. 2011, 475: 359 Chan J, Mayer Alegre T P, Safavi-Naeini A H, Hill J T, Krause A, Gröblacher S, Aspelmeyer M, Painter O. Laser cooling of a nanomechanical oscillator into its quantum ground state.
2011, 478: 89 Gigan S, Böhm H R, Paternostro M, Blaser F, Langer G, Hertzberg J B, Schwab K C, Bäuerle D, Aspelmeyer M, Zeilinger A. Self-cooling of a micromirror by radiation pressure. 2006,444: 67 Arcizet O, Cohadon P F, Briant T, Pinard M, Heidmann A. Radiation-pressure cooling and micromechanical instability of a micromirror. 2006, 444: 71 Schliesser A, Del’Haye P, Nooshi N, Vahala K J, Kippenberg T J. Radiation pressure cooling of a micromechanical oscillator using dynamical backaction. 2006,97: 243905 Schliesser A, Rivière R, Anetsberger G, Arcizet O, Kippenberg T J. Resolved-sideband cooling of a micromechanical oscillator. 2008, 4: 415 Gröblacher S, Hertzberg J B, Vanner M R, Cole G D, Gigan S, Schwab K C, Aspelmeyer M. Demonstration of an ultracold micro-optomechanical oscillator in a cryogenic cavity. 2009, 5: 485 Park Y S, Wang H. Resolved-sideband and cryogenic cooling of an optomechanical resonator. 2009, 5: 489 Schliesser A, Arcizet O, Rivère R, Anetsberger G, Kippenberg T J. Resolved-sideband cooling and position measurement of a micromechanical oscillator close to the Heisenberg uncertainty limit. 2009, 5: 509 Rocheleau T, Ndukum T, Macklin C, Hertzberg J B, Clerk A A, Schwab K C. Preparation and detection of a mechanical resonator near the ground state of motion. 2010, 463: 72 Wilson-Rae I, Nooshi N, Zwerger W, Kippenberg T J. Theory of ground state cooling of a mechanical oscillator using dynamical backaction. 2007, 99: 093901 Marquardt F, Chen J P, Clerk A A, Girvin S M. Quantum theory of cavity-assisted sideband cooling of mechanical motion. 2007, 99: 093902 Liu Y C, Hu Y W, Wong C W, Xiao Y F. Review of cavity optomechanical cooling. 2013, 22: 114213 Elste F, Girvin S M, Clerk A A. Quantum noise interference and backaction cooling in cavity nanomechanics. 2009, 102: 207209 Xuereb A, Schnabel R, Hammerer K. Dissipative optomechanics in a Michelson-Sagnac interferometer. 2011, 107: 213604 Tian L. 
--- abstract: 'Let $r, s \ge 2$ be integers. Suppose that the number of blue $r$-cliques in a red/blue coloring of the edges of the complete graph $K_n$ is known and fixed. What is the largest possible number of red $s$-cliques under this assumption? The well known Kruskal-Katona theorem answers this question for $r=2$ or $s=2$. Using the shifting technique from extremal set theory together with some analytical arguments, we resolve this problem in general and prove that in the extremal coloring either the blue edges or the red edges form a clique.' author: - 'Hao Huang[^1]' - 'Nati Linial[^2]' - 'Humberto Naves[^3]' - 'Yuval Peled[^4]' - 'Benny Sudakov[^5]' title: On the densities of cliques and independent sets in graphs --- Introduction {#section_introduction} ============ As usual we denote by $K_s$ the complete graph on $s$ vertices and by $\overline{K}_s$ its complement, the edgeless graph on $s$ vertices. By the celebrated Ramsey’s theorem, for every two integers $r, s$ every sufficiently large graph must contain $K_r$ or $\overline{K}_s$. Turán’s theorem can be viewed as a quantitative version of the case $s=2$. Namely, it shows that among all $\overline{K}_r$-free $n$-vertex graphs, the graph with the least number of $K_2$ (edges) is a disjoint union of $r-1$ cliques of nearly equal size. More generally, one can ask the following question. Fix two graphs $H_1$ and $H_2$, and suppose that we know the number of induced copies of $H_1$ in an $n$-vertex graph $G$. What is the maximum (or minimum) number of induced copies of $H_2$ in $G$? In its full generality, this problem seems currently out of reach, but some special cases already have important implications in combinatorics, as well as other branches of mathematics and computer science. To state these classical results, we introduce some notation. Adjacency between vertices $u$ and $v$ is denoted by $u \sim v$, and the neighbor set of $v$ is denoted by $N(v)$. 
If necessary, we add a subscript $G$ to indicate the relevant graph. The collection of induced copies of a $k$-vertex graph $H$ in an $n$-vertex graph $G$ is denoted by ${\textup{Ind}}(H; G)$, i.e. $${\textup{Ind}}(H; G) := \{X \subseteq V(G): G[X] \simeq H\}$$ and the *induced $H$-density* is defined as$$d(H; G) := \frac{|{\textup{Ind}}(H; G)|}{\binom{n}{k}}.$$ In this language, Turán’s theorem says that if $d(K_r;G)=0$ then $d(K_2; G)\le 1-\frac{1}{r-1}$ and this bound is tight. For a general graph $H$, Erdős and Stone [@erdos-stone] determined $\max d(K_2; G)$ when $d(H;G)=0$ and showed that the answer depends only on the chromatic number of $H$. Zykov [@zykov] extended Turán’s theorem in a different direction. Given integers $2 \le r<s$, he proved that if $d(K_s;G)=0$ then $d(K_r; G) \le \frac{(s-1) \cdots (s-r)}{(s-1)^r}$. The balanced complete $(s-1)$-partite graphs show that this bound is also tight. For fixed integers $r<s$, the Kruskal-Katona theorem [@katona; @kruskal] states that if $d(K_r; G)=\alpha$ then $d(K_s; G) \le \alpha^{s/r}$. Again, the bound is tight and is attained when $G$ is a clique on some subset of the vertices. On the other hand, the problem of [*minimizing*]{} $d(K_s; G)$ under the same assumption is much more difficult. Even the case $r=2$ and $s=3$ has remained unsolved for many years until it was recently answered by Razborov [@razborov] using his newly-developed flag algebra method. Subsequently, Nikiforov [@nikiforov] and Reiher [@reiher] applied complicated analytical techniques to solve the cases $(r,s)=(2,4)$, and ($r=2$, arbitrary $s$), respectively. In this paper, we study the following natural analogue of the Kruskal-Katona theorem. Given $d(\overline{K}_r; G)$, how large can $d(K_s; G)$ be? For integers $a \ge b > 0$ we let $Q_{a,b}$ be the $a$-vertex graph whose edge set is a clique on some $b$ vertices. The complement of this graph is denoted by $\overline Q_{a,b}$. 
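These densities are easy to evaluate by brute force on small examples. The following sketch (our own illustration; `clique_graph` and `d_clique` are names we introduce here, not notation from the paper) computes induced clique densities of $Q_{n,b}$ and illustrates the tightness of the Kruskal-Katona bound $d(K_s; G) \le d(K_r; G)^{s/r}$ on cliques:

```python
from itertools import combinations
from math import comb

def clique_graph(n, b):
    """Edge set of Q_{n,b}: n vertices with a clique on the first b of them."""
    return {frozenset(e) for e in combinations(range(b), 2)}

def d_clique(edges, n, k):
    """Induced K_k-density: fraction of k-subsets of [n] inducing a clique."""
    hits = sum(1 for S in combinations(range(n), k)
               if all(frozenset(e) in edges for e in combinations(S, 2)))
    return hits / comb(n, k)

n, b, r, s = 30, 18, 3, 4
G = clique_graph(n, b)
alpha = d_clique(G, n, r)        # exactly C(b,r)/C(n,r)
# d(K_s; Q_{n,b}) is close to alpha**(s/r), the Kruskal-Katona bound:
print(alpha, d_clique(G, n, s), alpha ** (s / r))
```

The two printed clique densities agree with the bound up to lower-order terms that vanish as $n$ grows.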
Let $\mathcal{Q}_a$ denote the family of all graphs $Q_{a,b}$ and their complements $\overline Q_{a,b}$ for $0 < b \le a$. Note that for $r=2$ or $s=2$, the Kruskal-Katona theorem implies that the extremal graph comes from $\mathcal{Q}_n$. Our first theorem shows that a similar statement holds for all $r$ and $s$. \[maintheorem\] Let $r, s \ge 2$ be integers and suppose that $d(\overline{K}_r; G) \geq p$ where $G$ is an $n$-vertex graph and $0 \le p \le 1$. Let $q$ be the unique root of $q^r+rq^{r-1}(1-q)=p$ in $[0,1]$. Then $d(K_s;G) \le M_{r,s,p} + o(1)$, where $$M_{r,s,p} := \max \{(1-p^{1/r})^s + sp^{1/r}(1-p^{1/r})^{s-1}, (1-q)^s\}.$$ Namely, given $d(\overline{K}_r; G)$, the maximum of $d(K_s; G)$ (up to $\pm o_n(1)$) is attained in one of two graphs (or both): one of the form $Q_{n,t}$ and the other of the form $\overline Q_{n,t'}$. We obtain as well a [*stability version*]{} of Theorem \[maintheorem\]. Two $n$-vertex graphs $H$ and $G$ are *$\epsilon$-close* if it is possible to obtain $H$ from $G$ by adding or deleting at most $\epsilon n^2$ edges. As the next theorem shows, every near-extremal graph $G$ for Theorem \[maintheorem\] is $\epsilon$-close to a specific member of $\mathcal{Q}_n$. \[stabilitytheorem\] Let $r, s \ge 2$ be integers and let $p \in [0,1]$. For every $\epsilon > 0$, there exist $\delta > 0$ and an integer $N$ such that every $n$-vertex graph $G$ with $n > N$ satisfying $d(\overline{K}_r;G) \ge p$ and $|d(K_s;G) - M_{r,s,p}| \le \delta$ is $\epsilon$-close to some graph in $\mathcal{Q}_n$. ![Illustration for the case $r=s=3$. The green curve is $(d(\overline K_3;Q_{n,\theta n}), d(K_3;Q_{n,\theta n}))$ for $\theta\in{[0,1]}$, and the red curve is defined in the same way with $\overline Q_{n,\theta n}$. The maximum of the two curves is the extremal function in Theorem \[maintheorem\].
The intersection of the curves represents the solution of the max-min problem in Theorem \[thm:max\_min\]](fig1.png){width="50.00000%"} Rather than talking about an $n$-vertex graph and its complement, we can consider a two-edge-coloring of $K_n$. A quantitative version of Ramsey's theorem asks for the minimum number of monochromatic $s$-cliques over all such colorings. Goodman [@goodman] showed that for $r=s=3$, the optimal answer is essentially given by a random two-coloring of $E(K_n)$. In other words, $\min_G d(K_3; G) + d(\overline{K}_3; G) = 1/4-o(1)$. Erdős [@erdos-false] conjectured that the same random coloring also minimizes $d(K_r; G) + d(\overline{K}_r; G)$ for all $r$, but this was refuted by Thomason [@thomason] for all $r \ge 4$. A simple consequence of Goodman’s inequality is that $\min_G \max \{d(K_3; G), d(\overline{K}_3; G)\}=1/8$. The following construction by Franek and Rödl [@franek-rodl] shows that the analogous statement for $r \ge 4$ is again false. Let $H$ be a graph with vertex set $[2]^{13}$, the collection of all $8192$ binary vectors of length $13$. Two vertices are adjacent if the Hamming distance between the corresponding binary vectors is a number in $\{1, 4, 5, 8, 9, 11\}$. Let $G$ be obtained from $H$ by replacing each vertex with a clique of size $n$, and every edge with a complete bipartite graph. The number of $K_4$ and $\overline{K}_4$ in $G$ can be easily expressed in terms of the parameters of $H$ (see [@franek-rodl]); for large enough $n$ one can show that $d(K_4; G)<0.99\cdot \frac{1}{64}$ and $d(\overline{K}_4; G) <0.993\cdot \frac{1}{64}$. While the min-max question remains at present very poorly understood, we succeeded in completely answering the max-min version of this problem. \[thm:max\_min\] $$\max_G \min \{d(K_r; G), d(\overline{K}_r; G)\} = \rho^r +o(1),$$ where $\rho$ is the unique root in $[0,1]$ of the equation $\rho^r = (1-\rho)^r + r\rho(1-\rho)^{r-1}$. This theorem follows easily from Theorem \[maintheorem\].
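The constant in Theorem \[thm:max\_min\] is easy to evaluate numerically: the function $f(\rho) = \rho^r - (1-\rho)^r - r\rho(1-\rho)^{r-1}$ satisfies $f(0) = -1$, $f(1) = 1$ and $f'(\rho) = r\rho^{r-1} + r(r-1)\rho(1-\rho)^{r-2} \ge 0$ on $[0,1]$, so a simple bisection finds the unique root. A minimal sketch (our own code, not from the paper):

```python
def rho(r, tol=1e-12):
    """Unique root in [0,1] of rho^r = (1-rho)^r + r*rho*(1-rho)^(r-1)."""
    f = lambda x: x**r - (1 - x)**r - r * x * (1 - x)**(r - 1)
    lo, hi = 0.0, 1.0          # f(0) = -1, f(1) = 1, and f is nondecreasing
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for r in (3, 4, 5):
    x = rho(r)
    print(r, x, x**r)          # x**r is the max-min density of the theorem
```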
Moreover, using Theorem \[stabilitytheorem\], we can also show that for every $\epsilon>0$ there is a $\delta>0$ such that every $n$-vertex graph $G$ with $\min \{d(K_r; G), d(\overline{K}_r; G)\} > \rho^r-\delta$ is $\epsilon$-close to a clique of size $\rho n$ or to the complement of this graph. Here we prove these theorems using the method of shifting. In the next section we describe this well-known and useful technique in extremal set theory. Using shifting, we show how to reduce the problem to *threshold graphs*. Section \[section\_main\] contains the proof of our main result for threshold graphs and section \[section\_stability\] contains the proof of the stability result. In Section \[section\_shift\] we sketch a second proof for the case $r=s$, based on a different representation of threshold graphs. We make a number of comments on the analogous problems for hypergraphs in Section \[section\_hyper\]. We finish this paper with some concluding remarks and open problems. Shifting {#section_shifting} ======== [*Shifting*]{} is one of the most important and widely-used tools in extremal set theory. This method allows one to reduce many extremal problems to more structured instances which are usually easier to analyze. Our treatment here is rather brief and we refer the reader to Frankl’s survey article [@frankl-survey] for a fuller account. Let $\mathcal{F}$ be a family of subsets of a finite set $V$, and let $u, v$ be two distinct elements of $V$. We define the *$(u, v)$-shift map* $S_{u\to v}$ as follows: for every $F \in \mathcal{F}$, let $$S_{u\to v}(F, \mathcal{F}) := \left\{\begin{array}{ll} (F \cup \{v\})\setminus \{u\} & \text{if } u \in F, v\not \in F \text{ and } (F \cup \{v\})\setminus \{u\} \not\in \mathcal{F}, \\ F & \text {otherwise.} \end{array}\right.$$ We define the $(u,v)$-shift of $\mathcal{F}$ to be the following family of subsets of $V$: $S_{u\to v}(\mathcal{F}) := \{ S_{u\to v}(F, \mathcal{F}) : F \in \mathcal{F}\}$.
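For concreteness, here is a direct sketch of the $(u,v)$-shift map in code (our own illustration, with sets stored as `frozenset`s); the final assertion checks the size-preservation property $|S_{u\to v}(\mathcal{F})| = |\mathcal{F}|$:

```python
def shift(family, u, v):
    """The (u,v)-shift S_{u->v} applied to a family of frozensets."""
    shifted = set()
    for F in family:
        G = frozenset((F - {u}) | {v})
        if u in F and v not in F and G not in family:
            shifted.add(G)      # replace u by v in F
        else:
            shifted.add(F)      # leave F untouched
    return shifted

family = {frozenset(s) for s in [{1, 2}, {2, 3}, {1, 3}, {3, 4}]}
out = shift(family, 3, 1)       # shift element 3 to element 1
# {2,3} stays put because {1,2} is already present; {3,4} becomes {1,4}
assert out == {frozenset(s) for s in [{1, 2}, {2, 3}, {1, 3}, {1, 4}]}
assert len(out) == len(family)  # shifting never changes the family size
```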
We observe that $|S_{u\to v}(\mathcal{F})|=|\mathcal{F}|$. In this context, one may think of $\mathcal{F}$ as a hypergraph over $V$. When all sets in $\mathcal F$ have cardinality $2$ this is a graph with vertex set $V$. As the next lemma shows, shifting a graph does not reduce its number of $l$-cliques, for any $l$. Recall that ${\textup{Ind}}(K_l;G)$ denotes the collection of all cliques of size $l$ in $G$. \[lemma\_increase\] For every integer $l>0$, every graph $G$, and every $u\neq v \in V(G)$ there holds $$S_{u\to v}({\textup{Ind}}(K_l;G))\subseteq {\textup{Ind}}(K_l;S_{u\to v}(G)).$$ Let $A=S_{u\to v}(B,G)$, where $B$ is an $l$-clique in $G$. First, consider the cases when $u\notin B$ or both $u, v\in B$ or $B\setminus\{u\}\cup\{v\}$ is also a clique in $G$. Then $A=B$ and we need to show that $B$ remains a clique after shifting. Which edge in $B$ can be lost by shifting? It must be some edge $uw$ in $B$ that gets replaced by the non-edge $vw$ (otherwise we could not shift $uw$). Note that $vw$ is not in $B$, since $B$ is a clique. Hence $u, w\in B$ and $v\not\in B$. But then $B\setminus\{u\}\cup\{v\}$ is not a clique, contrary to our assumption. In the remaining case when $u\in B$, $v\notin B$ and $B\setminus\{u\}\cup\{v\}$ is not a clique in $G$, we need to show that $A=B\setminus\{u\}\cup\{v\}$ is a clique in the shifted graph $S_{u\to v}(G)$. Every pair of vertices in $A\setminus\{v\}$ belongs to $B$ and the edge they span is not affected by the shifting. So consider $v\not = w\in A$. If $vw \in E(G)$, this edge remains after shifting. If, however, $vw \notin E(G)$, note that $uw \in E(G)$ since both vertices belong to the clique $B$. In this case $vw=S_{u\to v}(uw, G)$ and the claim is proved. Since shifting edges from $u$ to $v$ is equivalent to shifting non-edges from $v$ to $u$, it is immediate that $S_{u\to v}({\textup{Ind}}(\overline{K_l};G))\subseteq {\textup{Ind}}(\overline{K_l};S_{u\to v}(G))$. Therefore we obtain the following corollary.
\[corollary\_increase\] Let $G$ be a graph, let $H = S_{u\to v}(G)$ and let $l$ be a positive integer. Then $$d(K_l; H) \ge d(K_l; G) \textrm{~~and~~~} d(\overline{K}_l; H) \ge d(\overline{K}_l; G).$$ We say that vertex $u$ *dominates* vertex $v$ if $S_{v\to u}(\mathcal{F})=\mathcal{F}$. When $\mathcal{F}$ is the set of edges of $G$, this implies that every $w\not = u$ which is adjacent to $v$ is also adjacent to $u$. If $V=[n]$, we say that a family $\mathcal{F}$ is *shifted* if $i$ dominates $j$ for every $i<j$. Every family can be made shifted by repeated applications of shifting operations $S_{j\to i}$ with $i<j$. To see this, note that a shifting operation that changes $\mathcal F$ reduces the following non-negative potential function $\sum_{A\in\mathcal{F}}\sum_{i\in A}i$. As Corollary \[corollary\_increase\] shows, it suffices to prove Theorem \[maintheorem\] for shifted graphs. In Section \[section\_main\] we use the notion of *threshold graphs*. There are several equivalent ways to define threshold graphs (see [@chvatal-hammer]), and we adopt the following definition. \[definition\_threshold\] We say that $G=(V,E)$ is a threshold graph if there is an ordering of $V$ so that every vertex is adjacent to either all or none of the preceding vertices. \[lemma\_threshold\] A graph is shifted if and only if it is a threshold graph. Let $G$ be a shifted graph. We may assume that $V=[n]$, and $i$ dominates $j$ in $G$ for every $i<j$. Consider the following order of vertices, $$...,\;3,\;N_G(2)\backslash N_G(3),\;2,\;N_G(1)\backslash N_G(2),\;1,\; V\backslash N_G(1)\;,$$ where the vertices inside the sets that appear here are ordered arbitrarily. We claim that this order satisfies Definition \[definition\_threshold\]. First, every vertex $v\notin N_G(1)$ is isolated. Indeed, if $u\sim v$, then necessarily $v\sim 1$, since $1$ dominates $u$. Therefore, vertex $1$ and its non-neighbors satisfy the condition in the definition.
The proof that $G$ is threshold proceeds by induction applied to $G[N_G(1)]$. Conversely, let $G$ be a threshold graph. Let $v_1, v_2, \ldots, v_n$ be an ordering of $V$ as in Definition \[definition\_threshold\]. We say that a vertex is good (resp. bad) if it is adjacent to all (none) of its preceding vertices. Consider two vertices $v_i$ and $v_j$. It is straightforward to show that $v_i$ dominates $v_j$ if either (1) $v_i$ is good and $v_j$ is bad, (2) they are both good and $i>j$ or (3) they are both bad and $i<j$. Therefore we can reorder the vertices by first placing the good vertices in reverse order followed by the bad vertices in the regular order. This new ordering demonstrates that $G$ is shifted. Main result {#section_main} =========== In this section, we prove Theorem \[maintheorem\]. It will be convenient to reformulate the theorem in a way that is analogous to the Kruskal-Katona theorem. \[maintheorem\_unrestricted\] Let $r, s \ge 3$ be integers and let $a,b > 0$ be real numbers. The maximum (up to $\pm o_n(1)$) of the function $f(G):= \min\{a\cdot d(K_s; G), b \cdot d(\overline{K}_r; G)\}$ over all $n$-vertex graphs is attained in one of two graphs (or both): one of the form $Q_{n,t}$ and the other of the form $\overline Q_{n,t'}$. In particular, $f(G) \le \max \{a \cdot \alpha^{s}, b \cdot \beta^{r}\} + o(1)$, where $\alpha$ is the unique root in $[0,1]$ of $a \cdot \alpha^s = b \cdot [(1-\alpha)^r + r \alpha (1-\alpha)^{r-1}]$ and $\beta$ is the unique root in $[0,1]$ of $b \cdot \beta^r = a \cdot [(1-\beta)^s + s \beta (1-\beta)^{s-1}]$. We turn to show how to deduce Theorem \[maintheorem\] from Theorem \[maintheorem\_unrestricted\]. We assume that $r,s \ge 3$, since the other cases follow from the Kruskal-Katona theorem. Let $M$ be the maximum of $d(K_s;G)$ over all graphs $G$ on $n$ vertices with $d(\overline{K}_r;G) \geq p$. Fix such an extremal $G$ with $d(\overline{K}_r;G) = p'\geq p$ and $d(K_s;G) = M$.
Now apply Theorem \[maintheorem\_unrestricted\] with $a = p$ and $b = M$ and the same $n$, $r$ and $s$. The extremal graph $G'$ that Theorem \[maintheorem\_unrestricted\] yields, satisfies $$f(G') \ge f(G) = \min\{a\cdot d(K_s;G), b\cdot d(\overline{K}_r;G)\} = p \cdot M,$$ hence $d(K_s;G') \ge M$ and $d(\overline{K}_r;G')\ge p$. Therefore, the same $G'$ is extremal for Theorem \[maintheorem\] as well and we know that the maximum in this theorem is achieved asymptotically by a graph of $\mathcal Q_n$. Note that we can always assume that in the extremal graph $d(\overline{K}_r;G')=p$ since otherwise we can add edges to $G'$ without decreasing $d(K_s;G')$ until $d(\overline{K}_r;G')=p$ is obtained. Therefore the maximum is attained either by a graph of the form $\overline Q_{n,p^{1/r}n}$ or by $Q_{n,(1-q)n}$, where $q^r+rq^{r-1}(1-q)=p$. This implies that asymptotically the maximum in Theorem \[maintheorem\] is indeed $$M_{r,s,p} = \max \{(1-p^{1/r})^s + sp^{1/r}(1-p^{1/r})^{s-1}, (1-q)^s\}.$$ By Corollary \[corollary\_increase\] and Lemma \[lemma\_threshold\], $f(G)$ is maximized by a threshold graph. We turn to prove Theorem \[maintheorem\_unrestricted\] for threshold graphs. Let $G$ be a threshold graph on an ordered vertex set $V$, as in Definition \[definition\_threshold\]. There exists an integer $k>0$, and a partition $A_1,\ldots,A_{2k}$ of $V$ such that 1. If $v\in A_i$ and $u\in A_j$ for $i<j$, then $v<u$. 2. Every vertex in $A_{2i-1}$ (respectively $A_{2i}$) is adjacent to all (none) of its preceding vertices. Let $x_i=\frac{|A_{2i-1}|}{|V|}$ and $y_i=\frac{|A_{2i}|}{|V|}$. Clearly $\sum_{i=1}^k (x_i + y_i) = 1$. 
Up to a negligible error-term, $$\begin{aligned} d(K_s;G)=p(\mathbf{x},\mathbf{y}) &:= \left(\sum_{i=1}^{k} x_i\right)^s + s \cdot \sum_{i = 1}^{k-1} \left[y_i\cdot \left(\sum_{j=i + 1}^k x_j\right)^{s-1}\right], \\ d(\overline K_r;G)=q(\mathbf{x},\mathbf{y}) &:= \left(\sum_{i=1}^{k} y_i\right)^r + r \cdot \sum_{i = 1}^{k} \left[x_i\cdot \left(\sum_{j=i}^k y_j\right)^{r-1}\right].\end{aligned}$$ Here $\mathbf{x} = (x_1,x_2,\ldots, x_k)$ and $\mathbf{y} = (y_1,y_2,\ldots, y_k)$. Occasionally, $p$ will be denoted by $p_s$ and $q$ by $q_r$ to specify the parameter of these functions. Our problem can therefore be reformulated as follows. For given integers $k\ge 2$, $r,s\ge 3$ and real $a,b>0$, let $W_k \subseteq \mathbb{R}^{2k}$ be the set $$W_k := \left\{(x_1,x_2,\ldots, x_k,y_1,y_2,\ldots,y_k)\in \mathbb{R}^{2k} : x_i,y_i \ge 0 \text{ for all $i$ and } \sum_{i=1}^k (x_i + y_i) = 1\right\}.$$ Let $p, q : W_k \to \mathbb{R}$ be the two homogeneous polynomials defined above. We are interested in maximizing the real function $$\varphi(\mathbf{x},\mathbf{y}) := \min \{a \cdot p(\mathbf{x},\mathbf{y}), b \cdot q(\mathbf{x},\mathbf{y})\}.$$ This problem is well defined since $W_k$ is compact and $\varphi$ is continuous. We say that $(\mathbf x,\mathbf y)\in W_k$ is *non-degenerate* if the set of zeros in the sequence $(y_1, x_2, y_2,\ldots, x_k, y_k)$, with $x_1$ omitted, forms a suffix. If $(\mathbf x,\mathbf y)\in W_k$ is degenerate, then there is a non-degenerate $(\mathbf x',\mathbf y')\in W_k$ with $\varphi(\mathbf{x},\mathbf{y})=\varphi(\mathbf{x}',\mathbf{y}')$. Indeed, if $y_i=0$ and $x_{i+1}\ne 0$ for some $1\le i <k$, let $(\mathbf x',\mathbf y')\in W_{k-1}$ be defined by $$\mathbf x' = (x_1,\ldots,x_{i-1},x_i+x_{i+1},x_{i+2},\ldots,x_k)$$ $$\mathbf y' = (y_1,\ldots,y_{i-1},y_{i+1},\ldots,y_k)$$ It is easy to verify that $p(\mathbf x,\mathbf y)=p(\mathbf x',\mathbf y')$ and $q(\mathbf x,\mathbf y)=q(\mathbf x',\mathbf y')$.
By induction on $k$, we assume that $(\mathbf x',\mathbf y')$ is non-degenerate, and by padding $\mathbf x'$ and $\mathbf y'$ with a zero, the claim is proved. The case $x_i=0$ and $y_i\ne 0$ is proved similarly. In particular, $\varphi$ has a non-degenerate maximum in $W_k$. Our purpose is to show that the original problem is optimized by graphs from $\mathcal{Q}_n$. This translates to the claim that a non-degenerate $(\mathbf{x},\mathbf{y})$ that maximizes $\varphi$ is supported only on either $x_1,y_1$ or $y_1,x_2$, which corresponds to either a clique $Q_{n,t}$ or a complement of a clique $\overline{Q}_{n,t}$, respectively. \[lemma\_mainLemma\] Let $(\mathbf{x},\mathbf{y}) \in W_k$ be a non-degenerate maximum of $\varphi$. If $x_1>0$, then for every $i\ge 2$, $x_i=y_i=0$. On the other hand, if $x_1=0$ then $y_i=0$ for every $i\ge 2$, and $x_i=0$ for every $i\ge 3$. We note first that the second part of the lemma is implied by the first part. Define $\mathbf{x}'$ by $$x'_i := \left\{\begin{array}{ll} x_{i+1} & \text{ if } i < k, \\ 0 & \text{ if } i = k. \end{array}\right.$$ Clearly, if $x_1=0$, then $p_s(\mathbf{x,y})=q_s(\mathbf{y,x'})$, $q_r(\mathbf{x,y})=p_r(\mathbf{y,x'})$, and $$\varphi'(\mathbf{y},\mathbf{x'}) := \min \{b \cdot p_r(\mathbf{y},\mathbf{x'}), a \cdot q_s(\mathbf{y},\mathbf{x'})\}=\varphi(\mathbf{x},\mathbf{y}).$$ Since $\varphi$ attains its maximum when $x_1=0$, maximizing it is equivalent to maximizing $\varphi'(\mathbf{y,x'})$. Since $(\mathbf{x},\mathbf{y})$ is non-degenerate, $y_1>0$, and applying the first part of Lemma \[lemma\_mainLemma\] for $\varphi '(\mathbf y,\mathbf x ')$ finishes the proof, by obtaining that for every $i\geq 2$, $y_i=x'_i=0$. The first part of Lemma \[lemma\_mainLemma\] is proved in the following lemmas. We successively show that $x_3=0$, then $y_2=0$ and finally $x_2=0$. Here is a local condition that maximum points of $\varphi$ satisfy. 
\[lemma\_equality\] If $\varphi$ takes its maximum at a non-degenerate $(\mathbf{x}, \mathbf{y}) \in W_k$, then $a \cdot p(\mathbf{x}, \mathbf{y}) = b\cdot q(\mathbf{x}, \mathbf{y})$. Note that $0<y_1<1$, since $(\mathbf{x},\mathbf{y})\in W_k$ is non-degenerate. We consider two perturbations of the input, one of which increases $p(\mathbf{x},\mathbf{y})$, and the other increases $q(\mathbf{x},\mathbf{y})$. Consequently, if $a\cdot p(\mathbf{x},\mathbf{y}) \ne b\cdot q(\mathbf{x},\mathbf{y})$, by applying the appropriate perturbation, we increase the smaller of $a\cdot p(\mathbf{x},\mathbf{y})$ and $b\cdot q(\mathbf{x},\mathbf{y})$, thus increasing $\min\{a\cdot p(\mathbf{x},\mathbf{y}), b\cdot q(\mathbf{x},\mathbf{y})\}$, contrary to the maximality assumption. To define the perturbation that increases $p$, let $\mathbf{x'}=\mathbf{x}+t\mathbf{e_1}$ and $\mathbf{y'}=\mathbf{y}-t\mathbf{e_1}$, where $0<t<y_1$, and $\mathbf{e_1}$ is the first unit vector in $\mathbb{R}^k$. Then, $(\mathbf{x'},\mathbf{y'}) \in W_k$ and $$\frac{\partial p(\mathbf{x'},\mathbf{y'})}{\partial t}= s \left(t+\sum_{i=1}^{k} x_i\right)^{s-1} - s \cdot \left(\sum_{j=2}^k x_j\right)^{s-1} > 0$$ as claimed. In order to increase $q$, consider two cases. If $x_1=0$, let $\mathbf{x'}=\mathbf{x}-t\mathbf{e_2}$ and $\mathbf{y'}=\mathbf{y}+t\mathbf{e_1}$, where $0<t<x_2$. Then, $(\mathbf{x'},\mathbf{y'}) \in W_k$ and $$\frac{\partial q(\mathbf{x'},\mathbf{y'})}{\partial t}= r \left(t+\sum_{i=1}^{k} y_i\right)^{r-1} - r \cdot \left(\sum_{j=2}^k y_j\right)^{r-1} > 0.$$ If $x_1>0$, we let $\mathbf{x'}=\mathbf{x}-t\mathbf{e_1}$ and $\mathbf{y'}=\mathbf{y}+t\mathbf{e_1}$, where $0<t<x_1$. Then, $$\frac{\partial q(\mathbf{x'},\mathbf{y'})}{\partial t}= r(x_1-t)(r-1)\left(t+\sum_{i=1}^{k} y_i\right)^{r-2}> 0.$$ \[lemma\_small0\] If $(\mathbf{x},\mathbf{y}) \in W_k$ is a non-degenerate maximum of $\varphi$ with $x_1>0$, then $x_3=0$. Suppose that $x_3 > 0$ and let $1 \le l \le m \le k$.
Then $$\begin{aligned} \frac{\partial p}{\partial x_l} &= s \cdot \left(\sum_{i=1}^{k} x_i\right)^{s-1} + s(s-1) \cdot \sum_{i = 1}^{l-1} \left[y_i\cdot \left(\sum_{j=i + 1}^k x_j\right)^{s-2}\right], \\ \frac{\partial q}{\partial x_l} &= r \cdot \left(\sum_{j=l}^k y_j\right)^{r-1},\end{aligned}$$ and $$\begin{aligned} \frac{\partial^2 p}{\partial x_l \partial x_m} &= s(s-1)\cdot \left(\sum_{i=1}^{k} x_i\right)^{s-2} + s(s-1)(s-2) \cdot \sum_{i = 1}^{l-1} \left[y_i\cdot \left(\sum_{j=i + 1}^k x_j\right)^{s-3}\right], \\ \frac{\partial^2 q}{\partial x_l \partial x_m} &\equiv 0.\end{aligned}$$ Clearly $\frac{\partial^2 p}{\partial x_l \partial x_m} = \frac{\partial^2 p}{\partial x_l^2}$ for $l \le m$. We define two matrices $\mathbf{A}$ and $\mathbf{B}$ as follows. $$\mathbf{A} = \begin{bmatrix} 1 & 1 & 1 \\ \frac{\partial p}{\partial x_1} & \frac{\partial p}{\partial x_2} & \frac{\partial p}{\partial x_3} \\ \frac{\partial q}{\partial x_1} & \frac{\partial q}{\partial x_2} & \frac{\partial q}{\partial x_3} \\ \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} \frac{\partial^2 p}{\partial x_1^2} & \frac{\partial^2 p}{\partial x_1\partial x_2} & \frac{\partial^2 p}{\partial x_1\partial x_3} \\ \frac{\partial^2 p}{\partial x_1\partial x_2} & \frac{\partial^2 p}{\partial x_2^2} & \frac{\partial^2 p}{\partial x_2\partial x_3} \\ \frac{\partial^2 p}{\partial x_1\partial x_3} & \frac{\partial^2 p}{\partial x_2\partial x_3} & \frac{\partial^2 p}{\partial x_3^2} \\ \end{bmatrix}=\begin{bmatrix} \frac{\partial^2 p}{\partial x_1^2} & \frac{\partial^2 p}{\partial x_1^2} & \frac{\partial^2 p}{\partial x_1^2} \\ \frac{\partial^2 p}{\partial x_1^2} & \frac{\partial^2 p}{\partial x_2^2} & \frac{\partial^2 p}{\partial x_2^2} \\ \frac{\partial^2 p}{\partial x_1^2} & \frac{\partial^2 p}{\partial x_2^2} & \frac{\partial^2 p}{\partial x_3^2} \\ \end{bmatrix}.$$ It is easy to see that if $(\mathbf{x},\mathbf{y})$ is non-degenerate with $x_3>0$, then $\frac{\partial^2 p}{\partial x_3^2}>\frac{\partial^2 p}{\partial x_2^2}>\frac{\partial^2 p}{\partial x_1^2}>0$. This implies that $\mathbf{B}$ is positive definite. For a vector $\mathbf{v}\in\mathbb{R}^3$ and $\epsilon>0$, we define $\mathbf{x'}$ by $$x_i'= \left\{\begin{array}{ll} x_i + \epsilon v_i & \text{ if } i \le 3, \\ x_i & \text{ if } i > 3. \\ \end{array}\right.$$ If $\mathbf{A}$ is invertible, let $\mathbf{v}$ be the (unique) vector for which $$\mathbf{A} \cdot \mathbf{v}^{T} = \begin{bmatrix} 0 \\ 1 \\ 1 \\ \end{bmatrix}.$$ In particular $\sum_i x_i' = \sum_i x_i$. For $\epsilon$ sufficiently small, $$p(\mathbf{x}', \mathbf{y}) = p(\mathbf{x},\mathbf{y}) + \epsilon + O(\epsilon^2)>p(\mathbf{x}, \mathbf{y})$$$$q(\mathbf{x}', \mathbf{y}) = q(\mathbf{x}, \mathbf{y}) + \epsilon> q(\mathbf{x}, \mathbf{y})$$ contrary to the maximality of $(\mathbf{x},\mathbf{y})$. If $\mathbf{A}$ is singular, pick some $\mathbf{v} \neq 0$ with $\mathbf{A} \cdot \mathbf{v}^{T} = \mathbf{0}$. Again $\sum_i x_i' = \sum_i x_i$. Since $\mathbf{B}$ is positive definite, for a sufficiently small $\epsilon$, $$p(\mathbf{x}', \mathbf{y}) = p(\mathbf{x},\mathbf{y}) + \frac{\epsilon^2}{2}\cdot \mathbf{v} \cdot \mathbf{B} \cdot \mathbf{v}^{T} + O(\epsilon^3)>p(\mathbf{x},\mathbf{y})$$$$q(\mathbf{x}', \mathbf{y}) = q(\mathbf{x}, \mathbf{y}),$$ contradicting Lemma \[lemma\_equality\]. \[lemma\_small1\] If $(\mathbf{x},\mathbf{y}) \in W_k$ is a non-degenerate maximum of $\varphi$ with $x_1> 0$, then $y_2=0$. By Lemma \[lemma\_small0\] we may assume that $x_i=y_i=0$ for all $i \ge 3$. Suppose, towards contradiction, that $y_2\ne 0$.
Let $$\mathbf{M} = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix},$$ where $$\begin{aligned} a_1 = \frac{\partial p}{\partial x_1} - \frac{\partial p}{\partial x_2} &= -s(s-1)\cdot y_1 \cdot x_2^{s-2},\quad & b_1 = \frac{\partial q}{\partial x_1} - \frac{\partial q}{\partial x_2} &= r\cdot ((y_1 + y_2)^{r-1} - y_2^{r-1}), \\ a_2 = \frac{\partial p}{\partial y_1} - \frac{\partial p}{\partial y_2} &= s\cdot x_2^{s-1}, \quad & b_2 = \frac{\partial q}{\partial y_1} - \frac{\partial q}{\partial y_2} &= -r(r-1) \cdot x_2\cdot y_2^{r-2}, \\\end{aligned}$$ If $\mbox{rank}(\mathbf{M})=2$, then there is a vector $\mathbf{v}=\left(v_1 \atop v_2\right)$ such that $\mathbf{M}\cdot \mathbf{v}= \left(1 \atop 1\right)$. Define $x'_1=x_1+\epsilon v_1, x'_2=x_2-\epsilon v_1$ and $y'_1=y_1+\epsilon v_2, y'_2=y_2-\epsilon v_2$. Then $x'_1+x'_2+y'_1+y'_2=1$ and for sufficiently small $\epsilon>0$ $$\begin{aligned} p(\mathbf{x}', \mathbf{y}') &= & p(\mathbf{x},\mathbf{y})+\epsilon\Big(\frac{\partial p}{\partial x_1}v_1-\frac{\partial p}{\partial x_2}v_1+ \frac{\partial p}{\partial y_1}v_2-\frac{\partial p}{\partial y_2}v_2 \Big)+ O(\epsilon^2)\\ &= &p(\mathbf{x},\mathbf{y})+\epsilon\big(a_1v_1+a_2v_2\big)+ O(\epsilon^2)=p(\mathbf{x},\mathbf{y})+\epsilon+ O(\epsilon^2)>p(\mathbf{x},\mathbf{y}).\end{aligned}$$ Similarly $q(\mathbf{x}', \mathbf{y}')=q(\mathbf{x},\mathbf{y})+\epsilon+ O(\epsilon^2)>q(\mathbf{x},\mathbf{y})$. Thus $(\mathbf{x},\mathbf{y})$ cannot be a maximum of $\varphi$. Hence, $\mbox{rank}(\mathbf{M})\le 1$, and in particular $$\det \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} = 0,$$ which implies that $$0 = x_2^{s-1}y_2^{r-1}\left((r-1)(s-1)\frac{y_1}{y_2} - \left(\frac{y_1}{y_2}+1\right)^{r-1} + 1\right), \\$$ The function $$g(\alpha) = (r-1)(s-1)\alpha-(\alpha + 1)^{r-1}+1$$ is strictly concave for $\alpha > 0$ and vanishes at $0$. 
Since $\alpha=0$ is not a maximum of $g$, the equation $g\left(\frac{y_1}{y_2}\right)=0$ determines $\frac{y_1}{y_2}$ uniquely. Denote $\alpha=\frac{y_1}{y_2}$, and consider the following change of variables. $$\begin{aligned} x_1' &= x_1 + \frac{1}{1 + (r-1)(s-1)\alpha}\cdot x_2,\quad & x_2' &= \frac{(r-1)(s-1)\alpha}{1 + (r-1)(s-1)\alpha} \cdot x_2\\ y_1' &= y_1 + y_2 = (\alpha + 1) y_2,\quad & y_2' &= 0\end{aligned}$$ Clearly, $x_1' + x_2' = x_1 + x_2$ and $y_1' + y_2' = y_1 + y_2$. Moreover, $$\begin{aligned} q(\mathbf{x}', \mathbf{y}') &= (y_1')^{r} + r\cdot x_1' \cdot (y_1')^{r-1}\\ &= (y_1 + y_2)^r + r\cdot x_1 \cdot (y_1 + y_2)^{r-1} + \frac{r\cdot x_2 \cdot (y_1 + y_2)^{r-1}}{1 + (r-1)(s-1)\alpha} \\ &= (y_1 + y_2)^r + r\cdot x_1 \cdot (y_1 + y_2)^{r-1} + \frac{r\cdot (1+\alpha)^{r-1} \cdot x_2 \cdot y_2^{r-1}}{(1 + \alpha)^{r-1}} = q(\mathbf{x}, \mathbf{y}) \\ p(\mathbf{x}', \mathbf{y}') &= (x_1' + x_2')^{s} + s \cdot y_1' \cdot (x_2')^{s-1} \\ &= (x_1 + x_2)^s + s\cdot (\alpha + 1)\cdot \left(\frac{(r-1)(s-1)\alpha}{1 + (r-1)(s-1)\alpha}\right)^{s-1}\cdot y_2 \cdot x_2^{s-1} \\ &> (x_1 + x_2)^s + s\cdot \alpha \cdot y_2 \cdot x_2^{s-1} = p(\mathbf{x}, \mathbf{y}),\end{aligned}$$ where the last inequality is a consequence of Lemma \[lemma\_ineq\] below. This contradicts Lemma \[lemma\_equality\]. \[lemma\_ineq\] Let $r,s \ge 3$ be integers. Let $\alpha > 0$ be the unique positive root of $$(\alpha + 1)^{r-1} - 1 = (r-1)(s-1)\alpha.$$ Then $$\left(1 + \frac{1}{(r-1)(s-1)\alpha}\right)^{s-1} < 1 + \frac{1}{\alpha}.$$ First, we show that $(r-1) \alpha > 1$. Let $t = (r-1)\alpha$ and assume, by contradiction, that $t \le 1$. For $0 < t\le 1$, we have $e^t < 1 + 2t$. On the other hand, $e \ge (1+\alpha)^{1/\alpha}$, implying $e^t \ge \left( 1 + \alpha\right)^{t / \alpha} = (1 + \alpha)^{r-1}$. Thus we have $2t > (1+\alpha)^{r-1}-1 = (r-1)(s-1)\alpha = (s-1)t$, which implies $2 > s-1$, a contradiction. Therefore $(r-1)\alpha > 1$.
Also, since $1+x<e^x$ for all $x>0$, we have that $\big(1 + \frac{1}{(r-1)(s-1)\alpha}\big)^{s-1} < e^{\frac{1}{(r-1)\alpha}}$. So it suffices to show that $e^{\frac{1}{(r-1)\alpha}} \le 1 + \frac{1}{\alpha}$. But since $(r-1)\alpha > 1$, we have $$\left(1 + \frac{1}{\alpha}\right)^{(r-1)\alpha} > 1 + \frac{(r-1)\alpha}{\alpha} = r \ge 3 > e,$$ which finishes the proof of the lemma. \[lemma\_small2\] If $(\mathbf{x},\mathbf{y}) \in W_k$ is a non-degenerate maximum of $\varphi$ with $x_1>0$, then $x_2=0$. This proof is very similar to the proof of Lemma \[lemma\_small1\]. Now $x_1, x_2, y_1 > 0$ and $x_1 + x_2 + y_1 = 1$. Also $$\begin{aligned} p(\mathbf{x},\mathbf{y}) &= (x_1 + x_2)^s + s \cdot y_1 \cdot x_2^{s-1}, \\ q(\mathbf{x},\mathbf{y}) &= y_1^r + r\cdot x_1 \cdot y_1^{r-1}.\end{aligned}$$ Let $$\mathbf{M} = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix},$$ where $$\begin{aligned} a_1 = \frac{\partial p}{\partial x_1} - \frac{\partial p}{\partial x_2} &= -s(s-1) \cdot y_1 \cdot x_2^{s-2}, \quad & b_1 = \frac{\partial q}{\partial x_1} - \frac{\partial q}{\partial x_2} &= r\cdot y_1^{r-1}, \\ a_2 = \frac{\partial p}{\partial y_1} - \frac{\partial p}{\partial x_1} &= -s\cdot ((x_1+x_2)^{s-1} - x_2^{s-1}), \quad & b_2 = \frac{\partial q}{\partial y_1} - \frac{\partial q}{\partial x_1} &= r(r-1)\cdot x_1\cdot y_1^{r-2}, \\\end{aligned}$$ If $\mathbf{M}$ is nonsingular, then there is a vector $\mathbf{v}=\left(v_1 \atop v_2\right)$ such that $\mathbf{M}\cdot \mathbf{v}= \left(1 \atop 1\right)$. Define $x'_1=x_1+\epsilon (v_1-v_2), x'_2=x_2-\epsilon v_1$ and $y'_1=y_1+\epsilon v_2$. 
Then $x'_1+x'_2+y'_1=1$ and for sufficiently small $\epsilon>0$ $$\begin{aligned} p(\mathbf{x}', \mathbf{y}') &=& p(\mathbf{x},\mathbf{y})+\epsilon\Big(\frac{\partial p}{\partial x_1}(v_1-v_2)-\frac{\partial p}{\partial x_2}v_1+ \frac{\partial p}{\partial y_1}v_2\Big)+ O(\epsilon^2)\\ &=& p(\mathbf{x},\mathbf{y})+\epsilon\big(a_1v_1+a_2v_2\big)+ O(\epsilon^2)=p(\mathbf{x},\mathbf{y})+\epsilon+ O(\epsilon^2)>p(\mathbf{x},\mathbf{y}).\end{aligned}$$ Similarly $q(\mathbf{x}', \mathbf{y}')=q(\mathbf{x},\mathbf{y})+\epsilon+ O(\epsilon^2)>q(\mathbf{x},\mathbf{y})$ and therefore $(\mathbf{x},\mathbf{y})$ cannot be a maximum of $\varphi$. Hence, $$\det \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix} = 0,$$ which implies $$0 = y_1^{r-1}x_2^{s-1}\left((r-1)\cdot (s-1) \cdot\frac{x_1}{x_2} - \left(\frac{x_1}{x_2}+1\right)^{s-1} + 1\right).$$ Let $\gamma = \frac{x_1}{x_2} > 0$. Then $1+(r-1)(s-1)\gamma-(1+\gamma)^{s-1}=0$ and concavity of the left hand side shows that $\gamma$ is determined uniquely by this equation. Now make the following substitution: $$\begin{aligned} x_1' &= 0\\ x_2' &= x_1 + x_2 = (1 + \gamma) \cdot x_2\\ y_1' &= \frac{1}{1 + (r-1)(s-1)\gamma}\cdot y_1\\ y_2' &= \frac{(r-1)(s-1)\gamma}{1 + (r-1)(s-1)\gamma} \cdot y_1\end{aligned}$$ Clearly $x_1' + x_2' = x_1 + x_2$ and $y_1' + y_2' = y_1$. Since $(1+\gamma)^{s-1}=1+(r-1)(s-1)\gamma$, we have $$\begin{aligned} p(\mathbf{x}', \mathbf{y}') &= (x_2')^{s} + s\cdot y_1' \cdot (x_2')^{s-1}\\ &= (x_1 + x_2)^s + s\cdot y_1\cdot x_2^{s-1} = p(\mathbf{x},\mathbf{y})\\ q(\mathbf{x}', \mathbf{y}') &= (y_1' + y_2')^{r} + r\cdot x_2' \cdot (y_2')^{r-1} \\ &= y_1^r + r\cdot \frac{(1+\gamma)}{\gamma}\cdot\left(\frac{(r-1)(s-1)\gamma}{1 + (r-1)(s-1)\gamma}\right)^{r-1}\cdot x_1 \cdot y_1^{r-1} \\ &> y_1^r + r\cdot x_1 \cdot y_1^{r-1} = q(\mathbf{x}, \mathbf{y}),\end{aligned}$$ where the last inequality follows from Lemma \[lemma\_ineq\], with $r$ and $s$ switched. Again, this contradicts Lemma \[lemma\_equality\].
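Lemma \[lemma\_ineq\], and the claim $(r-1)\alpha>1$ made in its proof, can also be checked numerically. The following sketch (a simple bisection of our own; it is an illustration, not part of the argument) verifies both statements for small values of $r$ and $s$:

```python
# Numerical sanity check of Lemma [lemma_ineq]: for integers r, s >= 3,
# let alpha > 0 solve (alpha+1)**(r-1) - 1 = (r-1)*(s-1)*alpha; then
# (1 + 1/((r-1)*(s-1)*alpha))**(s-1) < 1 + 1/alpha.

def alpha_root(r, s, tol=1e-12):
    """Unique positive root of g(a) = (a+1)**(r-1) - 1 - (r-1)*(s-1)*a."""
    g = lambda a: (a + 1) ** (r - 1) - 1 - (r - 1) * (s - 1) * a
    # g(0) = 0 and g'(0) = (r-1) - (r-1)(s-1) < 0, while g -> +infinity,
    # so g has exactly one sign change on (0, infinity).
    lo, hi = tol, 1.0
    while g(hi) < 0:          # bracket the root
        hi *= 2
    for _ in range(200):      # bisection
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for r in range(3, 8):
    for s in range(3, 8):
        a = alpha_root(r, s)
        lhs = (1 + 1 / ((r - 1) * (s - 1) * a)) ** (s - 1)
        rhs = 1 + 1 / a
        assert (r - 1) * a > 1    # first claim in the proof
        assert lhs < rhs          # the lemma's inequality
```

For instance, with $r=s=3$ the defining equation $(\alpha+1)^2-1=4\alpha$ gives $\alpha=2$, and the inequality reads $(9/8)^2 < 3/2$.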
By combining Lemmas \[lemma\_equality\] – \[lemma\_small2\], we obtain a proof of Lemma \[lemma\_mainLemma\], which states that the maximum of $\varphi$ is attained by a non-degenerate $(\mathbf{x},\mathbf{y})$ supported only on either $x_1,y_1$ or $y_1,x_2$. In the first case, let $x_1=\alpha$ and $y_1=1-\alpha$. Then by Lemma \[lemma\_equality\], $a\cdot p(\mathbf{x},\mathbf{y})=a\cdot \alpha^s=b\cdot q(\mathbf{x},\mathbf{y})=b\big[(1-\alpha)^r+r\alpha(1-\alpha)^{r-1}\big]$ and $\varphi(\mathbf{x},\mathbf{y})=a \cdot \alpha^s$. In the second case, let $y_1=\beta$ and $x_2=1-\beta$. Then $b\cdot q(\mathbf{x},\mathbf{y})=b\cdot \beta^r=a \cdot p(\mathbf{x},\mathbf{y})=a\big[(1-\beta)^s+s\beta(1-\beta)^{s-1}\big]$ and $\varphi(\mathbf{x},\mathbf{y})=b \cdot \beta^r$. This shows that the maximum of $\varphi$ is $\max \{a \cdot \alpha^{s}, b \cdot \beta^{r}\}$ with $\alpha, \beta$ satisfying the above equations. In terms of the original graph, this proves that $\varphi$ is maximized by a graph of the form $Q_{n,t}$ or $\overline Q_{n,t}$, respectively. In particular, our problem has at most two extremal configurations (in some cases a clique and the complement of a clique can give the same value of $\varphi$). Stability analysis {#section_stability} ================== In this section we discuss the proof of Theorem \[stabilitytheorem\]. In essentially the same way that Theorem \[maintheorem\_unrestricted\] implies Theorem \[maintheorem\], this theorem follows from a stability version of Theorem \[maintheorem\_unrestricted\]: \[stabilitytheorem\_unrestricted\] Let $r, s \ge 3$ be integers and let $a,b > 0$ be real. For every $\epsilon > 0$, there exist $\delta > 0$ and an integer $N$ such that every $n$-vertex graph $G$ with $n > N$ for which $$f(G) \ge \max \{a \cdot \alpha^{s}, b \cdot \beta^{r}\} - \delta$$ is $\epsilon$-close to some graph in $\mathcal{Q}_n$. Here $f, \alpha$ and $\beta$ are as in Theorem \[maintheorem\_unrestricted\].
If $G$ is a threshold graph, the claim follows easily from Lemma \[lemma\_mainLemma\]. Since $G$ is a threshold graph, $f(G) = \varphi(\mathbf{x},\mathbf{y})+o(1)$ for some $(\mathbf{x},\mathbf{y})\in W_k$ and some integer $k$. As this lemma shows, the continuous function $\varphi$ attains its maximum on the compact set $W_k$ at no more than two points, both of which correspond to graphs from $\mathcal{Q}_n$. Since $f(G)$ is $\delta$-close to the maximum, it follows that $(\mathbf{x},\mathbf{y})$ must be $\epsilon'$-close to at least one of the two optimal points in $W_k$. This, in turn, implies $\epsilon$-proximity of the corresponding graphs. For the general case, we use the stability version of the Kruskal-Katona theorem due to Keevash [@keevash]. Suppose $G$ is a large graph such that $f(G) \ge \max \{a \cdot \alpha^{s}, b \cdot \beta^{r}\} - \delta$. Let $G_1$ be the shifted graph obtained from $G$. Thus $G_1$ is a threshold graph with the same edge density as $G$, and $f(G_1) \ge f(G)$ by Corollary \[corollary\_increase\]. Pick a small $\epsilon' > 0$. We just saw that for $\delta$ sufficiently small, $G_1$ is $\epsilon'$-close to $G_{max} \in \mathcal{Q}_n$. As we know, either $G_{max}= Q_{n,t}$ or $G_{max}= \overline Q_{n,t}$ for some $0 < t \le n$. We deal with the former case, and the second case can be done similarly. Now $|d(K_2;G) - d(K_2;G_{max})| \le \epsilon'$, since $G$ and $G_1$ have the same edge density. Moreover, $d(K_s; G) \ge d(K_s;G_{max}) -\delta/a$, because $f(G) \ge f(G_{max}) - \delta$. Since $G_{max}$ is a clique, it satisfies the Kruskal-Katona inequality with equality. Consequently $G$ has nearly the maximum possible $K_s$-density for a given number of edges. By choosing $\epsilon'$ and $\delta$ small enough and applying Keevash’s stability version of the Kruskal-Katona inequality, we conclude that $G$ and $G_{max}$ are $\epsilon$-close.
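The threshold value $\max \{a \cdot \alpha^{s}, b \cdot \beta^{r}\}$ appearing above is easy to evaluate numerically: in the equation $a\,\alpha^s=b\big[(1-\alpha)^r+r\alpha(1-\alpha)^{r-1}\big]$ the left side increases from $0$ to $a$ on $[0,1]$ while the right side decreases from $b$ to $0$, so bisection finds the unique root, and similarly for $\beta$ with the roles of $(a,s)$ and $(b,r)$ exchanged. A sketch with hypothetical helper names:

```python
# Compute the two candidate maxima a*alpha**s and b*beta**r, where alpha
# solves  a*alpha**s = b*((1-alpha)**r + r*alpha*(1-alpha)**(r-1))
# and beta solves the analogous equation with (a, s) and (b, r) exchanged.
# Bisection applies because the difference is increasing on [0, 1].

def balance_root(a, b, r, s):
    """Root in (0,1) of a*t**s - b*((1-t)**r + r*t*(1-t)**(r-1)) = 0."""
    h = lambda t: a * t ** s - b * ((1 - t) ** r + r * t * (1 - t) ** (r - 1))
    lo, hi = 0.0, 1.0        # h(0) = -b < 0 and h(1) = a > 0
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def phi_max(a, b, r, s):
    alpha = balance_root(a, b, r, s)   # support on x_1, y_1 (a clique)
    beta = balance_root(b, a, s, r)    # support on y_1, x_2 (complement case)
    return max(a * alpha ** s, b * beta ** r)
```

For example, `phi_max(1.0, 1.0, 3, 3)` returns roughly $0.278$, attained at $\alpha=\beta\approx 0.653$.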
Second proof {#section_shift} ============ In this section we briefly present the main ingredients for an alternative approach to Theorem \[maintheorem\]. We restrict ourselves to the case $r=s$. This proof reduces the problem to a question in the calculus of variations. Such calculations occur often in the context of shifted graphs. Let $G$ be a shifted graph with vertex set $[n]$ with the standard order. Then, there is some $n \ge i \ge 1$ such that $A=\{1,...,i\}$ spans a clique, whereas $B=\{i+1,...,n\}$ spans an independent set. In addition, there is some non-increasing function $F:A\rightarrow B$ such that for every $j \in A$ the highest index neighbor of $j$ in $B$ is $F(j)$, and all vertices of $B$ up to index $F(j)$ are connected to $j$. Let $x$ be the relative size of $A$ and $1-x$ the relative size of $B$. In this case we can express (up to a negligible error term) $$\begin{aligned} d(\overline{K}_k; G)&=&{n \choose k}^{-1}\left[{(1-x)n \choose k} + \sum_{1 \leq j \leq xn} {n-F(j) \choose k-1} \right]=(1-x)^k+\frac{k}{n}\sum_{1 \leq j \leq xn}\left(\frac{n-F(j)}{n}\right)^{k-1}\\ &=& (1-x)^k+kx(1-x)^{k-1}\sum_{1 \leq j \leq xn} \frac{1}{nx}\left(1-\frac{F(j)-xn}{(1-x)n}\right)^{k-1}.\end{aligned}$$ Let $f$ be a non-increasing function $f:{[0,1]}\rightarrow{[0,1]}$ such that $f(t)=\frac{F(j)-xn}{(1-x)n}$ for every $\frac{j-1}{xn} \leq t \leq \frac{j}{xn}$ (Think of $f$ as a relative version of $F$ both on its domain with respect to $A$ and its codomain with respect to $B$). Then we can express $d(\overline{K}_k; G)$ in terms of $x$ and $f$ $$d(\overline{K}_k; G)=(1-x)^k+kx(1-x)^{k-1}\int_{0}^{1}(1-f(t))^{k-1}dt={d(\overline{K_k};G_{x,f})}.$$ Similarly one can show that $$d(K_k; G)=x^k+kx^{k-1}(1-x)\int_{0}^{1}(k-1)t^{k-2}f(t)dt={d(K_k;G_{x,f})}.$$ Note that in this notation, $x=\theta$, $f=0$ (resp. $x=1-\theta$, $f=1$) corresponds to $Q_{n, \theta \cdot n}$, (resp. $\overline Q_{n, \theta \cdot n}$). 
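When $f$ is piecewise constant, both integrals above have exact closed forms on each interval where $f$ is constant, so the densities $d(K_k;G_{x,f})$ and $d(\overline{K_k};G_{x,f})$ are easy to evaluate. The following sketch (our own illustration, with a hypothetical helper name) also checks the boundary cases $f=0$ and $f=1$ against the correspondence with $Q_{n,\theta n}$ and $\overline Q_{n,\theta n}$ noted above:

```python
# Densities d(K_k; G_{x,f}) and d(K-bar_k; G_{x,f}) for a piecewise-constant
# non-increasing f on [0,1], using the exact antiderivatives of
# (1-f)**(k-1) and (k-1)*t**(k-2)*f(t) on each interval of constancy.

def densities(x, steps, k):
    """steps = [(t0, t1, c), ...]: f = c on [t0, t1); intervals cover [0, 1]."""
    int1 = sum((t1 - t0) * (1 - c) ** (k - 1) for t0, t1, c in steps)
    int2 = sum(c * (t1 ** (k - 1) - t0 ** (k - 1)) for t0, t1, c in steps)
    d_indep = (1 - x) ** k + k * x * (1 - x) ** (k - 1) * int1
    d_clique = x ** k + k * x ** (k - 1) * (1 - x) * int2
    return d_clique, d_indep

# boundary cases: f = 0 with x = theta, and f = 1 with x = 1 - theta,
# reproduce the densities of Q_{n, theta*n} and its complement
theta, k = 0.4, 3
c0, i0 = densities(theta, [(0.0, 1.0, 0.0)], k)
assert abs(c0 - theta ** k) < 1e-12
assert abs(i0 - ((1 - theta) ** k + k * theta * (1 - theta) ** (k - 1))) < 1e-12
c1, i1 = densities(1 - theta, [(0.0, 1.0, 1.0)], k)
assert abs(i1 - theta ** k) < 1e-12
```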
To prove Theorem \[maintheorem\] for the case $r=s=k$, we show that assuming ${d(K_k;G_{x,f})}\geq \alpha$, the maximum of ${d(\overline{K_k};G_{x,f})}$ is attained for either $f=0$ or $f=1$. For this purpose, we prove upper bounds on the integrals. \[lemma\_integrals\] If $f:{[0,1]}\rightarrow{[0,1]}$ is a non-increasing function, then $$\int_{0}^{1}(1-f(t))^{k-1}dt\leq\max\left\{ 1-\left(\int_{0}^{1}(k-1)t^{k-2}f(t)dt\right)^{\frac{1}{k-1}},\left(1-\int_{0}^{1}(k-1)t^{k-2}f(t)dt\right)^{k-1}\right\}.$$ The bounds in Lemma \[lemma\_integrals\] are tight. Equality with the first term holds for $f$ that takes only the values $1$ and $0$, and equality with the second term occurs for $f$ a constant function. Proving Theorem \[maintheorem\] for such functions is done using rather standard (if somehow tedious) calculations. Lemma \[lemma\_integrals\] itself is reduced to the following lemma through a simple affine transformation and normalization. What non-decreasing function in ${[0,1]}$ minimizes the inner product with a given monomial? \[lemma\_reduced\] Let $g:{[0,1]}\rightarrow[0,B]$ be a non-decreasing function with $B\geq 1$ and $\|g\|_{k-1}=1$. Then $$\langle (k-1)t^{k-2},g \rangle=\int_{0}^{1}(k-1)t^{k-2}g(t)dt\geq\min\left\{B\left(1-\left(1-\frac{1}{B^{k-1}}\right)^{k-1}\right),1\right\}.$$ Equality with the first term holds for $$g(t)=\left\{ \begin{matrix} 0 & t<1-\frac{1}{B^{k-1}}\\ B & t\geq 1-\frac{1}{B^{k-1}} \end{matrix} \right.$$ The second equality holds for $g=1$. We omit the proof which is based on standard calculations and convexity arguments. Shifting in hypergraphs {#section_hyper} ======================= In this section, we will discuss a possible extension of Lemma \[lemma\_increase\] to hypergraphs. Consider two set systems $\mathcal{F}_1$ and $\mathcal{F}_2$ with vertex sets $V_1$ and $V_2$ respectively. 
A (not necessarily induced) *labeled copy of $\mathcal{F}_1$ in $\mathcal{F}_2$* is an injection $I:V_1 \to V_2$ such that $I(F) \in \mathcal{F}_2$ for every $F \in \mathcal{F}_1$. We denote by ${\textup{Cop}}(\mathcal{F}_1;\mathcal{F}_2)$ the set of all labeled copies of $\mathcal{F}_1$ in $\mathcal{F}_2$ and let $$t(\mathcal{F}_1; \mathcal{F}_2):= |{\textup{Cop}}(\mathcal{F}_1;\mathcal{F}_2)|.$$ Recall that a vertex $u$ *dominates* vertex $v$ if $S_{v\to u}(\mathcal{F})=\mathcal{F}$. If either $u$ dominates $v$ or $v$ dominates $u$ in a family $\mathcal{F}$, we call the pair $\{u, v\}$ *stable* in $\mathcal{F}$. If every pair is stable in $\mathcal{F}$, then we call $\mathcal{F}$ a *stable set system*. \[theorem\_increase\] Let $\mathcal{H}$ be a stable set system and let $\mathcal{F}$ be a set system. For every two vertices $u, v$ of $\mathcal{F}$ there holds $$t(\mathcal{H}; S_{u\to v}(\mathcal{F})) \ge t(\mathcal{H}; \mathcal{F}).$$ Let $G$ be an arbitrary graph and let $H$ be a threshold graph. Then $$t(H; S_{u\to v}(G)) \ge t(H; G),$$ for every two vertices $u, v$ of $G$. We define a new shifting operator $\tilde{S}_{u\to v}$ for sets of labeled copies. First, for every $u,v\in V$, and a labeled copy $I:U\to V$, define $I_{u\leftrightarrow v}: U\to V$ by $$I_{u\leftrightarrow v}(w) = \left\{\begin{array}{ll} I(w) & \text{ if } I(w) \ne u, v,\\ v & \text{ if } I(w) = u, \\ u & \text{ if } I(w) = v \end{array}\right.$$ For $\mathcal{I}$ a set of labeled copies, $I\in\mathcal{I}$, we let $$\tilde{S}_{u\to v}(I, \mathcal{I}) = \left\{\begin{array}{ll} I_{u\leftrightarrow v} & \text{if } I_{u\leftrightarrow v}\not\in \mathcal{I} \text{ and } \textup{Im}(I) \cap \{u,v\} = \{u\},\\ I_{u\leftrightarrow v} & \text{if } I_{u\leftrightarrow v}\not\in \mathcal{I}, \{u,v\}\subset \textup{Im}(I), \text{ and } I^{-1}(u) \text{ dominates } I^{-1}(v) \text{ in } \mathcal{H},\\ I & \text{otherwise}.
\end{array}\right.$$ Finally, let $\tilde{S}_{u\to v}(\mathcal{I}):= \{\tilde{S}_{u\to v}(I,\mathcal{I}) : I \in \mathcal{I}\}$. Clearly, $|\tilde{S}_{u\to v}(\mathcal{I})| = |\mathcal{I}|$, and we prove that $$\tilde{S}_{u\to v}({\textup{Cop}}(\mathcal{H};\mathcal{F}))\subseteq {\textup{Cop}}(\mathcal{H};S_{u\to v}(\mathcal{F}))$$ thereby proving that $t(\mathcal{H};S_{u\to v}(\mathcal{F})) \ge t(\mathcal{H}; \mathcal{F})$. As often in shifting, the proof is done by careful case analysis which is omitted. Concluding remarks {#section_concluding} ================== In this paper, we studied the relation between the densities of cliques and independent sets in a graph. We showed that if the density of independent sets of size $r$ is fixed, the maximum density of $s$-cliques is achieved when the graph itself is either a clique on a subset of the vertices, or a complement of a clique. On the other hand, the problem of minimizing the clique density seems much harder and has quite different extremal graphs for various values of $r$ and $s$ (at least when $\alpha=0$, see [@das-et-al; @pikhurko]). Given that $d(\overline{K}_r; G) = \alpha$ for some integer $r \ge 2$ and real $\alpha \in [0,1]$, which graphs minimize $d(K_s; G)$? In particular, when $\alpha=0$ we ask for the least possible density of $s$-cliques in graphs with independence number $r-1$. This is a fifty-year-old question of Erdős, which is still widely open. Das et al [@das-et-al], and independently Pikhurko [@pikhurko], solved this problem for certain values of $r$ and $s$. It would be interesting if one could describe how the extremal graph changes as $\alpha$ goes from $0$ to $1$ in these cases. As mentioned in the introduction, the problem of minimizing $d(K_s; G)$ in graphs with fixed density of $r$-cliques for $r<s$ is also open and so far solved only when $r=2$.\ [**Note added in proof.**]{} After writing this paper, we learned that P. Frankl, M. Kato, G. Katona and N. 
Tokushige [@frankl-kato-katona-tokushige] independently considered the same problem and obtained similar results when $r=s$. [**Acknowledgment.**]{} We would like to thank the anonymous referee for valuable comments and suggestions which improve the presentation of the paper. [99]{} M. H. Albert, M. D. Atkinson, C. C. Handley, D. A. Holton and W. Stromquist, , (2002), \#R5. V. Chvátal and P. Hammer, , (1977), 145–162. S. Das, H. Huang, J. Ma, H. Naves, and B. Sudakov, , (2013), 344–373. P. Erdős, , (1962), 459–464. P. Erdős and A. Stone, , (1946), 1087–1091. P. Frankl, , **123** (1987), 81–110. P. Frankl, M. Kato, G. Katona, and N. Tokushige, , , **103** (2013), 415–427. A. Goodman, , (1959), 778–783. G. Katona, , in Theory of Graphs, Akadémia Kiadó, Budapest (1968), 187–207. P. Keevash, , (2008), 1685–1703. J. Kruskal, , Mathematical Optimization Techniques, Univ. of California Press (1963), 251–278. V. Nikiforov, , (2011), 1599–1618. O. Pikhurko and E. R. Vaughan, , (2013), 910–934. A. Razborov, , , [**17**]{} (2008), 603–618. C. Reiher, , manuscript. A. Thomason, , (1989), 246–255. P. Turán, , , [**48**]{} (1941), 436–452. A. Zykov, , 66 (1949), 163–188. F. Franek and V. Rödl, , (1993), 199–203. [^1]: School of Mathematics, Institute for Advanced Study, Princeton 08540. Email: [huanghao@math.ias.edu]{}. Research supported in part by NSF grant DMS-1128155. [^2]: School of Computer Science and engineering, The Hebrew University of Jerusalem, Jerusalem 91904, Israel. Email: [nati@cs.huji.ac.il]{}. Research supported in part by the Israel Science Foundation and by a USA-Israel BSF grant. [^3]: Department of Mathematics, UCLA, Los Angeles, CA 90095. Email: [hnaves@math.ucla.edu]{}. [^4]: School of Computer Science and engineering, The Hebrew University of Jerusalem, Jerusalem 91904, Israel. Email: [yuvalp@cs.huji.ac.il]{} [^5]: Department of Mathematics, ETH, 8092 Zurich, Switzerland and Department of Mathematics, UCLA, Los Angeles, CA 90095. 
Email: bsudakov@math.ucla.edu. Research supported in part by SNSF grant 200021-149111 and by a USA-Israel BSF grant.
--- abstract: 'The approximation of the eigenvalues and eigenfunctions of an elliptic operator is a key computational task in many areas of applied mathematics and computational physics. An important case, especially in quantum physics, is the computation of the spectrum of a Schrödinger operator with a disordered potential. Unlike plane waves or Bloch waves that arise as Schrödinger eigenfunctions for periodic and other ordered potentials, for many forms of disordered potentials the eigenfunctions remain essentially localized in a very small subset of the initial domain. A celebrated example is Anderson localization, for which, in a continuous version, the potential is a piecewise constant function on a uniform grid whose values are sampled independently from a uniform random distribution. We present here a new method for approximating the eigenvalues and the subregions which support such localized eigenfunctions. This approach is based on the recent theoretical tools of the localization landscape and effective potential. The approach is deterministic in the sense that the approximations are calculated based on the examination of a particular realization of a random potential, and predict quantities that depend sensitively on the particular realization, rather than furnishing statistical or probabilistic results about the spectrum associated to a family of potentials with a certain distribution. These methods, which have only been partially justified theoretically, enable the calculation of the locations and shapes of the approximate supports of the eigenfunctions, the approximate values of many of the eigenvalues, and of the eigenvalue counting function and density of states, all at the cost of solving a single source problem for the same elliptic operator. 
We study the effectiveness and limitations of the approach through extensive computations in one and two dimensions, using a variety of piecewise constant potentials with values sampled from various different correlated or uncorrelated random distributions.' author: - 'Douglas N. Arnold[^1]' - 'Guy David[^2]' - 'Marcel Filoche[^3]' - 'David Jerison[^4]' - Svitlana Mayboroda bibliography: - 'specpred.bib' title: 'Computing spectra without solving eigenvalue problems[^5]' --- localization, spectrum, eigenvalue, eigenfunction, Schrödinger operator 65N25, 81-08, 82B44 Introduction ============ Eigenfunctions of elliptic operators are often widely dispersed throughout the domain. For example, the eigenfunctions of the Laplacian on a rectangle are tensor products of trigonometric functions, while on a disk they vary trigonometrically in the angular variable and as Bessel functions in the radial argument. By contrast, in some situations eigenfunctions of an elliptic operator *localize*, in the sense that they are practically zero in much of the domain (after normalizing the $L^2$ or $L^\infty$ norm, say, to $1$), like the two functions pictured on the right of Figure \[fg:localization\] (which will be explained shortly). Localization may be brought about by different mechanisms including irregular coefficients of the elliptic operator, certain complexities of the geometry of the domain such as thin necks or fractal boundaries, confining potential wells, and disordered potentials. A celebrated example is Anderson localization [@Anderson1958], which refers to localization of the eigenfunctions of the Schrödinger operator on ${\mathbb{R}}^n$ induced by a potential with random values. The eigenfunctions of the Anderson system model the quantum states of an electron in a disordered alloy, and localization can even trigger a transition of the system from metallic to insulating behavior. 
Over the past 60 years, analogous phenomena have been observed in many other fields, and found numerous applications to the design of optical [@Riboli2011], acoustic [@Sapoval1997; @Felix2007], electromagnetic [@Laurent2007; @Sapienza2010], and photonic devices [@Filoche2017; @Li2017]. The localized functions shown on the right of Figure \[fg:localization\] are the first and second eigenfunctions of the Schrödinger operator $H=-\Delta + V$ on the square $[0, 80]\times[0, 80]$ with periodic boundary conditions. The potential $V$ is of Anderson type: it is a piecewise constant potential obtained by dividing the domain into $80^2$ unit subsquares and assigning to each a constant value chosen randomly from the interval $[0, 20]$. Although we will consider numerous disordered potentials generated by random distributions in this paper, we emphasize that our approach is deterministic. We aim to efficiently predict aspects of the spectrum that depend on a particular realization of the potential, rather than to give statistical or probabilistic results about the spectrum associated to a family of potentials with a certain distribution. For example, the location of the eigenfunctions shown in Figure 1—where they achieve their maxima, what is the shape of their effective supports—depends sensitively on the precise configuration of the disordered potential. The determination of characteristics of the spectrum such as these is an example of the issues addressed here. In this paper we shall focus on the Schrödinger operator $H=-\Delta + V$. The domain $\Omega$ will always be either an interval in one dimension or a square in two, the boundary conditions will be periodic, and the potential $V$ will be piecewise constant with respect to a uniform mesh of $\Omega$ having positive values chosen from some probability distribution, either independently or with correlation. Much of the work can be extended, e.g., to more general domains, potentials, PDEs, and boundary conditions. 
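As a concrete illustration of this setup (a minimal sketch of our own, not the discretization used for the computations reported in this paper), the one-dimensional operator $H=-\Delta+V$ on a periodic grid with unit cells can be realized as a symmetric matrix via the standard three-point Laplacian:

```python
import numpy as np

# Illustrative 1D discretization: H = -Laplacian + V on a periodic interval,
# with V piecewise constant and random on unit cells, as a symmetric matrix.

def schrodinger_matrix(V, h=1.0):
    """Periodic finite-difference H = -Delta + V; V holds the cell values."""
    n = len(V)
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = 2.0 / h**2 + V[i]
        H[i, (i - 1) % n] = -1.0 / h**2   # periodic wrap-around
        H[i, (i + 1) % n] = -1.0 / h**2
    return H

rng = np.random.default_rng(0)
V = rng.uniform(0.0, 4.0, size=256)   # iid uniform values, one per unit cell
H = schrodinger_matrix(V)
assert np.allclose(H, H.T)            # H is symmetric
assert np.linalg.eigvalsh(H)[0] > 0   # V >= 0, not identically 0 => H > 0
```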
In particular, we remark that the choice of periodic boundary conditions is for simplicity, and that similar localization occurs with Neumann or Dirichlet boundary conditions. We generally choose unit-sized subsquares for the constant regions of the potential, but this is merely a convenient normalization. For example, instead of considering an $80\times 80$ square broken into unit subsquares as the domain in Figure \[fg:localization\], we could have chosen instead to take the unit square as the domain, with subsquares of side length $1/80$ for the potential. Had we scaled the potential to take values in the range from $0$ to $128,000$, we would have obtained the same localization ($128,000$ being $20\times 80^2$). There is a large literature concerning the localization of eigenfunctions, approaching the phenomenon from various viewpoints: spectral theory, probability, and quantum mechanics. But there is nothing like a complete explanatory and predictive theory which can quantitatively and deterministically answer such basic questions as: - How are the eigenvalues and eigenfunctions determined by a particular potential? - Given a potential, do the eigenfunctions localize, and, if so, how many? - What are the size, shape, and location of the approximate supports and how do the eigenfunctions decay away from them? - What are the associated eigenvalues? The present work aims at providing answers to these questions, based on a theory of localization recently conceived and under active development [@Filoche2012; @Filoche2013; @ADJMF2016; @Lefebvre2016; @ADFJM2017]. In the next section of the paper, we briefly survey some classical tools used to understand localization. Then, in Section \[sec:lfep\], we introduce two more recent tools: the localization landscape function and its associated effective potential, introduced in [@Filoche2012; @ADJMF2016]. These are easily defined. 
The landscape function $u$ is the solution to $Hu=1$ subject to periodic boundary conditions, and the effective potential $W$ is its reciprocal. Our approach is guided by estimates and relations between these objects and the spectrum of the Schrödinger operator from [@Filoche2012], [@ADJMF2016], and the recent theoretical work of [@ADFJM2017]. In Section \[sec:efun\] we show how the structure of wells and barriers of the effective potential can be incorporated into numerical algorithms to predict the locations and approximate supports of localized eigenfunctions. Then, in Section \[sec:eval\], we show how the values of the minima of the effective potential can be used to predict the corresponding eigenvalues and density of states. Throughout, the performance of our algorithms is demonstrated for various types of 1D and 2D random piecewise constant potentials (uniform, Bernoulli, Gaussian, uncorrelated and correlated). Note that the computation of the effective potential involves the solution of a single source problem, and so is far less demanding than the computation of a significant portion of the spectrum by traditional methods. However, a remarkable conclusion of our results is that, for the localized problems studied here, a great deal of information about the spectrum can be extracted easily from the effective potential. Classical confinement {#sec:cc} ===================== A simple and well-understood example of eigenfunction localization for the Schrödinger equation occurs with a classically confining potential. Such a potential is decisively different and simpler than the highly disordered potentials we consider, but we shall draw a connection in the forthcoming discussion, and for that reason we now briefly review localization by confinement. The basic example is the finite square well potential in one dimension, for which the analytic solution is derived in first courses on quantum mechanics [@Messiah1967 vol. 1, p. 78]. 
This problem is posed on the whole real line with the potential $V$ equal to some positive number $\nu$ for $|x|>1$ and vanishing otherwise. The fundamental eigenfunction is then $$\psi(x) = \begin{cases} \cos (\sqrt\lambda x), & |x|\le 1,\\ c_\nu \exp(-\sqrt{\nu-\lambda}|x|), & |x|>1, \end{cases}$$ where the eigenvalue $\lambda$ is uniquely determined as the solution to the equation $\cos \sqrt\lambda = \sqrt{\lambda/\nu}$ in the interval $(0,\pi^2/4)$ and $c_\nu=\sqrt{\lambda/\nu}\,\exp(\sqrt{\nu-\lambda})$. Since $\lambda<\nu$, the solution decays exponentially as $|x|\to\infty$, which captures localization in this context. A similar calculation can be made in higher dimensions, for example for spherical wells [@Messiah1967 vol. 1, p. 359-361], in which case the well height $\nu$ must be sufficiently large to ensure that there exists an eigenvalue smaller than $\nu$. A fundamental result for the localization of eigenfunctions of the Schrödinger equation with a general confining potential is Agmon’s theory [@Agmon1982], [@Helffer1988 § 3.3], [@HislopSigal Ch. 3], which demonstrates a similar exponential decay for a much larger class of potentials. In this case the domain is all of ${\mathbb{R}}^n$ and the Schrödinger operator is understood as an unbounded operator on $L^2$. One requires that the potential be sufficiently regular and bounded below and that there is an eigenfunction $\psi$ with eigenvalue $\lambda$ such that $V>\lambda$ outside a bounded set. In other words, outside a compact potential well where $V$ dips below the energy level $\lambda$, it remains above it (thus creating confinement). Agmon defined an inner product on the tangent vectors at a point $x\in{\mathbb{R}}^n$ by $$\label{agmonmetric} \langle \xi,\eta\rangle_x = \sqrt{[V(x)-\lambda]_+}\,\xi\cdot\eta,$$ where the subscript $+$ denotes the positive part. This defines a Riemannian metric, except that it degenerates to zero at points $x$ where $V(x)\le \lambda$. 
Its geodesics define a (degenerate) distance ${\operatorname{dist}}^V_\lambda(x, y)$ between points $x,y\in{\mathbb{R}}^n$, and, in particular, we may define $\rho(x)={\operatorname{dist}}^V_\lambda(x,0)$ to be the distance from the origin to $x$ computed using the Agmon metric. Agmon’s theorem states, with some mild restrictions on the regularity of $V$, that for any $\epsilon>0$, $$\label{agmon} \int_{{\mathbb{R}}^n} |e^{(1-\epsilon)\rho(x)}\psi|^2\,dx <\infty.$$ This result describes exponential decay of the eigenfunction in an $L^2$ sense in regions where $\rho(x)$ grows, which expresses localization in this context. For the random potentials which we investigate in this paper, the Agmon distance is highly degenerate, and, consequently, an estimate of this kind is not generally useful. Consider, as a clear example, the Bernoulli potential shown in Figure \[fg:bernoulli\], in which the values $0$ and $4$ are assigned randomly and independently to each of the $80{\times}80$ subsquares with probabilities $70\%$ and $30\%$, respectively. As shown in color on the right of Figure \[fg:bernoulli\], the region where the potential is zero has a massive connected component which nearly exhausts it. For any positive $\lambda$, the Agmon distance between any two points in this connected component is zero, and hence such an estimate tells us nothing. Nonetheless, as exemplified by the first two eigenfunctions shown in the figure, the eigenfunctions do localize, a phenomenon for which we must seek a different justification. ![For this Bernoulli potential there are no wells surrounded by thick walls. Nonetheless, the eigenfunctions localize.[]{data-label="fg:bernoulli"}](fig2a.png "fig:"){width="35mm"} ![For this Bernoulli potential there are no wells surrounded by thick walls. Nonetheless, the eigenfunctions localize.[]{data-label="fg:bernoulli"}](fig2b.png "fig:"){width="35mm"} ![For this Bernoulli potential there are no wells surrounded by thick walls.
Nonetheless, the eigenfunctions localize.[]{data-label="fg:bernoulli"}](fig2c.png "fig:"){width="45mm"} ![For this Bernoulli potential there are no wells surrounded by thick walls. Nonetheless, the eigenfunctions localize.[]{data-label="fg:bernoulli"}](fig2d.png "fig:"){width="45mm"} The landscape function and the effective potential {#sec:lfep} ================================================== An important step forward was made in [@Filoche2012] with the introduction of the *landscape function*, which is simply the solution to the PDE $Hu=1$ together with, for us, periodic boundary conditions. Note that, as long as the potential $V$ is a bounded nonnegative function, nonzero on a set of positive measure, $u$ is a strictly positive periodic $C^1$ function (indeed, it belongs to $W^2_p$ for any $p<\infty$). The following estimate, taken from [@Filoche2012], relates the landscape function to the eigenvalues. \[th:landscape-ineq\] If $\psi:\Omega\to{\mathbb{R}}$ is an eigenfunction of $H$ with eigenvalue $\lambda$, then $$\label{le} \psi(x) \le \lambda u(x) \|\psi\|_{L^\infty}, \quad x\in\Omega.$$ If we normalize the eigenfunction $\psi$ so that $\|\psi\|_{L^\infty}=1/\lambda$, then the theorem asserts that $\psi\le u$ pointwise, a fact illustrated in Figure \[fg:le\], which shows the one-dimensional case where the potential has 256 values randomly chosen uniformly iid from $[0, 4]$. The argument was made in [@Filoche2012] that if the landscape function $u$ nearly vanishes on the boundary of a subregion $\Omega_0$ of the domain $\Omega$, then this estimate implies that any eigenfunction $\psi$ must nearly vanish there as well, and so $\psi|_{\Omega_0}$ is nearly a Dirichlet eigenfunction for $\Omega_0$ (with the same eigenvalue $\lambda$). Similarly $\psi$ restricts to a near Dirichlet eigenfunction on the subdomain complementary to $\Omega_0$.
Except for the unlikely case in which these two subdomains share the eigenvalue $\lambda$, this suggests that $\psi$ must nearly vanish in one of them, and so be nearly localized to the other. This viewpoint gives an initial insight into the situation illustrated in Figure \[fg:le\], where it is seen that the eigenfunctions are essentially localized to the subdomains between two consecutive local minima of $u$. However, it must be remarked that, in the case shown in Figure \[fg:le\] and many other typical cases, the landscape function merely dips, but in no sense vanishes, on the boundary of localization regions, and so a new viewpoint is needed in order to satisfactorily explain the localization which is observed. ![The potential on the left gives rise to the landscape function, plotted in blue on the right. The first four eigenfunctions are also plotted, scaled so that their maximum value is the reciprocal of their eigenvalue, illustrating the inequality \[le\].[]{data-label="fg:le"}](fig3a.png "fig:"){height="2in"} ![The potential on the left gives rise to the landscape function, plotted in blue on the right. The first four eigenfunctions are also plotted, scaled so that their maximum value is the reciprocal of their eigenvalue, illustrating the inequality \[le\].[]{data-label="fg:le"}](fig3b.png "fig:"){height="2in"} Such a new viewpoint was developed in the paper [@ADJMF2016], where the emphasis was placed on the *effective potential* $W$, defined as the reciprocal $1/u$ of the landscape function. A key property of $W$, and the explanation for its name, is that it is the potential of an elliptic operator which is conjugate to the Schrödinger operator $H$ and so has the same spectrum. The following essential identity was derived in [@ADJMF2016]. \[th:ep\] Suppose that the potential $V\in L^\infty(\Omega)$ is nonnegative and positive on a set of positive measure. Let $u>0$ be the landscape function and $W=1/u$ the effective potential. 
Define $L:H^1(\Omega)\to H^{-1}(\Omega)$ by $$\label{defL} L\phi = -\frac1{u^2}{\operatorname{div}}(u^2 \operatorname{grad}\phi).$$ Then $$(-\Delta + V)(u\phi) = u\,(L + W)\phi,\quad \phi\in H^1(\Omega).$$ In this result, $H^1$ denotes the periodic Sobolev space on $\Omega$ and $H^{-1}$ its dual. The equation holds in $H^{-1}$, making sense because $u\in C^1$. \[cr:eigfns\] Let $V$, $u$, and $W$ be as in the theorem and $\lambda\in{\mathbb{R}}$. Then $\psi\in H^1$ satisfies $(-\Delta+V)\psi=\lambda\psi$ if and only if $\phi := \psi/u$ satisfies $(L + W)\phi = \lambda\phi$. Thus the eigenvalues of the operator $L+W$ (with periodicity) are the same as those of the original Schrödinger operator, and the eigenfunctions are closely related. However, the corresponding potentials $W$ and $V$ are very different. The effective potential $W$ is often much more regular than the physical potential $V$. More importantly, it has a clear structure of wells and walls. As we shall see, these induce a sort of localization by confinement, which is not evident from the physical potential. For example, for the Bernoulli potential shown in Figure \[fg:bernoulli\], the effective potential is shown in Figure \[fg:berneffpot\]. Note that the effective potential contains many wells: small regions where its value is low, but surrounded, or nearly surrounded, by crestlines where its values are relatively high. If we think of a gradient flow starting at a generic point in the domain and ending at a local minimum, thereby associating the point to one of the local minima of $W$, the crestlines, displayed in green in Figure \[fg:berneffpot\], are the boundaries of the basins of attraction of the local minima. There are several algorithms to compute the precise location of these crestlines. The one used in this case is the watershed transform as described in [@BeareLehmann]. ![The effective potential associated to the Bernoulli potential of Figure \[fg:bernoulli\]. 
It is shown with its crestlines which partition the domain into a few hundred basins of attraction surrounding wells.[]{data-label="fg:berneffpot"}](fig4.png){height="65mm"} In recent work [@ADFJM2017], we have established a rigorous connection between the well and wall structure of $W$ and the exponential decay of eigenfunctions. We define the $W$-distance ${\operatorname{dist}}^W_\lambda$ to be the Agmon distance of Section \[sec:cc\], but with the potential $V$ replaced by the effective potential $W$. Then we show, roughly speaking, that whenever a well of the effective potential exists with sufficient separation of the well depth from the height of the surrounding barriers, then eigenfunctions $\psi$ of the operator $H$ with eigenvalue $\lambda$ are localized to $\{W\le \lambda\}$ in the sense that they decay exponentially in the $W$-distance associated to the eigenvalue. More precisely, in [@ADFJM2017], we prove: Suppose $(\psi,\lambda)$ is an eigenpair of the Schrödinger operator $H=-\Delta + V$. Let $W$ be the effective potential and ${\operatorname{dist}}^W_\lambda(x,y)$ the associated $W$-distance. Let $\delta>0$, $$S = \{\,x\in\Omega\,|\, W(x)\le \lambda+\delta\,\},$$ a sublevel set of $W$, and $h(x)$ the $W$-distance from $x$ to $S$. Then there exists a constant $C$ depending only on $\|V\|_{L^\infty}$ and $\delta$ (but not the domain $\Omega$) such that $$\int_\Omega e^{h(x)}\psi^2\,dx \le C \int_\Omega\psi^2\,dx.$$ This result can be found, stated in considerably more generality and in sharper form, in [@ADFJM2017 Corollary 3.5]. The dependence of the constant is given there explicitly. It grows at most linearly in $\|V\|_{L^\infty}$. The same paper also proves exponential decay for the gradient of the eigenfunctions. On the other hand, these theoretical results do not capture fully the accuracy with which $W$ predicts the behavior of the eigenfunctions. 
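In one dimension the $W$-distance appearing in the theorem reduces to a cumulative integral of $\sqrt{(W-\lambda)_+}$, so the distance $h(x)$ to the sublevel set $S$ can be computed by two sweeps. A small sketch on made-up sample data (the stand-in potential, $\lambda$, and $\delta$ below are illustrative, not those of the figures, and wraparound on the periodic domain is ignored):

```python
import numpy as np

# Agmon-type W-distance in 1D: a path accumulates sqrt((W - lam)_+) dx,
# so the distance h(x) from x to S = {W <= lam + delta} follows from one
# left-to-right and one right-to-left sweep.
x = np.linspace(0.0, 10.0, 1001)
W = 2.0 + np.sin(x) ** 2 + 0.5 * np.cos(3 * x)   # stand-in effective potential
lam, delta = 2.0, 0.1
rho = np.sqrt(np.maximum(W - lam, 0.0))           # local "cost" density
dx = x[1] - x[0]

h = np.where(W <= lam + delta, 0.0, np.inf)       # h = 0 on the sublevel set S
for i in range(1, len(x)):                        # sweep right
    h[i] = min(h[i], h[i - 1] + 0.5 * (rho[i] + rho[i - 1]) * dx)
for i in range(len(x) - 2, -1, -1):               # sweep left
    h[i] = min(h[i], h[i + 1] + 0.5 * (rho[i] + rho[i + 1]) * dx)
# The decay estimate above then bounds int e^{h} psi^2 by C int psi^2.
```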
Our numerical results show that the eigenfunctions typically occupy a single connected component of the set $W\le \lambda$ and decay exponentially across each green crestline of Figure \[fg:berneffpot\], whereas the theorems do not rule out that a resonance occurs resulting in eigenfunctions that have significant mass in several different components of $W\le \lambda$. Eigenfunction prediction {#sec:efun} ======================== We may apply the theory described in the previous section to predict the location and extent of the supports of localized eigenfunctions. For this we proceed in four steps. 1. Compute the landscape function $u$ from the PDE $Hu=1$, and define the effective potential $W=1/u$. 2. To approximate a desired number of localized eigenfunctions, identify the same number of local minima of $W$, selected in order of increasing minimal value. The location of these minima will be our prediction of the place where the eigenfunctions localize, in the sense that the maximum of the localized eigenfunction will occur nearby. 3. To each of the selected local minima we associate an energy level $E$ given by the minimum value of $W$ in the well times a constant greater than $1$. The constant is chosen so that $E$ is close to the fundamental eigenvalue of the well, or perhaps somewhat larger. (We show in the next section how this may be achieved.) 4. From the corresponding sublevel set, consisting of all $x$ for which $W(x) \le E$, we compute the connected component which contains the selected local minimum. This is the region we predict to be occupied by the eigenfunction. Figure \[fg:pred1d\] shows the outcome of applying this approach to the 1D Schrödinger equation with the potential shown in Figure \[fg:le\]. The landscape function was computed using Lagrange cubic finite elements with a uniform mesh of $2,560$ subintervals ($10$ elements per constant piece of the potential). 
The finite element solution was evaluated at $15,360$ equally-spaced points ($6$ per element), with the reciprocals giving the values of the effective potential. The local maxima and local minima of the effective potential were then identified by comparing the value at each point to that of its two immediate neighbors. For comparison, the true eigenvalues and eigenfunctions were computed by using the same finite element discretization and solving the resulting sparse matrix real symmetric generalized eigenvalue problem using a Krylov–Schur solver. The finite element discretizations were implemented using the FEniCS software environment [@Logg2012], calling the SLEPc library [@slepc] for the eigenvalue solves. Note that the locations of the four local minima of the effective potential, indicated by small circles in both plots of Figure \[fg:pred1d\], very nearly coincide with the locations of the maxima of the four corresponding eigenfunctions. Moreover the correspondence respects the ordering of the eigenvalues, in the sense that the $i$th eigenfunction corresponds to the $i$th well for $i=1,2,3,4$. To predict the extent of the localized eigenfunctions, we use as outer boundaries the level curves of the effective potential at an energy level $E$ set to be $1.875$ times the depth of the wells. This is 150% of the value we justify in the next section as an approximation of the eigenvalues. Of course, this choice is somewhat arbitrary, since the effective support of a localized function is not an absolute notion, but must be defined with respect to some tolerance. We could as well have chosen a somewhat larger level to get wider regions incorporating more of the tail of the eigenfunctions, or have chosen a somewhat smaller level to get narrower regions. ![On the left is the effective potential corresponding to the piecewise constant potential with 256 uniformly iid randomly selected values shown on the left of Figure \[fg:le\]. 
The first, second, third, and fourth deepest local minima are marked and labeled. The small yellow circles signify the positions of these minima. We expect the corresponding eigenfunctions to be centered near the location of the minima, with extent related to the surrounding basin of attraction. This prediction is plotted in green, and the actual first four eigenfunctions superimposed over the predictions on the right.[]{data-label="fg:pred1d"}](fig5a.png "fig:"){height="2in"} ![On the left is the effective potential corresponding to the piecewise constant potential with 256 uniformly iid randomly selected values shown on the left of Figure \[fg:le\]. The first, second, third, and fourth deepest local minima are marked and labeled. The small yellow circles signify the positions of these minima. We expect the corresponding eigenfunctions to be centered near the location of the minima, with extent related to the surrounding basin of attraction. This prediction is plotted in green, and the actual first four eigenfunctions superimposed over the predictions on the right.[]{data-label="fg:pred1d"}](fig5b.png "fig:"){height="2in"} Next, we vary this example by increasing the amplitude of the potential by a factor of $64$, so that it takes values between $0$ and $256$, but is otherwise identical to the potential shown on the left of Figure \[fg:le\]. The results analogous to Figure \[fg:pred1d\] for this potential are shown in Figure \[fg:pred1d256\]. Note that, despite the fact that the potentials are proportional in the two cases, the effective potentials look quite different and the eigenfunctions localize in entirely different places. The eigenfunction maxima occur at $17.85$, $41.47$, $90.75$, and $72.92$ for the smaller potential and at $90.57$, $73.42$, $204.5$, and $110.5$ for the second. These locations again are captured very accurately by the minima of the effective potential, and in the correct order. 
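That proportional potentials produce such different effective potentials reflects the fact that the map $V \mapsto W = 1/u$ is nonlinear. A quick numerical check (a sketch on a periodic finite-difference grid, not the finite element computation of the figures):

```python
import numpy as np

# Scaling V by 64 does not scale W by 64: the landscape map is nonlinear,
# so even the locations of the local minima of W can move.
n, h = 256, 1.0 / 256
rng = np.random.default_rng(1)
V = rng.uniform(0.0, 4.0, n)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A[0, -1] = A[-1, 0] = -1.0 / h**2

W_small = 1.0 / np.linalg.solve(A + np.diag(V), np.ones(n))
W_large = 1.0 / np.linalg.solve(A + np.diag(64.0 * V), np.ones(n))
assert not np.allclose(W_large, 64.0 * W_small)   # not simply proportional
```

Indeed, if $u$ solves $(-\Delta + V)u = 1$, then $u/64$ solves $(-\Delta + 64V)(u/64) = 1$ only when $-\Delta u + 64Vu = 64$, which contradicts the original equation for any nontrivial $V$.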
A crucial difference between the two examples is that the eigenfunctions for the problem with the larger potential are much more tightly localized, as predicted by the thinner wells of its effective potential. ![The effective potential for the potential equal to $64$ times that shown on the left of Figure \[fg:le\], with the four deepest local minima and their wells marked and labeled, and then a comparison with the actual first four eigenfunctions.[]{data-label="fg:pred1d256"}](fig6a.png "fig:"){height="2in"} ![The effective potential for the potential equal to $64$ times that shown on the left of Figure \[fg:le\], with the four deepest local minima and their wells marked and labeled, and then a comparison with the actual first four eigenfunctions.[]{data-label="fg:pred1d256"}](fig6b.png "fig:"){height="2in"} In the examples depicted in Figures \[fg:pred1d\] and \[fg:pred1d256\], we looked at the first four eigenvalues and found that the eigenfunctions are very accurately located by the local minima of the effective potential. We now look at what happens for a larger number of local minima. Obviously, at some point the eigenfunction locations cannot be predicted by the local minima, since there are infinitely many eigenfunctions and only finitely many local minima. Figure \[fg:pred1d256-16\], which is similar to the plot on the right-hand side of Figure \[fg:pred1d256\], and, in particular, uses the same potential, shows the locations of the first 16 local minima of $W$ (as yellow dots), plotted over the first 16 eigenfunctions. We see that for all 16, the location of the local minimum of $W$ predicts very accurately the location of the corresponding eigenfunction. 
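The minimum-detection and sublevel-set steps used throughout this section can be sketched in one dimension as follows. The potential here is toy data, not that of the figures; the neighbor-comparison test is the one described earlier, and the level `E` follows the well-depth multiple used above:

```python
import numpy as np

def local_minima(W):
    # A grid point is a local minimum if it lies below both immediate
    # neighbors (the comparison test described in the text).
    return [i for i in range(1, len(W) - 1)
            if W[i] < W[i - 1] and W[i] < W[i + 1]]

def predicted_support(W, i_min, E):
    # Connected component of the sublevel set {W <= E} containing i_min.
    lo = i_min
    while lo > 0 and W[lo - 1] <= E:
        lo -= 1
    hi = i_min
    while hi < len(W) - 1 and W[hi + 1] <= E:
        hi += 1
    return lo, hi

# Toy effective potential with two wells (illustrative only).
x = np.linspace(0.0, 1.0, 501)
W = 2.0 + np.cos(4 * np.pi * x)             # wells near x = 0.25 and x = 0.75
mins = sorted(local_minima(W), key=lambda i: W[i])   # order by well depth
lo, hi = predicted_support(W, mins[0], E=1.5 * W[mins[0]])
```

The interval `x[lo]` to `x[hi]` is then the predicted extent of the eigenfunction associated with the deepest well.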
![For the same potential as Figure \[fg:pred1d256\], the first 16 local minima locations accurately predict the locations of the corresponding eigenfunctions.[]{data-label="fg:pred1d256-16"}](fig7.png){height="2in"} Figure \[fg:eigsvsmax\] and Table \[tb:eigsvsmax\] explore the situation further, comparing the location of the $n$th local minimum of $W$, plotted on the $x$-axis, to that of the maximum of the $n$th eigenfunction, plotted on the $y$-axis, for $n=1,\ldots, 20$. Since these nearly coincide for $n\le 16$, the first 16 points lie very nearly on the line $y=x$. From then on, however, the points deviate from the line because the ordering of the local minima does not perfectly match the ordering of the most closely associated eigenfunctions. Specifically, as can be seen from Table \[tb:eigsvsmax\], the 17th local minimum of $W$ occurs at the location of the 19th eigenfunction, and the 18th occurs at the location of the 20th eigenfunction. ![First 20 local minima on the $x$-axis versus the maximum of the corresponding eigenfunction on the $y$-axis.[]{data-label="fg:eigsvsmax"}](fig8.png){height="3in"}

   $n$   $W$ min   eigfn max   $n$   $W$ min   eigfn max
  ----- --------- ----------- ----- --------- -----------
    1     90.57     90.55      11    47.43      47.43
    2     73.42     73.43      12    44.52      44.50
    3    204.50    204.50      13   180.52     180.52
    4    110.50    110.50      14   158.48     158.48
    5    100.48    100.48      15    41.50      41.50
    6    232.50    232.50      16   253.32     253.33
    7    225.48    225.48      17   177.50      17.43
    8     59.48     59.48      18     2.45      90.95
    9     18.22     18.18      19   146.53     177.50
   10    222.48    222.48      20   163.58       2.47

  : The data plotted in Figure \[fg:eigsvsmax\].[]{data-label="tb:eigsvsmax"}

We now consider the two-dimensional case where the potential is the random $80{\times}80$ Bernoulli potential shown in Figure \[fg:bernoulli\], for which the effective potential is shown in Figure \[fg:berneffpot\]. To compute the landscape function we again used the finite element method with Lagrange cubic finite elements on a uniform mesh. 
The mesh was obtained by dividing each of the unit squares into $10{\times}10$ subsquares, each of which was further divided into two triangles, resulting in 1,280,000 triangles altogether. We then evaluated the solution at a uniform grid of $400{\times}400$ points and found the local minima of the effective potential by comparing each of these values to the values at the eight nearest neighbors (horizontally, vertically, and diagonally). In Figure \[fg:pred2db\] the first plot shows the first four local minima of the effective potential. For each, a corresponding sublevel set of the effective potential is shown. The four minima and sublevel sets are our predictors for the locations of the eigenfunctions. The energy level $E$ of the sublevel sets was taken as $1.56$ times the well depth, just slightly larger than the prediction for the eigenvalue, namely $1.5$ times the well depth, which we propose in the next section. Recall that the choice of $E$ is somewhat arbitrary. This choice gives a good visual match with the apparent support of the eigenfunctions. The second plot in Figure \[fg:pred2db\] is a plot of the sum of four eigenfunctions, each normalized in the $L^{\infty}$ norm. Since they are localized one can easily distinguish the location of each within the sum, which is very close to that predicted. The third plot is a superposition of the first two, to facilitate comparison. ![The first plot shows the prediction for the location of the first four eigenfunctions for the Bernoulli potential of Figure \[fg:bernoulli\]. The second plot shows the actual positions of these eigenfunctions, superimposed. The final plot compares the actual positions to the predictions.[]{data-label="fg:pred2db"}](fig9a.png "fig:"){height="2in"} ![The first plot shows the prediction for the location of the first four eigenfunctions for the Bernoulli potential of Figure \[fg:bernoulli\]. The second plot shows the actual positions of these eigenfunctions, superimposed. 
The final plot compares the actual positions to the predictions.[]{data-label="fg:pred2db"}](fig9b.png "fig:"){height="2in"} ![The first plot shows the prediction for the location of the first four eigenfunctions for the Bernoulli potential of Figure \[fg:bernoulli\]. The second plot shows the actual positions of these eigenfunctions, superimposed. The final plot compares the actual positions to the predictions.[]{data-label="fg:pred2db"}](fig9c.png){height="2.5in"} In the three cases just considered, there is a clear correspondence between the first four eigenfunctions and the four deepest wells of the effective potential, with each of the eigenfunctions, when ordered as usual by increasing eigenvalue, centered at the corresponding local minimum of the effective potential, ordered by depth of the minimum. However, this ideal situation does not always obtain. When two eigenvalues, or two of the minima, are nearly equal, their ordering may not be respected by the correspondence. Another situation which may arise is that the basin surrounding one of the minima may include another. In that case the second minimum does not lead to a separate eigenfunction. Both of these issues arise in the case of the uniformly random potential of Figure \[fg:localization\]. Figure \[fg:pred2du\] shows the first five local minima of the effective potential, and their basins. The numbers provided show the ordering by the depth of the wells. Note that the third and fourth minima nearly coincide in location, and they only contribute one well, even though they are, technically, two distinct minima. The actual minimum values and eigenvalues are given in Table \[tb:vals\]. It reveals that the third and fourth minima are not only close in location, but nearly coincide in value as well. Moreover, the difference between their values and the values of the preceding and following minima is rather small. 
These close values account for the fact that the correspondence between the eigenfunctions and minima clearly visible in Figure \[fg:pred2du\] does not respect the precise ordering. Nonetheless, the structure of the effective potential clearly provides a lot of information on the localization structure of the eigenfunctions. To account for such near coincidences, we could seek to develop an algorithm to identify clusters of minima with nearly equal values and relate them to clusters of nearly equal eigenvalues. However, we shall not pursue this direction here. ![For the potential of Figure \[fg:localization\], the correspondence between eigenfunctions and wells of the effective potential does not respect the ordering of the well minima. Moreover, the third and fourth minima effectively define a single well.[]{data-label="fg:pred2du"}](fig10a.png "fig:"){height="2.5in"} ![For the potential of Figure \[fg:localization\], the correspondence between eigenfunctions and wells of the effective potential does not respect the ordering of the well minima. Moreover, the third and fourth minima effectively define a single well.[]{data-label="fg:pred2du"}](fig10b.png "fig:"){height="2.5in"}

  -------------   --------   --------   --------   --------   --------
  minima:          2.3061     2.4246     2.4763     2.4796     2.5370
  eigenvalue:      3.6112     3.6618     3.7075     3.7717     3.9190
  -------------   --------   --------   --------   --------   --------

  : The values of the effective potential at its first five minima, and the first five eigenvalues for the uniformly random potential of Figure \[fg:localization\]. Cf. Figure \[fg:pred2du\].[]{data-label="tb:vals"}

Thus far we have examined random piecewise constant potentials with values taken independently and identically distributed according to some probability distribution (uniform or Bernoulli). In the final example of this section, we consider a potential for which the values are *correlated* rather than independent. 
To generate values for the potential, we use circulant embedding to convert uncorrelated Gaussian $N(0,1)$ random vector samples to correlated Gaussians [@Kroese2015]. We take a 1-dimensional example, in which the potential is piecewise constant with $n$ unit length pieces, with $n$ even. Define $q_i=q_{n-i}=\sigma \exp(-d i)$, $0\le i\le n/2$, where $d$ is a positive constant, and let $Q$ be the diagonal matrix with entries $q_0, q_1, \dots, q_{n-1}$. A sample vector for the values of $V$ is obtained by squaring the components of the vector $F^{-1}Q Fz$, where $z$ is a vector of length $n$ with components sampled independently from a normalized Gaussian distribution, and $F$ is the discrete Fourier transform on $\mathbb C^n$. This type of random potential is typically created by optical speckles in a Bose-Einstein condensate. See, e.g., [@Modugno], [@Falco]. It is quite challenging to derive rigorous probabilistic results when the correlation is not negligibly small, especially in higher dimensions. The landscape theory, however, continues to apply. We consider an example with $n=1,024$, $\sigma=1.0$, $d=0.01$, shown in Figure \[fg:corr1d\] along with the corresponding effective potential. Note that, although the potential and effective potential look quite different from the previous examples, the effective potential still has clearly defined wells which allow us to apply our theory. In the final plot in Figure \[fg:corr1d\] we use the effective potential as before to predict the location and extent of the first seven eigenfunctions. As in the uncorrelated cases, the well minima very nearly coincide with the peak of the eigenfunctions. The first three minima, in order, correspond to the first three eigenfunctions, but after that the order is not the same. The discrepancy in ordering is not very significant, however, since both the minimum values and the eigenvalues are very close to one another, as indicated in Table \[tb:corr1d\]. 
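In code, the sampling procedure just described amounts to filtering white noise in Fourier space and squaring. A sketch with numpy (the random seed is an arbitrary choice):

```python
import numpy as np

# Correlated sample: square the components of F^{-1} Q F z, where Q has
# the symmetric diagonal q_i = q_{n-i} = sigma * exp(-d * i).
n, sigma, d = 1024, 1.0, 0.01
i = np.arange(n)
q = sigma * np.exp(-d * np.minimum(i, n - i))    # enforces q_i = q_{n-i}
z = np.random.default_rng(2).standard_normal(n)  # iid N(0,1) components
filtered = np.fft.ifft(q * np.fft.fft(z))        # F^{-1} Q F z
V = filtered.real ** 2                           # squared components give V
```

Because $q$ is symmetric and $z$ is real, the spectrum $QFz$ is Hermitian and the filtered vector is real up to roundoff, so taking the real part discards only floating-point noise.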
![A correlated potential and the corresponding effective potential with its seven lowest minima marked. On the top left is the original correlated potential, on the top right the corresponding effective potential. On the bottom are the first seven eigenfunctions with the color giving the order as indicated by the inset. A rectangle is superimposed on each eigenfunction. The heights of the rectangles and eigenfunctions are normalized to unity, while each rectangle’s width indicates the extent of the localized eigenfunction as predicted by the effective potential (using a sublevel set at an energy level equal to $1.875$ times the depth of the wells as was done for Figure \[fg:pred1d\]). The small yellow circles give the position of the local minima and the numbers over them, the order of the local minima. Notice that the order differs from the order of the corresponding eigenfunctions after the first three.[]{data-label="fg:corr1d"}](fig11a.png "fig:"){height="2in"} ![A correlated potential and the corresponding effective potential with its seven lowest minima marked. On the top left is the original correlated potential, on the top right the corresponding effective potential. On the bottom are the first seven eigenfunctions with the color giving the order as indicated by the inset. A rectangle is superimposed on each eigenfunction. The heights of the rectangles and eigenfunctions are normalized to unity, while each rectangle’s width indicates the extent of the localized eigenfunction as predicted by the effective potential (using a sublevel set at an energy level equal to $1.875$ times the depth of the wells as was done for Figure \[fg:pred1d\]). The small yellow circles give the position of the local minima and the numbers over them, the order of the local minima. 
Notice that the order differs from the order of the corresponding eigenfunctions after the first three.[]{data-label="fg:corr1d"}](fig11b.png "fig:"){height="2in"} ![A correlated potential and the corresponding effective potential with its seven lowest minima marked. On the top left is the original correlated potential, on the top right the corresponding effective potential. On the bottom are the first seven eigenfunctions with the color giving the order as indicated by the inset. A rectangle is superimposed on each eigenfunction. The heights of the rectangles and eigenfunctions are normalized to unity, while each rectangle’s width indicates the extent of the localized eigenfunction as predicted by the effective potential (using a sublevel set at an energy level equal to $1.875$ times the depth of the wells as was done for Figure \[fg:pred1d\]). The small yellow circles give the position of the local minima and the numbers over them, the order of the local minima. Notice that the order differs from the order of the corresponding eigenfunctions after the first three.[]{data-label="fg:corr1d"}](fig11c.png){height="2.5in"}

  ------   ----------   ----------   ----------   ----------   ----------   ----------   ----------
  min:      0.012752     0.022966     0.024642     0.025647     0.025673     0.026626     0.028347
  eig:      0.016534     0.030069     0.031620     0.032953     0.033822     0.034479     0.034735
  ------   ----------   ----------   ----------   ----------   ----------   ----------   ----------

  : The values of the effective potential at its first seven minima, and the first seven eigenvalues for the correlated potential of Figure \[fg:corr1d\].[]{data-label="tb:corr1d"}

In this section we have shown how we can deduce the approximate locations and approximate supports of eigenfunctions just by processing the effective potential, without solving eigenvalue problems. We remark that this information could be refined to give an approximation of the precise shape of the eigenfunction. 
To do so, one could solve for the eigenfunction with a standard PDE eigensolver, but with the domain taken as a regular domain just slightly larger than the approximate support, and with Dirichlet boundary conditions. Because of the localization, this computational domain will be much smaller than the original domain and this computation much less expensive than a global eigenvalue solve. The development and study of such algorithms, however, goes beyond the scope of this paper, and is left for future work. Eigenvalue prediction {#sec:eval} ===================== We now turn to the question of predicting eigenvalues of the Schrödinger operator $H$ from the effective potential. As a simple illustration of the utility of the effective potential for eigenvalue estimation, we start by recalling the basic lower bound on the fundamental eigenvalue in terms of the potential $V$, and show how it can be improved by using the effective potential. For an eigenfunction $\psi$ of $H$ with eigenvalue $\lambda$, normalized to have $L^2$ norm $1$, we have $$\label{ekp} \lambda = (H\psi,\psi) = \|\operatorname{grad}\psi\|^2 + (V\psi,\psi)$$ which represents the decomposition into kinetic and potential energy. Dropping the kinetic energy term and replacing $V$ by its infimum gives a lower bound on the eigenvalues: $$\label{vbd} \lambda \ge \inf V.$$ Now we use the fundamental identity of Proposition \[th:ep\] to decompose the eigenvalue in terms of the effective potential: $$(H\psi,\psi) = (u^2 L\phi,\phi) + (W\psi,\psi),$$ where $\phi=\psi/u$. In view of the form \[defL\] of $L$, the first term on the right hand side is nonnegative, so dropping it and replacing $W$ by its infimum gives another lower bound: $$\label{wbd} \lambda \ge \inf W.$$ Figure \[fg:comp\] allows one to compare the two bounds for a random potential with $64$ values chosen uniformly iid in the range $[0,8]$. The fundamental eigenvalue for this realization is $1.58$, indicated on the plot in red. 
The infimum of $V$ is, however, very near zero: $0.00009$, and so the bound is nearly worthless. (In this realization $\inf V$ happens to be particularly small, but the expected value of $1/65=0.015$ is again of little use.) By contrast, $\inf W = 1.22$, which is a useful lower bound. In fact, the fundamental eigenvalue is equal to about $1.3~\inf W$. We shall see below that this factor of roughly $1.25$ or $1.3$ applies for a wide range of random potentials in one dimension. ![A random potential (in shaded gray), the corresponding effective potential (the solid blue line), and the fundamental eigenvalue (the horizontal red line).[]{data-label="fg:comp"}](fig12.png){width="2.5in"} In the remainder of this section, we shall consider two approaches to eigenvalue prediction, one based solely on the local minima values of the effective potential, and the other based on a variant of Weyl’s law utilizing the effective potential. Although we shall explain the thinking behind these approaches, it has to be noted that neither has yet been justified rigorously. Eigenvalues from minima of the effective potential {#ssec:eval1} -------------------------------------------------- In this section we discuss the approximation of the eigenvalues of the Schrödinger operator $H$ using the effective potential $W$. We shall be interested in both the approximation of individual eigenvalues, and in the distribution of the eigenvalues. The latter is captured by the *density of states* (DOS). Defined precisely, the DOS is the distribution on ${\mathbb{R}}$ obtained by summing the delta functions centered at each eigenvalue. For visualization, it is often converted to a piecewise constant function with respect to a partition of the real line into intervals of some length $\epsilon>0$, with the value over any interval being the integral of the DOS over the interval (so a plot of this function is a histogram of the eigenvalues using bins of width $\epsilon$). 
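In computations, the binned DOS is just a histogram of the computed eigenvalues. A minimal periodic finite-difference sketch (illustrative, not the discretization used for the figures):

```python
import numpy as np
from scipy.linalg import eigh

# Binned density of states: a histogram of the eigenvalues of H with
# bins of equal width; its cumulative sum gives the counting function.
n, h = 256, 1.0 / 256
V = np.random.default_rng(3).uniform(0.0, 8.0, n)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A[0, -1] = A[-1, 0] = -1.0 / h**2
eigs = eigh(A + np.diag(V), eigvals_only=True)

edges = np.linspace(0.0, eigs.max(), 201)   # 200 bins of equal width eps
dos, _ = np.histogram(eigs, bins=edges)
idos = np.cumsum(dos)                        # integrated DOS at the bin edges
assert idos[-1] == n                         # all n eigenvalues are counted
```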
The integrated density of states (IDOS) is the integral of the DOS from $-\infty$ to a real value $E$. The resulting function $N:{\mathbb{R}}\to \mathbb N$ is simply the eigenvalue counting function, with $N(E)$ defined as the number of eigenvalues $\le E$. To motivate our first approach to eigenvalue prediction, consider again the potential shown in Figure \[fg:localization\], a piecewise constant function with $80{\times}80$ pieces and constant values chosen randomly and independently from $[0, 20]$. In the first plot of Figure \[fg:ratio\] we have computed the values of the effective potential $W$ at its local minima and compared the first 100 of these, in increasing order, to the first 100 eigenvalues of $H$. Just below we plot the quotients of each of these eigenvalues divided by the corresponding local minimum value of $W$. Observe that the quotient is nearly constant, taking on a value of roughly $1.5$. We shall endeavor to explain this value below, but first we observe that this ratio of roughly $1.5$ between the $m$th eigenvalue and the $m$th minimum value of the effective potential holds over a wide range of random potentials in two dimensions. In the remainder of the first two rows of Figure \[fg:ratio\] we show the same results also for the Bernoulli potential of Figure \[fg:bernoulli\] and the correlated potential shown in Figure \[fg:correlated\]. (The correlated potential was constructed with circulant embedding as discussed above for one dimension, except that we of course used the two-dimensional discrete Fourier transform, and we took as aperture function $\sigma \chi(d |t|)$ with $\sigma=4$ and $d=0.05$, where $\chi$ is the characteristic function of the unit interval, instead of $\sigma\exp(-d|t|)$ as we took previously.) The three pairs of plots in the final two rows of the figure show the case of a uniformly random $40{\times}40$ potential with values taken between $0$ and $4$, $16$, and $64$, respectively. 
The quotient is quite close to being constant with value $1.5$, particularly in the last two cases. In the first case, with the lowest disorder, the ratio drifts away from $1.5$ to about $1.75$ after approximately 50 eigenvalues.

![A comparison of $100$ eigenvalues with the corresponding minimum values of $W$ for six different potentials, with the ratio of the two shown beneath each one. Top two rows: (left) random $80{\times}80$ piecewise potential with values chosen uniformly in $[0, 20]$; (center) Bernoulli potential of Figure \[fg:bernoulli\]; (right) correlated potential of Figure \[fg:correlated\]. Bottom two rows: random $40{\times}40$ piecewise potential with values chosen uniformly in (left) $[0, 4]$, (center) $[0, 16]$, (right) $[0, 64]$. In all these cases the ratio of the eigenvalues to the corresponding minimum values of $W$ remains remarkably close to the value 1.5.[]{data-label="fg:ratio"}](fig13a.png "fig:"){height="1.25in"} ![](fig13b.png "fig:"){height="1.25in"} ![](fig13c.png "fig:"){height="1.25in"} ![](fig13d.png "fig:"){height="1.25in"} ![](fig13e.png "fig:"){height="1.25in"} ![](fig13f.png "fig:"){height="1.25in"} ![](fig13g.png "fig:"){height="1.25in"} ![](fig13h.png "fig:"){height="1.25in"} ![](fig13i.png "fig:"){height="1.25in"} ![](fig13j.png "fig:"){height="1.25in"} ![](fig13k.png "fig:"){height="1.25in"} ![](fig13l.png "fig:"){height="1.25in"}

![A correlated potential with $80{\times}80$ pieces and the corresponding effective potential. The potential takes values ranging from $2.364{\times}10^{-9}$ to $60.56$, while the effective potential’s values range from $0.374$ to $57.17$.[]{data-label="fg:correlated"}](fig14a.png "fig:"){width="58mm"} ![](fig14b.png "fig:"){width="65mm"}

Of course, the minima of the effective potential can only predict a limited number of eigenvalues. Indeed, $W$ has only a finite number of local minima, while there are infinitely many eigenvalues. In Figure \[fg:ratioall\] we revisit the fifth case shown in Figure \[fg:ratio\], a uniformly random $40{\times}40$ potential with values in $[0,16]$. For this realization of the potential, $W$ has exactly 252 local minima. These are plotted alongside the first 300 eigenvalues on the left-hand side of Figure \[fg:ratioall\], and their ratios with the corresponding eigenvalues are plotted on the right. We see that, in this case, the ratio of $1.5$ remains quite accurate for more than 150 of the 252 minima.

![A plot of *all* the minima of $W$ versus the eigenvalues and, on the right, their ratio, for the same potential realization as the fifth case shown in Figure \[fg:ratio\].[]{data-label="fg:ratioall"}](fig15a.png "fig:"){width="60mm"} ![](fig15b.png "fig:"){width="60mm"}

In Table \[tb:ratio\] we display the mean and standard deviation of these ratios computed for the first 10, 50, and 100 eigenvalues for each of the six potentials of Figure \[fg:ratio\]. The main observation is that, across all this range, the ratio stays quite close to $1.5$. 
  ------------------ -------- --------------- --------------- ---------------
  potential           pieces   first 10        first 50        first 100
                               mean     s.d.   mean     s.d.   mean     s.d.
  uniform $[0,20]$    80       1.519   0.024   1.520   0.018   1.524   0.017
  Bernoulli           80       1.585   0.034   1.607   0.041   1.646   0.049
  correlated          80       1.519   0.018   1.532   0.028   1.530   0.022
  uniform $[0,4]$     40       1.479   0.018   1.510   0.034   1.582   0.087
  uniform $[0,16]$    40       1.518   0.041   1.515   0.026   1.516   0.021
  uniform $[0,64]$    40       1.503   0.025   1.511   0.024   1.477   0.043
  ------------------ -------- --------------- --------------- ---------------

  : The mean and standard deviation of the ratio of the first 10, 50, and 100 eigenvalues to the corresponding minimum values of $W$. For a wide range of potentials in two dimensions the ratio is roughly $1.5$ across many eigenvalues.[]{data-label="tb:ratio"}

Table \[tb:ratio1d\] is similar, but shows the results for a variety of potentials in one dimension. Again, we see that the ratio of the eigenvalue to the corresponding minimum value of the effective potential is roughly constant, as it was in two dimensions. However, the constant value we find in one dimension is about $1.25$ or $1.3$, rather than the value of $1.5$ we saw in two dimensions. 
  ------------------ -------- --------------- --------------- ---------------
  potential           pieces   first 10        first 25        first 50
                               mean     s.d.   mean     s.d.   mean     s.d.
  uniform $[0,4]$     256      1.303   0.026   1.321   0.029   1.300   0.067
                      1024     1.302   0.019   1.301   0.020   1.304   0.022
  uniform $[0,16]$    256      1.322   0.023   1.308   0.031   1.240   0.099
                      1024     1.274   0.018   1.296   0.031   1.294   0.027
  Bernoulli           256      1.301   0.033   1.316   0.050   1.296   0.130
                      1024     1.262   0.014   1.272   0.026   1.266   0.073
  correlated          256      1.335   0.055   1.404   0.090   1.310   0.171
                      1024     1.280   0.018   1.286   0.017   1.303   0.042
  ------------------ -------- --------------- --------------- ---------------

  : The mean and standard deviation of the ratio of the first 10, 25, and 50 eigenvalues to the corresponding minimum values of $W$, tabulated here for eight random potentials in one dimension.[]{data-label="tb:ratio1d"}

Finally, in Figure \[fg:scatter\] we plot the 1st, 10th, and 25th eigenvalues versus the corresponding minimum values of $W$ for numerous different realizations of a random potential. The first plot displays 64 realizations of a 1D potential on $[0, 256]$ with $256$ values selected uniformly iid from $[0, 16]$, while the second displays $64$ realizations of a 2D potential on $[0, 40]{\times}[0,40]$ with $1,600$ random values again chosen uniformly iid from $[0, 16]$. We see that in the first case the points line up well along the line $\lambda=1.25 W_{\text{min}}$, and in the second along the line $\lambda=1.5 W_{\text{min}}$.

![The 1st, 10th, and 25th eigenvalues versus the corresponding minimum values of the effective potential, for $64$ independent realizations of a random potential. Left in 1D, where the black line shown is $\lambda=1.25 W_{\text{min}}$; right in 2D, with $\lambda=1.5 W_{\text{min}}$.[]{data-label="fg:scatter"}](fig16a.png "fig:"){width="73mm"} ![](fig16b.png "fig:"){width="73mm"}

From this and other evidence, we conclude that in many cases $$\label{evalap} \lambda\approx (1 + \frac{n}{4})\,W_{\text{min}}.$$ Here $\lambda$ is one of the lower eigenvalues of $H$, for which the corresponding eigenfunction is localized to a subdomain $\Omega_0$, $W_{\text{min}}$ is the minimum value of the effective potential on that subdomain, and $n$ is the number of dimensions (thus far $1$ or $2$). The constant $1+n/4$, i.e., $1.25$ in 1D and $1.5$ in 2D, is a rough approximation, in accord with our observations, which we now further justify heuristically. It is remarkable that this constant, though dimension-dependent, is independent of the specific realization of the potential and of the parameters of its probability distribution, the size of the domain, etc. We now give some heuristic support for the eigenvalue approximation \[evalap\]. Our argument will be rather crude, and we do not claim it fully explains the numerical evidence presented above. Let $\psi$ denote one of the eigenfunctions associated to a smaller eigenvalue. We assume $\psi$ to be localized, i.e., essentially supported in a small subdomain $\Omega_0$, for which it is the fundamental Dirichlet eigenfunction. We also assume that on the subdomain $\Omega_0$ the landscape function $u$ is well approximated by a constant multiple of the fundamental eigenfunction $\psi$ of the subdomain. This is roughly supported by experimental results such as those shown in Figure \[fg:le\]. Another supporting argument comes from the expansion of the constant $1$ on $\Omega_0$ in terms of the Dirichlet eigenfunctions of the domain, retaining only the first term $c_0\psi$ and dropping the terms coming from the eigenfunctions which change sign. Then $u\approx (c_0/\lambda)\psi$, indeed a multiple of $\psi$. 
We may use these two assumptions, together with the definition $Hu=1$ of the landscape function, to approximate the Rayleigh quotient: $$\lambda= \frac{\int_{\Omega} \psi H\psi\,dx}{\int_{\Omega}\psi^2\,dx} \approx \frac{\int_{\Omega_0} \psi H\psi\,dx}{\int_{\Omega_0}\psi^2\,dx} \approx \frac{\int_{\Omega_0} u Hu\,dx}{\int_{\Omega_0}u^2\,dx} = \frac{\int_{\Omega_0} u\,dx}{\int_{\Omega_0}u^2\,dx}.$$ Next we assume that on $\Omega_0$ the landscape function $u$ (or $\psi$, which we are supposing is a constant multiple of $u$ there) can be approximated by the simplest sort of positive bump-like function, the positive part of a concave quadratic function. After rotating and translating the coordinate system, this means that $$u\approx u_{\text{max}}[1-\sum(x_i/a_i)^2 ] \text{ on $\Omega_0\approx\{\,x\in{\mathbb{R}}^n\,|\, \sum(x_i/a_i)^2 \le 1\,\}$},$$ for some positive constants $a_i$. Thus our approximation to the eigenvalue is $$\lambda\approx \frac{\int_{\Omega_0} u\,dx}{\int_{\Omega_0} u^2\,dx} = \frac c{u_{\text{max}}},\quad c = \frac{\int_{\Omega_0} [1-\sum(x_i/a_i)^2 ]\,dx} {\int_{\Omega_0}[1-\sum(x_i/a_i)^2 ]^2\,dx}.$$ Finally, we compute $c$ using the change of variables $\hat x_i = x_i/a_i$ to convert the integrals in the numerator and denominator into integrals over the unit ball $B$, which can be computed with polar coordinates. This gives $c= 1+n/4$, independent of the values of the $a_i$ and so of the size and shape of the ellipsoid. Since $1/u_{\text{max}}= W_{\text{min}}$, this indeed gives the approximation \[evalap\]. The results we have shown in Figure \[fg:ratio\] and Tables \[tb:ratio\] and \[tb:ratio1d\] demonstrate that the approximation \[evalap\] can be used to estimate 100 eigenvalues with errors of a few percent. It is, moreover, very cheap to apply: the cost is that of solving a single source problem and extracting some maxima, much less than the cost of computing many eigenvalues. 
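The constant $c = 1+n/4$ obtained above can be verified in exact arithmetic: after the change of variables the angular factor cancels in the quotient, leaving only the elementary radial integrals $\int_0^1 r^{n-1}(1-r^2)\,dr$ and $\int_0^1 r^{n-1}(1-r^2)^2\,dr$. A short check (the function name is ours):

```python
from fractions import Fraction

def c_constant(n):
    """c = integral of (1 - r^2) over the unit ball B in n dimensions,
    divided by the integral of (1 - r^2)^2 over B.

    In polar coordinates dV is proportional to r^{n-1} dr, and the common
    angular factor cancels in the quotient, leaving elementary radial
    integrals evaluated term by term.
    """
    num = Fraction(1, n) - Fraction(1, n + 2)                       # r^{n-1}(1-r^2)
    den = Fraction(1, n) - Fraction(2, n + 2) + Fraction(1, n + 4)  # r^{n-1}(1-r^2)^2
    return num / den

# c_constant(n) equals 1 + n/4 exactly, in every dimension n
assert all(c_constant(n) == 1 + Fraction(n, 4) for n in range(1, 20))
```

In particular `c_constant(1)` is $5/4$ and `c_constant(2)` is $3/2$, matching the observed ratios of roughly $1.25$ in 1D and $1.5$ in 2D.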
As another example of the utility of \[evalap\], we now use it to approximate the density of states in the interval $[0,1]$ of the 1D Schrödinger operator for which the piecewise constant potential has $2^{19}=524,288$ pieces. (Specifically, we compute on the interval $[0, 2^{19}]$ and assign the random values to unit subintervals uniformly iid in $[0,4]$.) We display the DOS as a histogram with 100 bins. The top left plot in Figure \[fg:dos\] shows the actual density of states, requiring the computation of all 7,122 of the eigenvalues which belong to the interval $[0,1]$. The finite element mesh we use to approximate the Schrödinger operator in this case has $10{\times}2^{19}$ elements, and we use piecewise cubic finite elements, so that the problem has about 15.7 million degrees of freedom. The calculation of over 7,000 eigenvalues is thus a very large computation. It required about 40 CPU hours on a workstation with an Intel Core i7-4930K processor, using spectral slicing and the Krylov-Schur method of SLEPc. However, an accurate approximation of the density of states can be obtained quickly using the effective potential without resorting to the computation of any eigenvalues. This approximation is shown in the top right plot of Figure \[fg:dos\]. It is a histogram of all those values of $1.25 W_{\text{min}}$ which belong to the same interval $[0, 1]$ (8,800 in all). The computation of these values is much less demanding. It required slightly over 5 minutes of CPU time on the same workstation, i.e., was about 480 times faster. The two histograms are compared in the bottom plot. ![Density of states on $[0,1]$ for the case of a uniformly random potential with $2^{19}$ pieces, displayed with 100 bins. On top left, histogram of the actual eigenvalues. On top right, histogram of $1.25$ times the local minima of $W$. 
The bottom highlights the differences between the two.[]{data-label="fg:dos"}](fig17a.png "fig:"){height="2.in"} ![](fig17b.png "fig:"){height="2.in"} ![](fig17c.png){height="2.in"} Eigenvalues from a variant of Weyl’s law {#ssec:eval2} ---------------------------------------- The approximation \[evalap\] can be used to predict, at most, one eigenvalue per local minimum of $W$. Intuitively, it approximates a localized eigenfunction by the fundamental mode of the well around the local minimum. Now we present an alternative approach, which again relies on the effective potential $W$, but which gives some sort of prediction for all the eigenvalues. For large eigenvalues, it provides information similar to Weyl’s law. Recall that Weyl’s law for the Schrödinger equation is an asymptotic formula for the eigenvalue counting function: $$\label{weyl} N(E)\sim N_V(E):= (2\pi)^{-n}\operatorname{vol}\{\,(x,\zeta)\in\Omega{\times}{\mathbb{R}}^n\,|\, V(x) + |\zeta|^2\le E\,\}\text{ as $E\to\infty$},$$ where the volume term is the $2n$-dimensional measure of the indicated subset of the phase space $\Omega{\times}{\mathbb{R}}^n$. Assuming smoothness and growth conditions on the potential, Weyl’s law holds asymptotically as $E$ tends to $+\infty$, and so for a large number of eigenvalues [@zworski Theorem 6.8]. 
Inverting the counting function, we can thus view Weyl’s law as furnishing an approximation of the $m$th eigenvalue, which is asymptotically valid for $m$ large. Weyl’s law is generally not expected to be accurate for a small number of eigenvalues. However, experimentally we have found that, for the sorts of random potentials considered in this paper, a variant of Weyl’s law invoking the effective potential $W$ gives very good results right down to the first few eigenvalues, while remaining asymptotically correct. The variant, which we shall refer to as the *effective Weyl’s law*, is obtained by simply replacing the potential $V$ in by the effective potential $W$: $$N_W(E) = (2\pi)^{-n}\operatorname{vol}\{\,(x,\zeta)\in\Omega{\times}{\mathbb{R}}^n\,|\, W(x) + |\zeta|^2\le E\,\}.$$ Figure \[fg:weyl1d\] compares the true eigenvalue counting function $N$ (shown in black), Weyl’s law (green), and the effective Weyl’s law (red), for four different types of potential. The first is uniformly random iid with 512 pieces and values in $[0,1]$. The second is a Bernoulli potential, where the 512 random values are either $0$ or $1$, each with probability $1/2$. The third is a correlated Gaussian squared potential like that of Figure \[fg:corr1d\]. The fourth is quite different: the 512 Boolean values $0$ and $1$ are assigned alternately. For this potential, there is no localization. Nonetheless, we see that Weyl’s law becomes a good approximation only after 100 or so eigenvalues and, in every case, it incorrectly predicts many eigenvalues in the interval from $0$ to the least eigenvalue. By contrast, the effective Weyl’s law provides a very good approximation of the counting function for many eigenvalues, starting from the first. It is revealing to compare the two potentials which take on only Boolean values (the second and the fourth). 
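For the piecewise-constant potentials considered here, the phase-space volume in these counting functions reduces to an elementary sum over the pieces: in one dimension, for each $x$ the momentum variable $\zeta$ ranges over an interval of length $2\sqrt{(E-V(x))_+}$. A minimal sketch (the helper name is ours) which yields $N_V$ when fed the piece values of $V$ and $N_W$ when fed those of $W$:

```python
import numpy as np

def weyl_count_1d(E, values, piece_length=1.0):
    """Weyl-law counting function for a 1D piecewise-constant potential.

    N(E) = (2*pi)^{-1} * vol{(x, zeta) : V(x) + zeta^2 <= E}; on each
    piece the zeta-interval has length 2*sqrt(max(E - V_i, 0)), so the
    phase-space volume is a sum of elementary contributions.
    """
    v = np.asarray(values, dtype=float)
    return piece_length * np.sum(np.sqrt(np.clip(E - v, 0.0, None))) / np.pi
```

For the zero potential on an interval of length $L$ this reproduces the familiar $N(E)\approx L\sqrt{E}/\pi$, consistent with the Dirichlet eigenvalues $(k\pi/L)^2$.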
Because the classical Weyl’s law is unaffected by rearrangement of the potential, it gives the same prediction for the counting function in both cases. But the actual counting functions differ very significantly, a fact which is well captured by the effective Weyl’s law. (Similar results were published in [@ADJMF2016].) ![The eigenvalue counting function $N$, the Weyl’s law approximation $N_V$, and the effective Weyl’s law approximation $N_W$ for some potentials in one dimension. Top row: uniform and Bernoulli random potentials. Bottom row: correlated and periodic Boolean potentials.[]{data-label="fg:weyl1d"}](fig18a.png "fig:"){width="2.5in"} ![](fig18b.png "fig:"){width="2.5in"} ![](fig18c.png "fig:"){width="2.5in"} ![](fig18d.png "fig:"){width="2.5in"} Finally, in Figure \[fg:weyl2d\] we show similar results for a single 2D potential, namely the uniformly random $80{\times}80$ potential of Figure \[fg:localization\]. The first plot shows the first 100 eigenvalues, while the second zooms in on the first 10. 
The predictive power of the effective Weyl’s law does not seem to be as great as in one dimension, but again it displays a great improvement over the classical Weyl’s law for small eigenvalues. ![The eigenvalue counting function $N$, the Weyl’s law approximation $N_V$, and the effective Weyl’s law approximation $N_W$ for the 2D potential of Figure \[fg:localization\], showing the first 100 eigenvalues on the left and restricting to the first 10 on the right.[]{data-label="fg:weyl2d"}](fig19a.png "fig:"){width="2.5in"} ![](fig19b.png "fig:"){width="2.5in"} Conclusion {#sec:conc} ========== We have demonstrated numerically that the effective potential, defined as the reciprocal of the localization landscape function, accurately captures a great deal of information about the localization properties of a random potential, and we have shown how to employ it to predict eigenvalues and eigenfunctions. These predictions are obtained by solving a single PDE source problem, without the direct solution of any eigenvalue problems, and so at a very low computational price. The wells of the effective potential reveal the main localization subdomains, and the values of its minima are found to be very good predictors of the corresponding fundamental eigenvalues. We have tested this approach on piecewise constant potentials with several types of random distributions, uniformly random and Bernoulli, and with a certain correlated distribution, in both one and two dimensions. We have further used the effective potential to predict the density of states, obtaining good precision even for small eigenvalues, something which is not attained by the classical Weyl’s law asymptotics. 
In highly demanding computations where the Schrödinger equation has to be solved for a large number of eigenfunctions and eigenvalues (as, for instance, in semiconductor physics), the resulting computational efficiency now makes it possible to reproduce numerically, to analyze, and to understand the behavior of quantum disordered materials. [^1]: School of Mathematics, University of Minnesota, Minneapolis, MN (, ). Arnold was supported by NSF grant DMS-1719694, Mayboroda by NSF INSPIRE grant DMS-1344235 and a Simons Foundation Fellowship. [^2]: Univ Paris-Sud, Laboratoire de Mathématiques, CNRS, UMR 8658 Orsay, France (). Supported by an ANR grant, programme blanc GEOMETRYA, ANR-12-BS01-0014. [^3]: Physique de la Matière Condensée, Ecole Polytechnique, CNRS, Palaiseau, France (). [^4]: Mathematics Department, Massachusetts Institute of Technology, Cambridge, MA (). Supported by NSF grant DMS-1500771 and a Simons Foundation Fellowship. [^5]: Submitted to the editors November 12, 2017. This work was supported by grants to each of the authors from the Simons Foundation (601937, DNA; 601941, GD; 601944, MF; 601948, DJ; 563916, SM)
--- --- [**Social inhibition maintains adaptivity and consensus of foraging honeybee swarms in dynamic environments**]{}\ Subekshya Bidari^1^, Orit Peleg^2^, and Zachary P Kilpatrick^1,\*^\ **1** Department of Applied Mathematics, University of Colorado, Boulder CO, USA\ **2** Department of Computer Science and BioFrontiers Institute, University of Colorado, Boulder, CO, USA\ **\*** zpkilpat@colorado.edu Abstract {#abstract .unnumbered} ======== To effectively forage in natural environments, organisms must adapt to changes in the quality and yield of food sources across multiple timescales. Individuals foraging in groups act based on both their private observations and the opinions of their neighbors. How do these information sources interact in changing environments? We address this problem in the context of honeybee swarms, showing that inhibitory social interactions help maintain the adaptivity and consensus needed for effective foraging. We study how the individual and social interactions of a mathematical swarm model shape the nutrition yield of a group foraging from feeders with temporally switching food quality. Social interactions improve foraging from a single feeder if temporal switching is fast or feeder quality is low. When the swarm chooses from multiple feeders, the most effective form of social interaction is direct switching, whereby bees flip the opinion of nestmates foraging at lower-yielding feeders. Model linearization shows that effective social interactions increase the fraction of the swarm at the correct feeder (consensus) and the rate at which bees reach that feeder (adaptivity). 
Our mathematical framework allows us to compare a suite of social inhibition mechanisms, suggesting experimental protocols for revealing effective swarm foraging strategies in dynamic environments.\ [**Keywords:**]{} collective decision-making, foraging, optimality, social insects, dynamic environments Introduction ============ Social insects forage in groups, scouting food sources and sharing information with their neighbors [@sumpter03; @visscher07; @holldobler08]. The emergent global perspective of animal collectives helps them adapt to dynamic and competitive environments in which food sources’ quality and location can vary [@ward08]. Importantly, decisions made by groups involve nonlinear interactions between individuals, temporally integrating information received from neighbors [@ame06]. For example, honeybees [*waggle dance*]{}[^1] to inform nestmates of profitable nectar sources [@seeley00; @seeley10], and use [*stop signaling*]{} [^2] to dissuade them from perilous food sources [@nieh10] or less suitable nest sites [@seeley12]. While waggle dancing rouses bees from indecision, stop signaling prevents decision deadlock and builds consensus when two choices are of similar quality [@pais13]. Thus, both positive and negative feedback interactions within the group regulate swarm decisions and foraging [@cinquin02; @garnier07]. Honeybee colonies live in dynamic environments, in which the best adjacent nest or foraging sites can vary across time [@real88; @fewell99]. Bees adapt to change by abandoning less profitable nectar sources for those with higher yields [@seeley91], and by modifying the number of foragers [@benshahar02; @tenczar14]. Prior studies focused on how waggle dance recruitment or the heterogeneity of individual bee roles shape swarm adaptivity [@dornhaus04; @granovskiy12]. 
Inhibitory social interactions, whereby bees stop each other from foraging, have been mostly overlooked as a swarm communication mechanism for facilitating adaptation to change [@kietzman15; @gray18]. Bayesian principles and experiments suggest individuals discount prior evidence at a timescale matched to the change rate of their environment [@mcnamara06; @glaze15]. However, the mechanics of evidence discounting by collectives is not well understood. Here we propose that inhibitory social interactions play an important role in adapting collective beliefs of swarms foraging in a fluid world. To study how social inhibition shapes foraging yields, we focus on a task in which the nectar quality of feeders is switched periodically. In prior studies [@seeley91; @granovskiy12], swarm foraging targets shifted in response to food quality switches, suggesting bee collectives can detect such changes. Granovskiy et al. (2012) emphasized that uncommitted inspector bees can lead bees away from the previously dominant foraging site [@granovskiy12]. They also found recruitment via waggle dancing is unimportant for effective foraging in changing environments (See also [@price19]). Here we also find recruitment can be detrimental, but negative feedback interactions can rapidly pull bees from low to high yielding feeders. This, paired with ‘abandonment’, whereby bees spontaneously stop foraging, facilitates swarm-wide temporal discounting of prior evidence. In contrast, strong positive feedback via recruitment causes bees to congregate at feeders even after food quality has dropped, biasing a swarm’s behavior based on past states of the world. We quantify the contribution of these positive and negative feedback interactions within a mathematical swarm model.
Our study focuses on four potential inhibitory social interactions – discriminate and indiscriminate stop-signaling [@nieh10; @seeley12], direct switching [@britton02; @marshall09], and self-inhibition – by which foraging bees alter the behavior of other foraging bees. Strategies are compared by measuring the rate of foraging yield over the timescale of feeder quality switches. When bees have a single feeder, social interactions are less important, but in the case of two feeders the performance of different forms of social interactions is clearly delineated. Direct switching, by which a bee converts another forager to their own preference, is the most effective means for a swarm to adapt to food site quality changes. Also, foraging yields are most sensitive to swarm interaction tuning in rapidly changing environments with lower food quality. Model linearizations allow us to calculate a correspondence between social interaction parameters and the [*consensus*]{} (steady state fraction of bees at the high yielding site) and [*adaptivity*]{} (the rate of switching from low to high yielding sites). This provides a clear means of determining the impact of social interactions on a swarm’s foraging efficacy. Results ======= The mathematical swarm model assumes individual bees may be uncommitted or committed to one of the possible feeders [@marshall09]. Uncommitted bees spontaneously commit by observing a feeder or by being recruited by another currently foraging bee. Committed bees may spontaneously abandon their chosen feeder, or may be influenced to stop foraging or switch their foraging target based on inhibitory social interactions we describe [@marshall09; @seeley12]. A population level model emerges in the limit of large swarms. Stochastic effects of the finite system do not qualitatively change our results in most cases (See Appendix \[stochsys\]).
We mostly focus on two feeder ($A$ and $B$) systems, in which the fraction of the swarm committed to either site is described by a pair of nonlinear differential equations in the limit of large swarms (See Fig. \[fig1\]a for a schematic): \[2siteswarm\] $$\begin{aligned} \dot{u}_A &= (1-u_A - u_B)( \alpha_A(t) + \beta u_A) - \gamma u_A - \mathcal{S}(u_A, u_B), \\ \dot{u}_B &= (1-u_A - u_B)( \alpha_B(t) + \beta u_B) - \gamma u_B - \mathcal{S}(u_B, u_A), \end{aligned}$$ where $\alpha_{A,B}(t)$ are time-dependent food qualities at sites $A,B$ (See Fig. \[fig1\]b for examples); $\beta$ min$^{-1}$ is the rate bees recruit nestmates to their feeder via waggle dancing; $\gamma$ min$^{-1}$ is the rate bees spontaneously abandon a feeder[^3]; and $\mathcal{S}(x,y)$ is a nonlinear function describing inhibitory social interactions (e.g., stop-signaling, direct switching as described in Appendix \[socialin\]). Since the swarm commitment fractions are bounded within the simplex $0 \leq u_{A,B} \leq 1$ and $0 \leq u_A + u_B \leq 1$, the commitment ($\alpha_{A,B}$) and recruitment ($\beta$) provide positive feedback and the abandonment ($\gamma$) and inhibition ($\mathcal{S}$) provide negative feedback. Foraging efficacy is quantified by the reward rate (RR) of the swarm, assuming net nutrition is proportional to the fraction of the swarm at a feeder, $u_X$, times the current quality of that feeder minus the foraging cost $c$ (e.g., the energy required to forage and/or the predator risk), $\alpha_X(t) - c$. Integrating this product and scaling by time yields the effective RR: $$\begin{aligned} J(\alpha_{A,B}(t), \beta, \gamma, \mathcal{S}) = \frac{1}{T_f} \int_0^{T_f} \left[ u_A(t) \cdot (\alpha_A(t) - c) + u_B(t) \cdot (\alpha_B(t) - c) \right] {\rm d}t.
\label{rr2}\end{aligned}$$ Given a food quality switching schedule $\alpha_{A,B}(t)$ and total foraging time $T_f$, swarms with more efficient foraging strategies $(\beta, \gamma, \mathcal{S})$ have higher RRs $J$. Before studying how social inhibition shapes swarm foraging in two feeder environments, we analyze the single feeder model, finding that commitment and negative feedback from either abandonment or inhibition are usually sufficient for the swarm to rapidly adapt to feeder quality switches. Shaping swarm adaptivity and consensus for single feeders --------------------------------------------------------- Inhibitory social interactions in a single feeder model can only take the form of [*self-inhibition*]{}, by which a foraging bee stops another based on a detected change in food quality (Fig. \[fig2\]a). Since transit from the hive to the feeder takes time, we incorporate a delay of $\tau$ minutes, so the fraction of foraging bees $u$ evolves as: $$\begin{aligned} \label{singledyn} \dot{u} &= (1-u) ( \alpha (t) + \beta u) - \gamma u - \rho(\bar{\alpha} - \alpha(t-\tau))u^2, \end{aligned}$$ where $\alpha (t)$ is the food quality schedule of the feeder that switches at time intervals $T$ (minutes) between $\alpha(t) =0$ and $\alpha(t) = \bar{\alpha}$ [@seeley91; @granovskiy12] (Fig. \[fig2\]b), $\beta$ min$^{-1}$ and $\gamma$ min$^{-1}$ are the recruitment and abandonment rates, and $\rho$ min$^{-1}$ is the rate of self-inhibition. Swarm adaptivity and consensus are shaped by both individual behavior changes (commitment $\alpha (t)$ and abandonment $\gamma$) and interactions (recruitment $\beta$ and inhibition $\rho$) [@seeley12]. Periodic solutions to Eq. (\[singledyn\]) can be found explicitly, allowing us to compute a swarm’s reward rate (RR) (See Appendix \[evolnmean\]). Adaptive swarms rapidly return to the hive when no food is available and quickly populate the feeder when there is food (Fig. \[fig2\]c,d). Eq.
(\[singledyn\]) admits one stable equilibrium in each time interval: When no food is available ($\alpha(t) =0$) the nonforaging ($\bar{u}=0$) equilibrium is stable as long as recruitment is not stronger than abandonment ($\beta < \gamma$). When food becomes available ($\alpha (t) = \bar{\alpha} >0$) the stable fraction of foragers $\bar{u}$ increases with food quality (See Fig. \[fig2\]c and Appendix \[equilanaz\]). This fraction $\bar{u}$ corresponds to the [*consensus*]{} of the swarm [@conradt05], and the rate $\lambda$ at which it is approached we deem the swarm’s [*adaptivity*]{}. Robust foraging should adapt to the environmental conditions ------------------------------------------------------------ The performance of swarm interaction strategies strongly depends on the feeder quality $\bar{\alpha}$ and switching time $T$. Swarms with stronger rates of abandonment $\gamma$ and self-inhibition $\rho$ more quickly leave the feeder once there is no food ($\alpha(t): \bar{\alpha} \mapsto 0$), but have limited consensus $\bar{u}$ when food becomes available ($\alpha(t): 0 \mapsto \bar{\alpha}$). Increasing the recruitment rate $\beta$, on the other hand, boosts consensus but can slow the rate at which the swarm abandons an empty feeder (Fig. \[fig2\]d). To quantify the effect of abandonment $\gamma$, recruitment $\beta$, and self-inhibition $\rho$, we compute the long term RR of the swarm, measuring the foraging yield over a single period ($2T$ minutes) once the swarm equilibrates to its periodic switching behavior (See Appendix \[evolnmean\]): $$\begin{aligned} J(\gamma, \beta, \rho) = \frac{1}{2T} \int_0^{2T} (\alpha(t) - c) u(t) {\rm d}t. \label{rr1}\end{aligned}$$ where $0<c < \bar{\alpha}$ is the cost of foraging and $\alpha(t)\in \{ 0, \bar{\alpha}\}$ is the quality of the foraging site.
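As a concrete illustration, the single-feeder dynamics and reward rate above can be integrated numerically. The sketch below (all parameter values are illustrative, not fitted to experiments) integrates Eq. (\[singledyn\]) with a forward-Euler scheme and a delay buffer, then evaluates Eq. (\[rr1\]) over the final period:

```python
import numpy as np

# Forward-Euler sketch of the single-feeder model with a transit delay tau:
#   du/dt = (1-u)(alpha(t) + beta*u) - gamma*u - rho*(abar - alpha(t-tau))*u^2
# All parameter values below are illustrative, not fitted to experiments.
abar, beta, gamma, rho = 1.0, 0.1, 0.2, 0.5  # food quality and rates (min^-1)
tau, T, c = 1.0, 100.0, 0.1                  # delay, half-period (min), cost
dt, n_periods = 0.01, 6

def alpha(t):
    """Periodic schedule: food on (abar) for t in [0, T), off for [T, 2T), ..."""
    return abar if (t // T) % 2 == 0 else 0.0

n_steps = int(2 * T * n_periods / dt)
lag = int(tau / dt)
u = np.zeros(n_steps + 1)
for i in range(n_steps):
    t = i * dt
    a_delayed = alpha(t - tau) if i >= lag else alpha(0.0)
    du = (1 - u[i]) * (alpha(t) + beta * u[i]) - gamma * u[i] \
         - rho * (abar - a_delayed) * u[i] ** 2
    u[i + 1] = min(max(u[i] + dt * du, 0.0), 1.0)

# Reward rate over the final period, once transients have decayed
i0 = n_steps - int(2 * T / dt)
J = np.mean([(alpha(i * dt) - c) * u[i] for i in range(i0, n_steps)])
print("long-term reward rate J =", round(J, 3))
```

Sweeping $\gamma$ and $\rho$ in such a script is one direct way to probe how abandonment and self-inhibition trade off consensus against adaptivity.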
For each foraging site quality level, $\bar{\alpha}$, there is an optimal foraging strategy (abandonment $\gamma$, recruitment $\beta$ and self-inhibition $\rho$) within our set of possible strategies (See Appendix \[optimone\]) that maximizes the RR $J(\gamma, \beta, \rho)$ (Fig. \[fig2\]e). Here, private information is sufficient for individual bees to commit to foraging (quality sensing $\alpha (t)$), and recruitment does not benefit the swarm ($\beta = 0$). Reinforcing the majority opinion via recruitment is detrimental once the environment changes, as opposed to static environments [@camazine99; @franks02; @seeley12]. In rapid (small $T$) or low food quality ($\bar{\alpha}$ low) environments, stronger inhibition (large $\rho$) is needed to swap swarm commitment when the environment changes (white region, Fig. \[fig2\]e). This nonlinear mechanism increases the adaptivity of the swarm, but tempers the initial stage of consensus after the feeder is switched on (See Appendix \[equilanaz\] for details). On the other hand, when food is plentiful (high $\bar{\alpha}$) (brown regions, Fig. \[fig2\]e), inhibition should be weak (small $\rho$). In intermediate environments, the best strategies interpolate these extremes. Linearizing solutions to the model Eq. (\[singledyn\]) provides us with a closer look at how swarm dynamics impact foraging yields. In sufficiently slow environments (large $T$) with small delays ($\tau \to 0$), we can linearly approximate the evolving foraging fraction (See Appendix \[linearappone\]): $$\begin{aligned} u(t) \approx \left\{ \begin{array}{cc} \bar{u}(1 - {\rm e}^{-\lambda_{\rm on} t}), & t \in [0,T), \\ \bar{u} {\rm e}^{- \lambda_{\rm off} (t-T)}, & t \in [T, 2T), \end{array} \right. \label{uoneprox}\end{aligned}$$ where $\bar{u}$ is the consensus foraging fraction and $\lambda_{{\rm on}/{\rm off}}$ are the rates the swarm arrives/departs the feeder once food is switched on/off. Plugging Eq. (\[uoneprox\]) into Eq.
(\[rr1\]), we estimate the RR: $$\begin{aligned} J \approx \frac{\bar{u}}{2} \left[ (\bar{\alpha} - c) \left( 1 - \frac{1- {\rm e}^{- \lambda_{\rm on} T}}{\lambda_{\rm on} T} \right) - c \frac{1 - {\rm e}^{- \lambda_{\rm off} T}}{\lambda_{\rm off} T} \right]. \label{linJone}\end{aligned}$$ It can be shown that ${\partial}_{\lambda} J > 0$ for $\lambda= \lambda_{{\rm on}/{\rm off}}$, so the RR increases with the rates at which the swarm switches behaviors. These rates increase as abandonment $\gamma$ and social inhibition $\rho$ are strengthened (Appendix \[equilanaz\]). Clearly, $J$ increases with $\bar{u}$ since more bees forage when food is available. Increasing abandonment $\gamma$ tends to decrease consensus, so the most robust foraging strategies cannot use abandonment that is too rapid (Appendix \[equilanaz\]). We conclude that the volatility ($1/T$) and profitability ($\bar{\alpha}$) of the environment dictate the swarm interactions that yield efficient foraging strategies. One important caveat is that we bounded the interaction parameters, so swarm communication cannot be arbitrarily fast. This biological bound may be lower in practice, explaining slow adaptation of swarms to feeder changes in experiments [@seeley91; @granovskiy12]. Our qualitative finding, that social inhibition is more effective in rapid and low quality environments, should be robust to even tighter bounds. We have also shown that when social inhibition is not present, abandonment must be increased as the speed and quality of the environment are increased (Appendix \[abandon\] and Fig. \[fig7\]). In the next section, we extend these principles to two feeder environments, particularly showing how specific forms of social inhibition shape foraging yields. Foraging decisions between two dynamic feeders ---------------------------------------------- For the swarm to effectively decide between two feeders, it must collectively inhibit foraging at the lower quality feeder.
Our mean field model, Eq. (\[2siteswarm\]), generalizes house-hunting swarm models with stop-signaling [@franks02; @seeley12; @pais13] to a foraging swarm in a dynamic environment with different forms of social inhibition (Fig. \[fig1\]). How do these inhibitory interactions contribute to foraging efficacy? Honeybees can deliver inhibitory signals to nestmates foraging at potentially perilous sites [@nieh93; @nieh10; @pastor05], but swarm-level effects of these mechanisms are not well studied in foraging tasks in dynamic environments [@tan16]. As we will show, the specific form of social inhibition can strongly determine how a swarm will adapt to change. Forms of social inhibition -------------------------- Generalizing previous models [@britton02; @seeley12], we consider four forms of social inhibition (all parameterized by $\rho$ as before): (a) direct switching: bees foraging at the superior feeder directly switch the preference of opposing foragers to the better feeder; (b) indiscriminate stop-signaling: when two foraging bees meet, one will stop foraging; (c) self-inhibition: when two bees foraging at the same feeder meet, one will stop foraging; and (d) discriminate stop-signaling: when bees foraging at different feeders meet, one stops foraging. These interactions are visualized in Fig. \[fig3\]a,b,c,d and their evolution equations are given in Appendix \[socialin\] (See also [@seeley12] supplement). We can divide these forms of social inhibition into two classes, based on the swarm dynamics they produce: monostable or bistable consensus behaviors. The first three forms of social inhibition yield swarms with monostable consensus behaviors (See Appendix \[twolinstab\]), tending to a single stable foraging fraction when the feeder qualities are fixed (Fig. \[fig3\]e). The swarm will thus mostly forage at the higher yielding feeder. On the other hand, strong discriminate stop-signaling can produce swarms with bistable consensus behaviors (Fig. \[fig3\]f).
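The bistability produced by discriminate stop-signaling can be checked numerically. The sketch below assumes the cross-inhibition form $\mathcal{S}(u_A, u_B) = \rho u_A u_B$ used in the stop-signaling models cited above (the exact interaction terms are given in Appendix \[socialin\], not reproduced here): with equal, fixed feeder qualities and strong $\rho$, two different initial majorities settle at two different stable equilibria.

```python
import numpy as np

# Fixed-quality sketch of the two-feeder model with discriminate
# stop-signaling, assuming the cross-inhibition form S(uA, uB) = rho*uA*uB
# from the stop-signal models cited in the text. Parameters are illustrative.
aA = aB = 0.5                       # equal feeder qualities
beta, gamma, rho = 1.0, 0.2, 4.0    # strong cross-inhibition
dt, n_steps = 0.01, 200_000

def simulate(uA, uB):
    """Integrate the mean-field equations by forward Euler."""
    for _ in range(n_steps):
        unc = 1.0 - uA - uB
        duA = unc * (aA + beta * uA) - gamma * uA - rho * uA * uB
        duB = unc * (aB + beta * uB) - gamma * uB - rho * uB * uA
        uA, uB = uA + dt * duA, uB + dt * duB
    return uA, uB

# Same parameters, different initial majorities -> different attractors
a_wins = simulate(0.30, 0.05)   # converges to an A-dominant equilibrium
b_wins = simulate(0.05, 0.30)   # converges to the mirrored B-dominant one
print(a_wins, b_wins)
```

This is the “winner-take-all” behavior discussed next: the symmetric state is unstable, so the initial majority determines which feeder dominates.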
As a result, the swarm can remain stuck at an unfavorable feeder after the feeder qualities are switched. This is similar to “winner-take-all” regimes in mutually inhibitory neural networks [@wong06; @marshall09]. Inhibition from bees holding the swarm’s dominant preference is too strong for bees with the opposing preference to overcome, even with new evidence from the changed environment. Direct switching leads to most robust foraging ---------------------------------------------- To determine the most robust forms of social inhibition for foraging in dynamic environments, we studied how the rate of reward, Eq. (\[rr2\]), depended on the foraging strategy used. Environments are parameterized by the time between switches $T$ (min), the better feeder quality $\bar{\alpha}$ and the lower feeder quality $\bar{\alpha}/2$, which periodically switch between feeders $A$ and $B$. As in the single feeder case, we tune interactions of each strategy (Fig. \[fig3\]a,b,c,d) to maximize reward rate (RR) over a discrete set of strategies (See Appendix \[optimtwo\] for details). Comparing each social inhibition strategy type’s RR in different environments (Fig. \[fig4\]a), we find direct switching generally yields higher RRs than other strategies. Deviations between the effectiveness of different strategies are most pronounced at intermediate environmental timescales $T$. As expected, RRs increase with the maximal food site quality $\bar{\alpha}$ (Fig. \[fig4\]b,c). Direct switching is likely a superior strategy because it allows for continual foraging (Fig. \[fig3\]a), rather than other strategies’ interruption by an uncommitted stage (Fig. \[fig3\]b,c,d), which must rely on recruitment $\beta$ to restart foraging. To study how interactions should be balanced to yield effective foraging, we examined how to optimally tune $(\beta, \gamma, \rho)$ across environments in the direct switching model (Fig. \[fig5\]).
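The tuning procedure above can be sketched as a grid search over $(\beta, \gamma, \rho)$. The direct-switching flux used below, $\rho u_A u_B (\alpha_A - \alpha_B)$, which moves bees straight toward the currently better feeder, is an assumed illustrative form (the exact term is defined in Appendix \[socialin\]), and the grid values are arbitrary:

```python
import itertools

# Grid-search sketch of tuning (beta, gamma, rho) in a two-feeder environment
# whose qualities (abar and abar/2) swap every T minutes. The direct-switching
# flux rho*uA*uB*(aA - aB) is an assumed illustrative form, and the parameter
# grid below is arbitrary.
T, abar, c, dt, n_periods = 50.0, 1.0, 0.25, 0.05, 4

def reward_rate(beta, gamma, rho):
    uA = uB = 0.0
    total = t_count = 0.0
    for i in range(int(2 * T * n_periods / dt)):
        t = i * dt
        hi_is_A = (t // T) % 2 == 0           # which feeder is better now
        aA = abar if hi_is_A else abar / 2
        aB = abar / 2 if hi_is_A else abar
        unc = 1.0 - uA - uB
        flux = rho * uA * uB * (aA - aB)      # net direct switching into A
        duA = unc * (aA + beta * uA) - gamma * uA + flux
        duB = unc * (aB + beta * uB) - gamma * uB - flux
        uA, uB = uA + dt * duA, uB + dt * duB
        if t >= 2 * T * (n_periods - 1):      # score the final period only
            total += dt * (uA * (aA - c) + uB * (aB - c))
            t_count += dt
    return total / t_count

grid = [0.0, 0.5, 1.0, 2.0]                   # arbitrary strategy grid
best = max(itertools.product(grid, grid, grid), key=lambda p: reward_rate(*p))
print("best (beta, gamma, rho):", best)
```

Refining the grid, or swapping in the other inhibition terms, turns this sketch into a direct numerical comparison of the four strategies.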
Analyses of the other models are shown in Figs \[fig8\] and \[fig9\] of Appendix \[foragtune\]. As in the single feeder environments, we see a delineation between strategies optimized to slow/high quality environments as opposed to rapid/low quality environments. Weak recruitment $\beta$ (Fig. \[fig5\]a) and abandonment $\gamma$ (Fig. \[fig5\]b), and strong direct switching (Fig. \[fig5\]c) yield the highest RRs in slow (large $T$) and high quality (large $\bar{\alpha}$) environments. Recruitment $\beta$ may be inessential since the food quality signals $\bar{\alpha}$ and $\bar{\alpha}/2$ are significantly different. Also, direct switching $\rho$ provides strong adaptation to change. In fact, for virtually all environments, we found it was best to take $\rho$ as strong as possible. The strategy changes significantly when the environment is fast (small $T$) and low quality (small $\bar{\alpha}$), in which case abandonment $\gamma$ should be strong, and in extreme cases direct switching $\rho$ can be made weak (Fig. \[fig5\]b,c). Changes in the optimal recruitment strength are less systematic, however, and there are stratified regions in which the best $\beta$ can change significantly for small shifts in environmental parameters. Overall, a mixture of abandonment and direct switching is more effective in more difficult environments (lower $T$ and $\bar{\alpha}$). Direct switching does underperform self-inhibition in rapid environments (Fig. \[fig4\]a), since the swarm is more efficient by keeping some bees uncommitted, and not risking the cost of foraging the lower yielding feeder. Strong self-inhibition $\rho$ keeps more bees from foraging. Overall, both direct switching and self-inhibition can perform similarly, as recruitment interactions can be strengthened in self-inhibiting swarms, so more bees return to foraging after such inhibitory encounters (Fig. \[fig4\]).
This balances adaptivity, so the swarm’s preferences change with the environment, and consensus, so the swarm mostly builds up to forage at the better feeder given sufficient time. We now study this balance in each model using linearization techniques. Overall, these measures can account for discrepancies between the RR yields of swarms using different social inhibition strategies. Linearization reveals strategy adaptivity and consensus ------------------------------------------------------- Each swarm interaction mechanism differentially shapes both the fraction of bees that forage at the better site in the long time limit (consensus $\bar{u}$) and the rate at which this fraction is approached (adaptivity $\lambda$). Focusing specifically on these measures, we demonstrate both how they shape foraging efficiency and how they distinguish each social inhibition strategy. We leverage our approach developed for the single feeder model, and consider linear approximations of Eq. (\[2siteswarm\]) in the limit of long switching times $T$ (See Appendix \[linearapptwo\] and Fig. \[fig10\] in Appendix \[linac\]). In the specific case $c := \bar{\alpha}/2$, we can approximate the RR solely in terms of the consensus $\bar{u}$ (long term fraction of the swarm at the better feeder) and adaptivity $\lambda$ (rate this fraction is approached): $$\begin{aligned} J \approx \frac{\bar{\alpha}}{2} \left( \bar{u} + (1 - 2 \bar{u}) \frac{1 - {\rm e}^{- \lambda T}}{\lambda T} \right). \label{twofeedj}\end{aligned}$$ The RR $J$ increases with consensus $\bar{u}$ and adaptivity $\lambda$ (Fig. \[fig6\]a). Efficient swarms rapidly recruit a high fraction of the swarm to the better feeder. Consensus and adaptivity are approximated in each model using linear stability (Appendix \[twolinstab\]). The impact of varying abandonment $\gamma$ and social inhibition $\rho$ on $\bar{u}$ and $\lambda$ is consistent with our optimality analysis of the full nonlinear model (Fig.
\[fig6\]b,c): Social inhibition generates more robust swarm switching between feeders than abandonment. While strengthening abandonment $\gamma$ can increase $\lambda$, it decreases consensus $\bar{u}$ since it causes bees to become uncommitted (Fig. \[fig6\]b). Such consensus-adaptivity trade-offs do not occur in most models as social inhibition $\rho$ is strengthened (See also Figs \[fig11\] and \[fig12\] in Appendix \[adconmod\]). Only indiscriminate stop-signaling exhibits this trade-off (Fig. \[fig6\]c); the other three models (direct switching, discriminate stop-signaling and self-inhibition) do not. Rather, consensus $\bar{u}$ increases with social inhibition, while adaptivity can vary nonmonotonically (direct switching) or even decrease (self-inhibition). Overall, direct switching swarms attain the highest levels of consensus and adaptivity, consistent with our finding that it is the most robust model (Fig. \[fig4\]). Direct switching sustains both high levels of consensus $\bar{u}$ and adaptivity $\lambda$ (Fig. \[fig6\]c; See also Figs \[fig11\] and \[fig12\] in Appendix \[adconmod\]). The resulting swarms quickly discard their prior beliefs about the highest yielding feeder, exhibiting leaky evidence accumulation [@glaze15]. On the other hand, strong abandonment $\gamma$ (Fig. \[fig6\]b) or indiscriminate stop-signaling (Fig. \[fig6\]c) increase adaptivity but limit consensus $\bar{u}$ at the better feeder. Strengthening recruitment $\beta$ leads to stronger consensus $\bar{u}$ at the expense of adaptivity $\lambda$ to environmental changes. Swarms likely use a combination of social inhibition and abandonment mechanisms [@jack15], so such consensus-adaptivity trade-offs are important to manage in dynamic environments. Discussion ========== Foraging animals constantly encounter temporal and spatial changes in their food supply [@owen10].
The success of foraging animal groups thus depends on how efficiently they communicate and act upon environmental changes [@grueter12]. Our swarm model analysis pinpoints specific social inhibition mechanisms that facilitate adaptation to changes in food availability and consolidate consensus at better feeding sites. If bees interact by direct switching, they can immediately update their foraging preference without requiring recruitment, keeping foragers active following environmental changes. Recruitment is less important to the foraging success of a swarm in dynamic conditions; bees can initiate commitment via their own scouting behavior. Individuals should balance their social and private information in an environment-dependent way to decide and forage most efficiently [@czaczkes11; @torney15]. Efficient group decision making combines individual private evidence accumulation and information sharing across the cohort [@conradt08]. However, in groups where social influence is strong, opinions generated from weak and potentially misleading private evidence can cascade through the collective, resulting in rapid but lower value decisions [@dall05; @sumpter08; @pais13]. Our analysis makes these notions quantitatively concrete by associating the accuracy of the swarm decisions with the consensus fraction at the better feeder, and the speed of decisions with the adaptivity, or rate the swarm approaches steady state consensus (Fig. \[fig6\]). The best foraging strategies balance these swarm level measures of decision efficiency. Social insects do appear to balance the speed and accuracy of decisions to increase their rate of food intake [@chittka03; @burns08], and collective tuning is likely influenced by individuals varying their response to social information. We find that social recruitment can speed along initial foraging decisions, but it can limit adaptation to change.
This is consistent with experimental studies that show a reduction in positive feedback can help collectives steer away from lower value decisions. For example, challenging environmental conditions (e.g., volatile and low food quality) are best managed by honeybee swarms whose individuals do not wait for recruitment but rely on their own individual scouting [@price19]. Ants encountering crowded environments tend to deposit less pheromone to keep their nestmates from less efficient foraging paths [@czaczkes13]. These experimental findings suggest social insects adapt to changing environmental conditions by limiting communication that promotes positive feedback [@grueter14]. Foragers must then be proactive in dynamic environments, since they cannot afford to wait for new social information [@dechaume05]. Thus, the advantages of social learning depend strongly on environmental conditions [@laland04]. In concert with a reduction in recruitment, we predict that honeybee swarms foraging in volatile environments will benefit from strengthening inhibitory mechanisms at the individual and group level. Bees enacting social inhibition dissuade their nestmates from foraging at opposing feeders. We found the most efficient form of social inhibition is direct switching whereby bees flip the opinion of committed bees to their own opinion. So do honeybees utilize this mechanism in dynamic environments? Observations of swarms making nest site decisions show scouts directly switching their dance allegiance [@gould94; @britton02], but these events seem to be relatively rare in the static environments of typical nest site selection experiments [@camazine99]. Other forms of social inhibitory signals, especially stop-signaling, appear to be used to promote consensus in nest decisions [@seeley12; @pais13] and escaping predators while foraging [@nieh93; @nieh10]. 
Thus, the role and prevalence of social inhibition as a means for foraging adaptively in dynamic environments warrants further investigation. Our simple parameterized model, developed from previously validated house-hunting models [@franks02; @marshall09; @seeley12], is amenable to analysis and could be validated with time series measurements from dynamic foraging experiments. Past experimental work focused on shorter time windows in which only a few switches in feeder quality occurred [@seeley91; @granovskiy12], which may account for the relatively slow adaptation of the swarms to environmental changes. We predict swarms will slowly tune their social learning strategies to suit the volatility of the environment, but this could require several switch observations. Foraging tasks conducted within a laboratory could be controlled to track bee interactions over long time periods using newly developed automated monitoring techniques [@gernat18]. Our study also identifies key regions in parameter space in which different foraging strategies diverge in their performance, suggesting that placing swarms in rapid environments with relatively low food supplies will help distinguish which social communication mechanisms are being used. Collective decision strategies and outcomes can depend on group size [@hill82; @krause10], though decision accuracy does not necessarily increase with group size [@kao14]. We approximated swarm dynamics using a population level model, which is the deterministic mean field limit of a stochastic individual interaction based model [@seeley12]. Finite group size considerations would result in models which behave stochastically, in which the same conditions can generate different swarm dynamics [@pais13]. The qualitative predictions of our mean field model generally did not change dramatically when considering the stochastic finite size model (See Appendix \[stochsys\] and Fig. \[fig13\]).
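The finite-size counterpart of the mean field limit can be sketched as a Gillespie-style birth-death simulation. The minimal sketch below treats the single-feeder model with delay and inhibition terms omitted for brevity, and uses illustrative parameters:

```python
import random

# Gillespie-style sketch of a finite-N, single-feeder counterpart of the
# mean-field model (delay and inhibition terms omitted for brevity);
# parameter values are illustrative.
random.seed(1)
N, alpha, beta, gamma = 50, 0.5, 0.3, 0.2
t, t_end, n = 0.0, 200.0, 0                       # n = committed foragers
occupancy = weight = 0.0
while t < t_end:
    r_commit = (N - n) * (alpha + beta * n / N)   # scouting + recruitment
    r_abandon = gamma * n                         # spontaneous abandonment
    r_tot = r_commit + r_abandon
    wait = random.expovariate(r_tot)              # time to next event
    occupancy += (n / N) * wait                   # time-weighted occupancy
    weight += wait
    t += wait
    n += 1 if random.random() < r_commit / r_tot else -1
print("mean foraging fraction:", round(occupancy / weight, 3))
```

For large $N$ the time-averaged foraging fraction concentrates near the mean-field equilibrium, while finite-$N$ fluctuations permit the noise-driven transitions discussed next.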
However, discriminate stop-signaling swarms exhibit bistable decision dynamics (Fig. \[fig3\]d,f), so we expect the stochasticity in the finite sized model would allow swarms to break free from less profitable feeders, similar to noise-driven escapes of a particle in double-well potential models [@gammaitoni98]. Fluctuation-induced switching may provide an additional mechanism for flexible foraging [@dussutour09; @biancalani14], and would be an interesting extension of our present modeling work.\ [**Data Accessibility:**]{} Code for producing figures is available at <https://github.com/sbidari/dynamicbees>\ [**Authors’ contributions:**]{} Formulated scientific question, modeling approaches, developed models: SB, OP, ZPK; implemented mathematical analysis and computer simulations: SB; wrote the article: SB, OP, ZPK. All authors were involved in discussions on different aspects of the study.\ [**Competing interests:**]{} We declare we have no competing interests.\ [**Funding:**]{} SB and ZPK were supported by an NSF grant (DMS-1615737). SB was also supported by a Dissertation Fellowship from the American Association of University Women. ZPK was also supported by an NSF/NIH CRCNS grant (R01MH115557).\ [**Acknowledgements:**]{} We thank Tahra Eissa for feedback on a draft of this manuscript. [10]{} D Sumpter and Stephen Pratt. A modelling framework for understanding social insect foraging. , 53(3):131–144, 2003. P Kirk Visscher. Group decision making in nest-site selection among social insects. , 52(1):255–275, 2007. Bert Holldobler and E O Wilson. . W. W. Norton & Company; 1 edition, 2009. Ashley JW Ward, David JT Sumpter, Iain D Couzin, Paul JB Hart, and Jens Krause. Quorum decision-making facilitates information transfer in fish shoals. , 105(19):6948–6953, 2008. Jean-Marc Am[é]{}, Jos[é]{} Halloy, Colette Rivault, Claire Detrain, and Jean Louis Deneubourg. Collegial decision making based on social amplification leads to optimal group formation.
, 103(15):5835–5840, 2006. Thomas D Seeley. . Princeton Univ. Press, Princeton, NJ, 2010. Thomas D Seeley, Alexander S Mikheyev, and Gary J Pagano. Dancing bees tune both duration and rate of waggle-run production in relation to nectar-source profitability. , 186(9):813–819, 2000. James C Nieh. A negative feedback signal that is triggered by peril curbs honey bee recruitment. , 20(4):310–315, 2010. TD Seeley, PK Visscher, T Schlegel, PM Hogan, NR Franks, and JA Marshall. Stop signals provide cross inhibition in collective decision-making by honeybee swarms. , 335(6064):108, 2012. Darren Pais, Patrick M Hogan, Thomas Schlegel, Nigel R Franks, Naomi E Leonard, and James AR Marshall. A mechanism for value-sensitive decision-making. , 8(9):e73216, 2013. Olivier Cinquin and Jacques Demongeot. Positive and negative feedback: striking a balance between necessary antagonists. , 216(2):229–241, 2002. Simon Garnier, Jacques Gautrais, and Guy Theraulaz. The biological principles of swarm intelligence. , 1(1):3–31, 2007. Leslie Real and Beverly J Rathcke. Patterns of individual variability in floral resources. , 69(3):728–735, 1988. Jennifer H Fewell and Susan M Bertram. Division of labor in a dynamic environment: response by honeybees (apis mellifera) to graded changes in colony pollen stores. , 46(3):171–179, 1999. Thomas D Seeley, Scott Camazine, and James Sneyd. Collective decision-making in honey bees: how colonies choose among nectar sources. , 28(4):277–290, 1991. Y Ben-Shahar, A Robichon, MB Sokolowski, and GE Robinson. Influence of gene action across different time scales on behavior. , 296(5568):741–744, 2002. Paul Tenczar, Claudia C Lutz, Vikyath D Rao, Nigel Goldenfeld, and Gene E Robinson. Automated monitoring reveals extreme interindividual variation and plasticity in honeybee foraging activity levels. , 95:41–48, 2014. Anna Dornhaus and Lars Chittka. Why do honey bees dance? , 55(4):395–401, 2004. 
Boris Granovskiy, Tanya Latty, Michael Duncan, David JT Sumpter, and Madeleine Beekman. How dancing honey bees keep track of changes: the role of inspector bees. , 23(3):588–596, 2012. Parry M Kietzman and P Kirk Visscher. The anti-waggle dance: use of the stop signal as negative feedback. , 3:14, 2015. Rebecca Gray, Alessio Franci, Vaibhav Srivastava, and Naomi Ehrich Leonard. Multiagent decision-making dynamics inspired by honeybees. , 5(2):793–806, 2018. John M McNamara, Richard F Green, and Ola Olsson. Bayes’ theorem and its applications in animal behaviour. , 112(2):243–251, 2006. Christopher M Glaze, Joseph W Kable, and Joshua I Gold. Normative evidence accumulation in unpredictable environments. , 4:e08825, 2015. RI’Anson Price, N Dulex, N Vial, C Vincent, and C Gr[ü]{}ter. Honeybees forage more successfully without the “dance language” in challenging environments. , 5(2):eaat0450, 2019. NF Britton, NR Franks, SC Pratt, and TD Seeley. Deciding on a new home: how do honeybees agree? , 269(1498):1383–1388, 2002. James AR Marshall, Rafal Bogacz, Anna Dornhaus, Robert Planqu[é]{}, Tim Kovacs, and Nigel R Franks. On optimal decision-making in brains and social insect colonies. , 6(40):1065–1074, 2009. Nicolaas Godfried Van Kampen. . Elsevier, 1992. Larissa Conradt and Timothy J Roper. Consensus decision making in animals. , 20(8):449–456, 2005. Scott Camazine, PK Visscher, Jennifer Finley, and RS Vetter. House-hunting by honey bee swarms: collective decisions and individual behaviors. , 46(4):348–360, 1999. Nigel R Franks, Stephen C Pratt, Eamonn B Mallon, Nicholas F Britton, and David JT Sumpter. Information flow, opinion polling and collective intelligence in house–hunting social insects. , 357(1427):1567–1583, 2002. James C Nieh. The stop signal of honey bees: reconsidering its message. , 33(1):51–56, 1993. Kristen A Pastor and Thomas D Seeley. The brief piping signal of the honey bee: begging call or stop signal? , 111(8):775–784, 2005. 
Ken Tan, Shihao Dong, Xinyu Li, Xiwen Liu, Chao Wang, Jianjun Li, and James C Nieh. Honey bee inhibitory signaling is tuned to threat severity and can act as a colony alarm signal. , 14(3):e1002423, 2016. Kong-Fatt Wong and Xiao-Jing Wang. A recurrent network mechanism of time integration in perceptual decisions. , 26(4):1314–1328, 2006. Ralph T Jack-McCollough and James C Nieh. Honeybees tune excitatory and inhibitory recruitment signalling to resource value and predation risk. , 110:9–17, 2015. N Owen-Smith, JM Fryxell, and EH Merrill. Foraging theory upscaled: the behavioural ecology of herbivore movement. , 365(1550):2267–2278, 2010. Christoph Grueter, Roger Schuerch, Tomer J Czaczkes, Keeley Taylor, Thomas Durance, Sam M Jones, and Francis LW Ratnieks. Negative feedback enables fast and flexible collective decision-making in ants. , 7(9):e44501, 2012. Tomer J Czaczkes, Christoph Gr[ü]{}ter, Sam M Jones, and Francis LW Ratnieks. Synergy between social and private information increases foraging efficiency in ants. , 7(4):521–524, 2011. Colin J Torney, Tommaso Lorenzi, Iain D Couzin, and Simon A Levin. Social information use and the evolution of unresponsiveness in collective systems. , 12(103):20140893, 2015. Larissa Conradt and Christian List. Group decisions in humans and animals: a survey. , 364(1518):719–742, 2008. Sasha RX Dall, Luc-Alain Giraldeau, Ola Olsson, John M McNamara, and David W Stephens. Information and its use by animals in evolutionary ecology. , 20(4):187–193, 2005. David JT Sumpter, Jens Krause, Richard James, Iain D Couzin, and Ashley JW Ward. Consensus decision making by fish. , 18(22):1773–1777, 2008. Lars Chittka, Adrian G Dyer, Fiola Bock, and Anna Dornhaus. Psychophysics: bees trade off foraging speed for accuracy. , 424(6947):388, 2003. James G Burns and Adrian G Dyer. Diversity of speed-accuracy strategies benefits social insects. , 18(20):R953–R954, 2008. Tomer J Czaczkes, Christoph Gr[ü]{}ter, and Francis LW Ratnieks. 
Negative feedback in ants: crowding results in less trail pheromone deposition. , 10(81):20121009, 2013. Christoph Grueter and Ellouise Leadbeater. Insights from insects about adaptive social information use. , 29(3):177–184, 2014. Fran[ç]{}ois-Xavier Dechaume-Moncharmont, Anna Dornhaus, Alasdair I Houston, John M McNamara, Edmund J Collins, and Nigel R Franks. The hidden cost of information in collective foraging. , 272(1573):1689–1695, 2005. Kevin N Laland. Social learning strategies. , 32(1):4–14, 2004. James L Gould and Carol Grant Gould. . WH Freeman, New York, 1994. Tim Gernat, Vikyath D Rao, Martin Middendorf, Harry Dankowicz, Nigel Goldenfeld, and Gene E Robinson. Automated monitoring of behavior reveals bursty interaction patterns and rapid spreading dynamics in honeybee social networks. , 115(7):1433–1438, 2018. Gayle W Hill. Group versus individual performance: Are n+ 1 heads better than one? , 91(3):517, 1982. Jens Krause, Graeme D Ruxton, and Stefan Krause. Swarm intelligence in animals and humans. , 25(1):28–34, 2010. Albert B Kao and Iain D Couzin. Decision accuracy in complex environments is often maximized by small group sizes. , 281(1784):20133305, 2014. Luca Gammaitoni, Peter H[ä]{}nggi, Peter Jung, and Fabio Marchesoni. Stochastic resonance. , 70(1):223, 1998. Audrey Dussutour, Madeleine Beekman, Stamatios C Nicolis, and Bernd Meyer. Noise improves collective decision-making by ants in dynamic environments. , 276(1677):4353–4361, 2009. Tommaso Biancalani, Louise Dyson, and Alan J McKane. Noise-induced bistable states and their mean switching time in foraging colonies. , 112(3):038101, 2014. Mario Bernardo, Chris Budd, Alan Richard Champneys, and Piotr Kowalczyk. , volume 163. Springer Science & Business Media, 2008. Steven H Strogatz. . CRC Press, 2018. Daniel T Gillespie. Exact stochastic simulation of coupled chemical reactions. , 81(25):2340–2361, 1977.  
\ [**Appendix**]{} Swarm foraging dynamics for a single switching feeder ===================================================== Consider model Eq. (\[singledyn\]) for which the food quality $\alpha (t)$ switches between the two values $\alpha (t) = \bar{\alpha}$ and 0 every $T$ minutes, similar to previous experiments [@seeley91; @granovskiy12]. Before analyzing the temporal dynamics $u(t)$ of the swarm in response to food quality switches, we study equilibria and their stability to determine how different swarm interactions impact foraging consensus and the rate at which it is approached. Equilibrium and linear stability analysis {#equilanaz} ----------------------------------------- At any given time $t$, the dynamics of Eq. (\[singledyn\]) are determined by the values of the food quality function $\alpha(t)$ at $t$ and $t - \tau$. In the time interval $ t \in [0,\tau] $, $\alpha(t) = {\bar{\alpha}}$ and $\alpha(t - \tau) = 0$, so equilibria of Eq. (\[singledyn\]) are solutions to $$\begin{aligned} 0 &= (1-u)({\bar{\alpha}}+ \beta u) - \gamma u - \rho {\bar{\alpha}}u^2, \end{aligned}$$ which can be solved using the quadratic formula $$\begin{aligned} {\bar{u}}_{\pm}^1 := \frac{1}{2} \left[ {{\mathcal}B} \pm \sqrt{{{\mathcal}D}} \right], \hspace{10mm} {{\mathcal}B} = \frac{\beta - \gamma - {\bar{\alpha}}}{\beta + \rho {\bar{\alpha}}}, \hspace{8mm} {{\mathcal}D} = {{\mathcal}B}^2 + \frac{4 {\bar{\alpha}}}{\beta + \rho {\bar{\alpha}}}, \label{u1eq}\end{aligned}$$ with linear stability given by the eigenvalues $$\begin{aligned} \lambda_{\pm}^1 = \mp \sqrt{(\beta - \bar{\alpha} - \gamma)^2 + 4 (\beta + \rho{\bar{\alpha}}) \bar{\alpha}},\end{aligned}$$ so the positive equilibrium $\bar{u}_+^1$ is stable and the negative (extraneous) equilibrium $\bar{u}_-^1$ is unstable.
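These closed-form equilibria are easy to sanity-check numerically. The following minimal sketch (with illustrative parameter values, not values fit to data) verifies that $\bar{u}_{\pm}^1$ are roots of the equilibrium equation and that the sign of the linearization matches the stated stability:

```python
import math

def rhs(u, alpha, beta, gamma, rho):
    # Right-hand side of Eq. (singledyn) on t in [0, tau], where
    # alpha(t) = alpha and alpha(t - tau) = 0 activates self-inhibition
    return (1 - u) * (alpha + beta * u) - gamma * u - rho * alpha * u**2

def equilibria(alpha, beta, gamma, rho):
    # Quadratic-formula roots u_pm of Eq. (u1eq)
    B = (beta - gamma - alpha) / (beta + rho * alpha)
    D = B**2 + 4 * alpha / (beta + rho * alpha)
    return 0.5 * (B + math.sqrt(D)), 0.5 * (B - math.sqrt(D))

def rhs_prime(u, alpha, beta, gamma, rho):
    # d(rhs)/du; its sign at an equilibrium determines linear (in)stability
    return beta - alpha - gamma - 2 * (beta + rho * alpha) * u

alpha, beta, gamma, rho = 2.0, 0.5, 1.0, 1.0  # illustrative values
u_plus, u_minus = equilibria(alpha, beta, gamma, rho)
assert abs(rhs(u_plus, alpha, beta, gamma, rho)) < 1e-9   # both are equilibria
assert abs(rhs(u_minus, alpha, beta, gamma, rho)) < 1e-9
assert rhs_prime(u_plus, alpha, beta, gamma, rho) < 0     # u_+ stable
assert rhs_prime(u_minus, alpha, beta, gamma, rho) > 0    # u_- unstable
```

The same check applies on the other time intervals after substituting the appropriate values of $\alpha(t)$ and $\alpha(t-\tau)$.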
On $ t \in [\tau,T] $, $\alpha(t) = \alpha(t- \tau) = {\bar{\alpha}}$, so the equilibrium equation $$\begin{aligned} 0 &= (1-u)({\bar{\alpha}}+ \beta u) - \gamma u\end{aligned}$$ has solutions and eigenvalues $$\begin{aligned} \bar{u}_{\pm}^2 = \frac{\beta - \bar{\alpha} - \gamma \pm \sqrt{(\beta - \bar{\alpha} - \gamma)^2 + 4 \beta \bar{\alpha}}}{2 \beta}, \ \ \ \ \ \lambda_{\pm}^2 = \mp \sqrt{(\beta - \bar{\alpha} - \gamma)^2 + 4 \beta \bar{\alpha}}.\end{aligned}$$ Again, the positive equilibrium $\bar{u}_+^2$ is stable and the negative equilibrium $\bar{u}_-^2$ is unstable. On $t \in [T, T+ \tau)$, $\alpha(t) = 0$ and $\alpha(t- \tau) = \bar{\alpha}$, equilibria satisfy $0 = (1-u) \beta u - \gamma u$, so $$\begin{aligned} &\bar{u}_0^3 = 0, \quad \bar{u}_1^3 = \frac{\beta - \gamma}{\beta},\end{aligned}$$ and on $t \in [T+ \tau, 2T)$, $\alpha(t)= \alpha(t - \tau)= 0$, so $0 = (1-u) \beta u - \gamma u - \rho \bar{\alpha} u^2$ and $$\begin{aligned} &\bar{u}_0^4 = 0, \quad \bar{u}_1^4 = \frac{\beta - \gamma}{\beta + \rho \bar{\alpha}}.\end{aligned}$$ Both pairs of equilibria have associated eigenvalues $$\begin{aligned} \lambda_0 = \beta - \gamma, \ \ \ \ \ \lambda_1 = \gamma - \beta,\end{aligned}$$ so the zero equilibria $\bar{u}_0^3 = \bar{u}_0^4 = 0$ are stable when $\gamma > \beta$ and the nonzero equilibria $\bar{u}_1^3$ and $\bar{u}_1^4$ are positive and stable when $\beta > \gamma$. Thus, to ensure no bees continue foraging when there is no food, abandonment $\gamma$ should be stronger than recruitment $\beta$. We deem $\bar{u} : = \bar{u}_+^2$ the [*consensus*]{} level, as it is the upper limit on the fraction of the swarm foraging at the feeder when it supplies food. The eigenvalues $\lambda_{\rm on} : = \lambda_+^2$ and $\lambda_{\rm off} : = \lambda_0^4$ define the [*adaptivity*]{} of the swarm, or the rates of arrival to/departure from the feeder when it does/does not supply food.
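As a consistency check on the consensus level, one can integrate the delay-free "on" dynamics $\dot{u} = (1-u)(\bar{\alpha} + \beta u) - \gamma u$ forward in time and confirm that trajectories settle at $\bar{u}_+^2$; a minimal sketch with illustrative parameter values:

```python
import math

def consensus(alpha, beta, gamma):
    # Stable root u_+^2 of 0 = (1-u)(alpha + beta*u) - gamma*u
    disc = math.sqrt((beta - alpha - gamma) ** 2 + 4 * beta * alpha)
    return (beta - alpha - gamma + disc) / (2 * beta)

def integrate(u0, alpha, beta, gamma, t_end, dt=1e-3):
    # Forward-Euler integration of du/dt = (1-u)(alpha + beta*u) - gamma*u
    u, t = u0, 0.0
    while t < t_end:
        u += dt * ((1 - u) * (alpha + beta * u) - gamma * u)
        t += dt
    return u

alpha, beta, gamma = 2.0, 0.5, 1.0   # illustrative values
u_bar = consensus(alpha, beta, gamma)
u_T = integrate(0.0, alpha, beta, gamma, t_end=30.0)
assert 0 < u_bar < 1
assert abs(u_T - u_bar) < 1e-6       # trajectory settles at the consensus level
```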
Periodically forced swarm foraging {#evolnmean} ---------------------------------- Long term periodic solutions to Eq. (\[singledyn\]) result from switching the food quality $\alpha(t)$ between $\bar{\alpha}$ and $0$ every $T$ minutes. These are obtained by solving Eq. (\[singledyn\]) iteratively using separation of variables. For example, when $\alpha (t) \equiv \bar{\alpha}$ and $\alpha(t-\tau) \equiv 0$, we can separate variables and factor the resulting fraction: $$\begin{aligned} \frac{ {{\rm d}}u}{u - \bar{u}_+} - \frac{ {{\rm d}}u}{u - {\bar{u}}_-} = - (\beta + \rho {\bar{\alpha}}) \sqrt{{{\mathcal}D}} {{\rm d}}t, \end{aligned}$$ where ${{\mathcal}D}$ is defined in Eq. (\[u1eq\]). Integrating, isolating $u$, and applying $u(0) = u_0$, we find $$\begin{aligned} \label{u1soln} u(t) = \frac{{\bar{u}}_+ ( u_0 - {\bar{u}}_-) - {\bar{u}}_-(u_0 - {\bar{u}}_+) {{\rm e}}^{- (\beta + \rho {\bar{\alpha}}) \sqrt{{\mathcal}D} t}}{u_0 - {\bar{u}}_- - (u_0 - {\bar{u}}_+) {{\rm e}}^{- (\beta + \rho {\bar{\alpha}})\sqrt{{\mathcal}D}t}}, \end{aligned}$$ consistent with our equilibrium analysis showing $\lim_{t \to \infty} u(t) = {\bar{u}}= {\bar{u}}_+^2$. Now, taking $\alpha (t) \equiv \bar{\alpha}$ on $t \in [2nT, (2n+1)T)$ for $n=0,1,2,3,...$ and $\alpha(t) \equiv 0$ otherwise, we will have $$\begin{aligned} \dot{u} = (1-u)(\alpha(t) + \beta u) - \gamma u - R(t) u^2, \label{simpsing}\end{aligned}$$ where $R(t) \equiv \rho \bar{\alpha}$ for $t \in [(2n+1)T + \tau, (2n+2)T + \tau)$ and $R(t) \equiv 0$ otherwise. The periodic solution to Eq. (\[simpsing\]) can be derived self-consistently by starting with an unknown initial condition $u(0) = u_0$, and then requiring $u(2T) = u_0$. Thus, within $t \in [0,\tau)$, we have the solution given by Eq.
(\[u1soln\]), and $$\begin{aligned} \label{u1eqn} u_1 : = u(\tau) = \frac{{\bar{u}}_+ ( u_0 - {\bar{u}}_-) - {\bar{u}}_-(u_0 - {\bar{u}}_+) {{\rm e}}^{- (\beta + \rho {\bar{\alpha}}) \sqrt{{\mathcal}D} \tau}}{u_0 - {\bar{u}}_- - (u_0 - {\bar{u}}_+) {{\rm e}}^{- (\beta + \rho {\bar{\alpha}})\sqrt{{\mathcal}D} \tau }}. \end{aligned}$$ At $t = \tau$, self-inhibition vanishes and the solution is a special case of Eq. (\[u1soln\]) for which $\rho = 0$ (so ${\bar{u}}_{\pm}$ and ${{\mathcal}D}$ are now evaluated at $\rho = 0$). Thus, we can solve Eq. (\[singledyn\]) with $u(\tau) = u_1$ as an initial condition and write for $t \in [\tau, T)$: $$\begin{aligned} \label{u2soln} u(t) = \frac{{\bar{u}}_+ ( u_1 - {\bar{u}}_-) - {\bar{u}}_-(u_1 - {\bar{u}}_+) {{\rm e}}^{- \beta \sqrt{{\mathcal}D} (t-\tau)}}{u_1 - {\bar{u}}_- - (u_1 - {\bar{u}}_+) {{\rm e}}^{- \beta \sqrt{{\mathcal}D}(t-\tau)}},\end{aligned}$$ so that at $t = T$, we have $$\begin{aligned} \label{u2eqn} u_2 : = u(T) = \frac{{\bar{u}}_+ ( u_1 - {\bar{u}}_-) - {\bar{u}}_-(u_1 - {\bar{u}}_+) {{\rm e}}^{- \beta \sqrt{{\mathcal}D} (T-\tau)}}{u_1 - {\bar{u}}_- - (u_1 - {\bar{u}}_+) {{\rm e}}^{- \beta \sqrt{{\mathcal}D}(T -\tau)}}.\end{aligned}$$ Beyond $t=T$, the dynamics are governed by a special case of Eq.
(\[u2soln\]) for which $({\bar{u}}_+, {\bar{u}}_-) = (1-\frac{\gamma}{\beta} , 0)$ if $\beta > \gamma$ and $({\bar{u}}_+, {\bar{u}}_-) = (0,1-\frac{\gamma}{\beta} )$ if $\beta < \gamma$, so on $t \in [T, T + \tau)$: $$\begin{aligned} u(t) = \frac{u_2( \beta - \gamma )}{\beta u_2 - (\beta u_2 + \gamma - \beta) {{\rm e}}^{(\gamma - \beta ) (t-T)}},\end{aligned}$$ for $\beta \neq \gamma$, and the limit as $\gamma \to \beta$ is $u(t) = \frac{u_2}{1 + u_2 \beta (t-T)}$, which can both be evaluated at $t = T+ \tau$ to yield, $$\begin{aligned} \label{u3eqn} u_3 : = u(T+\tau) = \left\{ \begin{array}{cc} \frac{u_2( \beta - \gamma )}{\beta u_2 - (\beta u_2 + \gamma - \beta) {{\rm e}}^{(\gamma - \beta )\tau}} & : \ \beta \neq \gamma \\ \frac{u_2}{1 + u_2 \beta \tau } & : \ \beta = \gamma \end{array} \right.\end{aligned}$$ At $t = T+\tau$, self-inhibition returns since $\alpha (t- \tau) \equiv 0$, increasing the negative feedback acting on foragers. The long term steady state is determined by the balance of abandonment and recruitment: $({\bar{u}}_+, {\bar{u}}_-) = (\frac{\beta - \gamma}{\beta + \rho {\bar{\alpha}}} , 0)$ if $\beta > \gamma$ and $({\bar{u}}_+, {\bar{u}}_-) = (0,\frac{\beta - \gamma}{\beta + \rho {\bar{\alpha}}} )$ if $\beta < \gamma$. Thus, $$\begin{aligned} u(t) = \frac{u_3( \beta - \gamma )}{ (\beta + \rho {\bar{\alpha}}) u_3 - ((\beta + \rho {\bar{\alpha}}) u_3 + \gamma - \beta) {{\rm e}}^{(\gamma - \beta ) (t - T - \tau)}}, \end{aligned}$$ for $\beta \neq \gamma$, and in the limit $\beta \to \gamma$, $u(t) = \frac{u_3}{1 + u_3 (\beta + \rho {\bar{\alpha}}) (t - T - \tau)}$.
Both expressions can be evaluated at $t = 2T$, and self-consistency of the periodic solution requires $u_4 \equiv u_0$, $$\begin{aligned} \label{u4eqn} u_0 = u_4 : = u(2T) = \left\{ \begin{array}{cc} \frac{u_3( \beta - \gamma )}{ (\beta + \rho {\bar{\alpha}}) u_3 - ((\beta + \rho {\bar{\alpha}}) u_3 + \gamma - \beta) {{\rm e}}^{(\gamma - \beta ) (T-\tau) }} & : \ \beta \neq \gamma \\ \frac{u_3}{1 + u_3 (\beta + \rho {\bar{\alpha}}) (T - \tau)} & : \ \beta = \gamma \end{array} \right.\end{aligned}$$ Eqs. (\[u1eqn\]), (\[u2eqn\]), (\[u3eqn\]), and (\[u4eqn\]) can be solved explicitly for $(u_0, u_1, u_2, u_3)$, although the expressions are quite cumbersome, so we omit them here. These analytic solution techniques were used to generate the foraging fraction trajectories plotted in Fig. \[fig2\]d and to identify the model parameters that optimize the RR $J$ in different environments (plotted in Fig. \[fig2\]e), as we now describe. Optimizing reward rate over strategy sets {#optimone} ----------------------------------------- We optimized the reward rate (RR) of the swarm foraging a single switching feeder by restricting the strategies to a discrete set of interaction parameter values. The RR in large regions of parameter space was relatively flat since it involves the sum of several exponentially small terms. To avoid spurious convergence, we focused on the order of magnitude of each parameter that led to the highest long-term RR. For a given environment $(\alpha, T)$, we identified the combination of interaction parameter values from the set $(\beta,\gamma,\rho) \in \{ 0.01,0.1,1,10\}^3$ (in 1/min) yielding the highest RR $J$ computed from Eq. (\[rr1\]). Bounds on interaction parameters were imposed so that a swarm could not completely dispense with any interaction or feedback mechanism or strengthen any to be arbitrarily rapid.
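The discrete strategy search can be sketched as follows. This minimal version substitutes a forward-Euler simulation of Eq. (\[simpsing\]) for the closed-form trajectories (simpler, though slower) and scans a single illustrative environment; the parameter values here are placeholders rather than those behind Fig. \[fig2\]e:

```python
import itertools

def simulate_rr(beta, gamma, rho, alpha_bar=2.0, c=1.0, T=20.0,
                tau=2.0, cycles=4, dt=0.01):
    """Forward-Euler simulation of Eq. (simpsing); returns the RR over the
    final 2T cycle. alpha(t) switches between alpha_bar and 0 every T minutes;
    self-inhibition uses the quality observed tau minutes earlier."""
    u, t, reward = 0.0, 0.0, 0.0
    t_total = 2 * T * cycles
    while t < t_total:
        phase = t % (2 * T)
        alpha_now = alpha_bar if phase < T else 0.0
        alpha_del = alpha_bar if (t - tau) % (2 * T) < T else 0.0
        R = rho * (alpha_bar - alpha_del)     # delayed self-inhibition strength
        du = (1 - u) * (alpha_now + beta * u) - gamma * u - R * u**2
        u = min(1.0, max(0.0, u + dt * du))
        if t >= t_total - 2 * T:              # accumulate RR over the last cycle
            reward += u * (alpha_now - c) * dt
        t += dt
    return reward / (2 * T)

grid = (0.01, 0.1, 1.0, 10.0)   # strategy set for (beta, gamma, rho), in 1/min
best = max(itertools.product(grid, grid, grid), key=lambda p: simulate_rr(*p))
print("RR-maximizing (beta, gamma, rho):", best)
```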
This was performed over a mesh of environmental parameters $\bar{\alpha} \in [0.5,20]$ (at $\Delta \bar{\alpha} = 0.1$ steps) and $T \in [1,200]$ (at $\Delta T = 1$ minute). We found that $\beta = 0.01 \text{ min}^{-1}$ was optimal across all environment types, but that $\gamma$ and $\rho$ varied in strength dependent on the environmental conditions (See Fig. \[fig2\]e). Linear approximation of the periodic solution and reward rate {#linearappone} ------------------------------------------------------------- The RR Eq. (\[rr1\]) for the single feeder can be estimated by linearly approximating the swarm dynamics using results from our equilibrium analysis. Assuming the interval $T$ between feeder quality switches ($\alpha: \bar{\alpha} \mapsto 0; \alpha: 0 \mapsto \bar{\alpha}$) and the delay $\tau$ are both large, the swarm will nearly equilibrate before each switch, suggesting the following linear approximation of the foraging fraction: $$\begin{aligned} u(t) = \left\{ \begin{array}{cc} \bar{u}^{1} + {{\rm e}}^{- \lambda^{1} t} (\bar{u}^{4} -\bar{u}^{1} ), & t \in [0,\tau] \\ \bar{u}^{2} + {{\rm e}}^{- \lambda^{2} (t-\tau)} (\bar{u}^{1} -\bar{u}^{2} ), & t \in [\tau,T] \\ \bar{u}^{3} + {{\rm e}}^{- \lambda^{3} (t-T)} (\bar{u}^{2} - \bar{u}^{3} ), & t \in [T,T+\tau] \\ \bar{u}^{4} + {{\rm e}}^{- \lambda^{4} (t-T-\tau)} (\bar{u}^{3} - \bar{u}^{4} ), & t \in [T+\tau,2T] . \end{array} \right.\end{aligned}$$ where $\bar{u}^{i}$ are the stable equilibria and $\bar{u}^3 = \bar{u}^{4} = 0$ when $\beta < \gamma$. Considering this case, we can compute the RR using the single feeder version of Eq.
(\[rr2\]) in the long time limit $$\begin{aligned} J =& \frac{1}{2T} \int_0^{2T} u(t) (\alpha (t) - c) {{\rm d}}t \\ =& \frac{\bar{\alpha} - c}{2T} \int_0^\tau \bar{u}^1(1 - {{\rm e}}^{- \lambda^1 t} ) {{\rm d}}t + \frac{\bar{\alpha} - c}{2T} \int_0^{T-\tau} (\bar{u}^2 + {{\rm e}}^{- \lambda^2 t} (\bar{u}^{1} -\bar{u}^{2} )) {{\rm d}}t - \frac{c}{2T} \int_0^\tau \bar{u}^{2} {{\rm e}}^{- \lambda^3 t} {{\rm d}}t \\ =& \frac{\bar{\alpha} - c}{2T} \left( \bar{u}^1 \tau - \bar{u}^1 \frac{1- {{\rm e}}^{- \lambda^1 \tau} }{\lambda^1} \right)+ \frac{\bar{\alpha} - c}{2T} \left( \bar{u}^2 (T-\tau) + \frac{\bar{u}^{1} -\bar{u}^{2}}{\lambda^2} (1- {{\rm e}}^{- \lambda^2 (T-\tau)} ) \right) - \frac{c}{2T} \left( \frac{\bar{u}^{2}}{\lambda^3} (1- {{\rm e}}^{- \lambda^3 \tau} ) \right).\end{aligned}$$ In the long interval ($T \to \infty$) and short delay ($\tau \to 0$) limits (omitting the intermediate delay equilibria), we can simplify the expression as $$\begin{aligned} J &= \frac{\bar{u}}{2} \left[ (\bar{\alpha} - c) \left( 1 - \frac{1 - {{\rm e}}^{- \lambda_{\rm on} T}}{\lambda_{\rm on} T} \right)- c \frac{1 - {{\rm e}}^{- \lambda_{\rm off} T}}{\lambda_{\rm off} T} \right],\end{aligned}$$ where $\bar{u} = \bar{u}^2$, $\lambda_{\rm on} = \lambda^2 = \lambda_+^2$ and $\lambda_{\rm off} = \lambda^4 = \lambda_0$, as written in Eq. (\[linJone\]). For the specific case in which $\bar{\alpha} = 2$ and $c = 1$, we can write this more cleanly as $$\begin{aligned} J(\alpha(t), \beta, \gamma, \rho) = \frac{\bar{u}}{2} \left[ \left( 1 - \frac{1 - {{\rm e}}^{- \lambda_{\rm on} T}}{\lambda_{\rm on} T} \right)- \frac{1 - {{\rm e}}^{- \lambda_{\rm off} T}}{\lambda_{\rm off} T} \right].\end{aligned}$$ Clearly, increasing consensus ($\bar{u}$) and adaptivity ($\lambda_{\rm on/off}$) increases the swarm RR. As $\beta \to 0$, $\lambda_{\rm off} = - \gamma$, $\bar{u} = 2/[ 2 + \gamma ]$, with $\lambda_{\rm on} = - (2 + \gamma)$.
Increasing the rate of abandonment $\gamma$ decreases consensus $\bar{u}$ but will increase the adaptivity of the swarm, as both $\lambda_{\rm off} = - \gamma$ and $\lambda_{\rm on} = - (\gamma +2)$ increase in amplitude. Optimizing the RR then requires balancing these two effects. Identifying the $\gamma$ value that maximizes the RR can then be done by finding the maximum of $$\begin{aligned} J(\alpha(t), 0, \gamma, \rho) = \frac{1}{2 + \gamma} \left[ \left( 1 - \frac{1 - {{\rm e}}^{- (2 + \gamma) T}}{(2 + \gamma)T} \right)- \frac{1 - {{\rm e}}^{- \gamma T}}{\gamma T} \right],\end{aligned}$$ given by the $\gamma$ solving ${\partial}_{\gamma} J(\alpha(t), 0, \gamma, \rho) = 0$. This analysis can be extended to consider the solutions to the full nonlinear equations, but the general trends are the same. Increasing negative feedback will tend to limit consensus while making the swarm more adaptive to change. Dynamics of swarms foraging at two switching feeders ==================================================== Here we provide more details and analysis on our swarm model Eq. (\[2siteswarm\]) foraging between two feeders. As in the single feeder model, we can leverage equilibrium, stability, and linearization to better understand the impact of model tuning on the RRs of the foraging swarm. Forms of social inhibitions {#socialin} --------------------------- Here, we provide detailed descriptions of the dynamical models associated with each type of social inhibition used in the model of a swarm foraging two feeders, as generalized in Eq. (\[2siteswarm\]).
In the main text, we simply indicate the general form of social inhibition with the function ${{\mathcal}S}(u_A, u_B)$, but we provide the functional form for these interactions in the descriptions below.\ [*Direct Switching Model:*]{} A bee committed to a feeder inhibits bees with opposing opinions by causing them to switch the feeder to which they are committed: $$\begin{aligned} \dot{u_A} = & (1-u_A-u_B)(\alpha_A(t) + \beta u_A) - \gamma u_A - \rho(\alpha_B(t-\tau) - \alpha_A(t-\tau)) u_Au_B, \\ \dot{u_B} = & (1-u_A-u_B)(\alpha_B(t) + \beta u_B) - \gamma u_B - \rho(\alpha_A(t-\tau) - \alpha_B(t-\tau))u_Au_B, \end{aligned}$$ where $\tau$ (in minutes) indicates the time delay required for the strength of the direct switching signal (based on detected food quality) to update following a switch in the food quality. Notice that the social inhibition terms in either evolution equation ($u_A$ or $u_B$) will necessarily be of opposite sign since each is the negative of the other and $\alpha_A(t) \neq \alpha_B(t)$ for all $t>0$.\ [*Indiscriminate Stop Signal Model:*]{} A bee committed to a feeder indiscriminately inhibits bees committed to either feeder, affecting both bees committed to the same feeder and those committed to a different feeder, causing them to become uncommitted: $$\begin{aligned} \dot{u_A} = & (1-u_A-u_B)(\alpha_A(t) + \beta u_A) - \gamma u_A - \frac{1}{2}\rho(\alpha_A(t-\tau)u_A^2 + \alpha_B(t-\tau) u_Au_B), \\ \dot{u_B} = & (1-u_A-u_B)(\alpha_B(t) + \beta u_B) - \gamma u_B - \frac{1}{2}\rho(\alpha_B(t-\tau)u_B^2 + \alpha_A(t-\tau) u_Au_B),\end{aligned}$$ where $\tau$ (in minutes) is the time delay required for food quality switch detection as in the direct switching model.
This form of social inhibition will always lead to negative feedback to both populations as long as $\rho >0$.\ [*Self-Inhibition Model:*]{} A bee committed to one feeder inhibits bees committed to the same feeder, causing them to become uncommitted: $$\begin{aligned} \dot{u_A} = & (1-u_A-u_B)(\alpha_A(t) + \beta u_A) - \gamma u_A - \rho(\bar{\alpha} - \alpha_A(t-\tau)) u_A^2, \\ \dot{u_B} = & (1-u_A-u_B)(\alpha_B(t) + \beta u_B) - \gamma u_B - \rho(\bar{\alpha} - \alpha_B(t-\tau))u_B^2,\end{aligned}$$ where $\tau$ is the time delay. Self-inhibition is only active in the population of foragers for which the swarm detects there is less than the maximum supply of food available ($\alpha_{A,B}(t-\tau) < \bar{\alpha}$).\ [*Discriminate Stop Signal:*]{} A bee committed to a feeder inhibits bees committed to different feeders, causing them to become uncommitted: $$\begin{aligned} \dot{u_A} = & (1-u_A-u_B)(\alpha_A(t) + \beta u_A) - \gamma u_A - \rho \alpha_B(t-\tau) u_Au_B, \\ \dot{u_B} = & (1-u_A-u_B)(\alpha_B(t) + \beta u_B) - \gamma u_B - \rho \alpha_A(t-\tau) u_Au_B, \end{aligned}$$ where $\tau$ is the time delay. The strength of the stop signal varies with the detected quality of each feeder. Equilibria and linear stability {#twolinstab} ------------------------------- We determined linear approximations of periodic solutions to the full nonlinear model Eq. (\[2siteswarm\]) by studying the equilibria and linear stability properties of the full system. For any $t$, the dynamics of Eq. 
(\[2siteswarm\]) are governed by the piecewise constant values of the food quality functions, denoted $\bar{\alpha}_{A,B}^t = \alpha_{A,B}(t)$ and $\bar{\alpha}_{A,B}^{\tau} = \alpha_{A,B}(t - \tau)$ for the actual current and delay-observed values, so that: \[2dfp\] $$\begin{aligned} 0 &= (1-{\bar{u}}_A - {\bar{u}}_B)( {\bar{\alpha}}_A^t + \beta {\bar{u}}_A) - \gamma {\bar{u}}_A - {{\mathcal}S}({\bar{u}}_A, {\bar{u}}_B; \rho, {\bar{\alpha}}_A^{\tau}, {\bar{\alpha}}_B^{\tau}), \\ 0 &= (1-{\bar{u}}_A - {\bar{u}}_B)( {\bar{\alpha}}_B^t + \beta {\bar{u}}_B) - \gamma {\bar{u}}_B - {{\mathcal}S}({\bar{u}}_B, {\bar{u}}_A; \rho, {\bar{\alpha}}_B^{\tau}, {\bar{\alpha}}_A^{\tau}). \end{aligned}$$ where ${{\mathcal}S}(x,y; \rho, \alpha_x, \alpha_y)$ is the nonlinear function describing inhibitory social interactions, parameterized by the strength $\rho$ and the delayed quality observations ${\bar{\alpha}}_{A,B}^{\tau}$ (See Section \[socialin\] for exact forms). Since Eq. (\[2dfp\]) is autonomous on each time interval between switches, equilibria can be defined interval by interval [@bernardo08]. Eq. (\[2dfp\]) can be explicitly solved using the quartic formula for all models and time intervals, but the expressions for ${\bar{u}}_{A,B}$ are unwieldy, so we do not write them here.
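In practice, rather than manipulating the quartic root expressions, one can relax the delay-free system to its stable node numerically. A minimal sketch for the discriminate stop-signal model, with illustrative parameter values chosen in a monostable regime:

```python
def rhs(uA, uB, aA, aB, beta=1.0, gamma=1.0, rho=1.0):
    # Eq. (2dfp) with discriminate stop-signal inhibition S = rho * a_opp * uA * uB
    fA = (1 - uA - uB) * (aA + beta * uA) - gamma * uA - rho * aB * uA * uB
    fB = (1 - uA - uB) * (aB + beta * uB) - gamma * uB - rho * aA * uA * uB
    return fA, fB

aA, aB = 2.0, 1.0         # feeder qualities: alpha_A = alpha_bar, alpha_B = alpha_bar / 2
uA, uB, dt = 0.4, 0.3, 0.01
for _ in range(200_000):  # relax the delay-free dynamics to the stable node
    fA, fB = rhs(uA, uB, aA, aB)
    uA, uB = uA + dt * fA, uB + dt * fB

fA, fB = rhs(uA, uB, aA, aB)
assert abs(fA) < 1e-9 and abs(fB) < 1e-9  # an equilibrium of Eq. (2dfp)
assert uA > uB > 0 and uA + uB < 1        # majority commits to the better feeder
```

The same relaxation, seeded from different initial conditions, also locates the two stable nodes in the bistable regime discussed below.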
Linear stability was classified using the eigenvalues $\lambda_{\pm} = \frac{1}{2} \left[ {\rm Tr} ({\mathcal D}) \pm \sqrt{{\rm Tr}({\mathcal D})^2 - 4 {\rm det}({\mathcal D})} \right]$ of the Jacobian about fixed points $(u_A, u_B) = ({\bar{u}}_A, {\bar{u}}_B)$: $$\begin{aligned} \begin{aligned} \mathcal{D} = \begin{bmatrix} {- {\bar{\alpha}}+ \beta(1- \bar{u}_B - 2 \bar{u}_A) - \gamma - \partial_{u_A} {{\mathcal}S}(\bar{u}_A, \bar{u}_B)} & { - {\bar{\alpha}}- \beta \bar{u}_A - \partial_{u_B} {{\mathcal}S}(\bar{u}_A, \bar{u}_B)} \\ {- \frac{{\bar{\alpha}}}{2} - \beta \bar{u}_B - \partial_{u_A} {{\mathcal}S}(\bar{u}_B, \bar{u}_A)} & { - \frac{{\bar{\alpha}}}{2} + \beta(1 - \bar{u}_A - 2 \bar{u}_B) - \gamma - \partial_{u_B} {{\mathcal}S}(\bar{u}_B, \bar{u}_A)} \end{bmatrix}. \end{aligned} \end{aligned}$$ Specific cases are stable nodes (with two negative real eigenvalues, $\lambda_{\pm} <0$) and saddles (with one negative/one positive real eigenvalue, $\lambda_{\pm} \gtrless 0$) as illustrated in Fig. \[fig3\]. Direct switching, self inhibition and indiscriminate stop signaling yield monostable behavior – a phase space only containing a single stable node (Fig. \[fig3\]e) – and the majority of bees forage at the high yielding feeder (${\bar{u}}_A > {\bar{u}}_B$). The discriminate stop-signaling model can generate bistability for weak abandonment $\gamma$ and strong recruitment $\beta$ and stop-signaling $\rho$. In this case, the phase space is occupied by two stable nodes separated by a saddle point (Fig. \[fig3\]f). Consensus is then lower in some phases of the foraging cycle, since discriminate stop-signaling prevents switching between feeders. Optimizing reward rate over strategy sets {#optimtwo} ----------------------------------------- As in the one feeder case, we identified the combination $(\beta,\gamma,\rho) \in \{ 0.001,0.1,1,10\}^3$ (min$^{-1}$) yielding the highest RR from Eq. (\[rr2\]) in each environment $(\bar{\alpha},T)$.
For each parameterized form of social inhibition, we numerically found periodic solutions to Eq. (\[2siteswarm\]), taking $\alpha_A = \bar{\alpha}$ and $\alpha_B = \frac{{\bar{\alpha}}}{2}$ initially and flipping the feeder qualities every $T$ minutes. This was performed over a mesh of environmental parameters $\bar{\alpha} \in [0.5,20]$ (at $\Delta \bar{\alpha} = 0.1$ steps) and $T \in [1,200]$ (at $\Delta T = 1$ minute). See Fig. \[fig5\] for the direct switching model and Figs. \[fig8\] and \[fig9\] for the discriminate and indiscriminate stop signaling models, respectively. For the self inhibition model, the optimal strategy is low abandonment ($\gamma = 0.01 \text{ min}^{-1}$) with high recruitment and social inhibition ($\beta = \rho = 10 \text{ min}^{-1}$). The maximum RR is plotted for each of the four models in a given environmental condition (food quality ${\bar{\alpha}}$ and switching period $T$) in Fig. \[fig4\]. Linear approximation of the periodic solution {#linearapptwo} --------------------------------------------- To compute consensus and adaptivity, we derived a linear approximation to the periodic solution in the two feeder case. Feeder qualities started with $\alpha_A(t) = \bar{\alpha}$ and $\alpha_B(t) = \frac{{\bar{\alpha}}}{2}$ and switched every $T$ minutes.
Assuming $T$ large, the swarm will equilibrate between condition switching, yielding the following estimate of the $u_A(t)$ part of the periodic solution: $$\begin{aligned} u_A(t) = \left\{ \begin{array}{cc} \bar{u}^{1} + {{\rm e}}^{- \lambda^{1} t} (\bar{u}^{4} -\bar{u}^{1} ), & t \in [0,\tau], \\ \bar{u}^{2} + {{\rm e}}^{- \lambda^{2} (t-\tau)} (\bar{u}^{1} -\bar{u}^{2} ), & t \in [\tau,T], \\ \bar{u}^{3} + {{\rm e}}^{- \lambda^{3} (t-T)} (\bar{u}^{2} - \bar{u}^{3} ), & t \in [T,T+\tau], \\ \bar{u}^{4} + {{\rm e}}^{- \lambda^{4} (t-T-\tau)} (\bar{u}^{3}- \bar{u}^{4} ), & t \in [T+\tau,2T], \end{array} \right.\end{aligned}$$ where $\bar{u}^{i}$ are stable equilibria in each time interval and $\lambda^j$ are the least negative associated eigenvalues determining the decay rate to the fixed point. There is a similar expression for the opposing feeder population, $u_B(t) = u_A(t+T)$. This implies $\lambda^1 = \lambda^3$ and $\lambda^2 = \lambda^4$. In the long time limit, the RR Eq. (\[rr2\]) can be computed: $$\begin{aligned} J =& \frac{1}{T_f} \int_0^{T_f} \left[ u_A(t) \cdot (\alpha_A (t) - c) + u_B(t) \cdot (\alpha_B (t) - c) \right] {{\rm d}}t \\ =& \frac{\bar{\alpha} - c}{2T} \int_0^\tau (\bar{u}^1 + {{\rm e}}^{- \lambda^1 t} (\bar{u}^{4} -\bar{u}^{1} )) {{\rm d}}t + \frac{\bar{\alpha}/2-c}{2T} \int_0^\tau (\bar{u}^3 + {{\rm e}}^{- \lambda^1 t} (\bar{u}^{2} -\bar{u}^{3} )) {{\rm d}}t \\ & + \frac{{\bar{\alpha}}- c}{2T} \int_0^{T-\tau} (\bar{u}^2 + {{\rm e}}^{- \lambda^2 t} (\bar{u}^{1} -\bar{u}^{2} )) {{\rm d}}t + \frac{\bar{\alpha}/2-c}{2T} \int_0^{T - \tau} (\bar{u}^4 + {{\rm e}}^{- \lambda^2 t} (\bar{u}^{3} -\bar{u}^{4} )) {{\rm d}}t \\ & + \frac{\bar{\alpha}/2-c}{2T} \int_0^\tau ( \bar{u}^3 + {{\rm e}}^{- \lambda^1 t} (\bar{u}^{2} - \bar{u}^{3} )){{\rm d}}t + \frac{{\bar{\alpha}}- c}{2T} \int_0^{\tau}( \bar{u}^1 + {{\rm e}}^{- \lambda^1 t} (\bar{u}^{4} - \bar{u}^{1} )){{\rm d}}t \\ & + \frac{\bar{\alpha}/2-c}{2T} \int_0^{T-\tau} ( \bar{u}^4 + {{\rm e}}^{- 
\lambda^2 t} (\bar{u}^{3} - \bar{u}^{4} )){{\rm d}}t + \frac{{\bar{\alpha}}- c}{2T} \int_0^{T-\tau} ( \bar{u}^2 + {{\rm e}}^{- \lambda^2 t} (\bar{u}^{1} - \bar{u}^{2} )){{\rm d}}t \\ =& \frac{\bar{\alpha} - c}{T} \left[ \left( \bar{u}^1 \tau + \frac{\bar{u}^{4} -\bar{u}^{1}}{\lambda^1} (1- {{\rm e}}^{- \lambda^1 \tau} ) \right)+ \left( \bar{u}^2 (T-\tau) + \frac{\bar{u}^{1} -\bar{u}^{2}}{\lambda^2} (1- {{\rm e}}^{- \lambda^2 (T-\tau)} ) \right) \right] \\ & + \frac{\bar{\alpha}/2 - c}{T} \left[ \left( \bar{u}^3 \tau + \frac{\bar{u}^{2} -\bar{u}^{3}}{\lambda^3} (1- {{\rm e}}^{- \lambda^3 \tau} ) \right) + \left( \bar{u}^4 (T-\tau) + \frac{\bar{u}^{3} -\bar{u}^{4}}{\lambda^4} (1- {{\rm e}}^{- \lambda^4(T-\tau)} ) \right)\right] .\end{aligned}$$ Considering the limit of long time intervals $\lim_{T \to \infty}$ and short delays $\lim_{\tau \to 0}$ and the case in which $\bar{u}^2 + \bar{u}^4 \approx 1$ (no uncommitted bees in the long time limit) we further simplify the expression: $$\begin{aligned} J &= ({\bar{\alpha}}- c) \left[ \bar{u}^2 + (1 - 2 \bar{u}^2) \frac{1 - {{\rm e}}^{- \lambda^2 T}}{\lambda^2 T} \right] + ({\bar{\alpha}}/2 - c) \left[ 1-\bar{u}^2 + (2 \bar{u}^2-1) \frac{1 - {{\rm e}}^{- \lambda^4 T}}{\lambda^4 T} \right].\end{aligned}$$ For the specific case in which $c = \frac{{\bar{\alpha}}}{2}$, we remove the superscripts so $\bar{u} = \bar{u}^2$ and $\lambda = \lambda^2$: $$\begin{aligned} J = \frac{\bar{\alpha}}{2} \left( \bar{u} + (1- 2\bar{u})\frac{1 - {{\rm e}}^{- \lambda T}}{\lambda T} \right).\end{aligned}$$ The gradient of the RR along $\bar{u}$ and $\lambda$ can then be computed as: $$\begin{aligned} \partial_{\bar{u}}J &= \frac{{\bar{\alpha}}}{2} \left( 1 - 2 \frac{1 - {{\rm e}}^{- \lambda T}}{\lambda T} \right), \ \ \ \ \ \ \ \ \partial_{\lambda}J = (2 \bar{u} - 1)\frac{{\bar{\alpha}}{{\rm e}}^{-\lambda T}}{2 \lambda^2 T} ({{\rm e}}^{\lambda T} - \lambda T - 1),\end{aligned}$$ showing $J$ is increasing in $\bar{u}$ as long as $\lambda T > 
1.594$ and increasing in $\lambda$ as long as $\bar{u}>0.5$.

Supplemental figures and table
==============================

  parameter   description                          numerical range                              citation
  ----------- ------------------------------------ -------------------------------------------- ------------------------------------------
  $\alpha$    quality of food source               $[0.5,20]$ M (mol/l)                         [@seeley00; @granovskiy12]
  $\beta$     recruitment rate                     $\mathcal{O}(10^{-1} - 10^{1})$ min$^{-1}$   [@sumpter03], [@seeley12] supplement
  $\gamma$    abandonment rate                     $\mathcal{O}(10^{-2} - 10^{1})$ min$^{-1}$   [@seeley12] supplement
  $\rho$      rate of social inhibition            $\mathcal{O}(10^{-1} - 10^{1})$ min$^{-1}$   [@seeley12] supplement
  $T$         period of environment switch         $1-200$ min                                  [@granovskiy12]
  $\tau$      time delay for switch to be sensed   $0.1 \cdot T$ min                            –
  $c$         cost of foraging                     $\frac{\bar{\alpha}}{2}$                     –
  ----------- ------------------------------------ -------------------------------------------- ------------------------------------------

  : Model parameters for single feeder Eq. (\[singledyn\]) and two feeder Eq. (\[2siteswarm\]) swarm foraging models.[]{data-label="table1"}

![Values of the abandonment parameter ($\gamma$) that maximize the reward rate, Eq. (\[rr1\]), for a given food quality ($\alpha$) and switching period ($T$) in the single feeder model.
The swarm can maximize the reward $J$ by calibrating the level of abandonment with the switching rate and feeder quality, discounting faster as the environment changes more quickly.[]{data-label="fig7"}](opt_gamma.png){width=".4\linewidth"}

Matching abandonment rate to switching rate in a single dynamic feeder {#abandon}
----------------------------------------------------------------------

Considering the single feeder foraging swarm model without nonlinear negative feedback ($\rho = 0$, so delays $\tau$ are irrelevant), we can explicitly compute the RR $J$ as a function of the other parameters. We found that the best strategies do not utilize recruitment ($\beta = 0$), so the abandonment rate $\gamma$ is the only parameter that needs to be tuned to the environmental switching time $T$ and food quality $\bar{\alpha}$. Eq. (\[singledyn\]) is then linear, so the linear approximation of the periodic solution is exact: $$\begin{aligned} u(t) = \left\{ \begin{array}{cc} A + (u_0 - A) {{\rm e}}^{-({\bar{\alpha}}+ \gamma) t}, & t \in [0,T), \\ u_1 {{\rm e}}^{-\gamma (t-T)}, & t \in [T,2 T) \end{array} \right.\end{aligned}$$ where $u_0 =A (1 - {{\rm e}}^{- ({\bar{\alpha}}+ \gamma) T})/({{\rm e}}^{\gamma T} - {{\rm e}}^{- ({\bar{\alpha}}+ \gamma) T}) $ and $u_1 =A ({{\rm e}}^{\gamma T} - {{\rm e}}^{- {\bar{\alpha}}T})/({{\rm e}}^{\gamma T} - {{\rm e}}^{- ({\bar{\alpha}}+ \gamma) T}) $. As such, we can explicitly compute the RR Eq. (\[rr1\]), $$\begin{aligned} J = \frac{{\bar{\alpha}}- c}{2T} \left( AT + \frac{u_0 - A}{{\bar{\alpha}}+ \gamma} ( 1- {{\rm e}}^{- ({\bar{\alpha}}+ \gamma)T}) \right) - \frac{c}{2 T} \left( \frac{A}{\gamma } ( 1 - {{\rm e}}^{- \gamma T}) \right),\end{aligned}$$ determining the maximum with respect to the abandonment rate $\gamma$ by solving $\partial_{\gamma} J = 0$ (Fig. \[fig7\]). The optimal abandonment rate $\gamma$ increases with the food site quality and decreases with the switching time $T$ of the environment.
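This trend can be checked with a quick numerical sketch that evaluates the expressions for $u_0$ and $J$ above and maximizes over $\gamma$ by grid search. The equilibrium value $A = \bar{\alpha}/(\bar{\alpha}+\gamma)$ and all parameter values below are illustrative assumptions, not fits from the paper.

```python
import math

def reward_rate(gamma, alpha=5.0, c=2.5, T=20.0):
    # RR of the single-feeder linear model (beta = rho = 0), transcribed from
    # the expression above; A = alpha/(alpha+gamma) is an assumed equilibrium.
    A = alpha / (alpha + gamma)
    # u0 rewritten with only decaying exponentials to avoid overflow at large gamma*T
    u0 = A * (1 - math.exp(-(alpha + gamma) * T)) * math.exp(-gamma * T) \
         / (1 - math.exp(-(alpha + 2 * gamma) * T))
    gain = (alpha - c) / (2 * T) * (A * T + (u0 - A) / (alpha + gamma)
                                    * (1 - math.exp(-(alpha + gamma) * T)))
    cost = c / (2 * T) * (A / gamma) * (1 - math.exp(-gamma * T))
    return gain - cost

def optimal_gamma(T, alpha=5.0, c=2.5):
    # crude log-spaced grid search for argmax_gamma J over 0.01 ... 10 min^-1
    grid = [10 ** (k / 100) for k in range(-200, 101)]
    return max(grid, key=lambda g: reward_rate(g, alpha, c, T))
```

Running `optimal_gamma` for a fast ($T=2$) and a slow ($T=100$) environment reproduces the stated trend: the maximizing $\gamma$ is larger when the environment switches more quickly.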
Thus, the negative feedback process should adapt to the dynamics of the environment, and discounting can be more rapid when the evidence for feeder quality is stronger.

Foraging strategies with discriminate and indiscriminate stop signaling {#foragtune}
-----------------------------------------------------------------------

Similar to Fig. \[fig5\], we optimized interactions for the discriminate stop signaling and indiscriminate stop signaling models to yield the highest RR Eq. (\[rr2\]). In the discriminate stop signaling model, weak recruitment $\beta$ (Fig. \[fig8\]a), weak abandonment $\gamma$ (Fig. \[fig8\]b), and strong stop signaling (Fig. \[fig8\]c) yield the highest RRs for most environments (${\bar{\alpha}}$, $T$). In slow (large $T$) and high quality $\bar{\alpha}$ environments, abandonment $\gamma$ should be strong, and discriminate stop signaling $\rho$ can be made weak (Fig. \[fig8\]b,c). Recruitment should be weak in most environments (Fig. \[fig8\]a). There is no clear preferred interaction profile for maximizing RR across environments (${\bar{\alpha}}$, $T$) in the case of indiscriminate stop signaling (Fig. \[fig9\]). Interestingly, the indiscriminate stop signaling strength $\rho$ should be kept low for virtually all environments (Fig. \[fig9\]c), so it does not appear to improve foraging efficiency. Consensus is lower because this form of social inhibition acts non-selectively on all foraging bees. For the self inhibition model, to maximize foraging efficiency, abandonment should be made weak ($\gamma = 0.01 \text{ min}^{-1}$) while recruitment and social inhibition should be made strong ($\beta = \rho = 10 \text{ min}^{-1}$).

Accuracy of linear approximations of periodic solutions {#linac}
-------------------------------------------------------

Linear approximations of the periodic solutions to the single feeder Eq. (\[singledyn\]) and two feeder Eq.
(\[2siteswarm\]) match the evolution of the full models across a wide range of parameters and forms of social inhibition (see, for example, Fig. \[fig10\]a,b). If the system is not poised close to a bifurcation, the dynamics between switches decay roughly linearly to the stable equilibrium. However, in the discriminate stop-signaling model, the system can lie close to the saddle-node bifurcation beyond which the model exhibits bistability (Fig. \[fig10\]c). In this case, the ghost of the saddle-node slows the solution trajectory, a nonlinear effect which is not well characterized by a linear approximation [@strogatz18].

Computing adaptivity and consensus across models {#adconmod}
------------------------------------------------

Here we calculate consensus $\bar{u}$ and adaptivity $\lambda$ across a wider range of environments as the abandonment rate $\gamma$ (Fig. \[fig11\]) and social inhibition rate $\rho$ (Fig. \[fig12\]) are varied. The general trends observed in Fig. \[fig6\]b,c are preserved. For strong enough abandonment $\gamma$, adaptivity $\lambda$ increases as $\bar{u}$ decreases, and direct switching tends to balance this trade-off best (Fig. \[fig11\]). Indiscriminate stop-signaling presents a similar trade-off as the social inhibition strength $\rho$ is increased (Fig. \[fig12\]), while the other social inhibition mechanisms eventually show increases in both consensus $\bar{u}$ and adaptivity $\lambda$; again, direct switching tends to provide higher levels of both overall.

Stochastic effects in the finite system size {#stochsys}
--------------------------------------------

Honeybee swarms tend to be of modest size (in the 1,000s) [@seeley10], so it is reasonable to expect some impact of finite size effects on the dynamics of foraging. In general, we found that finite size effects induced fluctuations about the typical mean periodic switching solutions, but that they did not qualitatively alter the general behavior of the swarm (Fig. \[fig13\]).
The finite size model is governed by a master equation, determining the probability of all possible changes in committed and uncommitted populations. In the case of the single feeder model, the master equation for the probability $p(n,t)$ of finding $n$ bees committed to foraging at time $t$ is $$\begin{aligned} \dot{p}(n,t) = r_+(n-1) p(n-1,t) + r_-(n+1) p(n+1,t) - \left[ r_+(n) + r_-(n) \right] p(n,t), \label{stochone}\end{aligned}$$ for integer $n=0,1,2,...,N$ with boundary conditions $p(-1,t) = p(N+1,t) = 0$ and forward and backward transition rates $$\begin{aligned} r_+(n) = (N-n) ( \tilde{\alpha} (t) + \tilde{\beta} n ), \hspace{5mm} r_-(n) = \tilde{\gamma} n + \tilde{\rho}( \bar{\tilde{\alpha}} - \tilde{\alpha}(t - \tau)) n^2\end{aligned}$$ for system size (total bee number) $N$. To obtain the mean field Eq. (\[singledyn\]) as $N \to \infty$ [@seeley12; @vankampen92], one must define $\tilde{\alpha}(t) = \alpha (t) /N$, $\tilde{\beta} = \beta / N^2$, $\tilde{\gamma} = \gamma / N$, and $\tilde{\rho} = \rho/N^2$. Note, the scalings correspond to the power of the population count appearing in the interaction term, ensuring the transition terms remain bounded in the thermodynamic limit. We utilized the stochastic simulation algorithm by Gillespie [@gillespie77] to evolve the stochastic system for the statistic plotted in Fig. \[fig13\]a,b. We make two remarks about our findings. First, the swarm generally increases the fraction of committed foragers when food is present at the feeder and decreases when food is removed. Second, the amplitude of fluctuations in individual simulations decreases with system size, as typically expected [@vankampen92], as evidenced by the narrower standard deviations in the solution trajectories in the $N=1000$ versus the $N=100$ simulations. 
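For illustration, the birth-death process above can be evolved with Gillespie's stochastic simulation algorithm. The following is a minimal sketch under assumed parameter values, with $\beta = \rho = 0$ so that only the $\tilde{\alpha}$ and $\tilde{\gamma}$ parts of the transition rates act; switching of $\alpha(t)$ is handled only approximately at interval boundaries, and this is not the code used for Fig. \[fig13\].

```python
import math, random

def gillespie_single_feeder(N=100, T=20.0, t_end=40.0,
                            alpha=5.0, gamma=3.0, seed=1):
    # Birth-death simulation of the master equation for the number n of
    # committed foragers, with alpha~ = alpha/N and gamma~ = gamma/N as in
    # the text. Food is present for t in [0, T) and absent for t in [T, 2T).
    random.seed(seed)
    t, n, traj = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        a_t = alpha if (t % (2 * T)) < T else 0.0
        r_plus = (N - n) * (a_t / N)      # commitment (beta = 0 here)
        r_minus = (gamma / N) * n         # abandonment (rho = 0 here)
        total = r_plus + r_minus
        if total == 0.0:
            t += 0.1                      # nothing can happen; step time
            continue
        t += random.expovariate(total)    # exponential waiting time
        n += 1 if random.random() < r_plus / total else -1
        traj.append((t, n))
    return traj
```

With these toy parameters the committed fraction climbs toward its equilibrium while food is present and decays after it is removed, mirroring the periodic switching described in the text.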
In the case of the two feeder model, the master equation is more complicated as it must track the probability of transitions between uncommitted bees, bees committed to feeder A, and those committed to feeder B. The model can be written down for any of the four forms of social inhibition; we provide only the discriminate stop signaling model here, as the others can be written similarly. The probability $p(n_A, n_B, t)$ of finding $n_A$ bees committed to A and $n_B$ committed to B at time $t$ given system size $N$ is (dropping the argument $t$ for brevity): $$\begin{aligned} \dot{p}(n_A, n_B) =& r_{0A}(n_A-1,n_B)p(n_A-1,n_B) + r_{0B}(n_A, n_B-1)p(n_A, n_B-1) \label{mastertwo} \\ & + r_{A0}(n_A+1,n_B)p(n_A+1, n_B) + r_{B0}(n_A, n_B+1) p(n_A, n_B+1) \nonumber \\ & - \left[ r_{0A}(n_A, n_B) + r_{0B}(n_A, n_B) + r_{A0}(n_A,n_B) + r_{B0}(n_A, n_B) \right]p(n_A, n_B), \nonumber\end{aligned}$$ for $n_A, n_B = 0, 1, ..., N$ with the condition that $n_A + n_B \leq N$, boundary conditions $p(-1, n_B) = p(n_A, -1) = p(N+1, n_B) = p(n_A, N+1) = 0$, and transition rates $$\begin{aligned} r_{0A}(n_A, n_B) &= (N-n_A - n_B) ( \tilde{\alpha}_A(t) + \tilde{\beta} n_A), \hspace{6mm} r_{0B}(n_A, n_B) = (N-n_A - n_B) ( \tilde{\alpha}_B(t) + \tilde{\beta} n_B), \\ r_{A0}(n_A, n_B) &= \tilde{\gamma} n_A + \tilde{\rho} \alpha_B(t- \tau)n_A n_B, \hspace{15mm} r_{B0}(n_A, n_B) = \tilde{\gamma} n_B + \tilde{\rho} \alpha_A(t- \tau)n_A n_B.\end{aligned}$$ As in the single feeder model, periodic switching with the environment is apparent, and the amplitude of fluctuations decreases with system size (Fig. \[fig13\]c,d). A detailed study of the finite size population model would require a much more thorough treatment and statistical analysis. We expect that the effects of stochasticity will not considerably impact our general findings.
The only qualitative differences we would expect would be in the cases of unrealistically small systems (e.g., $N=10$) and bistable systems (like cases of the discriminate stop signaling model), where fluctuations could drive switching between multiple stable equilibria [@biancalani14].

[^1]: Worker bees perform this figure-eight dance after returning to the hive from foraging, indicating the direction and distance to water, high-quality flowers, or potential nest sites [@seeley10].

[^2]: Bees direct a high frequency body vibration at waggle dancers to make them stop when problems with nest or feeding sites are detected [@nieh10].

[^3]: We have associated units of min$^{-1}$ with interaction rates. Though $\alpha_{A,B}(t)$ are in fact food qualities (see Table \[table1\] in Appendix), we assume the commitment term also carries units of min$^{-1}$ via a unit rescaling, which we do not include in Eq. (\[2siteswarm\]) to keep it from becoming too cumbersome. We make a similar assumption for the single feeder model.
---
abstract: 'The black hole of the widely used ordinary 2d–dilaton model (DBH) deviates from the Schwarzschild black hole (SBH) of General Relativity in one important feature: whereas non-null extremals or geodesics show the expected incompleteness, this turns out [*not to be the case for the null extremals*]{}. After a simple analysis in Kruskal coordinates of this – apparently until now overlooked – property for singularities with power behavior, we discuss the global structure of a large family of generalized dilaton theories which contains not only the DBH and SBH but also other proposed dilaton theories as special cases. For large ranges of the parameters such theories are found to be free from this defect and exhibit global SBH behavior.'
---

TUW–95–24\
gr-qc/9602040\

[On the Completeness of the Black Hole\ Singularity in 2d Dilaton Theories]{}\ \ Erwin Schrödinger International Institute\ for Mathematical Physics\ Pasteurgasse 6/7, A-1090 Wien\ Austria\ \ Institut für Theoretische Physik\ Technische Universität Wien\ Wiedener Hauptstr. 8-10, A-1040 Wien\ Austria\ Vienna, February 1996

Introduction
============

Dilaton models in 1 + 1 dimensions have been studied extensively in their string theory inspired form [@wit91], as well as in generalized versions [@ban91]. Recently, models with torsion [@kat86], which can be motivated as gauge theories for the zweibein, have also been seen [@kat95] to be locally equivalent to generalized dilaton theories, although the global properties differ in a characteristic manner.
The prime motivation for investigating such models, and especially the ordinary dilaton black hole (DBH), has always been the hope of obtaining information on problems of the ’genuine’ Schwarzschild black hole (SBH) in $d=4$ General Relativity: the quantum creation of the SBH and its eventual evanescence through Hawking radiation, the correlated difficulty of information loss by the transformation of pure quantum states into mixed ones, black hole thermodynamics, etc. [@alv]. On the other hand, essential differences between the DBH and SBH have been known for a long time. We just quote the Hawking temperature ($T_H$) and the specific heat: for the DBH, $T_H$ depends only on the cosmological constant instead of on the mass parameter as in the SBH. The specific heat is positive for the DBH and negative for the SBH. A basis for any application of the DBH must be a comparison of its singular behavior with that of the SBH. However, careful studies of the singularity structure in such theories seem to be scarce. Apart from [@lemos] and our recent work [@kat95] we are not aware of such a comparison. It always seems to have been assumed that the physical features coincide at least qualitatively in all respects. During our recent work [@kat95] we noted that this is not the case: [*for the ordinary dilaton black hole of [@wit91] null extremals are complete at the singularity*]{}. Of course, non–null extremals are incomplete, so at least that property holds for the DBH; but, from a physical point of view, it seems a strange situation that massive test bodies fall into the singularity at finite proper time whereas an infinite value of the affine parameter of a null extremal (describing the influx e.g. of massless particles) is needed to arrive there. Thus, from the point of view of the genuine SBH, serious doubts may be raised against any effort to extract theoretical insight from the usual DBH.
In Section 2 we exhibit the main differences in the presumably most direct manner, namely by treating the DBH as well as the SBH as special cases of a general power behavior of the metric in Kruskal coordinates. We find that the DBH lies at a point [*just outside*]{} the end of the interval of BHs which are qualitatively equivalent to the SBH in the sense that [*both*]{} null and non–null extremals are incomplete. In order to pave the way for a more realistic modelling of the SBH we then (Section 3) consider a two parameter family of generalized dilaton theories which interpolates between the DBH and other models, several of which have already been suggested in the literature [@lemos; @lau; @mignemi; @fabri]. The Eddington–Finkelstein (EF) form of the line element, appearing naturally in 2d models when they are expressed as ’Poisson–Sigma models’ (PSM) [@very], is very helpful in this context. We indeed find large ranges of parameters for which possibly more satisfactory BH models in $d=2$ may be obtained.

Completeness at a Curvature Singularity
=======================================

Consider a metric expressed in Kruskal coordinates [*u,v*]{} $$\label{ds} (ds)^2=2f(uv)dudv \sim 2z^{-a}dudv$$ where we assume the metric to depend only on the product $uv$, with a simple leading power behavior of $f(uv)$ near $$\begin{aligned} z=1-uv \to 0. \nonumber\end{aligned}$$ Without loss of generality we consider the space-time where $f>0$, i.e. $uv<1$. The case $a=0$ corresponds to flat Minkowskian space-time and is not considered in the following. This metric covers many interesting cases. For the DBH we have $f=z^{-1}$, i.e. $a=1$ [@wit91]. Neglecting the angular dependence, the SBH metric also has this form, with $a= \frac{1}{2}$. This can be extracted easily from the formulas on p. 152 and 153 of ref. [@wald].
From the only nonvanishing components of the affine connection $$\label{umin} u^{-1}\Gamma_{11}^1=v^{-1}\Gamma_{00}^0=\frac{d}{d(uv)}\left(\ln f \right)=(\ln f)'$$ the curvature scalar becomes, with (\[ds\]), at $z \rightarrow 0$ $$\label{curvdoppel} R=\frac{2}{f} \left[(\ln f)' + uv(\ln f)'' \right] { \stackrel{z \to 0}{\longrightarrow} 2az^{a-2}}$$ Thus $ a = 2$ (de Sitter behavior) represents the border between singular ($ a <2$) and vanishing ($a >2$) curvature at this point. Both SBH and DBH belong to the singular range. With (\[umin\]) the geodesic equations are simply $$\label{udot} \ddot{u}+v(\ln f)' \dot{u}^2 =0 ,$$ $$\label{udot2} \ddot{v}+u(\ln f)' \dot{v}^2 =0 ,$$ where a dot denotes differentiation with respect to the canonical parameter $\tau.$ The analysis of these equations simplifies greatly by noting that the metric (\[ds\]) admits a Killing vector $k^{\mu}, \mu =0,1$, for arbitrary $f$, $$\label{kill2} k^{\mu}=\left( \begin{array}{r} -u \\ v \end{array} \right)$$ which implies the integral $$\label{gl7} \left( \dot{u}v-u\dot{v} \right)f=A_1.$$ In the relation between the canonical parameter and the line element $$\label{a2} \dot{u} \dot{v} f =A_2$$ the constant $A_2>0$, $A_2=0$ and $A_2<0$ describe timelike, null and spacelike extremals, respectively. The sign of $A_1$ is arbitrary. Expressing $\dot{u}$ and $\dot{v}$ from (\[gl7\]) and (\[a2\]) in $\dot{z}=-u\dot{v} -v\dot{u}$ yields near the singularity $$\label{zdot} \dot{z}=\pm \sqrt{A_1^2z^{2a}+4A_2z^a(1-z)}.$$ The dependence of $\frac{u}{v}$ on $z$ is determined by $$\label{ddz} \frac{d}{dz}\ln \left| \frac{u}{v} \right|=\pm(1-z)^{-1} \left(1+\frac{4A_2} {A_1^2}(1-z)z^{-a} \right)^{-\frac{1}{2}}.$$ This follows from (\[gl7\]), replacing the $\tau$-dependence by a $z$-dependence according to (\[zdot\]).
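Returning briefly to the curvature (\[curvdoppel\]): for $f=z^{-a}$ the limit $R \to 2az^{a-2}$ in fact holds exactly, not only asymptotically. A quick finite-difference check (a sketch; the sample values of $a$ and $z$ are arbitrary):

```python
import math

def R_kruskal(y, a, h=1e-5):
    # R = (2/f)[(ln f)' + uv (ln f)''] for f(uv) = (1 - uv)^(-a),
    # primes = d/d(uv); derivatives taken by central differences at y = uv
    lnf = lambda s: -a * math.log(1.0 - s)
    d1 = (lnf(y + h) - lnf(y - h)) / (2 * h)
    d2 = (lnf(y + h) - 2 * lnf(y) + lnf(y - h)) / h ** 2
    return 2 * (1.0 - y) ** a * (d1 + y * d2)

for a in (0.5, 1.0, 1.5):          # SBH, DBH, and an intermediate case
    z = 1e-3                       # close to the singularity z -> 0
    ratio = R_kruskal(1.0 - z, a) / (2 * a * z ** (a - 2))
    assert abs(ratio - 1.0) < 1e-2
```

The ratio stays at unity for all three cases, confirming $R = 2az^{a-2}$ for the pure power-law metric function.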
The simple analysis of equation (\[zdot\]) shows that for $0<a<2$ the curvature singularity can be reached at finite affine parameter $\tau$ only by timelike and null extremals, whereas for $a<0$ this holds for all types of extremals. The difference between the asymptotic behavior of null ($A_2=0$) and non-null ($A_2\ne0$) extremals is especially transparent from (\[zdot\]). For null extremals which cross the singularity ($A_1\ne0$) it takes the form $$\begin{aligned} \label{ecapag} \tau&\sim& z^{(1-a)},~~~~a\ne1, \\ \label{ecapap} \tau&\sim&\ln z,~~~~a=1.\end{aligned}$$ So null extremals are incomplete (finite value of the canonical parameter) at the singularity if and only if $a<1$. At the same time for timelike extremals and $a>0$ equation (\[zdot\]) yields $$\begin{aligned} \label{tsim1} \tau&\sim& z^{(1-a/2)},~~~~a\ne2, \\ \label{tsim2} \tau&\sim&\ln z,~~~~a=2.\end{aligned}$$ We see that timelike extremals are incomplete if $0<a<2$. If $a<0$ then the asymptotic behavior of timelike and spacelike extremals near the singularity coincides with that of null extremals, (\[ecapag\]), and they are always incomplete. Thus we have proved that non-null extremals are always incomplete at the curvature singularity, $a<2$, $a\ne0$. Null extremals are incomplete only for $a<1$. So the DBH lies precisely at the border, $a=1$, where null extremals are still complete at the singularity. This is a qualitative difference from the SBH, for which all types of extremals are incomplete. In order to be able to compare with the family of dilaton theories considered in Section 3 below, we also give the transformation of (\[ds\]) into the EF metric $$\label{ds2} (ds)^2= d \bar{v}(2d\bar{u}+l(\bar{u})d\bar{v})$$ which explicitly depends on the norm $l = k^\alpha k_\alpha$ of the Killing vector $\partial / \partial\bar v$.
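The contrast between (\[ecapag\])–(\[ecapap\]) and (\[tsim1\])–(\[tsim2\]) at the DBH point $a=1$ can also be seen by numerically accumulating the affine parameter from (\[zdot\]); the cutoff $z_{\min}$ and the constants $A_1$, $A_2$ below are arbitrary illustrative choices.

```python
import math

def tau_to_singularity(a, A1=1.0, A2=0.0, z0=0.5, z_min=1e-12, n=200000):
    # affine parameter accumulated from z0 down to z_min, with
    # zdot = sqrt(A1^2 z^(2a) + 4 A2 z^a (1 - z)); A2 = 0 gives a null extremal
    tau, lo, hi = 0.0, math.log(z_min), math.log(z0)
    dlz = (hi - lo) / n
    for k in range(n):                     # midpoint rule on a log-spaced grid
        z = math.exp(lo + (k + 0.5) * dlz)
        zdot = math.sqrt(A1 ** 2 * z ** (2 * a) + 4 * A2 * z ** a * (1 - z))
        tau += z * dlz / zdot              # dz = z d(ln z)
    return tau

# DBH (a = 1): the null parameter grows like ln(z0/z_min) and diverges as
# z_min -> 0 (complete), while the timelike one converges (incomplete)
tau_null = tau_to_singularity(1.0, A2=0.0)
tau_time = tau_to_singularity(1.0, A2=1.0)
```

Pushing $z_{\min}$ further toward zero leaves the timelike value essentially unchanged while the null value keeps growing logarithmically, exactly the DBH border behavior described above.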
Introducing $F^\prime(y) =f(y)$, the necessary diffeomorphism is ($\sigma = \pm 1$ in order to cover the whole original range of $v$) $$\begin{aligned} \label{u1} u=e^{-\bar v}h(\bar u) \\ v=\sigma e^{\bar v}\end{aligned}$$ with $h(\bar u)$ determined from $$\label{u2} \bar{u}=F(\sigma h)$$ so that in (\[ds2\]) $$\label{l1} l(\bar{u})=-2\sigma h(\bar u) f(\sigma h(\bar u)).$$ For the power behavior (\[ds\]) we obtain $$\label{l2} l(\bar{u}) \leadsto |\bar{u}|^{\frac{a}{a-1}}$$ to be taken at $\bar u \rightarrow 0_-$ for $a <1$ and at $\bar u \to +\infty$ for $a >1$. The DBH case $a = 1$ must be treated separately. The result $$\label{l3} l_{a=1} \rightarrow e^{\bar u},~~~ \bar{u} \rightarrow \infty,$$ as expected, agrees with the familiar DBH behavior [@wit91] [@alv]. Eq. (\[ds2\]) is particularly convenient to make contact with the PSM formulation which can be obtained for all covariant 2d theories [@alt]. They may be summarized in a first order action $$\label{lagr} L=\int X^+T^- +X^-T^+ +Xd\omega -e^- \wedge e^+V(X).$$ In our present case the torsion $$\label{tors} T^{\pm}=(d \pm \omega) \wedge e^{\pm},$$ which vanishes as implied by (\[lagr\]), is expressed in terms of light–cone (LC) components of the zweibein one form $e^a$ and of the spin connection one form ${\omega^a}_b = {\epsilon^a}_b \omega$, where $\epsilon^{ab}=-\epsilon^{ba}$, $\epsilon^{01}=1$ is the totally antisymmetric tensor. The ’potential’ $V$ is an arbitrary function of $X$ and determines the dynamics. It is simply related to the Killing norm $l$ in the EF gauge because (\[lagr\]) can be solved exactly for any integrable $V$ (cf. the first ref. in [@alt]) with the solutions (constant curvature is excluded) $$\begin{aligned} \label{zweib} e^+=X^+df, \\ \label{zweibb} e^-=\frac{dX}{X^+}+X^-df,\end{aligned}$$ where $f$ and $X$ are arbitrary functions of the coordinates such that $df \wedge dX \neq 0$ and $X^+$ is an arbitrary nonzero function.
A similar solution for $\omega$ will not be needed in the following. The line element immediately yields the EF form (\[ds2\]) with $\bar{u}=\frac{X}{2}$ and $\bar{v}=f$. The Killing norm $$\label{kill} l=2\left[ C-w \right]$$ follows from a conservation law $$\label{newc} C=X^+X^- +\int^X V(y)dy=X^+X^- +w$$ common to all 2d covariant theories [@kat86] [@alt] [@kat93], which is related to a global nonlinear symmetry [@kumwid]. The usual dilaton models are produced by the introduction of the dilaton field $\phi$ via $X = 2 \exp (-2\phi)$, together with a Weyl transformation $e^a=\exp(-\phi)\tilde{e}^a$; for $d\omega$ in (\[lagr\]) one has $$\label{eps} \epsilon^{\mu \nu} \partial_{\mu} \omega_{\nu} =-\frac{R \sqrt{-g}}{2},$$ with the components $\omega_\nu$ expressed in terms of the zweibein by the vanishing of the torsion (\[tors\]). The result $$\label{change} \sqrt{-g}R=\sqrt{-\tilde{g}}\tilde{R} + 2\partial_{\mu} (\sqrt{-\tilde{g}} {\tilde g}^{\mu\nu} \partial_{\nu} \phi)$$ will be used below and in the next section. From the preceding argument there is now a direct relation between the singularity in terms of Kruskal coordinates (\[ds\]), through (\[l2\]), and the singularity in the corresponding action (\[lagr\]): using (\[kill\]), $$\label{V} V \rightarrow |X|^{\frac{1}{a-1}},$$ with the singularity at $X \to \infty$ for $ 1 < a < 2$ and at $X \to 0$ for $a < 1$. The exponential behavior of $V$ for the DBH ($ a = 1$) may be read off from (\[l3\]). Let us consider the SBH in a little more detail. The starting point is the Schwarzschild solution in EF coordinates [@wald] $$\label{ef} ds^2=2dvdr + \left( 1-\frac{2M}{r} \right)dv^2 -r^2d {\Omega}^2,$$ whose $r-v$ part is of the type (\[ds2\]). Thus the radial variable may be identified with $\bar{u}$ in (\[ds2\]) and with $X$ in (\[V\]).
Indeed $a=\frac12$ yields the correct singularity behavior and the corresponding PSM action (\[lagr\]) with $$\label{v2} V=-\frac{M}{X^2}.$$ In the second order formulation (after eliminating the fields $X^{\pm}$ and $X$ using their equations of motion) the action reads $$\label{r23} L_{SBH}^{d=2}=\frac{3}{2} \left( \frac{M}{2} \right)^{\frac{1}{3}} R^{\frac{2}{3}} \sqrt{-g} .$$ Now the metric remains as the only dynamical variable. This action does not look very attractive because of the fractional power of curvature, but in the Euclidean formulation it is positive definite and thus may be useful for quantization. By construction, besides the flat Minkowskian solution this model has the true SBH solution. It should be stressed that the model (\[r23\]) reproduces the $r,v$-part of the SBH globally and not only near the singularity. The above analysis may be extended easily to include also powers of $\log z$ in (\[ds\]). For example $f=z^{-1} \ln z$ improves the DBH by making it null incomplete. In that case $V \to e^{\sqrt{2X}}$ for $X \to \infty$ diverges ’slightly’ less than for the DBH. For exponential behavior in Kruskal coordinates, $f \to e^{-uv}$ for $uv \to \infty$, $R$ diverges exponentially as well. In that case both null and non-null extremals are incomplete. Although satisfactory in this respect, the corresponding PSM potential $V \to \ln (-X) $ at $X \to 0_-$ does not seem particularly useful for applications in BH models. The ’pure’ PSM model for the SBH with potential (\[v2\]) is fraught with an important drawback: when matter is added, the conserved quantity $C$ in (\[newc\]) simply generalizes to a similar conserved one with additional matter contributions (cf. the second ref. [@kumwid]). Thus, even before a BH is formed by the influx of matter, an ’eternal’ singularity, as given e.g. by (\[v2\]) for the SBH, is present whose mass $M$ basically cannot be modified by the additional matter.
A general method to produce at the same time a singularity–free ground state with, say, $C = 0$ is provided by a Weyl transformation of the original metric. It simply generalizes what is really behind the well-known construction of the DBH theory. Consider the transformation $$\label{91} \tilde{g}_{\mu \nu} =\frac{g_{\mu \nu}}{w(X)}$$ in (\[ds2\]) with (\[kill\]) together with a transformation of $X$ $$\label{92} \frac{d \tilde{X}}{dX} =w(X(\tilde{X})).$$ This reproduces the metric $\tilde{g}_{\mu \nu}$ in EF form $$\label{93} (ds)^2=2df \left( d\tilde{X} +\left(\frac{C}{w}-1 \right)df \right)$$ with a flat ground–state $C = 0$. Integrating out $X^+$ and $ X^-$ in (\[lagr\]), and using the identity (\[change\]) with $\phi = \frac{1}{2} \ln w$ one arrives at a generalized dilaton theory $$\label{95} \it{L}=\sqrt{-\tilde{g}} \left( \frac{X}{2}R+\frac{Vw}{2}\tilde{g}^{\mu \nu} \partial_{\mu} \tilde{X} \partial_{\nu} \tilde{X} -Vw \right)$$ where $X$ is to be re-expressed by $\tilde X$ through the integral of (\[92\]). It should be noted that the (minimal) coupling to matter is covariant under this redefinition of fields. Clearly (\[95\]) is the most general action in $d=2$ where the flat ground state corresponds to $C=0$. $V(X)$ may determine an arbitrarily complicated singularity structure. The DBH is the special case $V=\lambda^2=const.$ Then $\tilde{X}$ is easily seen to be proportional to the dilaton field. The SBH results from the choice $V=X^{-1/2}.$ Using (\[92\]) and comparing (\[93\]) with (\[ef\]) in that case with the interaction constant in $w$ fixed by $w=\frac{\tilde{X}}{2}$, the conserved quantity $C$ is identified with the mass $M$ of the BH and (\[95\]) turns into the action of spherically reduced 4D general relativity [@lau]. Unfortunately such a theory cannot be solved exactly if coupling to matter is introduced. Therefore, in the next section a large class of models is studied which contain the SBH as a special case. 
Dilaton Models Containing Schwarzschild-like Black Holes
========================================================

As discussed already in the last section, any PSM model is locally equivalent to a generalized dilaton model [@kat95]. In the present section we consider all global solutions of the action $$\label{ldil} L=\int d^2x \sqrt{-g}e^{-2\phi}(R+4a(\nabla\phi)^2 +Be^{2(1-a-b)\phi})$$ and compare them with the Schwarzschild black hole. This form of the Lagrangian covers e.g. the CGHS model [@wit91] for $a=1$, $b=0$, spherically reduced gravity [@lau] for $a=\frac{1}{2}$, $b=-\frac{1}{2}$, and the Jackiw-Teitelboim model [@jackiw] for $a=0$, $b=1$. Lemos and Sa [@lemos] give the global solutions for $b=1-a$ and all values of $a$; Mignemi [@mignemi] considers $a=1$ and all values of $b$. The models of [@fabri] correspond to $b = 0$, $a \leq 1$. The different models can be arranged in an $a$ vs. $b$ diagram (Fig.1). The Lagrangian (\[ldil\]) is obtained from (\[lagr\]) by the “generalized dilatonization” explained in the last section, with $\phi$ replaced by $a\phi$ in (\[change\]), $$\label{trans} e^d_\mu =e^{-a\phi} \tilde{e}^d_\mu ~~~~\Leftrightarrow ~~~~ g_{\mu\nu}= e^{-2a\phi} \tilde{g}_{\mu\nu},$$ where $a$ is an arbitrary constant. We also replace $X$ by the usual dilaton field $$\label{x} X=2e^{-2\phi}$$ and restrict ourselves to a power behavior of $V$, $$\label{v} V(X)=B \left( \frac{X}{2} \right)^b,$$ depending on the parameters $b$ and $B$ [@mignemi2].
Using the general solution (\[zweib\]),(\[zweibb\]) and defining coordinates $v=-4f$, $u=\phi$ immediately yields the line element corresponding to (\[ldil\]) $$\label{metric} (d \tilde{s})^2= g(\phi) \left( 2dvd\phi+ \tilde{l}(\phi) dv^2 \right),$$ with $$\begin{aligned} \label{Gl:34} b \neq -1: & \tilde{l}(\phi)=\frac{e^{2\phi}}{8} \left(C- \frac{2B}{b+1} e^{-2(b+1)\phi} \right), \\ \label{Gl:34b} b =-1: & \tilde{l}(\phi)=\frac{e^{2\phi}}{8} \left( \tilde{C}+4B\phi \right),~~~\tilde{C}=C-2B \ln 2 \end{aligned}$$ $$g(\phi)= e^{-2(1-a)\phi}$$ where $C$ is the arbitrary constant defined in (\[newc\]). Calculations can be performed in this form or by changing to a new variable $$\begin{aligned} \label{Gl:34c} a\neq1: &u=\frac{e^{-2(1-a)\phi}}{2(a-1)}, \\ a=1: &u=\phi,\end{aligned}$$ to obtain the EF form (\[ds2\]) of the metric with $$\begin{aligned} \label{newl} b\neq -1: & a\neq 1: & l(u)=B_1|u|^{\frac{a}{a-1}} -B_2|u|^{1-\frac{b}{a-1}}, \\ & a=1: & l(u)=\frac18 e^{2u}\left(C-\frac{2B}{b+1}e^{-2(b+1)u} \right), \\ b=-1: & a \neq 1: & l(u)=\frac18|2(a-1)u|^\frac a{a-1} \left(C+\frac{2B}{a-1}\ln|2(a-1)u|\right), \\ & a=1: & l(u)=\frac{e^{2u}}{8} \left( \tilde{C} +4Bu \right),\end{aligned}$$ where the constants are given by $$\label{a1} B_1=\frac{C}{8} (2|a-1|)^{\frac{a}{a-1}}~~~~B_2 = \frac{B}{4(b+1)}(2|a-1|)^ {1-\frac{b}{a-1}}.$$ The range of $u$ for $a=1$ is $[-\infty, +\infty]$ whereas from (\[Gl:34c\]) one has $$2(a-1)u>0,$$ and for $a>1$ the range is reduced to $[0,+\infty]$ and for $a<1$ it is $[-\infty,0]$. 
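As a cross-check of (\[newl\]), the exponents of the two terms reproduce familiar special cases; for spherically reduced gravity, in particular, the Killing norm takes the Schwarzschild shape. A trivial sketch (the helper function is ours, not from the paper):

```python
def ef_exponents(a, b):
    # exponents of the two terms of l(u) in Eq. (newl), valid for a != 1, b != -1:
    # l(u) = B1 |u|^(a/(a-1)) - B2 |u|^(1 - b/(a-1))
    return a / (a - 1), 1 - b / (a - 1)

# spherically reduced gravity (a = 1/2, b = -1/2):
# l(u) = B1 |u|^(-1) - B2, i.e. the Schwarzschild shape const/r + const
assert ef_exponents(0.5, -0.5) == (-1.0, 0.0)
# (the CGHS/DBH point a = 1 is excluded; there the e^(2u) branch applies)
```

The same function applied along the Lemos–Sa line $b=1-a$ or to Mignemi's $a=1$ family (via the separate branch) recovers the corresponding known norms.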
Now the scalar curvature in the EF coordinates $$R=l_{,uu}$$ can be easily calculated $$\begin{aligned} \label{curv} b\ne-1, & a\ne1: & R=B'_1u^{\frac{2-a}{a-1}}+B'_2u^{\frac{a+b-1}{1-a}} \\ \label{ecnboa} b\ne-1, & a=1: & R=\frac C2e^{2u}-\frac{Bb^2}{b+1}e^{-2bu} \\ \label{ecobna} b=-1, & a\ne1: & R=\frac a2|2(a-1)u|^{\frac{2-a}{a-1}}\left(C+\frac{2B}a+2B +\frac{2B}{a-1}\ln|2(a-1)u|\right) \\ \label{ecoboa} b=-1, & a=1: & R=\frac12e^{2u}(\tilde{C}+4B+4Bu).\end{aligned}$$ with $B'_1$ and $B'_2$ easily determined from (\[a1\]). In the following we will continue our analysis in terms of $\phi$, which can always be transformed to $u$ by means of (\[Gl:34c\]). We see that depending on the values of $a$ and $b$ the scalar curvature may be singular in the limits $u\rightarrow0,\pm\infty$. In Fig.2 the dashed regions show singularities of the scalar curvature. In the limit $\phi\rightarrow\infty$ the singular regions are different for zero and nonzero values of $C$, $$\begin{aligned} \label{ecuscn} \phi &\rightarrow& +\infty,~~C\ne0,~~|R|\rightarrow\infty,~~ (a<2)\cup(a+b<1)\cup(a=2,b=-1), \\ \label{ecuscz} \phi &\rightarrow& +\infty,~~C=0,~~|R|\rightarrow\infty,~~ (a<1)\cup(a+b<1)\cup(a=2,b=-1).\end{aligned}$$ The singularity of $R$ may be positive or negative depending on the values of the constants $a$, $b$, $B$, and $C$, as can easily be seen from (\[curv\])–(\[ecoboa\]). In the limit $\phi\rightarrow-\infty$ the scalar curvature is singular when $$\label{ecuspn} \phi \rightarrow -\infty,~~\forall C,~~|R|\rightarrow\infty,~~ (a+b>1)\cup(a>2)\cup(a=2,b=-1).$$ Now one has to analyse the behavior and completeness of extremals corresponding to the line element (\[metric\]). For the first term in (\[newl\]) the behavior at the singularity for $|u|\to0$, $a<1$ coincides with (\[l2\]) if $u=\bar{u}$ and if $a$ denotes the same parameter as in section 2.
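The exponents printed in (\[curv\]) follow from differentiating the power-law branch of (\[newl\]) twice; this can be confirmed numerically with a finite-difference sketch (the sample values $B_1=B_2=1$ and $u=0.3$ are arbitrary):

```python
def l_power(u, a, b, B1=1.0, B2=1.0):
    # first branch of (newl): l(u) = B1 |u|^(a/(a-1)) - B2 |u|^(1 - b/(a-1))
    return B1 * abs(u) ** (a / (a - 1)) - B2 * abs(u) ** (1 - b / (a - 1))

def R_ef(u, a, b, h=1e-4):
    # curvature in EF coordinates, R = l_{,uu}, by central differences
    return (l_power(u + h, a, b) - 2 * l_power(u, a, b)
            + l_power(u - h, a, b)) / h ** 2

# compare with the exponents (2-a)/(a-1) and (a+b-1)/(1-a) from (curv):
# d^2/du^2 u^p = p (p-1) u^(p-2) for each of the two terms
for a, b in ((0.5, -0.5), (1.5, 0.25)):
    u = 0.3
    p1, p2 = a / (a - 1), 1 - b / (a - 1)
    R_exact = p1 * (p1 - 1) * u ** (p1 - 2) - p2 * (p2 - 1) * u ** (p2 - 2)
    assert abs(R_ef(u, a, b) / R_exact - 1.0) < 1e-4
```

Both sample points, one on the spherically-reduced-gravity line and one generic, match the closed-form second derivative to finite-difference accuracy.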
However, for a general discussion of (\[newl\]) the behavior of both terms and the zeros of $l$ (horizons) are better treated by the general procedure outlined in [@alt], of which a summary was also given in [@kat95]. The equations for extremals for the EF metric read $(l'=dl/du)$ $$\begin{aligned} \label{37} \ddot u+l'\dot u\dot v+\frac12ll'\dot v^2&=&0, \\ \label{38} \ddot v-\frac12l'\dot v^2&=&0.\end{aligned}$$ Thus the null extremals $v_1=const$ are always complete because $\ddot u =0$ follows from (\[37\]) and (\[38\]). The second null direction $$\label{second} \frac{dv_2}{du}=-\frac{2}{l}$$ inserted into (\[37\]) also yields $\ddot u=0$ or equivalently $$d\tau \sim e^{2(a-1)\phi}d\phi.$$ Thus the completeness of null extremals depends only on $a$: $$\begin{aligned} \label{econep} \phi &\rightarrow& +\infty,~~a\ge1,~~{\rm complete}, \\ \label{econem} \phi &\rightarrow& -\infty,~~a\le1,~~{\rm complete},\end{aligned}$$ as shown in Fig.3. The physically unreasonable behavior of null extremals in this case, encountered already in section 2, has thus been reconfirmed here as well. The affine parameter of non-null extremals is obtained from an integral similar to (\[gl7\]) $$\label{Gl:53} \frac{du}{d\tau} + l\frac{dv}{d\tau} = \sqrt{A} = const.$$ Identifying the affine parameter $d\tau$ with the $ds$ in (\[ds2\]), these extremals are found to obey $$\label{Gl:54} \frac{dv}{du} = -\frac1l \left[ 1 \mp ( 1 - l /A)^{-1/2}\right]$$ by simply solving a quadratic equation. In addition, from (\[Gl:53\]) and (\[Gl:54\]) the affine parameter is determined by $$\label{Gl:55} \tau(u)= \int^{u} \frac{dy}{\sqrt{A-l(y)}} ~~~ .$$ For $A > 0$, resp. $A < 0$, the parameter $\tau$ is a timelike, resp. spacelike quantity. As for the null extremals, our results are given in terms of $\phi$.
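The completeness criterion (\[econep\])–(\[econem\]) amounts to the divergence of $\int e^{2(a-1)\phi}\,d\phi$ towards the respective boundary; a quick check at the sample values $a=3/2$ and $a=1/2$ (Python/SymPy sketch, with the helper name and sample values being ours):

```python
# Sketch: null extremals are complete iff the affine parameter diverges.
import sympy as sp

phi = sp.symbols('phi', real=True)

def affine_length(a_val, to_plus_infinity):
    # tau ~ int e^{2(a-1)phi} dphi along the second null direction
    integrand = sp.exp(2*(a_val - 1)*phi)
    limits = (phi, 0, sp.oo) if to_plus_infinity else (phi, -sp.oo, 0)
    return sp.integrate(integrand, limits)

print(affine_length(sp.Rational(3, 2), True))   # oo: complete at phi -> +oo for a > 1
print(affine_length(sp.Rational(1, 2), True))   # 1:  incomplete for a < 1
print(affine_length(sp.Rational(1, 2), False))  # oo: complete at phi -> -oo for a < 1
print(affine_length(sp.Rational(3, 2), False))  # 1:  incomplete for a > 1
```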
Non-null extremals turn out to be complete for $$\begin{aligned} \label{econnm} \phi \rightarrow-\infty,&~~(a\le1) \cap (a+b\le1), \\ \label{ecuscnn} \phi \rightarrow+\infty,&~~C\ne0, (a \geq2 )\cap(a+b\geq1), \\ \label{ecusczz} \phi \rightarrow+\infty,&~~C=0, (a\geq1)\cap(a+b\geq1).\end{aligned}$$ as depicted in Fig.3b,c. One observes that only for $\phi \to +\infty$ does the region of incompleteness coincide with the one for $|R| \to \infty$, except for the point $a=2,b=-1$ where non-null extremals are complete at the curvature singularity. As depicted in Fig.2 we find for some distinct values of $a$ and $b$ de Sitter space or flat space-time solutions: $$\begin{aligned} \label{desit} \text{de Sitter}: & a=2,~b=0,1~~~a=0,~b=1 \\ \text{flat}: & a=0,~b=0\end{aligned}$$ Furthermore we get the special regions: $$\begin{aligned} \label{newregion} R=C:&a=2 ~~~\cap ~~~B=0 \\ R=0:&B=0 ~~~\cap ~~~C=0 \\ R=-2\frac{Bb^2}{b+1}:&(C=0)\cap (a+b=1) \cap (B\neq0) \cap (b\neq 0)\end{aligned}$$ Now elegant methods [@alt] exist to find the global structure of 2D models. Equation (\[second\]) determines the shape for a certain patch, e.g. the typical one for a black hole drawn in Fig.4a. The fully extended global solution across the dotted lines is obtained by gluing together patches of this type, identifying lines with $u=const.$ and exchanging the two null directions [@alt] by an (always existing) diffeomorphism in the overlapping square or triangle. This simply amounts to using reflected or rotated building blocks of the same structure, so as to arrive, e.g. for the BH, at its characteristic Penrose diagram Fig.4b. For definiteness we consider the case $B\ge0$. The case $B<0$ is obtained by rotating the diagrams for $B>0$ by 90 degrees, because formally both cases are related to each other by exchanging the space and time coordinates. Up to a rotation there are six types of Penrose diagrams D1,…,D6, as shown in Fig.5.
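The case analysis assigning one of the six diagram types to each pair $(C,b)$ for $B>0$, given in the table below, can be written out mechanically; the following sketch (hypothetical Python helper mirroring that table entry by entry, with the function name being ours) may be useful when checking individual parameter choices:

```python
# Hypothetical helper mirroring the table of diagram types for B > 0.
def diagram_type(C, b):
    if C == 0:
        if b > 0:
            return 'D1'
        if b == 0:
            return 'D6'
        if b == -1:
            return 'D5'
        return 'D2'   # b < -1 or -1 < b < 0
    if C < 0:
        if b > 0:
            return 'D3'
        if -1 < b <= 0:
            return 'D2'
        return 'D5'   # b <= -1
    # C > 0
    if b > 0:
        return 'D4'
    if -1 <= b <= 0:
        return 'D5'
    return 'D2'       # b < -1

print(diagram_type(0, -1))   # D5: the C=0, b=-1 entry
print(diagram_type(1, 0))    # D5: C>0, -1 <= b <= 0
```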
The boundaries of the diagrams correspond to infinite values of the dilaton field $\phi=\pm\infty$ as indicated. There the scalar curvature may be singular or nonsingular. For the moment we concentrate on the shape of the diagrams which is defined entirely by the function $\tilde l$ (\[Gl:34\]) or (\[Gl:34b\]). Therefore it does not depend on the value of $a$. We summarize the types of Penrose diagrams corresponding to different values of the constants in the following table: $$\label{tablbg} \begin{array}{l} D1:~~~C=0,~b>0; \\ D2:~~~C<0,~-1<b\le0;~~~C=0,~(b<-1)\cup(-1<b<0);~~~C>0,~b<-1; \\ D3:~~~C<0,~b>0; \\ D4:~~~C>0,~b>0; \\ D5:~~~C<0,~b\le-1;~~~C=0,~b=-1;~~~C>0,~-1\le b\le0; \\ D6:~~~C=0,~b=0. \end{array}$$ For $B=0$, $C\ne0$ a global solution is of the type D2, whereas for $B=C=0$ one has a flat solution. Using (\[curv\])–(\[ecuspn\]), or equivalently Fig.2 it is straightforward to verify where boundaries are singular, asymptotically flat or correspond to constant curvature. Completeness at the boundaries follows immediately from (\[econep\])–(\[econnm\]). In order to admit a Schwarzschild like global solution, the Penrose diagram must be of the type D5 with incomplete spacelike singular boundary $\phi\rightarrow+\infty$ and complete asymptotically flat boundary $\phi\rightarrow-\infty$. The above analysis proves that this happens if and only if the parameters satisfy the following conditions $$\label{tabscs} \begin{array}{ll} B>0,~~a<1, & C<0,~~~b\le-1, \\ & C=0,~~~b=-1, \\ & C>0,~~~-1\le b\le0. \end{array}$$ In Fig.6 we have tried to summarize all possible cases with one horizon. The range of (\[tabscs\]) corresponds to the lower left corner $a<1,~b\leq 0.$ Known soluble models with couplings to scalar matter [@wit91; @ban91; @lemos; @fabri] always start from an ’undilatonized’ (PSM–type) action with $V = \lambda^2$, i.e. from a flat theory in which scalar fields are introduced. 
The subsequent dilatonization moves the singularity to the factor of $C$, as explained in the previous section, and $B_1 = C = 0$ should correspond to a Minkowskian space-time. We want to stress again that in the presence of matter it is this factor (the ’mass’ parameter) which is changed by the influx of matter, starting say with a ground state $C=0$. The only possibility to start with a Minkowskian space-time before matter flows in $(C=0 \to C\neq 0)$ is the case when the $B_2$-term of (\[newl\]) loses its $u$ dependence completely, which implies that $b=a-1$. From Fig.6 we see immediately that the CGHS model as well as spherically reduced 4D gravity lie on that line. If one is interested in reproducing an SBH-like model, this means that one has to start out with an action of the restricted form $$\label{future} L=\int d^2x \sqrt{-g}e^{-2\phi}(R+4a(\nabla\phi)^2 +Be^{4(1-a)\phi}),~~~a<1$$ Similarly we see that for $b=0$ one inevitably obtains Rindler space-time [@fabri] except for the CGHS model $(a=1)$. The soluble models with matter couplings (corresponding to $b = 0$ [@wit91; @fabri]) are compatible with $b=a-1$ only at $a=1$, i.e. for the DBH. Although the curvature vanishes for the asymptotic Rindler-like space-times in these models, this could be interpreted as observing a black hole formation and its Hawking radiation from an accelerating frame — which does not seem very satisfactory. It may be added that the problem of a mass-independent Hawking temperature $\kappa$ also appears for $b=0$ models. We take its geometric definition [@wald] from the norm of the Killing vector $k^2=l$ at the horizon, determined by $l(u_h)=0$, $$\label{hawktemp1} \partial_{\mu} k^2 \arrowvert_{u_h} = 2 \kappa k_{\mu} \arrowvert_{u_h}$$ i.e.
$$\label{hawktemp2} 2 \kappa =l'(u_h)$$ Using (\[newl\]) one easily finds the Hawking temperature $$\begin{aligned} \label{hawktemp3} b\ne-1:~~~~ 2\kappa&=&\mp\frac{b+1}{1-a}|B_1|^{\frac b{1+b}}|B_2|^{\frac1{1+b}}, \\ \label{hawktemp4} b=-1:~~~~ 2\kappa&=&\frac B2e^{-\frac C{2B}},\end{aligned}$$ where the minus and plus signs in eq.(\[hawktemp3\]) correspond to the cases $B_1>0$, $B_2>0$ and $B_1<0$, $B_2<0$ respectively. In the case $B_1B_2<0$ the black hole solution is absent, as follows from (\[tabscs\]). We see that the models with $b=0$ share with the DBH the unpleasant feature that $\kappa$ is then independent of the conserved quantity $C$, interpreted as the mass, since $B_1$ disappears. Another interesting consequence of eqs.(\[hawktemp3\]), (\[hawktemp4\]) is the restriction on the coupling constants following from the positivity of the temperature: only the first two cases in (\[tabscs\]) survive. Summary and Outlook =================== Because systematic investigations of the global properties of BH models are still relatively rare, an important defect of the ordinary dilaton black hole relative to the genuine Schwarzschild one seems to have been overlooked until now: its singularity is only incomplete with respect to [*non–null*]{} extremals. The special role of the DBH has been put into perspective by first embedding it into a family of singularities, to be analyzed according to a power behavior in Kruskal coordinates near the singularity. We also found a large class of models which comprises well known theories and which possesses BHs for a certain range of the parameters. Demanding furthermore that for $C=0$ (no matter) we have Minkowski space reduces the allowed region to a straight line $b=a-1$ (cf. Fig.6). Not surprisingly, spherically reduced 4D gravity corresponds to one point on that line $(a=\frac{1}{2}),$ as well as the DBH $(a=1)$.
For exactly solvable models with matter within this class of theories, the asymptotic background is inevitably of Rindler type and thus may lead to problems of interpretation. In our opinion a soluble model (after interaction with scalar matter is added) with qualitative features coinciding completely with the SBH still waits to be discovered. Acknowledgement {#acknowledgement .unnumbered} =============== We are grateful for discussions with H. Balasin, T. Klösch, S. Lau and M. Nikbakht. This work has been supported by Fonds zur Förderung der wissenschaftlichen Forschung (FWF) Project No. P 10221–PHY. One of the authors (M.K.) thanks The International Science Foundation, Grant NFR000, and the Russian Fund of Fundamental Investigations, Grant RFFI–93–011–140, for financial support. [99]{} E. Witten, Phys. Rev. D [**44**]{} (1991) 314; C. G. Callan, S. B. Giddings, J. A. Harvey, and A. Strominger, Phys. Rev. D [**45**]{} (1992) 1005; V. P. Frolov, Phys. Rev. D [**46**]{} (1992) 5383; J. G. Russo and A. A. Tseytlin, Nucl. Phys. [**B382**]{} (1992) 259; J. Russo, L. Susskind, and L. Thorlacius, Phys. Lett. [**B292**]{} (1992) 13; T. Banks, A. Dabholkar, M. Douglas, and M. O’Loughlin, Phys. Rev. D [**45**]{} (1992) 3607; S. P. de Alwis, Phys. Lett. [**B289**]{} (1992) 278. R.B. Mann, A. Shiekh and L. Tarasov, Nucl. Phys. [**B341**]{} (1990) 134; T. Banks and M. O’Loughlin, Nucl. Phys. [**B362**]{} (1991) 649; H.J. Schmidt, J. Math. Phys. [**32**]{} (1991) 1562; S.D. Odintsov and I.L. Shapiro, Phys. Lett. [**B263**]{} (1991) 183 and Mod. Phys. Lett. [**A7**]{} (1992) 437; J.G. Russo and A.A. Tseytlin, Nucl. Phys. [**B382**]{} (1992) 259; Volovich, Mod. Phys. Lett. A (1992) 1827; R.B. Mann, Phys. Rev. [**D47**]{} (1993) 4438; D. Louis–Martinez, J. Gegenberg and G. Kunstatter, Phys. Lett. [**B321**]{} (1994) 193; D. Louis–Martinez and G. Kunstatter, Phys. Rev. [**D49**]{} (1994) 5227; J.S. Lemos and P.M. Sa, Phys. Rev. [**D49**]{} (1994) 2897; S.
Mignemi, “Exact solutions, symmetries and quantization of 2-dim higher derivative gravity with dynamical torsion”, preprint IRS-9502, gr-qc/9508003. M.O. Katanaev and I. V. Volovich, Phys. Lett. [**175B**]{} (1986) 413; Ann. Phys. [**197**]{} (1990) 1; W. Kummer and D.J. Schwarz, Phys. Rev. [**D45**]{} (1992) 3628; H. Grosse, W. Kummer, P. Prešnajder, and D.J. Schwarz, J. Math. Phys. [**33**]{} (1992) 3892; T. Strobl, Int. J. Mod. Phys. [**A8**]{} (1993) 1383; S.N. Solodukhin, Phys. Lett. [**B319**]{} (1993) 87; E.W. Mielke, F. Gronwald, Yu. N. Obukhov, R. Tresguerres, and F.W. Hehl, Phys. Rev. D [**48**]{} (1993) 3648; F. Haider and W. Kummer, Int. J. Mod. Phys. [**9**]{} (1994) 207; M. O. Katanaev, Nucl. Phys. B [**416**]{} (1994) 563; N. Ikeda, Ann. Phys. [**235**]{} (1994) 435. M.O. Katanaev, W. Kummer and H. Liebl, Geometric interpretation and classification of global solutions in generalized dilaton gravity, TU Vienna prep. TUW–95–09 (gr-qc/9511009), (to be published in Phys. Rev. [**D**]{}) A selection of recent reviews is e.g. S.P. de Alwis and D.A. McIntire, Lessons of quantum 2d dilaton gravity, prep. COLO–HEP–241, hep–th/941003; L. Thorlacius, Black hole evolution, prep. NSF–ITP–94–109, hep–th/9411020; T. Banks, Lectures on black holes and information loss, prep. RU–94–91, hep–th/9412131 J.S. Lemos and P.M. Sa, Phys. Rev. [**D49**]{} (1994) 2897 P. Thomi, B. Isaak and P. Hajicek, Phys. Rev. [**D30**]{} (1984) 1168; P. Hajicek, Phys. Rev. [**D30**]{} (1984) 1178 S.R. Lau, “On the canonical reduction of spherically symmetric gravity”, Techn. Univ. Wien prep. TUW 95-21 (gr-qc/9508028) S. Mignemi, Phys. Rev. [**D50**]{}, R4733 (1994) A. Fabbri and J.G. Russo, ’Soluble models in 2d dilaton gravity’, prep. CERN TH/95-267, (hep-th/9510109).
A very comprehensive global analysis of generally covariant 2d models, without concentrating especially on dilaton theories, has only appeared recently [@alt], whereas models with torsion had been discussed in detail already before [@kat93]. T. Strobl, Thesis, Tech. Univ. Vienna 1994; T. Klösch and T. Strobl, “Classical and Quantum Gravity in 1+1 Dimensions: Part I: A unifying approach”, Techn. Univ. Wien prep. TUW-95-16 (gr-qc/9508020), to be published in Classical and Quantum Gravity; “Classical and Quantum Gravity in 1+1 Dimensions: Part II: All universal coverings”, Techn. Univ. Wien prep. TUW-95-23, (gr-qc/9511081); An early version of this method can be found in M. Walker, J. Math. Phys. 11 (1970) 2280. M.O. Katanaev, J. Math. Phys. 34 (1993) 700. Cf. e.g. R.M. Wald, “General Relativity”, University of Chicago Press, 1984 W. Kummer and P. Widerin, Mod. Phys. Lett. A9 (1994) 1407; W. Kummer and P. Widerin, Phys. Rev. D (1995), to be published. The latter reference also contains a short summary of the PSM in its introductory section. C. Teitelboim, Phys. Lett. [**126B**]{} (1983) 41; R. Jackiw, 1984 Quantum Theory of Gravity, ed S. Christensen (Bristol: Hilger) p 403 For standard dilaton gravity the role of the restriction on X by the redefinitions (\[trans\]) has been analyzed recently also by M. Cadoni and S. Mignemi, “On the conformal equivalence between 2d black holes and Rindler spacetime”, Univ. Cagliari prep. INFNCA-TH9516, May 1995, (gr-qc/9505032). The possibility to generalize the conformal transformation of the metric has been introduced also in ref.[@fabri]. \ Fig.$\;$   Different models in the a-b parameter plane \ [Fig.$\;$    Unshaded regions correspond to curvature singularities as $|\phi | \to \infty$. The boundaries belong to the shaded region.
Full dots show de Sitter space-times whereas empty dots indicate flat space-times.]{} \ Fig.$\;$    Completeness of extremals \ Fig.$\;$    a: Basic patch for a black hole; b: Generic black hole diagram \ [Fig.$\;$    Penrose diagrams for a generic black hole (\[ldil\]). The boundaries correspond to infinite values of the dilaton field $\phi \to \pm \infty$ as indicated. Dashed lines inside a diagram denote horizons.]{} \ [Fig.$\;$    Shape of the Penrose diagrams with horizon, depending on the values of the parameters $a$ and $b$. The thick line $b=a-1$ indicates the region of (\[future\]).]{}
--- abstract: 'We describe a type system for the linear-algebraic $\lambda$-calculus. The type system accounts for the linear-algebraic aspects of this extension of $\lambda$-calculus: it is able to statically describe the linear combinations of terms that will be obtained when reducing the programs. This gives rise to an original type theory where types, in the same way as terms, can be superposed into linear combinations. We prove that the resulting typed $\lambda$-calculus is strongly normalising and features weak subject reduction. Finally, we show how to naturally encode matrices and vectors in this typed calculus.' address: - 'Aix-Marseille Université, LIF, F-13288 Marseille Cedex 9, France' - 'Universidad Nacional de Quilmes, 1876 Bernal, Buenos Aires, Argentina' - 'Université Paris-Sud, LRI, F-91405 Orsay Cedex, France' author: - Pablo Arrighi - 'Alejandro Díaz-Caro' - Benoît Valiron bibliography: - 'vectorial.bib' title: 'The Vectorial $\lambda$-Calculus' --- Introduction ============ (Linear-)algebraic $\lambda$-calculi ------------------------------------ A number of recent works seek to endow the $\lambda$-calculus with a vector space structure. This agenda has emerged simultaneously in two different contexts. - The field of *Linear Logic* considers a logic of resources where the propositions themselves stand for those resources – and hence cannot be discarded nor copied. When seeking to find models of this logic, one obtains a particular family of vector spaces and differentiable functions over these. It is by trying to capture these mathematical structures back into a programming language that Ehrhard and Regnier have defined the [*differential $\lambda$-calculus*]{} [@EhrhardRegnierTCS03], which has an intriguing differential operator as a built-in primitive and an algebraic module of the $\lambda$-calculus terms over natural numbers. 
Vaux [@VauxMSCS09] has focused his attention on a ‘differential $\lambda$-calculus without differential operator’, extending the algebraic module to positive real numbers. He obtained a confluence result in this case, which stands even in the untyped setting. More recent works on this [*algebraic $\lambda$-calculus*]{} tend to consider arbitrary scalars [@TassonTLCA09; @EhrhardLICS10; @AlbertiJFLA13]. - The field of *Quantum Computation* postulates that, as computers are physical systems, they may behave according to quantum theory. It proves that, if this is the case, novel, more efficient algorithms are possible [@ShorSIAM97; @GroverSTOC96] – which have no classical counterpart. Whilst partly unexplained, it is nevertheless clear that the algorithmic speed-up arises by tapping into the parallelism granted to us ‘for free’ by the [*superposition principle*]{}, which states that if ${\ensuremath{\mathbf{t}}}$ and ${\ensuremath{\mathbf{u}}}$ are possible states of a system, then so is the formal linear combination of them $\alpha\cdot{\ensuremath{\mathbf{t}}}+\beta\cdot{\ensuremath{\mathbf{u}}}$ (with $\alpha$ and $\beta$ some arbitrary complex numbers, up to a normalizing factor). The idea of a module of $\lambda$-terms over an arbitrary scalar field arises quite naturally in this context. This was the motivation behind the [*linear-algebraic $\lambda$-calculus*]{}, or [*Lineal*]{} for short, by Dowek and one of the authors [@ArrighiDowekRTA08], who obtained a confluence result which holds for arbitrary scalars and again covers the untyped setting. These two languages are rather similar: they both merge higher-order computation, be they terminating or not, in its simplest and most general form (namely the untyped $\lambda$-calculus) together with linear algebra also in its simplest and most general form (the axioms of vector spaces). In fact they can simulate each other [@AssafDiazcaroPerdrixTassonValironLMCS14]. 
Our starting point is the second one, [*Lineal*]{}, because its confluence proof allows arbitrary scalars and because one has to make a choice. Whether the models developed for the first language, and the type systems developed for the second language, carry through to one another via their reciprocal simulations, is a topic of future investigation. Other motivations to study (linear-)algebraic $\lambda$-calculi --------------------------------------------------------------- The two languages are also reminiscent of other works in the literature: - *Algebraic and symbolic computation.* The functional style of programming is based on the $\lambda$-calculus together with a number of extensions, so as to make everyday programming more accessible. Hence since the birth of functional programming there have been several theoretical studies on extensions of the $\lambda$-calculus in order to account for basic algebra (see for instance Dougherty’s algebraic extension [@DoughertyIC92] for normalising terms of the $\lambda$-calculus) and other basic programming constructs such as pattern-matching [@CirsteaKirchnerLiquoriFOSSACS01; @ArbiserMiquelRiosJFP09], together with sometimes non-trivial associated type theories [@PetitTLCA09]. Whilst this was not the original motivation behind (linear-)algebraic $\lambda$-calculi, they could still be viewed as an extension of the $\lambda$-calculus in order to handle operations over vector spaces and make programming more accessible with them. The main difference in approach is that the $\lambda$-calculus is not seen here as a control structure which sits on top of the vector space data structure, controlling which operations to apply and when. Rather, the $\lambda$-calculus terms themselves can be summed and weighted, hence they actually are vectors, upon which they can also act. 
- *Parallel and probabilistic computation.* The above intertwinings of concepts are essential if seeking to represent parallel or probabilistic computation as it is the computation itself which must be endowed with a vector space structure. The ability to superpose $\lambda$-calculus terms in that sense takes us back to Boudol’s parallel $\lambda$-calculus [@BoudolIC94] or de Liguoro and Piperno’s work on non-deterministic extensions of $\lambda$-calculus [@deLiguoroPipernoIC95], as well as more recent works such as [@PaganiRonchidellaroccaFI10; @BucciarelliEhrhardManzonettoAPAL12; @DiazcaroManzonettoPaganiLFCS13]. It may also be viewed as being part of a series of works on probabilistic extensions of calculi, [[e.g.]{} ]{} [@BournezHoyrupRTA03; @HerescuPalamidessiFOSSACS00] and [@DipierroHankinWiklickyJLC05; @DalLagoZorziRAIRO12; @DiazcaroDowekDCM13] for $\lambda$-calculus more specifically. Hence (linear-)algebraic $\lambda$-calculi can be seen as a platform for various applications, ranging from algebraic computation, probabilistic computation, quantum computation and resource-aware computation. The language ------------ The language we consider in this paper will be called the [*vectorial $\lambda$-calculus*]{}, denoted by ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$. It is derived from [*Lineal*]{} [@ArrighiDowekRTA08]. This language admits the regular constructs of $\lambda$-calculus: variables $x,y,\ldots$, $\lambda$-abstractions $\lambda x.{{\ensuremath{\mathbf{s}}}}$ and application $({\ensuremath{\mathbf{s}}})\,{\ensuremath{\mathbf{t}}}$. But it also admits linear combinations of terms: ${\ensuremath{\mathbf{0}}}$, ${{\ensuremath{\mathbf{s}}}}+{{\ensuremath{\mathbf{t}}}}$ and $\alpha\cdot {{\ensuremath{\mathbf{s}}}}$ are terms, where the scalar $\alpha$ ranges over a ring. 
As in [@ArrighiDowekRTA08], it behaves in a call-by-value oriented manner, in the sense that $(\lambda x.{{\ensuremath{\mathbf{r}}}})\,({{\ensuremath{\mathbf{s}}}}+{{\ensuremath{\mathbf{t}}}})$ first reduces to $(\lambda x.{{\ensuremath{\mathbf{r}}}})\,{{\ensuremath{\mathbf{s}}}}+(\lambda x.{{\ensuremath{\mathbf{r}}}})\,{{\ensuremath{\mathbf{t}}}}$ until [*basis terms*]{} (i.e. values) are reached, at which point beta-reduction applies. The set of the normal forms of the terms can then be interpreted as a module and the term $(\lambda x.{{\ensuremath{\mathbf{r}}}})\,{{\ensuremath{\mathbf{s}}}}$ can be seen as the application of the linear operator $(\lambda x.{{\ensuremath{\mathbf{r}}}})$ to the vector ${{\ensuremath{\mathbf{s}}}}$. The goal of this paper is to give a formal account of linear operators and vectors at the level of the type system. Our contributions: The types ---------------------------- Our goal is to characterize the vectoriality of the system of terms, as summarized by the slogan: > If ${\ensuremath{\mathbf{s}}}:T$ and ${\ensuremath{\mathbf{t}}}:R$ then $\alpha\cdot{\ensuremath{\mathbf{s}}} + \beta\cdot{\ensuremath{\mathbf{t}}} > : \alpha\cdot T + \beta\cdot R$. In the end we achieve a type system such that: - The typed language features a slightly weakened subject reduction property (Theorem \[thm:subjectreduction\]). - The typed language features strong normalization (cf. Theorem \[th:SN\]). - In general, if ${\ensuremath{\mathbf{t}}}$ has type $\sum_i\alpha_i\cdot U_i$, then it must reduce to a ${\ensuremath{\mathbf{t}}}'$ of the form $\sum_{ij}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}$, where: the ${\ensuremath{\mathbf{b}}}_{ij}$’s are basis terms of unit type $U_i$, and $\sum_{ij} \beta_{ij}=\alpha_{i}$. (cf. Theorem \[thm:termcharact\]). - In particular finite vectors and matrices and tensorial products can be encoded within [$\lambda^{\!\!\textrm{vec}}$]{}. 
In this case, the type of the encoded expressions coincides with the result of the expression (cf. Theorem \[thm:matrixsound\]). Beyond these formal results, this work constitutes a first attempt to describe a natural type system with type constructs $\alpha\cdot$ and $+$ and to study their behaviour. Directly related works ---------------------- This paper is part of a research path [@tonder04lambda; @AltenkirchGrattageLICS05; @ArrighiDowekRTA08; @ValironQPL10; @BuirasDiazcaroJaskelioffLSFA11; @ArrighiDiazcaroLMCS12; @DiazcaroPetitWoLLIC12] to design a typed language where terms can be linear combinations of terms (they can be interpreted as probability distributions or quantum superpositions of data and programs) and where the types capture some of this additional structure (they provide the propositions for a probabilistic or quantum logic via Curry-Howard). Along this path, a first step was accomplished in [@ArrighiDiazcaroLMCS12] with scalars in the type system. If $\alpha$ is a scalar and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ is a valid sequent, then $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}:\alpha\cdot T$ is a valid sequent. When the scalars are taken to be positive real numbers, the developed language actually provides a static analysis tool for [*probabilistic*]{} computation. However, it fails to address the following issue: without sums but with negative numbers, the term representing “${\bf true}-{\bf false}$”, namely $\lambda x.\lambda y.x-\lambda x.\lambda y.y$, is typed with $0\cdot(X\to(X\to X))$, a type which fails to exhibit the fact that we have a superposition of terms. A second step was accomplished in [@DiazcaroPetitWoLLIC12] with sums in the type system. In this case, if $\Gamma\vdash {\ensuremath{\mathbf{s}}}:S$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ are two valid sequents, then $\Gamma\vdash {\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}:S+T$ is a valid sequent. 
However, the language considered is only the [*additive*]{} fragment of [*Lineal*]{}, it leaves scalars out of the picture. For instance, $\lambda x.\lambda y.x-\lambda x.\lambda y.y$, does not have a type, due to its minus sign. Each of these two contributions required renewed, careful and lengthy proofs about their type systems, introducing new techniques. The type system we propose in this paper builds upon these two approaches: it includes both scalars and sums of types, thereby reflecting the vectorial structure of the terms at the level of types. Interestingly, combining the two separate features of [@ArrighiDiazcaroLMCS12; @DiazcaroPetitWoLLIC12] raises subtle novel issues, which we identify and discuss (cf. Section \[sec:vectorial\]). Equipped with those two vectorial type constructs, the type system is indeed able to capture some fine-grained information about the vectorial structure of the terms. Intuitively, this means keeping track of both the ‘direction’ and the ‘amplitude’ of the terms. A preliminary version of this paper has appeared in [@ArrighiDiazcaroValironDCM11]. Plan of the paper ----------------- In Section \[sec:language\], we present the language. We discuss the differences with the original language [*Lineal*]{} [@ArrighiDowekRTA08]. In Section \[sec:vectorial\], we explain the problems arising from the possibility of having linear combinations of types, and elaborate a type system that addresses those problems. Section \[sec:sr\] is devoted to subject reduction. We first say why the standard formulation of subject reduction does not hold. Second we state a slightly weakened notion of the subject reduction theorem, and we prove this result. In Section \[sec:SN\], we prove strong normalisation. Finally we close the paper in Section \[sec:examples\] with theorems about the information brought by the type judgements, both in the general and the finitary cases (matrices and vectors). 
The terms {#sec:language} ========= We consider the untyped language [$\lambda^{\!\!\textrm{vec}}$]{} described in Figure \[fig:Vec\]. It is based on [*Lineal*]{} [@ArrighiDowekRTA08]: terms come in two flavours, basis terms which are the only ones that will substitute a variable in a $\beta$-reduction step, and general terms. We use Krivine’s notation [@Krivine90] for function application: The term $({\ensuremath{\mathbf{s}}})\,{\ensuremath{\mathbf{t}}}$ passes the argument ${\ensuremath{\mathbf{t}}}$ to the function ${\ensuremath{\mathbf{s}}}$. In addition to $\beta$-reduction, there are fifteen rules stemming from the oriented axioms of vector spaces [@ArrighiDowekRTA08], specifying the behaviour of sums and products. We divide the rules into groups: Elementary (E), Factorisation (F), Application (A) and the Beta reduction (B). Essentially, the rules E and F, presented in [@arrighi05], consist in capturing the equations of vector spaces in an oriented rewrite system. For example, $0\cdot {\ensuremath{\mathbf{s}}}$ reduces to ${\ensuremath{\mathbf{0}}}$, as $0\cdot{\ensuremath{\mathbf{s}}} = {\ensuremath{\mathbf{0}}}$ is valid in vector spaces. It should also be noted that this set of algebraic rules is confluent, and does not introduce loops. In particular, the two rules stating $\alpha\cdot({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}})\to \alpha\cdot{\ensuremath{\mathbf{t}}} + \alpha\cdot{\ensuremath{\mathbf{r}}}$ and $\alpha\cdot{\ensuremath{\mathbf{t}}} + \beta\cdot{\ensuremath{\mathbf{t}}}\to (\alpha+\beta)\cdot{\ensuremath{\mathbf{t}}}$ are not inverse one of the other when ${\ensuremath{\mathbf{r}}} = {\ensuremath{\mathbf{t}}}$. Indeed, $$\alpha\cdot({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{t}}}) \to \alpha\cdot{\ensuremath{\mathbf{t}}}+\alpha\cdot{\ensuremath{\mathbf{t}}} \to (\alpha + \alpha)\cdot{\ensuremath{\mathbf{t}}}$$ but not the other way around.
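As an illustration of the E and F rules, linear combinations of terms can be modelled as finite maps from basis terms to scalars; the following sketch (a hypothetical Python representation, not part of the calculus itself) normalises combinations by collecting like terms, so that e.g. $\alpha\cdot({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{t}}})$ and $(\alpha+\alpha)\cdot{\ensuremath{\mathbf{t}}}$ reach the same normal form:

```python
# Sketch: linear combinations as dicts {basis term: coefficient}.
from collections import defaultdict

def scale(alpha, combo):
    """alpha · (sum_i c_i · b_i)  ->  sum_i (alpha*c_i) · b_i   (rules E)."""
    return {b: alpha * c for b, c in combo.items()}

def add(*combos):
    """Sum of combinations: like terms are merged (rules F) and
    zero-weighted terms are dropped (0 · t -> 0)."""
    out = defaultdict(int)
    for combo in combos:
        for b, c in combo.items():
            out[b] += c
    return {b: c for b, c in out.items() if c != 0}

t = {'t': 1}                         # the basis term t with coefficient 1
lhs = scale(2, add(t, t))            # α·(t + t)
rhs = add(scale(2, t), scale(2, t))  # α·t + α·t  ->  (α+α)·t
print(lhs, rhs)  # {'t': 4} {'t': 4}
print(add(t, scale(-1, t)))          # t - t -> the empty combination 0
```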
The group of A rules formalizes the fact that a general term ${\ensuremath{\mathbf{t}}}$ is thought of as a linear combination of terms $\alpha\cdot{\ensuremath{\mathbf{r}}}+\beta\cdot{\ensuremath{\mathbf{r}}}'$, and the fact that application is distributive on the left [*and*]{} on the right. When we apply ${\ensuremath{\mathbf{s}}}$ to such a superposition, $({\ensuremath{\mathbf{s}}})~{\ensuremath{\mathbf{t}}}$ reduces to $\alpha\cdot({\ensuremath{\mathbf{s}}})~{\ensuremath{\mathbf{r}}} + \beta\cdot({\ensuremath{\mathbf{s}}})~{\ensuremath{\mathbf{r}}}'$. The term ${\ensuremath{\mathbf{0}}}$ is the empty linear combination of terms, explaining the last two rules of Group A. Terms are considered modulo associativity and commutativity of the operator $+$, making the reduction into an [*AC-rewrite system*]{} [@JouannaudKirchnerSIAM86]. Scalars (notation $\alpha,\beta,\gamma,\dots$) form a ring $({\ensuremath{\mathsf{S}}},+,\times)$, where the scalar $0$ is the unit of the addition and $1$ the unit of the multiplication. We use the shortcut notation ${\ensuremath{\mathbf{s}}}-{\ensuremath{\mathbf{t}}}$ in place of ${\ensuremath{\mathbf{s}}}+(-1)\cdot{\ensuremath{\mathbf{t}}}$. Note that although the typical ring we consider in the examples is the ring of complex numbers, the development works for any ring: the ring of integers $\mathbb{Z}$, the finite ring $\mathbb{Z}/2\mathbb{Z}$… The set of free variables of a term is defined as usual: the only operator binding variables is the $\lambda$-abstraction. The operation of substitution on terms (notation ${\ensuremath{\mathbf{t}}}[{{\ensuremath{\mathbf{b}}}}/x]$) is defined in the usual way for the regular $\lambda$-term constructs, by taking care of variable renaming to avoid capture.
For a linear combination, the substitution is defined as follows: $(\alpha\cdot{\ensuremath{\mathbf{t}}}+\beta\cdot{\ensuremath{\mathbf{r}}})[{{\ensuremath{\mathbf{b}}}}/x]=\alpha\cdot{\ensuremath{\mathbf{t}}}[{{\ensuremath{\mathbf{b}}}}/x]+\beta\cdot{\ensuremath{\mathbf{r}}}[{{\ensuremath{\mathbf{b}}}}/x]$. Note that we need to choose a reduction strategy. For example, the term $(\lambda x.(x)\ x)$ $(y +z)$ cannot reduce to both $(\lambda x.(x)\,x)\,y+(\lambda x.(x)\,x)\,z$ and $(y+z)\,(y+z)$. Indeed, the former normalizes to $(y)\,y+(z)\,z$ whereas the latter normalizes to $(y)\,z+(y)\,y+(z)\,y+(z)\,z$, which would break confluence. As in [@ArrighiDowekRTA08; @ArrighiDiazcaroLMCS12; @DiazcaroPetitWoLLIC12], we consider a call-by-value reduction strategy: The argument of the application is required to be a base term, cf. Group B. Relation to [*Lineal*]{} ------------------------ Although strongly inspired by [*Lineal*]{}, the language ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$ is closer to [@AssafDiazcaroPerdrixTassonValironLMCS14; @ArrighiDiazcaroLMCS12; @DiazcaroPetitWoLLIC12]. Indeed, [*Lineal*]{} considers some restrictions on the reduction rules, for example $\alpha\cdot{\ensuremath{\mathbf{t}}}+\beta\cdot{\ensuremath{\mathbf{t}}}\to(\alpha+\beta)\cdot{\ensuremath{\mathbf{t}}}$ is only allowed when ${\ensuremath{\mathbf{t}}}$ is a closed normal term. These restrictions are enforced to ensure confluence in the untyped setting. Consider the following example. Let ${\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}=(\lambda x.({\ensuremath{\mathbf{b}}}+(x)~x))~\lambda x.({\ensuremath{\mathbf{b}}}+(x)~x)$. Then ${\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}$ reduces to ${\ensuremath{\mathbf{b}}}+{\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}$. 
So the term ${\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}-{\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}$ reduces to ${\ensuremath{\mathbf{0}}}$, but also reduces to ${\ensuremath{\mathbf{b}}}+{\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}-{\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}$ and hence to ${\ensuremath{\mathbf{b}}}$, breaking confluence. The above restriction forbids the first reduction, bringing back confluence. In our setting we do not need it because ${\ensuremath{\mathbf{Y}}}_{\ensuremath{\mathbf{b}}}$ is not well-typed. If one considers a typed language enforcing strong normalisation, one can waive many of the restrictions and consider a more canonical set of rewrite rules [@AssafDiazcaroPerdrixTassonValironLMCS14; @ArrighiDiazcaroLMCS12; @DiazcaroPetitWoLLIC12]. Working with a type system enforcing strong normalisation (as shown in Section \[sec:SN\]), we follow this approach. Booleans in the vectorial $\lambda$-calculus -------------------------------------------- We claimed in the introduction that the design of [*Lineal*]{} was motivated by quantum computing; in this section we develop this analogy. Both in ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$ and in quantum computation one can interpret the notion of booleans. In the former we can consider the usual booleans $\lambda x.\lambda y.x$ and $\lambda x.\lambda y.y$ whereas in the latter we consider the regular quantum bits ${{\bf true}}={{|{0}\rangle}}$ and ${{\bf false}}={{|{1}\rangle}}$. In ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$, a representation of ${\it if}~{{\ensuremath{\mathbf{r}}}}~{\it then}~{{\ensuremath{\mathbf{s}}}}~{\it else}~{{\ensuremath{\mathbf{t}}}}$ needs to take into account the special relation between sums and applications. We cannot directly encode this test as the usual $(({{\ensuremath{\mathbf{r}}}})\,{{\ensuremath{\mathbf{s}}}})\,{{\ensuremath{\mathbf{t}}}}$. 
Indeed, if ${{\ensuremath{\mathbf{r}}}}$, ${{\ensuremath{\mathbf{s}}}}$ and ${{\ensuremath{\mathbf{t}}}}$ were respectively the terms ${{\bf true}}$, ${{\ensuremath{\mathbf{s}}}}_1+{{\ensuremath{\mathbf{s}}}}_2$ and ${{\ensuremath{\mathbf{t}}}}_1+{{\ensuremath{\mathbf{t}}}}_2$, the term $(({{\ensuremath{\mathbf{r}}}})\,{{\ensuremath{\mathbf{s}}}})\,{{\ensuremath{\mathbf{t}}}}$ would reduce to $(({{\bf true}})\,{{\ensuremath{\mathbf{s}}}}_1)\,{{\ensuremath{\mathbf{t}}}}_1 + (({{\bf true}})\,{{\ensuremath{\mathbf{s}}}}_1)\,{{\ensuremath{\mathbf{t}}}}_2 + (({{\bf true}})\,{{\ensuremath{\mathbf{s}}}}_2)\,{{\ensuremath{\mathbf{t}}}}_1 +(({{\bf true}})\,{{\ensuremath{\mathbf{s}}}}_2)\,{{\ensuremath{\mathbf{t}}}}_2$, then to $2\cdot{{\ensuremath{\mathbf{s}}}}_1 + 2\cdot{{\ensuremath{\mathbf{s}}}}_2$ instead of ${{\ensuremath{\mathbf{s}}}}_1 + {{\ensuremath{\mathbf{s}}}}_2$. We need to “freeze” the computations in each branch of the test so that the sum does not distribute over the application. For that purpose we use the well-known notion of [*thunks*]{} [@ArrighiDowekRTA08]: we encode the test as ${\{(({{\ensuremath{\mathbf{r}}}})\,{[{{\ensuremath{\mathbf{s}}}}]})\,{[{{\ensuremath{\mathbf{t}}}}]}\}}$, where ${[-]}$ is the term $\lambda f.-$ with $f$ a fresh, unused term variable and where ${\{-\}}$ is the term $(-)\lambda x.x$. The former “freezes” the linearity while the latter “releases” it. Then the term ${\it if}~{{\bf true}}~{\it then}~({{\ensuremath{\mathbf{s}}}}_1+{{\ensuremath{\mathbf{s}}}}_2)~{\it else}~({{\ensuremath{\mathbf{t}}}}_1+{{\ensuremath{\mathbf{t}}}}_2)$ reduces to the term ${{\ensuremath{\mathbf{s}}}}_1+{{\ensuremath{\mathbf{s}}}}_2$ as one could expect. Note that this test is linear, in the sense that the term ${\it if}~(\alpha\cdot{{\bf true}}+\beta\cdot{{\bf false}})~{\it then}~{{\ensuremath{\mathbf{s}}}}~{\it else}~{{\ensuremath{\mathbf{t}}}}$ reduces to $\alpha\cdot{{\ensuremath{\mathbf{s}}}} + \beta\cdot{{\ensuremath{\mathbf{t}}}}$. 
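The behaviour of this thunked test can be sketched concretely, with superpositions modelled as maps from basis terms to amplitudes. This is an illustration of the expected reductions, not the term encoding itself:

```python
# Superpositions as maps basis-term -> amplitude. The branches are
# passed as thunks (0-argument functions): they stay frozen until the
# condition selects one, so a sum inside a branch is not distributed
# over the application. The test is linear in the condition.
def linear_if(cond, then_thunk, else_thunk):
    out = {}
    for basis, amp in cond.items():
        branch = then_thunk() if basis == 'true' else else_thunk()
        for term, a in branch.items():
            out[term] = out.get(term, 0) + amp * a
    return out

s = lambda: {'s1': 1, 's2': 1}   # the branch s1 + s2
t = lambda: {'t1': 1, 't2': 1}   # the branch t1 + t2

# if true then (s1+s2) else (t1+t2) gives s1+s2, not 2.s1 + 2.s2:
print(linear_if({'true': 1}, s, t))  # {'s1': 1, 's2': 1}
# if (a.true + b.false) then s else t gives a.s + b.t:
print(linear_if({'true': 0.5, 'false': 0.5}, s, t))
# {'s1': 0.5, 's2': 0.5, 't1': 0.5, 't2': 0.5}
```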
This is similar to the [*quantum test*]{} that can be found [[e.g.]{} ]{}in [@tonder04lambda; @AltenkirchGrattageLICS05]. Quantum computation deals with complex, linear combinations of terms, and a typical computation is run by applying linear unitary operations on the terms, called [*gates*]{}. For example, the Hadamard gate [**H**]{} acts on the space of booleans spanned by ${{\bf true}}$ and ${{\bf false}}$. It sends ${{{\bf true}}}$ to $\frac1{\sqrt2}({{{\bf true}}}+{{{\bf false}}})$ and ${{{\bf false}}}$ to $\frac1{\sqrt2}({{{\bf true}}}-{{{\bf false}}})$. If $x$ is a quantum bit, the value $({\bf H})\,x$ can be represented as the quantum test $$({\bf H})\,x\quad{:}{=}\quad{ {\it if}~x~{\it then}~\frac1{\sqrt2}({{\bf true}}+{{\bf false}})~{\it else}~\frac1{\sqrt2}({{\bf true}}-{{\bf false}})}.$$ As developed in [@ArrighiDowekRTA08], one can simulate this operation in ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$ using the test construction we just described: $${\{({\bf H})\,x\}}\quad{:}{=}\quad\left\{ \left((x)\,\left[\frac1{\sqrt2}\cdot{{\bf true}}+\frac1{\sqrt2}\cdot{{\bf false}}\right]\right)\, \left[\frac1{\sqrt2}\cdot{{\bf true}}-\frac1{\sqrt2}\cdot{{\bf false}}\right] \right\}.$$ Note that the thunks are necessary: without thunks the term $$\left((x)\, \left(\frac1{\sqrt2}\cdot{{\bf true}}+\frac1{\sqrt2}\cdot{{\bf false}}\right)\right)\, \left(\frac1{\sqrt2}\cdot{{\bf true}}-\frac1{\sqrt2}\cdot{{\bf false}}\right)$$ would reduce to the term $$\frac12(((x)\,{{\bf true}})\,{{\bf true}}+ ((x)\,{{\bf true}})\,{{\bf false}}+ ((x)\,{{\bf false}})\,{{\bf true}}+ ((x)\,{{\bf false}})\,{{\bf false}}),$$ which is fundamentally different from the term ${\bf H}$ we are trying to emulate. With this procedure we can “encode” any matrix. If the space is of some general dimension $n$, instead of the basis elements ${{\bf true}}$ and ${{\bf false}}$ we can choose for $i=1$ to $n$ the terms $\lambda x_1.\cdots.\lambda x_n.x_i$’s to encode the basis of the space. 
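The Hadamard behaviour being emulated can be checked numerically on amplitude pairs, independently of the term encoding. A plain linear-algebra sanity check:

```python
import math

s = 1 / math.sqrt(2)

# Hadamard on an amplitude pair (a_true, a_false):
# true -> (1/sqrt2)(true + false), false -> (1/sqrt2)(true - false).
def hadamard(v):
    a, b = v
    return (s * (a + b), s * (a - b))

true, false = (1, 0), (0, 1)
print(hadamard(true))  # (0.707..., 0.707...), the uniform superposition

# Applying H twice is the identity, so H maps the superposition
# (1/sqrt2, 1/sqrt2) back to the basis state true:
out = hadamard(hadamard(true))
print(round(out[0], 10), round(out[1], 10))  # 1.0 0.0
```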
We can also take tensor products of qubits. We come back to these encodings in Section \[sec:examples\]. The type system {#sec:vectorial} =============== This section presents the core definition of the paper: the vectorial type system. Intuitions ---------- Before diving into the technicalities of the definition, we discuss the rationale behind the construction of the type system. ### Superposition of types We want to incorporate the notion of scalars in the type system. If $A$ is a valid type, the construction $\alpha\cdot A$ is also a valid type, and if the terms ${\ensuremath{\mathbf{s}}}$ and ${\ensuremath{\mathbf{t}}}$ are of type $A$, the term $\alpha\cdot{\ensuremath{\mathbf{s}}}+\beta\cdot{\ensuremath{\mathbf{t}}}$ is of type $(\alpha+\beta)\cdot A$. This was achieved in [@ArrighiDiazcaroLMCS12] and it allows us to distinguish between the functions $\lambda x.(1\cdot x)$ and $\lambda x.(2\cdot x)$: the former is of type $A\to A$ whereas the latter is of type $A\to (2\cdot A)$. The terms ${{\bf true}}$ and ${{\bf false}}$ can be typed in the usual way with ${\mathcal{B}}= X\to(X\to X)$, for a fixed type $X$. So let us consider the term $\frac1{\sqrt2}\cdot({{\bf true}}-{{\bf false}})$. Using the above addition to the type system, this term should be of type $0\cdot{\mathcal{B}}$, a type which fails to exhibit the fact that we have a superposition of terms. For instance, applying the Hadamard gate to this term produces the term ${{\bf false}}$ of type ${\mathcal{B}}$: the norm would then jump from $0$ to $1$. This time, the problem comes from the fact that the type system does not keep track of the “direction” of a term. To address this problem we must allow sums of types. 
For instance, provided that ${\mathcal{T}}=X\to(Y\to X)$ and ${\mathcal{F}}=X\to(Y\to Y)$, we can type the term $\frac1{\sqrt2}\cdot({{\bf true}}-{{\bf false}})$ with $\frac{\sqrt2}2\cdot({\mathcal{T}}-{\mathcal{F}})$, which has $L_2$-norm $1$, just like the type of ${{\bf false}}$ has norm one. At this stage the type system is able to type the term ${\bf H}=\lambda x.{\{((x)\,{[\frac1{\sqrt2}\cdot{{\bf true}}+\frac1{\sqrt2}\cdot{{\bf false}}]})\, {[\frac1{\sqrt2}\cdot{{\bf true}}-\frac1{\sqrt2}\cdot{{\bf false}}]}\}}$. Indeed, remember that the thunk construction ${[-]}$ is simply $\lambda f.(-)$ where $f$ is a fresh variable and that ${\{-\}}$ is $(-)\lambda x.x$. So whenever ${\ensuremath{\mathbf{t}}}$ has type $A$, ${[{\ensuremath{\mathbf{t}}}]}$ has type ${\bf I}\to A$ with ${\bf I}$ an identity type of the form $Z\to Z$, and ${\{{\ensuremath{\mathbf{t}}}\}}$ has type $A$ whenever ${\ensuremath{\mathbf{t}}}$ has type ${\bf I}\to A$. The term $\bf H$ can then be typed with $(({\bf I}\to\frac1{\sqrt 2}\cdot({\mathcal{T}}+{\mathcal{F}})) \to ({\bf I}\to\frac1{\sqrt 2}\cdot({\mathcal{T}}-{\mathcal{F}})) \to {\bf I}\to T)\to T$, where $T$ is any fixed type. Let us now try to type the term $({\bf H})\,{{\bf true}}$. This is possible by taking $T$ to be $\frac1{\sqrt 2}\cdot({\mathcal{T}}+{\mathcal{F}})$. But then, if we want to type the term $({\bf H})\,{{\bf false}}$, $T$ needs to be equal to $\frac1{\sqrt 2}\cdot({\mathcal{T}}-{\mathcal{F}})$. It follows that we cannot type the term $({\bf H})\,(\frac{1}{\sqrt2}\cdot{{\bf true}}+ \frac1{\sqrt2}\cdot{{\bf false}})$ since there is no way to reconcile the two constraints on $T$. To address this problem, we need a forall construction in the type system, making it [*à la System F*]{}. 
The term ${\bf H}$ can now be typed with $\forall T.(({\bf I}\to\frac1{\sqrt 2}\cdot({\mathcal{T}}+{\mathcal{F}})) \to ({\bf I}\to\frac1{\sqrt 2}\cdot({\mathcal{T}}-{\mathcal{F}})) \to {\bf I} \to T)\to T$ and the types ${\mathcal{T}}$ and ${\mathcal{F}}$ are updated to be respectively $\forall XY.X\to(Y\to X)$ and $\forall XY.X\to(Y\to Y)$. The terms $({\bf H})\,{{\bf true}}$ and $({\bf H})\,{{\bf false}}$ can both be well-typed with respective types $\frac1{\sqrt 2}\cdot({\mathcal{T}}+{\mathcal{F}})$ and $\frac1{\sqrt 2}\cdot({\mathcal{T}}-{\mathcal{F}})$, as expected. ### Type variables, units and general types {#sec:introH} Because of the call-by-value strategy, variables must range over types that are not linear combinations of other types, i.e. [*unit types*]{}. To illustrate this necessity, consider the following example. Suppose we allow variables to have scaled types, such as $\alpha\cdot U$. Then the term $\lambda x.x+y$ could have type $(\alpha\cdot U)\to \alpha\cdot U+V$ (with $y$ of type $V$). Let ${\ensuremath{\mathbf{b}}}$ be of type $U$, then $(\lambda x.x+y)~(\alpha\cdot{\ensuremath{\mathbf{b}}})$ has type $\alpha\cdot U+V$, but then $$(\lambda x.x+y)~(\alpha\cdot{\ensuremath{\mathbf{b}}})\to \alpha\cdot(\lambda x.x+y)~{\ensuremath{\mathbf{b}}}\to \alpha\cdot({\ensuremath{\mathbf{b}}}+y)\to \alpha\cdot{\ensuremath{\mathbf{b}}}+\alpha\cdot y\,,$$ which is problematic since the type $\alpha\cdot U+V$ does not reflect such a superposition. Hence, the left side of an arrow will be required to be a unit type. This is achieved by the grammar defined in Figure \[fig:types\]. Type variables, however, do not always have to be unit types. Indeed, a forall of a general type was needed in the previous section in order to type the term ${\ensuremath{\mathbf{H}}}$. But we need to distinguish a general type variable from a unit type variable, in order to make sure that only unit types appear at the left of arrows. 
Therefore, we define two sorts of type variables: the variables ${\ensuremath{\mathpzc{X}}}$ to be replaced with unit types, and ${\ensuremath{\mathbb{X}}}$ to be replaced with any type (we use just $X$ when we mean either one). The type ${\ensuremath{\mathpzc{X}}}$ is a unit type whereas the type ${\ensuremath{\mathbb{X}}}$ is not. In particular, the type ${\mathcal{T}}$ is now $\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}}$, the type ${\mathcal{F}}$ is $\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}}$ and the type of ${{\ensuremath{\mathbf{H}}}}$ is $$\forall{\ensuremath{\mathbb{X}}}.\left(\left({\bf I}\to\frac1{\sqrt 2}\cdot({\mathcal{T}}+{\mathcal{F}})\right) \to \left({\bf I}\to\frac1{\sqrt 2}\cdot({\mathcal{T}}-{\mathcal{F}})\right) \to {\bf I} \to {\ensuremath{\mathbb{X}}}\right)\to {\ensuremath{\mathbb{X}}}.$$ Notice how the left sides of all arrows remain unit types. ### The term ${\ensuremath{\mathbf{0}}}$ {#sec:term0} The term ${\ensuremath{\mathbf{0}}}$ will naturally have the type $0\cdot T$, for any inhabited type $T$ (enforcing the intuition that the term ${\ensuremath{\mathbf{0}}}$ is essentially a normal form of programs of the form ${\ensuremath{\mathbf{t}}}-{\ensuremath{\mathbf{t}}}$). We could also consider adding the equivalence $R+0\cdot T\equiv R$ as in [@ArrighiDiazcaroLMCS12]. However, consider the following example. Let $\lambda x.x$ be of type $U\to U$ and let ${\ensuremath{\mathbf{t}}}$ be of type $T$. The term $\lambda x.x + {\ensuremath{\mathbf{t}}} - {\ensuremath{\mathbf{t}}}$ is of type $(U\to U) + 0\cdot T$, that is, $(U\to U)$. Now choose ${\ensuremath{\mathbf{b}}}$ of type $U$: we are allowed to say that $(\lambda x.x + {\ensuremath{\mathbf{t}}} - {\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{b}}}$ is of type $U$. 
This term reduces to ${\ensuremath{\mathbf{b}}} + ({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{b}}} - ({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{b}}}$. But if the type system is reasonable enough, we should at least be able to type $({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{b}}}$. However, since there are no constraints on the type $T$, this is difficult to enforce. The problem comes from the fact that along the typing of ${\ensuremath{\mathbf{t}}} - {\ensuremath{\mathbf{t}}}$, the type of ${\ensuremath{\mathbf{t}}}$ is lost in the equivalence $(U\to U)+0\cdot T\equiv U\to U$. The only solution is to not discard $0\cdot T$, that is, to not equate $R+0\cdot T$ and $R$. Formalisation ------------- We now give a formal account of the type system: we first describe the language of types, then present the typing rules. ### Definition of types Types are defined in Figure \[fig:types\] (top). They come in two flavours: [*unit types*]{} and general types, that is, linear combinations of types. Unit types include all types of *System F* [@GirardLafontTaylor89 Ch. 11] and intuitively they are used to type basis terms. The arrow type admits only a unit type in its domain. This is due to the fact that the argument of a $\lambda$-abstraction can only be substituted by a basis term, as discussed in Section \[sec:introH\]. As discussed before, the type system features two sorts of variables: unit variables ${\ensuremath{\mathpzc{X}}}$ and general variables ${\ensuremath{\mathbb{X}}}$. The former can only be substituted by a unit type whereas the latter can be substituted by any type. We use the notation $X$ when the type variable is unrestricted. The substitution of ${\ensuremath{\mathpzc{X}}}$ by $U$ (resp. ${\ensuremath{\mathbb{X}}}$ by $S$) in $T$ is defined as usual and is written $T[U/{\ensuremath{\mathpzc{X}}}]$ (resp. $T[S/{\ensuremath{\mathbb{X}}}]$). 
We use the notation $T[A/X]$ to say: “if $X$ is a unit variable, then $A$ is a unit type and otherwise $A$ is a general type”. In particular, for a linear combination, the substitution is defined as follows: $(\alpha\cdot T+\beta\cdot R)[A/X]=\alpha\cdot T[A/X]+\beta\cdot R[A/X]$. We also use the vectorial notation $T[\vec{A}/\vec{X}]$ for $T[A_1/X_1]\cdots[A_n/X_n]$ if $\vec{X}=X_1,\dots,X_n$ and $\vec{A}=A_1,\dots,A_n$, and also $\forall \vec X$ for $\forall X_1\dots X_n=\forall X_1.\dots.\forall X_n$. The equivalence relation $\equiv$ on types is defined as a congruence. Notice that this equivalence makes the types into a weak module over the scalars: they almost form a module save for the fact that there is no neutral element for the addition. The type $0\cdot T$ is not the neutral element of the addition. We may use the summation ($\sum$) notation without ambiguity, due to the associativity and commutativity equivalences of $+$. ### Typing rules The typing rules are given also in Figure \[fig:types\] (bottom). Contexts are denoted by $\Gamma$, $\Delta$, etc. and are defined as sets $\{x:U,\dots\}$, where $x$ is a term variable appearing only once in the set, and $U$ is a unit type. The axiom ($ax$) and the arrow introduction rule ($\to_I$) are the usual ones. The rule ($0_I$) to type the term ${\ensuremath{\mathbf{0}}}$ takes into account the discussion in Section \[sec:term0\]. This rule also ensures that the type of ${\ensuremath{\mathbf{0}}}$ is inhabited, discarding problematic types like $0\cdot \forall X.X$. Any sum of typed terms can be typed using Rule $(+_I)$. Similarly, any scaled typed term can be typed with $(\alpha_I)$. Rule $(\equiv)$ ensures that equivalent types can be used to type the same terms. Finally, the particular form of the arrow-elimination rule ($\to_E$) is due to the rewrite rules in group A that distribute sums and scalars over application. 
The need and use of this complicated arrow elimination can be illustrated by the following three examples. Rule $(\to_E)$ is easier to read for trivial linear combinations. It states that provided that $\Gamma\vdash {\ensuremath{\mathbf{s}}}:\forall X.U\to S$ and $\Gamma\vdash {\ensuremath{\mathbf{t}}}:V$, if there exists some type $W$ such that $V=U[W/X]$, then since the sequent $\Gamma\vdash {\ensuremath{\mathbf{s}}}:V\to S[W/X]$ is valid, we also have $\Gamma\vdash ({\ensuremath{\mathbf{s}}})\,{\ensuremath{\mathbf{t}}}:S[W/X]$. Hence, the arrow elimination here performs an arrow and a forall elimination at the same time. Consider the terms ${\ensuremath{\mathbf{b}}}_1$ and ${\ensuremath{\mathbf{b}}}_2$, of respective types $U_1$ and $U_2$. The term ${\ensuremath{\mathbf{b}}}_1 + {\ensuremath{\mathbf{b}}}_2$ is of type $U_1+U_2$. We would reasonably expect the term $(\lambda x.x)\,({\ensuremath{\mathbf{b}}}_1 + {\ensuremath{\mathbf{b}}}_2)$ to also be of type $U_1 + U_2$. This is the case thanks to Rule $(\to_E)$. Indeed, we can type the term $\lambda x.x$ with the type $\forall X.X\to X$ and then apply the rule. Notice that we could not type such a term unless we eliminate the forall together with the arrow. \[ex:3\] A slightly more involved example is the projection of a pair of elements. It is possible to encode in [*System F*]{} the notion of pairs and projections: $\langle {\ensuremath{\mathbf{b}}}, {\ensuremath{\mathbf{c}}}\rangle = \lambda x.((x)~{\ensuremath{\mathbf{b}}})~{\ensuremath{\mathbf{c}}}$, $\langle {\ensuremath{\mathbf{b}}}', {\ensuremath{\mathbf{c}}}'\rangle = \lambda x.((x)~{\ensuremath{\mathbf{b}}}')~{\ensuremath{\mathbf{c}}}'$, $\pi_1 = \lambda x.(x)~(\lambda y.\lambda z.y)$ and $\pi_2 = \lambda x.(x)~(\lambda y.\lambda z.z)$. 
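These encodings can be exercised directly, with Python closures standing in for the $\lambda$-terms. This is a sketch; the distribution of the superposed application over sums, which the Group A rules perform, is applied by hand:

```python
from itertools import product

# <b, c> = lambda x. ((x) b) c; the projections feed the pair a selector.
pair = lambda b, c: (lambda x: x(b)(c))
pi1 = lambda p: p(lambda y: lambda z: y)  # pi1 = lambda x.(x)(lambda y.lambda z.y)
pi2 = lambda p: p(lambda y: lambda z: z)  # pi2 = lambda x.(x)(lambda y.lambda z.z)

print(pi1(pair('b', 'c')), pi2(pair('b', 'c')))  # b c

# (pi1 + pi2) (<b,c> + <b',c'>) distributes into the four applications,
# which reduce to b + c + b' + c':
result = sorted(f(p) for f, p in product([pi1, pi2],
                                         [pair('b', 'c'), pair("b'", "c'")]))
print(result)  # ['b', "b'", 'c', "c'"]
```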
Provided that ${\ensuremath{\mathbf{b}}}$, ${\ensuremath{\mathbf{b}}}'$, ${\ensuremath{\mathbf{c}}}$ and ${\ensuremath{\mathbf{c}}}'$ have respective types $U$, $U'$, $V$ and $V'$, the type of $\langle {\ensuremath{\mathbf{b}}}, {\ensuremath{\mathbf{c}}}\rangle$ is $\forall X.(U\to V\to X)\to X$ and the type of $\langle {\ensuremath{\mathbf{b}}}', {\ensuremath{\mathbf{c}}}'\rangle$ is $\forall X.(U'\to V'\to X)\to X$. The terms $\pi_1$ and $\pi_2$ can be typed, respectively, with $\forall XYZ.((X\to Y\to X)\to Z)\to Z$ and $\forall XYZ.((X\to Y\to Y)\to Z)\to Z$. The term $(\pi_1 + \pi_2)\,(\langle {\ensuremath{\mathbf{b}}}, {\ensuremath{\mathbf{c}}}\rangle + \langle {\ensuremath{\mathbf{b}}}', {\ensuremath{\mathbf{c}}}'\rangle)$ is then typable with type $U+U'+V+V'$, thanks to Rule $(\to_E)$. Note that this is consistent with the rewrite system, since it reduces to ${\ensuremath{\mathbf{b}}} + {\ensuremath{\mathbf{c}}} + {\ensuremath{\mathbf{b}}}' + {\ensuremath{\mathbf{c}}}'$. Example: Typing Hadamard ------------------------ In this section, we formally show how to retrieve the type that was discussed in Section \[sec:introH\], for the term ${\bf H}$ encoding the Hadamard gate. Let ${{\bf true}}=\lambda x.\lambda y.x$ and ${{\bf false}}=\lambda x.\lambda y.y$. 
It is easy to check that $$\begin{aligned} &\vdash{{\bf true}}:\forall{\ensuremath{\mathpzc{X}}}{\ensuremath{\mathpzc{Y}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}},\\ &\vdash{{\bf false}}:\forall{\ensuremath{\mathpzc{X}}}{\ensuremath{\mathpzc{Y}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}}.\end{aligned}$$ We also define the following superpositions: $${{|{+}\rangle}}=\frac{1}{\sqrt{2}}\cdot({{\bf true}}+{{\bf false}}) \qquad\textrm{and}\qquad {{|{-}\rangle}}=\frac{1}{\sqrt{2}}\cdot({{\bf true}}-{{\bf false}}).$$ In the same way, we define $$\begin{aligned} \boxplus&=\frac{1}{\sqrt{2}}\cdot((\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}})+(\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}})), \\ \boxminus&=\frac{1}{\sqrt{2}}\cdot((\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}})-(\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}})).\end{aligned}$$ Finally, we recall $[{\ensuremath{\mathbf{t}}}]=\lambda x.{\ensuremath{\mathbf{t}}}$, where $x\notin{\ensuremath{FV}({\ensuremath{\mathbf{t}}})}$ and $\{{\ensuremath{\mathbf{t}}}\}=({\ensuremath{\mathbf{t}}})~I$. So ${\{{[{\ensuremath{\mathbf{t}}}]}\}}\to{\ensuremath{\mathbf{t}}}$. Then it is easy to check that $\vdash{[{{|{+}\rangle}}]}:I\to\boxplus$ and $\vdash{[{{|{-}\rangle}}]}:I\to\boxminus$. In order to simplify the notation, let $F=(I\to\boxplus)\to(I\to\boxminus)\to (I\to {\ensuremath{\mathbb{X}}})$. 
Then $$\prooftree \prooftree \prooftree \prooftree \prooftree \prooftree \justifies x:F\vdash x:F \using ax \endprooftree \qquad x:F\vdash{[{{|{+}\rangle}}]}:I\to\boxplus \justifies x:F\vdash (x)~{[{{|{+}\rangle}}]}:(I\to\boxminus)\to(I\to {\ensuremath{\mathbb{X}}}) \using\to_E \endprooftree \qquad x:F\vdash{[{{|{-}\rangle}}]}:I\to\boxminus \justifies x:F\vdash (x)~{[{{|{+}\rangle}}]}{[{{|{-}\rangle}}]}:I\to {\ensuremath{\mathbb{X}}} \using\to_E \endprooftree \justifies x:F\vdash{\{(x)~{[{{|{+}\rangle}}]}{[{{|{-}\rangle}}]}\}}:{\ensuremath{\mathbb{X}}} \using\to_E \endprooftree \justifies\vdash\lambda x.{\{(x)~{[{{|{+}\rangle}}]}{[{{|{-}\rangle}}]}\}}:F\to {\ensuremath{\mathbb{X}}} \using\to_I \endprooftree \justifies\vdash\lambda x.{\{(x)~{[{{|{+}\rangle}}]}{[{{|{-}\rangle}}]}\}}:\forall {\ensuremath{\mathbb{X}}}.((I\to\boxplus)\to(I\to\boxminus)\to (I\to {\ensuremath{\mathbb{X}}}))\to {\ensuremath{\mathbb{X}}} \using\forall_{{\ensuremath{\mathbb{I}}}} \endprooftree$$ Now we can apply Hadamard to a qubit and get the right type. Let $H$ be the term $\lambda x.{\{(x)~{[{{|{+}\rangle}}]}{[{{|{-}\rangle}}]}\}}$. A yet more interesting example is the following. Let $$\boxplus_I = \frac 1{\sqrt 2}\cdot(((I\to\boxplus)\to(I\to\boxminus)\to(I\to\boxplus))+((I\to\boxplus)\to(I\to\boxminus)\to(I\to\boxminus)))$$ That is, $\boxplus_I$ is $\boxplus$ where the foralls have been instantiated. It is easy to check that $\vdash{{|{+}\rangle}}:\boxplus_I$. 
Hence, $$\prooftree\vdash H:\forall {\ensuremath{\mathbb{X}}}.((I\to\boxplus)\to(I\to\boxminus)\to (I\to {\ensuremath{\mathbb{X}}}))\to {\ensuremath{\mathbb{X}}} \quad \vdash{{|{+}\rangle}}:\boxplus_I \justifies \vdash (H)~{{|{+}\rangle}}:\frac 1{\sqrt 2}\cdot\boxplus+\frac 1{\sqrt 2}\cdot\boxminus \using\to_E \endprooftree$$ And since $\frac 1{\sqrt 2}\cdot\boxplus+\frac 1{\sqrt 2}\cdot\boxminus\equiv\forall{\ensuremath{\mathpzc{X}}}{\ensuremath{\mathpzc{Y}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}}$, we conclude that $$\vdash(H)~{{|{+}\rangle}}:\forall{\ensuremath{\mathpzc{X}}}{\ensuremath{\mathpzc{Y}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}}.$$ Notice that $(H)~{{|{+}\rangle}}\to^*{{\bf true}}$. Subject reduction {#sec:sr} ================= As we will now explain, the usual formulation of subject reduction is not directly satisfied. We discuss the alternatives and opt for a weakened version of subject reduction. Principal types and subtyping alternatives ------------------------------------------ Since the terms of ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$ are not explicitly typed, we are bound to have sequents such as $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T_1$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T_2$ with distinct types $T_1$ and $T_2$ for the same term ${\ensuremath{\mathbf{t}}}$. Using Rules $(+_I)$ and $(\alpha_I)$ we get the valid typing judgement $\Gamma\vdash\alpha\cdot {\ensuremath{\mathbf{t}}}+\beta\cdot {\ensuremath{\mathbf{t}}}:\alpha\cdot T_1+\beta\cdot T_2$. Given that $\alpha\cdot {\ensuremath{\mathbf{t}}}+\beta\cdot {\ensuremath{\mathbf{t}}}$ reduces to $(\alpha+\beta)\cdot {\ensuremath{\mathbf{t}}}$, a regular subject reduction would ask for the valid sequent $\Gamma\vdash(\alpha+\beta)\cdot {\ensuremath{\mathbf{t}}}:\alpha\cdot T_1+\beta\cdot T_2$. 
But since in general we do not have $\alpha\cdot T_1+\beta\cdot T_2\equiv(\alpha+\beta)\cdot T_1\equiv(\alpha+\beta)\cdot T_2$, we need to find a way around this. A first approach would be to use the notion of principal types. However, since our type system includes [*System F*]{}, the usual examples for the absence of principal types apply to our settings: we cannot rely upon this method. A second approach would be to ask for the sequent $\Gamma\vdash(\alpha+\beta)\cdot {\ensuremath{\mathbf{t}}}:\alpha\cdot T_1+\beta\cdot T_2$ to be valid. If we force this typing rule into the system, it seems to solve the issue but then the type of a term becomes pretty much arbitrary: with typing context $\Gamma$, the term $(\alpha+\beta)\cdot{\ensuremath{\mathbf{t}}}$ would then be typed with any combination $\gamma\cdot T_1 + \delta\cdot T_2$, where $\alpha+\beta=\gamma+\delta$. The approach we favour in this paper is via a notion of order on types. The order, denoted with $\sqsupseteq$, will be chosen so that the factorisation rules make the types of terms smaller. We will ask in particular that $(\alpha+\beta)\cdot T_1\sqsupseteq\alpha\cdot T_1+\beta\cdot T_2$ and $(\alpha+\beta)\cdot T_2\sqsupseteq\alpha\cdot T_1+\beta\cdot T_2$ whenever $T_1$ and $T_2$ are types for the same term. This approach can also be extended to solve a second pitfall coming from the rule ${{\ensuremath{\mathbf{t}}}} + {\ensuremath{\mathbf{0}}} \to {\ensuremath{\mathbf{t}}}$. Indeed, although $x:{\ensuremath{\mathpzc{X}}}\vdash x + {\ensuremath{\mathbf{0}}} : {\ensuremath{\mathpzc{X}}}+0\cdot T$ is well-typed for any inhabited $T$, the sequent $x:{\ensuremath{\mathpzc{X}}}\vdash x:{\ensuremath{\mathpzc{X}}}+0\cdot T$ is not valid in general. We therefore extend the ordering to also have ${\ensuremath{\mathpzc{X}}}\sqsupseteq {\ensuremath{\mathpzc{X}}}+0\cdot T$. Notice that we are not introducing a subtyping relation with this ordering. 
For example, although $\vdash (\alpha+\beta)\cdot\lambda x.\lambda y.x:(\alpha+\beta)\cdot\forall{\ensuremath{\mathpzc{X}}}.{\ensuremath{\mathpzc{X}}}\to ({\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{X}}})$ is valid and $(\alpha+\beta)\cdot\forall{\ensuremath{\mathpzc{X}}}.{\ensuremath{\mathpzc{X}}}\to ({\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{X}}})\sqsupseteq \alpha\cdot \forall{\ensuremath{\mathpzc{X}}}.{\ensuremath{\mathpzc{X}}}\to ({\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{X}}})+\beta\cdot\forall{\ensuremath{\mathpzc{X}}}{\ensuremath{\mathpzc{Y}}}.{\ensuremath{\mathpzc{X}}}\to({\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}})$, the sequent $\vdash (\alpha+\beta)\cdot\lambda x.\lambda y.x:\alpha\cdot\forall {\ensuremath{\mathpzc{X}}}.{\ensuremath{\mathpzc{X}}}\to ({\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{X}}})+\beta\cdot\forall{\ensuremath{\mathpzc{X}}}{\ensuremath{\mathpzc{Y}}}.{\ensuremath{\mathpzc{X}}}\to({\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}})$ is not valid. Weak subject reduction ---------------------- We define the ordering relation $\sqsupseteq$ on types discussed above as the smallest reflexive, transitive and congruent relation satisfying the rules: 1. $(\alpha+\beta)\cdot T\sqsupseteq\alpha\cdot T+\beta\cdot T'$ if there are $\Gamma,{\ensuremath{\mathbf{t}}}$ such that $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}: \alpha\cdot T$ and $\Gamma\vdash\beta\cdot{\ensuremath{\mathbf{t}}}: \beta\cdot T'$. 2. $T\sqsupseteq T+0\cdot R$ for any type $R$. 3. If $T\sqsupseteq R$ and $U\sqsupseteq V$, then $T+S\sqsupseteq R+S$, $\alpha\cdot T\sqsupseteq\alpha\cdot R$, $U\to T\sqsupseteq U\to R$ and $\forall X.U\sqsupseteq\forall X.V$. Note that the fact that $\Gamma\vdash {\ensuremath{\mathbf{t}}}: T$ and $\Gamma\vdash {\ensuremath{\mathbf{t}}}: T'$ does not imply that $\beta\cdot T\sqsupseteq \beta\cdot T'$. 
For instance, although $\beta\cdot T\sqsupseteq 0\cdot T+\beta\cdot T'$, we do not have $0\cdot T+\beta\cdot T'\equiv\beta\cdot T'$. Let $R$ be any reduction rule from Figure \[fig:Vec\], and $\to_R$ a one-step reduction by rule $R$. A weak version of the subject reduction theorem can be stated as follows. \[thm:subjectreduction\] For any terms ${\ensuremath{\mathbf{t}}}$, ${\ensuremath{\mathbf{t}}}'$, any context $\Gamma$ and any type $T$, if ${\ensuremath{\mathbf{t}}}\to_R{\ensuremath{\mathbf{t}}}'$ and $\Gamma\vdash {\ensuremath{\mathbf{t}}}: T$, then: 1. if $R\notin$ Group F, then $\Gamma\vdash{\ensuremath{\mathbf{t}}}': T$; 2. if $R\in$ Group F, then $\exists S\sqsupseteq T$ such that $\Gamma\vdash{\ensuremath{\mathbf{t}}}': S$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}: S$. Prerequisites to the proof {#sec:prereq} -------------------------- The proof of Theorem \[thm:subjectreduction\] requires some machinery that we develop in this section. Omitted proofs can be found in \[app:srpre\]. ### Properties of types The following lemma gives a characterisation of types as linear combinations of unit types and general variables. \[lem:typecharact\] For any type $T$ in $\mathcal{G}$, there exist $n,m\in\mathbb{N}$, $\alpha_1,\dots,\alpha_n$, $\beta_1,\dots,\beta_m\in{\ensuremath{\mathsf{S}}}$, distinct unit types $U_1,\dots,U_n$ and distinct general variables ${\ensuremath{\mathbb{X}}}_1,\dots,{\ensuremath{\mathbb{X}}}_m$ such that $$T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j\ .\qedhere$$ Our system admits weakening, as stated by the following lemma. \[lem:weakening\] Let ${\ensuremath{\mathbf{t}}}$ be such that $x\not\in{\ensuremath{FV}({\ensuremath{\mathbf{t}}})}$. Then $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ is derivable if and only if $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ is derivable. By a straightforward induction on the type derivation. 
### Properties on the equivalence relation

\[lem:equivdistinctscalars\] Let $U_1,\dots,U_n$ be a set of distinct unit types, and let $V_1,\dots,V_m$ also be a set of distinct unit types. If ${\sum_{i=1}^{n}}\alpha_i\cdot U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot V_j$, then $m=n$ and there exists a permutation $p$ of $\{1,\dots,m\}$ such that $\forall i$, $\alpha_i=\beta_{p(i)}$ and $U_i\equiv V_{p(i)}$.

\[lem:equivforall\] 

1. \[it:equivforall1\] ${\sum_{i=1}^{n}}\alpha_i\cdot U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot V_j\Leftrightarrow{\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot\forall X.V_j$.

2. \[it:equivforall2\] ${\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot V_j\Rightarrow\forall V_j,\exists W_j~/~V_j\equiv\forall X.W_j$.

3. \[it:equivforall3\] $T\equiv R\Rightarrow T[A/X]\equiv R[A/X]$.

### An auxiliary relation on types

For the proof of subject reduction, we use the standard strategy developed by Barendregt [@Barendregt92][^1]. It consists in defining a relation between types of the form $\forall X.T$ and $T$. For our vectorial type system, we take linear combinations of types into account.

\[def:order\] For any types $T, R$, any context $\Gamma$ and any term ${\ensuremath{\mathbf{t}}}$ such that $$\prooftree\Gamma\vdash{\ensuremath{\mathbf{t}}}:T \justifies \prooftree\vdots \justifies\Gamma\vdash{\ensuremath{\mathbf{t}}}:R \endprooftree \endprooftree$$

1. if $X\notin{\ensuremath{FV}(\Gamma)}$, write $T\succ^{{\ensuremath{\mathbf{t}}}}_{X,\Gamma} R$ if either

   - $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i$ and $R\equiv{\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i$, or

   - $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i$ and $R\equiv {\sum_{i=1}^{n}}\alpha_i\cdot U_i[A/X]$.

2.
if ${\mathcal{V}}$ is a set of type variables such that ${\mathcal{V}}\cap{\ensuremath{FV}(\Gamma)}=\emptyset$, we define $\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma}$ inductively by - If $X\in {\mathcal{V}}$ and $T\succ^{{\ensuremath{\mathbf{t}}}}_{X,\Gamma} R$, then $T\succeq^{{\ensuremath{\mathbf{t}}}}_{\{X\},\Gamma} R$. - If ${\mathcal{V}}_1,{\mathcal{V}}_2\subseteq{\mathcal{V}}$, $T\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}}_1,\Gamma} R$ and $R\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}}_2,\Gamma} S$, then $T\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}}_1\cup{\mathcal{V}}_2,\Gamma} S$. - If $T\equiv R$, then $T\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} R$. Let the following be a valid derivation. $$\prooftree \prooftree \prooftree \prooftree \prooftree\Gamma\vdash{\ensuremath{\mathbf{t}}}:T\qquad T\equiv {\sum_{i=1}^{n}}\alpha_i\cdot U_i \justifies\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot U_i\qquad {\ensuremath{\mathpzc{X}}}\notin{\ensuremath{FV}(\Gamma)} \using\equiv \endprooftree \justifies\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall{\ensuremath{\mathpzc{X}}}.U_i \using\forall_{\ensuremath{\mathpzc{I}}} \endprooftree \justifies\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot U_i[V/{\ensuremath{\mathpzc{X}}}] \using\forall_{\ensuremath{\mathpzc{E}}} \endprooftree {\ensuremath{\mathbb{Y}}}\notin{\ensuremath{FV}(\Gamma)} \justifies\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall{\ensuremath{\mathbb{Y}}}.U_i[V/{\ensuremath{\mathpzc{X}}}] \using\forall_{\ensuremath{\mathpzc{I}}} \endprooftree \qquad {\sum_{i=1}^{n}}\alpha_i\cdot\forall{\ensuremath{\mathbb{Y}}}.U_i[V/{\ensuremath{\mathpzc{X}}}]\equiv R \justifies\Gamma\vdash{\ensuremath{\mathbf{t}}}:R \using\equiv \endprooftree$$ Then $T\succeq^{{\ensuremath{\mathbf{t}}}}_{\{{\ensuremath{\mathpzc{X}}},{\ensuremath{\mathbb{Y}}}\},\Gamma} R$. 
Note that this relation is stable under reduction in the following way:

\[lem:subjectreductionofrelation\] If $T\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} R$, ${\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{r}}}$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}: T$, then $T\succeq^{{\ensuremath{\mathbf{r}}}}_{{\mathcal{V}},\Gamma} R$.

The following lemma states that if two arrow types are ordered, then they are equivalent up to some substitutions.

\[lem:arrowscomp\] If $V\to R\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} \forall\vec X.(U\to T)$, then $U\to T\equiv(V\to R)[\vec{A}/\vec{Y}]$, with $\vec Y\notin {\ensuremath{FV}(\Gamma)}$.

### Generation lemmas

Before proving Theorem \[thm:subjectreduction\], we need to prove some basic properties of the system.

\[lem:scalars\] For any context $\Gamma$, term ${\ensuremath{\mathbf{t}}}$, type $T$ and scalar $\alpha$, if $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}: T$, then there exists a type $R$ such that $T\equiv\alpha\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}: R$. Moreover, if the minimum size of the derivation of $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}:T$ is $s$, then if $T=\alpha\cdot R$, the minimum size of the derivation of $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$ is at most $s-1$; otherwise, its minimum size is at most $s-2$.

The following lemma shows that the type for ${\ensuremath{\mathbf{0}}}$ is always $0\cdot T$.

\[lem:termzero\] Let ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$ or ${\ensuremath{\mathbf{t}}}=\alpha\cdot{\ensuremath{\mathbf{0}}}$; then $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ implies $T\equiv 0\cdot R$.

\[lem:sums\] If $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}:S$, then $S\equiv T+R$ with $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R$.
Moreover, if the minimum size of the derivation of $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}:S$ is $s$, then if $S=T+R$, the minimum sizes of the derivations of $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R$ are at most $s-1$, and if $S\neq T+R$, the minimum sizes of these derivations are at most $s-2$.

\[lem:app\] If $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:T$, then $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec{A}_j/\vec{X}]$ where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec{A}_j/\vec{X}]\succeq^{({\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{r}}}}_{{\mathcal{V}},\Gamma} T$ for some ${\mathcal{V}}$.

\[lem:abs\] If $\Gamma\vdash\lambda x.{\ensuremath{\mathbf{t}}}:T$, then $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:R$ where $U\to R\succeq^{\lambda x.{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} T$ for some ${\mathcal{V}}$.

A basis term can always be given a unit type.

\[lem:basevectors\] For any context $\Gamma$, type $T$ and basis term ${\ensuremath{\mathbf{b}}}$, if $\Gamma\vdash{\ensuremath{\mathbf{b}}}: T$ then there exists a unit type $U$ such that $T\equiv U$.

### Substitution lemma

The final ingredient for the proof of Theorem \[thm:subjectreduction\] is a lemma relating well-typed terms and substitution.

\[lem:substitution\] For any term ${{\ensuremath{\mathbf{t}}}}$, basis term ${\ensuremath{\mathbf{b}}}$, term variable $x$, context $\Gamma$, types $T$, $R$, $U$, $\vec{W}$, set of type variables ${\mathcal{V}}$, type variables $\vec{X}$ and types $A$, where $A$ is a unit type if $\vec X$ are unit variables and a general type otherwise, we have,

1. \[it:substitutionTypes\] if $\Gamma\vdash{\ensuremath{\mathbf{t}}}: T$, then $\Gamma[A/X]\vdash{\ensuremath{\mathbf{t}}}: T[A/X]$;

2.
\[it:substitutionTerms\] if $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$, $\Gamma\vdash{\ensuremath{\mathbf{b}}}:U$ then $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]: T$.

The proof of subject reduction (Theorem \[thm:subjectreduction\]) follows by induction using the previously defined lemmas. It can be found in full detail in \[app:srproof\].

Strong normalisation {#sec:SN}
====================

For proving strong normalisation of well-typed terms, we use reducibility candidates, a well-known method described for example in [@GirardLafontTaylor89 Ch. 14]. The technique is adapted to linear combinations of terms. Omitted proofs in this section can be found in the appendix.

A [*neutral term*]{} is a term that is not a $\lambda$-abstraction and that reduces to something. The set of [*closed neutral terms*]{} is denoted with ${\mathcal{N}}$. We write ${\Lambda_0}$ for the set of closed terms and ${{\it SN}_0}$ for the set of closed, strongly normalising terms. If ${\ensuremath{\mathbf{t}}}$ is any term, ${{\rm Red}}({\ensuremath{\mathbf{t}}})$ is the set of all terms ${\ensuremath{\mathbf{t}}}'$ such that ${\ensuremath{\mathbf{t}}}\to {\ensuremath{\mathbf{t}}}'$. It is naturally extended to sets of terms. We say that a set $S$ of closed terms is a reducibility candidate, denoted with $S\in{\mathsf{RC}}$, if the following conditions are verified:

${{\bf RC}}_1$ : Strong normalisation: $S\subseteq{{\it SN}_0}$.

${{\bf RC}}_2$ : Stability under reduction: ${\ensuremath{\mathbf{t}}}\in S$ implies ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq S$.

${{\bf RC}}_3$ : Stability under neutral expansion: If ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$ and ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq S$ then ${\ensuremath{\mathbf{t}}}\in S$.

${{\bf RC}}_4$ : The common inhabitant: ${\ensuremath{\mathbf{0}}}\in S$.
We define the notion of [*algebraic context*]{} over a list of terms $\vec {{\ensuremath{\mathbf{t}}}}$, with the following grammar: $$F(\vec{{\ensuremath{\mathbf{t}}}}),G(\vec{{\ensuremath{\mathbf{t}}}})\quad::=\quad {\ensuremath{\mathbf{t}}}_i~|~ F(\vec{{\ensuremath{\mathbf{t}}}}) + G(\vec{{\ensuremath{\mathbf{t}}}})~|~\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}}) ~|~ {{\ensuremath{\mathbf{0}}}},$$ where ${\ensuremath{\mathbf{t}}}_i$ is the $i$-th element of the list $\vec{{\ensuremath{\mathbf{t}}}}$. Given a set of terms $S=\{{\ensuremath{\mathbf{s}}}_i\}_i$, we write $\mathcal{F}(S)$ for the set of terms of the form $F(\vec{{\ensuremath{\mathbf{s}}}})$ when $F$ ranges over algebraic contexts. We introduce a condition on contexts, which will be handy for defining some of the operations on candidates:

${{\bf CC}}$ : If for some $F$, $F(\vec{{\ensuremath{\mathbf{s}}}})\in S$ then $\forall i, {\ensuremath{\mathbf{s}}}_i\in S$.

We then define the following operations on reducibility candidates.

1. Let ${\mathsf{A}}$ and ${\mathsf{B}}$ be in ${\mathsf{RC}}$. ${\mathsf{A}}\to{\mathsf{B}}$ is the closure under ${{\bf RC}}_3$ and ${{\bf RC}}_4$ of the set of ${\ensuremath{\mathbf{t}}}\in{\Lambda_0}$ such that $({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{0}}}\in{\mathsf{B}}$ and such that for all basis terms ${\ensuremath{\mathbf{b}}}\in{\mathsf{A}}$, $({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{b}}}\in{\mathsf{B}}$.

2. If $\{{\mathsf{A}}_i\}_i$ is a family of reducibility candidates, $\sum_i{\mathsf{A_i}}$ is the closure under ${{\bf CC}}$, ${{\bf RC}}_2$ and ${{\bf RC}}_3$ of the set $$\left\{~F(\vec{{\ensuremath{\mathbf{t}}}})~|\textrm{ for all }j,\, {\ensuremath{\mathbf{t}}}_j \in{\mathsf{A_i}}~\textrm{ for some }i~\textrm{ and }F(\vec{{\ensuremath{\mathbf{t}}}})\in\mathcal{F}(\vec{{\ensuremath{\mathbf{t}}}})~ \right\}.$$

Notice that ${\sum_{i=1}^{1}}{\mathsf{A}}\neq{\mathsf{A}}$.
Indeed, ${\sum_{i=1}^{1}}{\mathsf{A}}$ is the closure under ${{\bf CC}}$, ${{\bf RC}}_2$ and ${{\bf RC}}_3$ of $\{ F(\vec{{\ensuremath{\mathbf{t}}}})~|~\textrm{for all }j,\ {\ensuremath{\mathbf{t}}}_j\in {\mathsf{A}} \}$, that is, the set of linear combinations of terms of ${\mathsf{A}}$, and its closure.

\[lem:RCop\] If ${\mathsf{A}}$, ${\mathsf{B}}$ and all the ${\mathsf{A}}_i$’s are in ${\mathsf{RC}}$, then so are ${\mathsf{A}}\to{\mathsf{B}}$, $\sum_i{\mathsf{A}}_i$ and $\cap_i{\mathsf{A}}_i$.

A *single type valuation* is a partial function from type variables to reducibility candidates, which we define as a sequence of comma-separated mappings, with $\emptyset$ denoting the empty valuation: $\rho:=\,\emptyset~|~\rho,X\mapsto{\mathsf{A}}$. Type variables are interpreted using pairs of single type valuations with common domain, which we simply call [*valuations*]{}: $\rho = (\rho_+,\rho_-)$ with $|\rho_+|=|\rho_-|$. Given a valuation $\rho=(\rho_+,\rho_-)$, the [*complementary valuation*]{} $\bar\rho$ is the pair $(\rho_-,\rho_+)$. We write $(X_+,X_-)\mapsto(A_+,A_-)$ for the valuation $(X_+\mapsto A_+, X_-\mapsto A_-)$. A valuation is called *valid* if for all $X$, $\rho_-(X)\subseteq\rho_+(X)$.

From now on, we will consider the following grammar $${\ensuremath{\mathbb{U,V,W}}} ::= U~|~{\ensuremath{\mathbb{X}}}.$$ That is, we use ${\ensuremath{\mathbb{U,V,W}}}$ to range over both unit types and variables of kind ${\ensuremath{\mathbb{X}}}$.

To define the interpretation of a type $T$, we use the following result.

\[lem:typedecomp\] Any type $T$ has a unique canonical decomposition $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$ such that for all $l\neq k$, ${\ensuremath{\mathbb{U}}}_l\not\equiv{\ensuremath{\mathbb{U}}}_k$.
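The canonical decomposition can be computed mechanically once types are modelled as finite trees. The sketch below is an illustration only, not part of the formal development; it assumes equivalence of unit types is plain syntactic equality, distributes scalars over sums, and merges equal units by adding their scalars:

```python
# Types as trees: ('unit', name) | ('scal', a, T) | ('sum', T, R).
# canonical(T) returns the map {unit type -> scalar} of the
# canonical decomposition, using a.(T + R) = a.T + a.R and
# a.(b.T) = (a*b).T to push scalars onto the leaves.
def canonical(t, factor=1):
    kind = t[0]
    if kind == 'unit':
        return {t[1]: factor}
    if kind == 'scal':
        return canonical(t[2], factor * t[1])
    if kind == 'sum':
        left = canonical(t[1], factor)
        for u, a in canonical(t[2], factor).items():
            left[u] = left.get(u, 0) + a
        return left
    raise ValueError('unknown type constructor: %r' % (kind,))

# 2.U + 3.(U + V) has canonical decomposition 5.U + 3.V
T = ('sum', ('scal', 2, ('unit', 'U')),
            ('scal', 3, ('sum', ('unit', 'U'), ('unit', 'V'))))
```

Uniqueness, in this toy model, is simply the fact that the resulting map does not depend on the order in which sums are traversed.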
The interpretation ${\ensuremath{\llbracket{T}\rrbracket}}_\rho$ of a type $T$ in a valuation $\rho=(\rho_+,\rho_-)$ defined for each free type variable of $T$ is given by: $$\begin{array}{r@{~=~}l} {\ensuremath{\llbracket{X}\rrbracket}}_\rho & \rho_+(X),\\ {\ensuremath{\llbracket{U\to T}\rrbracket}}_\rho & {\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho}\to{\ensuremath{\llbracket{T}\rrbracket}}_\rho,\\ {\ensuremath{\llbracket{\forall X.U}\rrbracket}}_\rho & \cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\mathsf{A}}, {\mathsf{B}})},\\ \multicolumn{2}{c}{\mbox{If }T\equiv\sum_i\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i\mbox{ is the canonical decomposition of }T\mbox{ and }T\not\equiv{\ensuremath{\mathbb{U}}}}\\ {\ensuremath{\llbracket{T}\rrbracket}}_\rho & \sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho} \end{array}$$ From Lemma \[lem:RCop\], the interpretation of any type is a reducibility candidate.

Reducibility candidates deal with closed terms, whereas proving the adequacy lemma by induction requires the use of open terms with some assumptions on their free variables, which will be guaranteed by a context. Therefore we use *substitutions* $\sigma$ to close terms: $$\sigma := \emptyset \;|\; (x \mapsto{\ensuremath{\mathbf{b}}};\sigma)\enspace,$$ where ${\ensuremath{\mathbf{t}}}_{\emptyset} = {\ensuremath{\mathbf{t}}}$ and ${\ensuremath{\mathbf{t}}}_{x \mapsto {\ensuremath{\mathbf{b}}};\sigma} = {\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]_{\sigma}$. All substitutions end with $\emptyset$, hence we omit it when unnecessary. Given a context $\Gamma$, we say that a substitution $\sigma$ *satisfies* $\Gamma$ for the valuation $\rho$ (notation: $\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_{\rho}$) when $(x:U) \in \Gamma$ implies $x_\sigma\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho}$ (note the change of polarity).
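The sequential reading of a closing substitution, ${\ensuremath{\mathbf{t}}}_{x\mapsto{\ensuremath{\mathbf{b}}};\sigma} = {\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]_{\sigma}$, can be sketched on a toy named-term representation. This is an illustration only; since the substituted terms are closed basis terms, no capture-avoiding renaming is needed:

```python
# Terms as trees: ('var', x) | ('lam', x, body) | ('app', f, a).
def subst(t, x, b):
    kind = t[0]
    if kind == 'var':
        return b if t[1] == x else t
    if kind == 'lam':
        # the binder shadows x; b is closed, so no capture can occur
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, b))
    return ('app', subst(t[1], x, b), subst(t[2], x, b))

def close(t, sigma):
    # t_{x -> b; sigma} = t[b/x]_sigma, applied left to right
    for x, b in sigma:
        t = subst(t, x, b)
    return t

identity = ('lam', 'z', ('var', 'z'))
closed = close(('app', ('var', 'f'), ('var', 'y')),
               [('f', identity), ('y', identity)])
```

Here `closed` is the fully closed term `(λz.z) (λz.z)`, as expected.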
A typing judgement $\Gamma\vdash{\ensuremath{\mathbf{t}}}: T$ is said to be *valid* (notation $\Gamma\models{\ensuremath{\mathbf{t}}}: T$) if

- in case $T\equiv{\ensuremath{\mathbb{U}}}$, for every valuation $\rho$ and every substitution $\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_\rho$, we have ${\ensuremath{\mathbf{t}}}_{\sigma}\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}}\rrbracket}}_{\rho}$;

- otherwise, that is, if $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$ with $n>1$ and ${\ensuremath{\mathbb{U}}}_i\not\equiv{\ensuremath{\mathbb{U}}}_j$ for all $i\neq j$ (notice that by Lemma \[lem:typedecomp\] such a decomposition always exists), then for every valuation $\rho$ and set of valuations $\{\rho_i\}_{i=1}^n$, where $\rho_i$ acts on $FV(U_i)\setminus FV(\Gamma)$, and for every substitution $\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_\rho$, we have ${\ensuremath{\mathbf{t}}}_{\sigma}\in\sum_{i=1}^n{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho,\rho_i}$.

\[lem:substRed\] For any types $T$ and $A$, variable $X$ and valuation $\rho$, we have ${\ensuremath{\llbracket{T[A/X]}\rrbracket}}_\rho = {\ensuremath{\llbracket{T}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$ and ${\ensuremath{\llbracket{T[A/X]}\rrbracket}}_{\bar\rho} = {\ensuremath{\llbracket{T}\rrbracket}}_{\bar\rho,(X_-,X_+)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho})}$.

The proof of the Adequacy Lemma, as well as the machinery of auxiliary lemmas it needs, can be found in \[app:adequacy\].

\[lem:SNadeq\] Every derivable typing judgement is valid: for every derivable sequent $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$, we have $\Gamma\models{\ensuremath{\mathbf{t}}}:T$.
\[th:SN\] If $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ is a derivable sequent, then ${\ensuremath{\mathbf{t}}}$ is strongly normalising.

If $\Gamma$ is the list $(x_i:U_i)_i$, the sequent $\vdash\lambda x_1\ldots x_n.{\ensuremath{\mathbf{t}}}:U_1\to(\cdots\to(U_n\to T)\cdots)$ is derivable. Using Lemma \[lem:SNadeq\], we deduce that for any valuation $\rho$ and any substitution $\sigma\in{\ensuremath{\llbracket{\emptyset}\rrbracket}}_\rho$, we have $\lambda x_1\ldots x_n.{\ensuremath{\mathbf{t}}}_\sigma\in{\ensuremath{\llbracket{U_1\to(\cdots\to(U_n\to T)\cdots)}\rrbracket}}_\rho$. By construction, $\sigma$ does nothing on ${\ensuremath{\mathbf{t}}}$: ${\ensuremath{\mathbf{t}}}_\sigma = {\ensuremath{\mathbf{t}}}$. Since ${\ensuremath{\llbracket{U_1\to(\cdots\to(U_n\to T)\cdots)}\rrbracket}}_\rho$ is a reducibility candidate, $\lambda x_1\ldots x_n.{\ensuremath{\mathbf{t}}}$ is strongly normalising and hence ${\ensuremath{\mathbf{t}}}$ is strongly normalising.

Interpretation of typing judgements {#sec:examples}
===================================

The general case
----------------

In the general case the calculus can represent infinite-dimensional linear operators such as $\lambda x.x$, $\lambda x.\lambda y.y$, $\lambda x.\lambda f.(f)\,x$, …, and their applications. Even for such general terms ${\ensuremath{\mathbf{t}}}$, the vectorial type system provides much information about the superposition of basis terms $\sum_i\alpha_i\cdot{\ensuremath{\mathbf{b}}}_i$ to which ${\ensuremath{\mathbf{t}}}$ reduces, as explained in Theorem \[thm:termcharact\]. How much information is brought by the type system in the finitary case is the topic of Section \[sec:finitary\].

\[thm:termcharact\] Let $T$ be a generic type with canonical decomposition ${\sum_{i=1}^{n}}\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$, in the sense of Lemma \[lem:typedecomp\].
If ${}\vdash{\ensuremath{\mathbf{t}}}:T$, then ${\ensuremath{\mathbf{t}}}\to^*{\sum_{i=1}^{n}}{\sum_{j=1}^{m_i}}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}$, where for all $i,j$, $\vdash{\ensuremath{\mathbf{b}}}_{ij}:{\ensuremath{\mathbb{U}}}_i$ and ${\sum_{j=1}^{m_i}}\beta_{ij}=\alpha_i$, and with the convention that ${\sum_{j=1}^{0}}\beta_{ij}=0$ and ${\sum_{j=1}^{0}}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}={\ensuremath{\mathbf{0}}}$.

The detailed proof of the previous theorem can be found in \[app:examples\].

The finitary case: Expressing matrices and vectors {#sec:finitary}
--------------------------------------------------

In what we call the “finitary case”, we show how to encode finite-dimensional linear operators, i.e. matrices, together with their applications to vectors, as well as matrix and tensor products. Theorem \[thm:matrixsound\] shows that we can encode matrices, vectors and operations upon them, and the type system will provide the result of such operations.

### In 2 dimensions

In this section we come back to the motivating example introducing the type system and we show how [[$\lambda^{\!\!\textrm{vec}}$]{}]{} handles the Hadamard gate, and how to encode matrices and vectors. With an empty typing context, the booleans ${{\bf true}}=\lambda x.\lambda y.x\,$ and $\,{{\bf false}}=\lambda x.\lambda y.y$ can be respectively typed with the types ${\mathcal{T}}=\forall {\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to ({\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{X}}})\,$ and $\,{\mathcal{F}}=\forall{\ensuremath{\mathpzc{XY}}}.{\ensuremath{\mathpzc{X}}}\to ({\ensuremath{\mathpzc{Y}}}\to{\ensuremath{\mathpzc{Y}}})$. The superposition can then be typed as $\vdash\alpha\cdot{{\bf true}}+\beta\cdot{{\bf false}}:\alpha\cdot{\mathcal{T}}+ \beta\cdot{\mathcal{F}}$.
(Note that it can also be typed with $(\alpha+\beta)\cdot \forall{\ensuremath{\mathpzc{X}}}.{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{X}}}\to{\ensuremath{\mathpzc{X}}}$.) The linear map ${\ensuremath{\mathbf{U}}}$ sending ${{\bf true}}$ to $a\cdot{{\bf true}}+b\cdot{{\bf false}}$ and ${{\bf false}}$ to $c\cdot{{\bf true}}+d\cdot{{\bf false}}$, that is $$\begin{aligned} {{\bf true}}&\mapsto a\cdot{{\bf true}}+b\cdot{{\bf false}}, \\ {{\bf false}}&\mapsto c\cdot{{\bf true}}+d\cdot{{\bf false}},\end{aligned}$$ is written as $${\ensuremath{\mathbf{U}}}={\lambda x.{\{((x){[a\cdot{{\bf true}}+b\cdot{{\bf false}}]}){[c\cdot{{\bf true}}+d\cdot{{\bf false}}]}\}} }.$$ The following sequent is valid: $$\vdash{\ensuremath{\mathbf{U}}}:\forall {\ensuremath{\mathbb{X}}}.((I\to (a\cdot{\mathcal{T}}+b\cdot{\mathcal{F}}))\to(I\to (c\cdot{\mathcal{T}}+d\cdot{\mathcal{F}}))\to I\to {\ensuremath{\mathbb{X}}})\to {\ensuremath{\mathbb{X}}}.$$ This is consistent with the discussion in the introduction: the Hadamard gate is the case $a=b=c=\frac1{\sqrt2}$ and $d=-\frac1{\sqrt2}$. One can check that with an empty typing context, $({\bf U})~{{\bf true}}$ is well typed of type $a\cdot{\mathcal{T}}+b\cdot{\mathcal{F}}$, as expected since it reduces to $a\cdot{{\bf true}}+b\cdot{{\bf false}}$. The term $({\bf H})~\frac1{\sqrt2}\cdot ({{\bf true}}+{{\bf false}})$ is well typed of type ${\mathcal{T}}+0\cdot{\mathcal{F}}$. Since the term reduces to ${{\bf true}}$, this is consistent with subject reduction: we indeed have ${\mathcal{T}}\sqsupseteq{\mathcal{T}}+0\cdot{\mathcal{F}}$. But we can do more than type $2$-dimensional vectors and $2\times2$ matrices: using the same technique we can encode vectors and matrices of any size.
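The reduction behaviour claimed here can be mirrored numerically, representing a superposition by its amplitudes over the basis {true, false}. This is a sketch of the intended semantics, not of the λ-terms themselves:

```python
from math import sqrt

# Apply the map sending true to a.true + b.false and false to
# c.true + d.false, extended linearly to superpositions.
def apply_U(a, b, c, d, vec):
    return {
        'true':  a * vec.get('true', 0.0) + c * vec.get('false', 0.0),
        'false': b * vec.get('true', 0.0) + d * vec.get('false', 0.0),
    }

h = 1 / sqrt(2)
# Hadamard is the case a = b = c = h, d = -h, applied here to
# the superposition h.(true + false)
result = apply_U(h, h, h, -h, {'true': h, 'false': h})
```

The amplitudes in `result` are 1 on true and 0 on false, matching both the reduction to **true** and the type ${\mathcal{T}}+0\cdot{\mathcal{F}}$ computed above.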
### Vectors in $n$ dimensions {#sec:vec} The $2$-dimensional space is represented by the span of $\lambda x_1x_2.x_1$ and $\lambda x_1x_2.x_2$: the $n$-dimensional space is simply represented by the span of all the $\lambda x_1\cdots{}x_n.x_i$, for $i=1\cdots{}n$. As for the two dimensional case where $$\vdash \alpha_1\cdot\lambda x_1x_2.x_1 + \alpha_2\cdot\lambda x_1x_2.x_2 : \alpha_1\cdot\forall {\ensuremath{\mathpzc{X}}}_1{\ensuremath{\mathpzc{X}}}_2.{\ensuremath{\mathpzc{X}}}_1 + \alpha_2\cdot\forall {\ensuremath{\mathpzc{X}}}_1{\ensuremath{\mathpzc{X}}}_2.{\ensuremath{\mathpzc{X}}}_2,$$ an $n$-dimensional vector is typed with $$\vdash {\sum_{i=1}^{n}}\alpha_i\cdot\lambda x_1\cdots{}x_n.x_i : {\sum_{i=1}^{n}}\alpha_i\cdot\forall {\ensuremath{\mathpzc{X}}}_1\cdots{}{\ensuremath{\mathpzc{X}}}_n.{\ensuremath{\mathpzc{X}}}_i.$$ We use the notations $${{\ensuremath{\mathbf{e}}}}_i^n = \lambda x_1\cdots{}x_n.x_i, \qquad {{\ensuremath{\mathbf{E}}}}_i^n = \forall {\ensuremath{\mathpzc{X}}}_1\cdots{}{\ensuremath{\mathpzc{X}}}_n.{\ensuremath{\mathpzc{X}}}_i$$ and we write $$\begin{array}{r@{~=~}l@{~=~}l} \left\llbracket \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array} \right\rrbracket^{\rm term}_{n} & \left(\begin{array}{c} \alpha_{1}\cdot{{\ensuremath{\mathbf{e}}}}_1^n \\+\\ \cdots \\+\\ \alpha_{n}\cdot{{\ensuremath{\mathbf{e}}}}_n^n \end{array}\right) & \sum\limits_{i=1}^{n}\alpha_i\cdot {{\ensuremath{\mathbf{e}}}}_i^n, \\\multicolumn{3}{c}{ }\\ \left\llbracket \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array} \right\rrbracket^{\rm type}_{n} & \left(\begin{array}{c} \alpha_{1}\cdot{{\ensuremath{\mathbf{E}}}}_1^n \\+\\ \cdots \\+\\ \alpha_{n}\cdot{{\ensuremath{\mathbf{E}}}}_n^n \end{array}\right) & \sum\limits_{i=1}^{n}\alpha_i\cdot {{\ensuremath{\mathbf{E}}}}_i^n. 
\end{array}$$ ### $n\times m$ matrices {#sec:mat} Once the representation of vectors is chosen, it is easy to generalize the representation of $2\times 2$ matrices to the $n\times m$ case. Suppose that the matrix $U$ is of the form $$U = \left( \begin{array}{ccc} \alpha_{11} & \cdots & \alpha_{1m} \\ \vdots && \vdots \\ \alpha_{n1} & \cdots & \alpha_{nm} \end{array} \right),$$ then its representation is $$\left\llbracket U \right\rrbracket^{\rm term}_{n\times m} ={~~~~} \lambda x. \left\{ \left( \cdots \left( (x) \left[ \begin{array}{c} \alpha_{11}\cdot{{\ensuremath{\mathbf{e}}}}_1^n \\+\\ \cdots \\+\\ \alpha_{n1}\cdot{{\ensuremath{\mathbf{e}}}}_n^n \end{array} \right] \right) \cdots \left[ \begin{array}{c} \alpha_{1m}\cdot{{\ensuremath{\mathbf{e}}}}_1^n \\+\\ \cdots \\+\\ \alpha_{nm}\cdot{{\ensuremath{\mathbf{e}}}}_n^n \end{array} \right] \right) \right\}\qquad$$ and its type is $$\left\llbracket U \right\rrbracket^{\rm type}_{n\times m} ={~~~~} \forall{\ensuremath{\mathbb{X}}}. \left( \left[ \begin{array}{c} \alpha_{11}\cdot{{\ensuremath{\mathbf{E}}}}_1^n \\+\\ \cdots \\+\\ \alpha_{n1}\cdot{{\ensuremath{\mathbf{E}}}}_n^n \end{array} \right]\to \cdots \to \left[ \begin{array}{c} \alpha_{1m}\cdot{{\ensuremath{\mathbf{E}}}}_1^n \\+\\ \cdots \\+\\ \alpha_{nm}\cdot{{\ensuremath{\mathbf{E}}}}_n^n \end{array} \right]\to [~{\ensuremath{\mathbb{X}}}~] \right) \to {\ensuremath{\mathbb{X}}},$$ that is, an almost direct encoding of the matrix $U$. We also use the shortcut notation $${\bf mat}({\ensuremath{\mathbf{t}}}_1,\ldots,{\ensuremath{\mathbf{t}}}_n) = \lambda x.(\ldots((x)\,{[{\ensuremath{\mathbf{t}}}_1]})\ldots)\,{[{\ensuremath{\mathbf{t}}}_n]}$$ ### Useful constructions In this section, we describe a few terms representing constructions that will be used later on. 
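The mechanism behind the ${\bf mat}$ encoding can be replayed with ordinary closures: the argument of ${\bf mat}$ is a basis selector that picks out one of the columns handed to it. In this sketch (an illustration only) indices are 0-based and columns are plain tuples rather than superpositions:

```python
# e(n, i): the basis selector picking the i-th of n arguments (0-based),
# the analogue of the basis term e^n_i = \x1...xn.xi
def e(n, i):
    return lambda *args: args[i]

# mat(c1, ..., cn): hand every column to the argument selector,
# the analogue of \x.(...((x) [c1]) ...) [cn]
def mat(*cols):
    return lambda selector: selector(*cols)

U = mat(('a', 'b'), ('c', 'd'))   # columns of a 2x2 matrix
first_column = U(e(2, 0))         # "true" selects the first column
```

Linearity then does the rest: applying the encoded matrix to a superposition of selectors yields the corresponding linear combination of columns, i.e. ordinary matrix-vector multiplication.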
#### Projections The first useful family of terms are the projections, sending a vector to its $i^{\rm th}$ coordinate: $$\left( \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_i \\ \vdots \\ \alpha_n \end{array} \right) \longmapsto \left( \begin{array}{c} 0 \\ \vdots \\ \alpha_i \\ \vdots \\ 0 \end{array} \right).$$ Using the matrix representation, the term projecting the $i^{\rm th}$ coordinate of a vector of size $n$ is $$\xymatrix@R=0em@C=0em{ \textrm{$i^{\rm th}$ position}\ar@/^1em/[rd]&& \\ {{\ensuremath{\mathbf{p}}}}^n_i = {\bf mat}({{\ensuremath{\mathbf{0}}}},\cdots,{{\ensuremath{\mathbf{0}}}},&{{\ensuremath{\mathbf{e}}}}^n_i,&{{\ensuremath{\mathbf{0}}}},\cdots,{{\ensuremath{\mathbf{0}}}}). }$$ We can easily verify that $$\vdash {{\ensuremath{\mathbf{p}}}}^n_i : \left\llbracket \begin{array}{ccccc} 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & & & \vdots \\ 0 & & 1 & & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0 \end{array} \right\rrbracket^{\rm type}_{n\times n}$$ and that $$({{\ensuremath{\mathbf{p}}}}^n_{i_0})\, \left( {\sum_{i=1}^{n}}\alpha_i\cdot{{\ensuremath{\mathbf{e}}}}^n_i \right) \longrightarrow^* \alpha_{i_0}\cdot{{\ensuremath{\mathbf{e}}}}^n_{i_0}.$$ #### Vectors and diagonal matrices Using the projections defined in the previous section, it is possible to encode the map sending a vector of size $n$ to the corresponding $n\times n$ matrix: $$\left( \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array} \right) \longmapsto \left( \begin{array}{c@{~}c@{~}c} \alpha_1&&0 \\ &\ddots& \\ 0&&\alpha_n \end{array} \right)$$ with the term $${\bf diag}^n = \lambda b.{\bf mat}(({{\ensuremath{\mathbf{p}}}}^n_1)\,\{b\},\ldots,({{\ensuremath{\mathbf{p}}}}^n_n)\,\{b\})$$ of type $$\vdash {\bf diag}^n : \left[ \left\llbracket \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array} \right\rrbracket^{\rm type}_{n} \right] \to \left\llbracket \begin{array}{ccc} \alpha_1 & & 0 \\ & \ddots & \\ 0 & & \alpha_n \end{array} 
\right\rrbracket^{\rm type}_{n\times n}.$$ It is easy to check that $$({\bf diag}^n)\, \left[ {\sum_{i=1}^{n}}\alpha_i\cdot{\bf e}^n_i \right] \longrightarrow^* {\bf mat}(\alpha_1\cdot{\bf e}^n_1,\ldots,\alpha_n\cdot{\bf e}^n_n).$$

#### Extracting a column vector out of a matrix

Another construction that is worth exhibiting is the operation $$\left( \begin{array}{c@{~}c@{~}c} \alpha_{11}&\cdots&\alpha_{1n} \\ \vdots&&\vdots \\ \alpha_{m1}&\cdots&\alpha_{mn} \end{array} \right) \longmapsto \left( \begin{array}{c} \alpha_{1i} \\ \vdots \\ \alpha_{mi} \end{array} \right).$$ It is simply defined by multiplying the input matrix with the $i^{\rm th}$ base column vector: $${\bf col}^n_i = \lambda x.(x)\,{{\ensuremath{\mathbf{e}}}}^n_i$$ and one can easily check that this term has type $$\vdash {\bf col}^n_i : \left\llbracket \begin{array}{ccc} \alpha_{11}&\cdots&\alpha_{1n} \\ \vdots&&\vdots \\ \alpha_{m1}&\cdots&\alpha_{mn} \end{array} \right\rrbracket^{\rm type}_{m\times n} \to \left\llbracket \begin{array}{c} \alpha_{1i} \\ \vdots \\ \alpha_{mi} \end{array} \right\rrbracket^{\rm type}_{m}.$$ Note that the same term ${\bf col}^n_i$ can be typed with several values of $m$.

### A language of matrices and vectors

In this section we formalize what was informally presented in the previous sections: the fact that one can encode simple matrix and vector operations in [$\lambda^{\!\!\textrm{vec}}$]{}, and the fact that the type system serves as a witness for the result of the encoded operation. We define the language ${{\it Mat}}$ of matrices and vectors with the grammar $$\begin{array}{ll} M,N &{}::={} \zeta ~|~ M\otimes N ~|~ (M)\,N \\ u,v &{}::={} \nu ~|~ u\otimes v ~|~ (M)\,u, \end{array}$$ where $\zeta$ ranges over the set of matrices and $\nu$ over the set of (column) vectors. Terms are implicitly typed: types of matrices are $(m,n)$ where $m$ and $n$ range over positive integers, while types of vectors are simply integers. Typing rules are the following.
$$\infer{\zeta:(m,n)}{\zeta\in\mathbb{C}^{m\times n}} \qquad \infer{M\otimes N:(mm',nn')}{M:(m,n) & N:(m',n')} \qquad \infer{(M)\,N:(m,n)}{M:(m,n') & N:(n',n)}$$ $$\infer{\nu:m}{\nu\in\mathbb{C}^{m}} \qquad \infer{u\otimes v:mn}{u:m & v:n} \qquad \infer{(M)\,u:n}{M:(m,n) & u:m}$$ The operational semantics of this language is the natural interpretation of the terms as matrices and vectors. If $M$ computes the matrix $\zeta$, we write $M\downarrow\zeta$. Similarly, if $u$ computes the vector $\nu$, we write $u\downarrow\nu$. Following what we already said, matrices and vectors can be interpreted as types and terms in [$\lambda^{\!\!\textrm{vec}}$]{}. The map ${\ensuremath{\llbracket{-}\rrbracket}}^{\rm term}$ sends terms of ${\it Mat}$ to terms of ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$ and the map ${\ensuremath{\llbracket{-}\rrbracket}}^{\rm type}$ sends matrices and vectors to types of ${\ensuremath{\lambda^{\!\!\textrm{vec}}}}$. - Vectors and matrices are defined as in Sections \[sec:vec\] and \[sec:mat\]. - As we already discussed, the matrix-vector multiplication is simply the application of terms in [$\lambda^{\!\!\textrm{vec}}$]{}: $${{\ensuremath{\llbracket{(M)\,u}\rrbracket}}^{{\rm term}}} = ({{\ensuremath{\llbracket{M}\rrbracket}}^{{\rm term}}})\,{{\ensuremath{\llbracket{u}\rrbracket}}^{{\rm term}}}$$ - The matrix multiplication is performed by first extracting the column vectors, then performing the matrix-vector multiplication: this gives a column of the final matrix. We conclude by recomposing the final matrix column-wise. 
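The implicit typing of ${\it Mat}$ is just a shape discipline and can be checked mechanically. The following sketch follows the typing rules above literally (matrix types as pairs $(m,n)$, vector types as integers); it is an illustration, not part of the formal development:

```python
# Mat expressions: ('mat', m, n) | ('vec', m)
#                  | ('tensor', A, B) | ('app', A, B)
# Rules implemented, as in the text:
#   zeta : (m, n);  M (x) N : (mm', nn');  (M) N : (m, n) if M : (m, n'), N : (n', n)
#   nu : m;  u (x) v : mn;  (M) u : n if M : (m, n), u : m
def shape(t):
    kind = t[0]
    if kind == 'mat':
        return ('mat', t[1], t[2])
    if kind == 'vec':
        return ('vec', t[1])
    a, b = shape(t[1]), shape(t[2])
    if kind == 'tensor' and a[0] == 'mat' and b[0] == 'mat':
        return ('mat', a[1] * b[1], a[2] * b[2])
    if kind == 'tensor' and a[0] == 'vec' and b[0] == 'vec':
        return ('vec', a[1] * b[1])
    if kind == 'app' and a[0] == 'mat' and b[0] == 'mat' and a[2] == b[1]:
        return ('mat', a[1], b[2])
    if kind == 'app' and a[0] == 'mat' and b[0] == 'vec' and a[1] == b[1]:
        return ('vec', a[2])
    raise TypeError('ill-typed Mat expression')
```

A `TypeError` on mismatched inner dimensions plays the role of the implicit typing constraint.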
That is done with the term $${\bf app} = \lambda xy.{\bf mat}((x)\,(({\bf col}^m_1)\,y),\ldots,(x)\,(({\bf col}^m_n)\,y))$$ and its type is obtained accordingly. Hence, $${{\ensuremath{\llbracket{(M)~N}\rrbracket}}^{{\rm term}}}= (({\bf app})~{{\ensuremath{\llbracket{M}\rrbracket}}^{{\rm term}}})~{{\ensuremath{\llbracket{N}\rrbracket}}^{{\rm term}}}$$

- For defining the tensor of vectors, we need to multiply the coefficients of the vectors: $$\left( \begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_n \end{array} \right) \otimes \left( \begin{array}{c} \beta_1 \\ \vdots \\ \beta_m \end{array} \right) = \left( \begin{array}{c} \alpha_1 \cdot \left( \begin{array}{c} \beta_1 \\ \vdots \\ \beta_m \end{array} \right) \\ \vdots \\ \alpha_n \cdot \left( \begin{array}{c} \beta_1 \\ \vdots \\ \beta_m \end{array} \right) \end{array} \right) = \left( \begin{array}{c} \alpha_1\beta_1 \\ \vdots \\ \alpha_1\beta_m \\ \vdots \\ \alpha_n\beta_1 \\ \vdots \\ \alpha_n\beta_m \end{array} \right).$$ We perform this operation in several steps: first, we map the two vectors $(\alpha_i)_i$ and $(\beta_j)_j$ into the diagonal matrices of size $mn\times mn$ displayed below; these two embeddings can be represented as terms of [$\lambda^{\!\!\textrm{vec}}$]{}, written ${{\ensuremath{\mathbf{m}}}}^{n,m}_1$ and ${{\ensuremath{\mathbf{m}}}}^{m,n}_2$ respectively. It is then enough to multiply these two matrices together and apply the result to the all-ones vector to retrieve the diagonal: $$\left( \begin{array}{c@{}c@{}c@{}c@{}c@{}c@{}c} \alpha_1&&&&&& \\ &\vddots&&&&& \\ &&\alpha_1&&&& \\ &&&\vddots&&& \\ &&&&\alpha_n&& \\ &&&&&\vddots& \\ &&&&&&\alpha_n \end{array} \right) \left( \begin{array}{c@{}c@{}c@{}c@{}c@{}c@{}c} \beta_1&&&&&& \\ &\vddots&&&&& \\ &&\beta_m&&&& \\ &&&\vddots&&& \\ &&&&\beta_1&& \\ &&&&&\vddots& \\ &&&&&&\beta_m \end{array} \right) \left( \begin{array}{c} 1\\ \vdots \\ 1\\ \vdots \\ 1\\ \vdots \\ 1 \end{array} \right) \quad=\quad \left( \begin{array}{c} \alpha_1\beta_1 \\ \vdots \\ \alpha_1\beta_m \\ \vdots \\ \alpha_n\beta_1 \\ \vdots \\ \alpha_n\beta_m \end{array} \right)$$ and this can be implemented through matrix-vector multiplication:
$${\bf tens}^{n,m} = \lambda bc.(({{\ensuremath{\mathbf{m}}}}^{n,m}_1)\,b)\,\left((({{\ensuremath{\mathbf{m}}}}^{m,n}_2)\,c)\,\left({\sum_{i=1}^{mn}}{{\ensuremath{\mathbf{e}}}}^n_i\right)\right).$$ Hence, if $u:n$ and $v:m$, we have $${{\ensuremath{\llbracket{u\otimes v}\rrbracket}}^{{\rm term}}} = (({\bf tens}^{n,m})~{{\ensuremath{\llbracket{u}\rrbracket}}^{{\rm term}}})~{{\ensuremath{\llbracket{v}\rrbracket}}^{{\rm term}}}$$ - The tensor of matrices is done column by column: $$\begin{gathered} \qquad\left( \begin{array}{ccc} \alpha_{11} & \dots & \alpha_{1n}\\ \vdots & & \vdots\\ \alpha_{n'1} & \dots & \alpha_{n'n} \end{array} \right) \otimes \left( \begin{array}{ccc} \beta_{11} & \dots & \beta_{1m}\\ \vdots & & \vdots\\ \beta_{m'1} & \dots & \beta_{m'm} \end{array} \right) = \\ \left( \begin{array}{ccc} \left( \begin{array}{c} \alpha_{11}\\ \vdots\\ \alpha_{n'1} \end{array} \right)\otimes \left( \begin{array}{c} \beta_{11}\\ \vdots\\ \beta_{m'1} \end{array} \right) & \dots & \left( \begin{array}{c} \alpha_{1n}\\ \vdots\\ \alpha_{n'n} \end{array} \right)\otimes \left( \begin{array}{c} \beta_{1m}\\ \vdots\\ \beta_{m'm} \end{array} \right) \end{array} \right) \end{gathered}$$ Let $M$ be a matrix of size $m\times m'$ and $N$ a matrix of size $n\times n'$. Then $M\otimes N$ has size $mn\times m'n'$, and it can be implemented as $$\begin{gathered} {\bf Tens}^{m,n} ={}\\ \lambda bc.{\bf mat}( (({\bf tens}^{m,n})~({\bf col}_1^m)~b)~({\bf col}_1^n)~c, \cdots (({\bf tens}^{m,n})~({\bf col}_n^m)~b)~({\bf col}_m^n)~c) \end{gathered}$$ Hence, if $M:(m,m')$ and $N:(n,n')$, we have $${{\ensuremath{\llbracket{M\otimes N}\rrbracket}}^{{\rm term}}} = (({\bf Tens}^{m,n})~{{\ensuremath{\llbracket{M}\rrbracket}}^{{\rm term}}})~{{\ensuremath{\llbracket{N}\rrbracket}}^{{\rm term}}}$$ \[thm:matrixsound\] The denotation of [*Mat*]{} as terms and types of [[$\lambda^{\!\!\textrm{vec}}$]{}]{} is sound in the following sense.
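The column-by-column decomposition above is easy to check with a small Python sketch (ours): the columns of $M\otimes N$ are the tensors of the columns of $M$ and $N$, taken in lexicographic order.

```python
def tens_vec(u, v):
    return [a * b for a in u for b in v]

def col(M, j):
    """Extract column j of M (M is a list of rows)."""
    return [row[j] for row in M]

def from_cols(cols):
    """Rebuild a matrix (list of rows) from its list of columns."""
    return [[c[i] for c in cols] for i in range(len(cols[0]))]

def kron(M, N):
    # one column of the result per pair of columns of M and N
    return from_cols([tens_vec(col(M, j), col(N, k))
                      for j in range(len(M[0]))
                      for k in range(len(N[0]))])

A = [[1, 2],
     [3, 4]]
I2 = [[1, 0],
      [0, 1]]
print(kron(A, I2))  # [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 4, 0], [0, 3, 0, 4]]
```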
$$M\downarrow\zeta \qquad \textrm{implies} \qquad \vdash{{\ensuremath{\llbracket{M}\rrbracket}}^{{\rm term}}} : {{\ensuremath{\llbracket{\zeta}\rrbracket}}^{{\rm type}}},$$ $$u\downarrow\nu \qquad \textrm{implies} \qquad \vdash{{\ensuremath{\llbracket{u}\rrbracket}}^{{\rm term}}} : {{\ensuremath{\llbracket{\nu}\rrbracket}}^{{\rm type}}}.$$ The proof is a straightforward structural induction on $M$ and $u$. [[$\lambda^{\!\!\textrm{vec}}$]{}]{} and quantum computation ------------------------------------------------------------ In quantum computation, data is encoded in normalised vectors in Hilbert spaces. For our purpose, their interesting property is to be modules over the ring of complex numbers. The smallest non-trivial such space is the space of [*qubits*]{}. The space of qubits is the two-dimensional vector space $\mathbb{C}^2$, together with a chosen orthonormal basis $\{{{|{0}\rangle}}, {{|{1}\rangle}}\}$. A quantum bit (or qubit) is a normalised vector $\alpha{{|{0}\rangle}} + \beta{{|{1}\rangle}}$, where $|\alpha|^2 + |\beta|^2=1$. In quantum computation, the operations on qubits that are usually considered are the [*quantum gates*]{}, [[i.e.]{} ]{}a chosen set of unitary operations. For our purpose, their interesting property is to be [*linear*]{}. The fact that one can encode quantum circuits in [[$\lambda^{\!\!\textrm{vec}}$]{}]{} is a corollary of Theorem \[thm:matrixsound\]. Indeed, a quantum circuit can be regarded as a sequence of multiplications and tensors of matrices. The language of terms can faithfully represent those, whereas the type system can serve as an abstract interpretation of the actual unitary map computed by the circuit. We believe that this tool is a first step towards lifting the “quantumness” of algebraic $\lambda$-calculi to the level of a type-based analysis. It could also be a step towards a “quantum theoretical logic” coming readily with a Curry-Howard isomorphism.
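For instance, the standard circuit preparing a Bell state applies a Hadamard gate to the first qubit followed by a controlled-not, and is literally a sequence of tensors and multiplications; the following Python sketch (ours, with plain list-based matrices) makes this concrete:

```python
from math import sqrt

def mat_vec(M, u):
    return [sum(a * x for a, x in zip(row, u)) for row in M]

def kron_vec(u, v):
    return [a * b for a in u for b in v]

def kron_mat(M, N):
    return [[a * b for a in rm for b in rn] for rm in M for rn in N]

H = [[1 / sqrt(2),  1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]   # Hadamard gate
I2 = [[1, 0], [0, 1]]               # identity on one qubit
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]               # controlled-not

ket00 = kron_vec([1, 0], [1, 0])    # |00> = |0> tensor |0>
bell = mat_vec(CNOT, mat_vec(kron_mat(H, I2), ket00))
print(bell)  # [0.707..., 0.0, 0.0, 0.707...], the state (|00>+|11>)/sqrt(2)
```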
The logic we are sketching merges intuitionistic logic and vectorial structure, which makes it intriguing. The next step in the study of the quantumness of the linear algebraic $\lambda$-calculus is the exploration of the notion of orthogonality between terms, and the validation of this notion by means of a compilation into quantum circuits. The work of [@ValironQPL10] shows that this direction is worth pursuing. #### Acknowledgements We would like to thank Gilles Dowek and Barbara Petit for enlightening discussions. Detailed proofs of lemmas and theorems in Section \[sec:sr\] {#app:sr} ============================================================ Lemmas from Section \[sec:prereq\] {#app:srpre} ---------------------------------- [**Lemma \[lem:typecharact\]** (Characterisation of types)**.**]{} For any type $T$ in $\mathcal{G}$, there exist $n,m\in\mathbb{N}$, $\alpha_1,\dots,\alpha_n$, $\beta_1,\dots,\beta_m\in{\ensuremath{\mathsf{S}}}$, distinct unit types $U_1,\dots,U_n$ and distinct general variables ${\ensuremath{\mathbb{X}}}_1,\dots,{\ensuremath{\mathbb{X}}}_m$ such that $$T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j\ .$$ Structural induction on $T$. - Let $T$ be a unit type, then take $\alpha=\beta=1$, $n=1$ and $m=0$, and so $T\equiv{\sum_{i=1}^{1}}1\cdot U=1\cdot U$. - Let $T=\alpha\cdot T'$, then by the induction hypothesis $T'\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j$, so $T=\alpha\cdot T'\equiv\alpha\cdot ({\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j)\equiv{\sum_{i=1}^{n}}(\alpha\times\alpha_i)\cdot U_i+{\sum_{j=1}^{m}}(\alpha\times\beta_j)\cdot{\ensuremath{\mathbb{X}}}_j$.
- Let $T=R+S$, then by the induction hypothesis $R\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j$ and $S\equiv{\sum_{i=1}^{n'}}\alpha'_i\cdot U'_i+{\sum_{j=1}^{m'}}\beta'_j\cdot{\ensuremath{\mathbb{X'}}}_j$, so $T=R+S\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{i=1}^{n'}}\alpha'_i\cdot U'_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j+{\sum_{j=1}^{m'}}\beta'_j\cdot{\ensuremath{\mathbb{X'}}}_j$. If the $U_i$ and the $U'_i$ are all different from each other, we are done; otherwise, if $U_k=U'_h$, notice that $\alpha_k\cdot U_k+\alpha'_h\cdot U'_h=(\alpha_k+\alpha'_h)\cdot U_k$. - Let $T={\ensuremath{\mathbb{X}}}$, then take $\alpha=\beta=1$, $m=1$ and $n=0$, and so $T\equiv{\sum_{j=1}^{1}} 1\cdot{\ensuremath{\mathbb{X}}}=1\cdot{\ensuremath{\mathbb{X}}}$. [**Lemma \[lem:equivdistinctscalars\]** (Equivalence between sums of distinct elements)**.**]{} Let $U_1,\dots,U_n$ be a set of distinct unit types, and let $V_1,\dots,V_m$ also be a set of distinct unit types. If ${\sum_{i=1}^{n}}\alpha_i\cdot U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot V_j$, then $m=n$ and there exists a permutation $p$ of $m$ such that $\forall i$, $\alpha_i=\beta_{p(i)}$ and $U_i\equiv V_{p(i)}$. Straightforward case-by-case analysis over the equivalence rules. Notice that since all the equivalences are stated between terms with sums and/or scalars, when $U$ and $V$ are unit types, $U\equiv V\Leftrightarrow U=V$. [**Lemma \[lem:equivforall\]** (Equivalences $\forall_I$)**.**]{}  1. \[ap:it:equivforall1\] ${\sum_{i=1}^{n}}\alpha_i\cdot U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot V_j\Leftrightarrow{\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot\forall X.V_j$. 2. \[ap:it:equivforall2\] ${\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i\equiv{\sum_{j=1}^{m}}\beta_j\cdot V_j\Rightarrow\forall V_j,\exists W_j~/~V_j\equiv\forall X.W_j$. 3. \[ap:it:equivforall3\] $T\equiv R\Rightarrow T[A/X]\equiv R[A/X]$.
Item (1) From Lemma \[lem:equivdistinctscalars\], $m=n$, and without loss of generality, for all $i$, $\alpha_i=\beta_i$ and, in the left-to-right direction, $U_i=V_i$; in the right-to-left direction, $\forall X.U_i=\forall X.V_i$. In both cases we easily conclude. Item (2) is similar. Item (3) is a straightforward induction on the equivalence $T\equiv R$. [**Lemma \[lem:subjectreductionofrelation\]** ($\succeq$-stability)**.**]{} If $T\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} R$, ${\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{r}}}$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}: T$, then $T\succeq^{{\ensuremath{\mathbf{r}}}}_{{\mathcal{V}},\Gamma} R$. It suffices to show this for $\succ^{{\ensuremath{\mathbf{t}}}}_{X,\Gamma}$, with $X\in{\mathcal{V}}$. Observe that since $T\succ^{{\ensuremath{\mathbf{t}}}}_{X,\Gamma}R$, we have $X\notin{\ensuremath{FV}(\Gamma)}$. We only have to prove that $\Gamma\vdash{\ensuremath{\mathbf{r}}}: R$ is derivable from $\Gamma\vdash{\ensuremath{\mathbf{r}}}: T$. We proceed now by cases: - $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i$ and $R\equiv{\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i$, then using rules $\forall_I$ and $\equiv$, we can deduce $\Gamma\vdash{\ensuremath{\mathbf{r}}}: R$. - $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot\forall X.U_i$ and $R\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i[A/X]$, then using rules $\forall_E$ and $\equiv$, we can deduce $\Gamma\vdash{\ensuremath{\mathbf{r}}}: R$. [**Lemma \[lem:arrowscomp\]** (Arrows comparison)**.**]{} If $V\to R\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} \forall\vec X.(U\to T)$, then $U\to T\equiv(V\to R)[\vec{A}/\vec{Y}]$, with $\vec Y\notin {\ensuremath{FV}(\Gamma)}$.
Let $(~\cdot~)^\circ$ be a map from types to types defined as follows, $$\begin{array}{r@{~}c@{~}l@{\hspace{0.4cm}}r@{~}c@{~}l@{\hspace{0.4cm}}r@{~}c@{~}l} X^\circ &=& X & (U\to T)^\circ &=& U\to T & (\forall X.T)^\circ &=& T^\circ \\ (\alpha\cdot T)^\circ &=&\alpha\cdot T^\circ & && & (T+R)^\circ &=&T^\circ+R^\circ \end{array}$$ We need three intermediate results: 1. If $T\equiv R$, then $T^\circ\equiv R^\circ$. 2. For any types $U, A$, there exists $B$ such that $(U[A/X])^\circ=U^\circ[B/X]$. 3. For any types $V, U$, there exists $\vec A$ such that if $V\succeq^{{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} \forall\vec X.U$, then $U^\circ\equiv V^\circ[\vec A/\vec X]$. [*Proofs.*]{} 1. Induction on the equivalence rules. We only give the basic cases since the inductive step, given by the context where the equivalence is applied, is trivial. - $(1\cdot T)^\circ=1\cdot T^\circ\equiv T^\circ$. - $(\alpha\cdot(\beta\cdot T))^\circ=\alpha\cdot(\beta\cdot T^\circ)\equiv(\alpha\times\beta)\cdot T^\circ=((\alpha\times\beta)\cdot T)^\circ$. - $(\alpha\cdot T+\alpha\cdot R)^\circ=\alpha\cdot T^\circ+\alpha\cdot R^\circ\equiv\alpha\cdot(T^\circ+R^\circ)=(\alpha\cdot(T+R))^\circ$. - $(\alpha\cdot T+\beta\cdot T)^\circ=\alpha\cdot T^\circ+\beta\cdot T^\circ\equiv(\alpha+\beta)\cdot T^\circ=((\alpha+\beta)\cdot T)^\circ$. - $(T+R)^\circ=T^\circ+R^\circ\equiv R^\circ+T^\circ=(R+T)^\circ$. - $(T+(R+S))^\circ=T^\circ+(R^\circ+S^\circ)\equiv (T^\circ+R^\circ)+S^\circ=((T+R)+S)^\circ$. 2. Structural induction on $U$. - $U={\ensuremath{\mathpzc{X}}}$. Then $({\ensuremath{\mathpzc{X}}}[V/{\ensuremath{\mathpzc{X}}}])^\circ=V^\circ={\ensuremath{\mathpzc{X}}}[V^\circ/{\ensuremath{\mathpzc{X}}}]={\ensuremath{\mathpzc{X}}}^\circ[V^\circ/{\ensuremath{\mathpzc{X}}}]$. - $U={\ensuremath{\mathpzc{Y}}}$. Then $({\ensuremath{\mathpzc{Y}}}[A/X])^\circ={\ensuremath{\mathpzc{Y}}}={\ensuremath{\mathpzc{Y}}}^\circ[A/X]$. - $U=V\to T$. 
Then $((V\to T)[A/X])^\circ=(V[A/X]\to T[A/X])^\circ=V[A/X]\to T[A/X]=(V\to T)[A/X]=(V\to T)^\circ[A/X]$. - $U=\forall Y.V$. Then $((\forall Y.V)[A/X])^\circ=(\forall Y.V[A/X])^\circ=(V[A/X])^\circ$, which by the induction hypothesis is equivalent to $V^\circ[B/X]=(\forall Y.V)^\circ[B/X]$. 3. It suffices to show this for $V\succ^{{\ensuremath{\mathbf{t}}}}_{X,\Gamma} \forall\vec X.U$. Cases: - $\forall\vec X.U\equiv\forall Y.V$, then notice that $(\forall\vec X.U)^\circ \equiv_{(1)}(\forall Y.V)^\circ=V^\circ$. - $V\equiv\forall Y.W$ and $\forall\vec X.U\equiv W[A/X]$, then $(\forall\vec X.U)^\circ\equiv_{(1)}(W[A/X])^\circ\equiv_{(2)} W^\circ[B/X]=(\forall Y.W)^\circ[B/X]\equiv_{(1)}V^\circ[B/X]$. Proof of the lemma: $U\to T\equiv(U\to T)^\circ$, which, by intermediate result 3, is equivalent to $(V\to R)^\circ[\vec A/\vec X]=(V\to R)[\vec A/\vec X]$. [**Lemma \[lem:scalars\]** (Scalars)**.**]{} For any context $\Gamma$, term ${\ensuremath{\mathbf{t}}}$, type $T$ and scalar $\alpha$, if $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}: T$, then there exists a type $R$ such that $T\equiv\alpha\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}: R$. Moreover, if the minimum size of the derivation of $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}:T$ is $s$, then if $T=\alpha\cdot R$, the minimum size of the derivation of $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$ is at most $s-1$; otherwise, its minimum size is at most $s-2$. We proceed by induction on the typing derivation. [**Lemma \[lem:termzero\]** (Type for zero)**.**]{} Let ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$ or ${\ensuremath{\mathbf{t}}}=\alpha\cdot{\ensuremath{\mathbf{0}}}$, then $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ implies $T\equiv 0\cdot R$. We proceed by induction on the typing derivation.
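The erasing map $(~\cdot~)^\circ$ used in the proof of Lemma \[lem:arrowscomp\] above is simple enough to be prototyped; the following Python sketch (ours, over a tuple-based type AST) may help in following the case analysis:

```python
# Types as tuples: ("var", x), ("arrow", U, T), ("forall", X, T),
# ("scal", a, T), ("sum", T, R).

def circ(t):
    tag = t[0]
    if tag in ("var", "arrow"):       # X° = X  and  (U → T)° = U → T
        return t
    if tag == "forall":               # (∀X.T)° = T°
        return circ(t[2])
    if tag == "scal":                 # (α·T)° = α·T°
        return ("scal", t[1], circ(t[2]))
    if tag == "sum":                  # (T+R)° = T°+R°
        return ("sum", circ(t[1]), circ(t[2]))

ty = ("sum",
      ("forall", "X", ("arrow", ("var", "X"), ("var", "X"))),
      ("scal", 2, ("var", "Y")))
print(circ(ty))  # the ∀ at the head of the left summand is erased
```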
[**Lemma \[lem:sums\]** (Sums)**.**]{} If $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}:S$, then $S\equiv T+R$ with $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R$. Moreover, if the minimum size of the derivation of $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}:S$ is $s$, then if $S=T+R$, the minimum sizes of the derivations of $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R$ are at most $s-1$, and if $S\neq T+R$, the minimum sizes of these derivations are at most $s-2$. We proceed by induction on the typing derivation. In the second case (when the types are not equal), there exist $N,M\subseteq\{1,\dots,n\}$ with $N\cup M=\{1,\dots,n\}$ such that $$\begin{aligned} T\equiv\sum_{i\in N\setminus M}\alpha_i\cdot U_i+\sum_{i\in N\cap M}\alpha_i'\cdot U_i&\qquad\textrm{and}\\ R\equiv\sum_{i\in M\setminus N}\alpha_i\cdot U_i+\sum_{i\in N\cap M}\alpha_i''\cdot U_i&\end{aligned}$$ where $\forall i\in N\cap M$, $\alpha_i'+\alpha_i''=\alpha_i$. Therefore, using $\equiv$ (if needed) and the same $\forall$-rule, we get $\Gamma\vdash{\ensuremath{\mathbf{t}}}:\sum_{i\in N\setminus M}\alpha_i\cdot V_i+\sum_{i\in N\cap M}\alpha_i'\cdot V_i$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:\sum_{i\in M\setminus N}\alpha_i\cdot V_i+\sum_{i\in N\cap M}\alpha_i''\cdot V_i$, with derivations of minimum size at most $s-1$. [**Lemma \[lem:app\]** (Applications)**.**]{} If $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:T$, then $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec{A}_j/\vec{X}]$ where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec{A}_j/\vec{X}]\succeq^{({\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{r}}}}_{{\mathcal{V}},\Gamma} T$ for some ${\mathcal{V}}$. We proceed by induction on the typing derivation.
$$\infer[\to_E]{\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec{A}_j/\vec{X}]}{\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i) & \Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec{A}_j/\vec{X}]}$$ This is the trivial case. [**Lemma \[lem:abs\]** (Abstractions)**.**]{} If $\Gamma\vdash\lambda x.{\ensuremath{\mathbf{t}}}:T$, then $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:R$ where $U\to R\succeq^{\lambda x.{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} T$ for some ${\mathcal{V}}$. We proceed by induction on the typing derivation. [**Lemma \[lem:basevectors\]** (Basis terms)**.**]{} For any context $\Gamma$, type $T$ and basis term ${\ensuremath{\mathbf{b}}}$, if $\Gamma\vdash{\ensuremath{\mathbf{b}}}: T$ then there exists a unit type $U$ such that $T\equiv U$. By induction on the typing derivation. [**Lemma \[lem:substitution\]** (Substitution lemma)**.**]{} For any term ${{\ensuremath{\mathbf{t}}}}$, basis term ${\ensuremath{\mathbf{b}}}$, term variable $x$, context $\Gamma$, types $T$, $R$, $U$, $\vec{W}$, set of type variables ${\mathcal{V}}$ and type variables $\vec{X}$, 1. \[ap:it:substitutionTypes\] if $\Gamma\vdash{\ensuremath{\mathbf{t}}}: T$, then $\Gamma[A/X]\vdash{\ensuremath{\mathbf{t}}}: T[A/X]$; 2. \[ap:it:substitutionTerms\] if $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ and $\Gamma\vdash{\ensuremath{\mathbf{b}}}:U$, then $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]: T$.   1. Induction on the typing derivation. $$\infer[\to_E]{\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec{B}_j/\vec{Y}]}{\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{Y}.(U\to T_i) & \Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec{B}_j/\vec{Y}]}$$ By the induction hypothesis $\Gamma[A/X]\vdash{\ensuremath{\mathbf{t}}}:({\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec Y.(U\to T_i))[A/X]$ and this type is equal to ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec Y.(U[A/X]\to T_i[A/X])$. Also $\Gamma[A/X]\vdash{\ensuremath{\mathbf{r}}}:({\sum_{j=1}^{m}}\beta_j\cdot U[\vec B_j/\vec Y])[A/X]= {\sum_{j=1}^{m}}\beta_j\cdot U[\vec B_j/\vec Y][A/X]$.
Since $\vec Y$ is bound, we may assume that it does not occur in $A$. Hence $U[\vec B_j/\vec Y][A/X]=U[A/X][\vec B_j[A/X]/\vec Y]$, and so, by rule $\to_E$, $$\begin{aligned} \Gamma[A/X]\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}&:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[A/X][\vec B_j[A/X]/\vec Y]\\ &=({\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec B_j/\vec Y])[A/X]\ . \end{aligned}$$ 2. We proceed by induction on the typing derivation of $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$. 1. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $ax$. Cases: - ${\ensuremath{\mathbf{t}}}=x$, then $T=U$, and so $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:T$ and $\Gamma\vdash{\ensuremath{\mathbf{b}}}:U$ are the same sequent. - ${\ensuremath{\mathbf{t}}}=y$. Notice that $y[{\ensuremath{\mathbf{b}}}/x]=y$. By Lemma \[lem:weakening\] $\Gamma,x:U\vdash y:T$ implies $\Gamma\vdash y:T$. 2. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $0_I$, then ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$ and $T=0\cdot R$, with $\Gamma,x:U\vdash{\ensuremath{\mathbf{r}}}:R$ for some ${\ensuremath{\mathbf{r}}}$. By the induction hypothesis, $\Gamma\vdash{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:R$. Hence, by rule $0_I$, $\Gamma\vdash{\ensuremath{\mathbf{0}}}:0\cdot R$. 3. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $\to_I$, then ${\ensuremath{\mathbf{t}}}=\lambda y.{\ensuremath{\mathbf{r}}}$ and $T=V\to R$, with $\Gamma,x:U,y:V\vdash{\ensuremath{\mathbf{r}}}:R$. Since our system admits weakening (Lemma \[lem:weakening\]), the sequent $\Gamma,y:V\vdash{\ensuremath{\mathbf{b}}}:U$ is derivable. Then by the induction hypothesis, $\Gamma,y:V\vdash{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:R$, from where, by rule $\to_I$, we obtain $\Gamma\vdash\lambda y.{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:V\to R$.
We are done since $\lambda y.{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]=(\lambda y.{\ensuremath{\mathbf{r}}})[{\ensuremath{\mathbf{b}}}/x]$. 4. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $\to_E$, then ${\ensuremath{\mathbf{t}}}=({\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}$ and $T={\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot R_i[\vec B/\vec Y]$, with $\Gamma,x:U \vdash {\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec Y.(V\to R_i)$ and $\Gamma,x:U \vdash {\ensuremath{\mathbf{u}}}:{\sum_{j=1}^{m}}\beta_j\cdot V[\vec B/\vec Y]$. By the induction hypothesis, $\Gamma \vdash {\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec Y.(V\to R_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{u}}}[{\ensuremath{\mathbf{b}}}/x]:{\sum_{j=1}^{m}}\beta_j\cdot V[\vec B/\vec Y]$. Then, by rule $\to_E$, $\Gamma \vdash ({\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x])~{\ensuremath{\mathbf{u}}}[{\ensuremath{\mathbf{b}}}/x]: {\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot R_i[\vec B/\vec Y]$. 5. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $\forall_I$. Then $T={\sum_{i=1}^{n}}\alpha_i\cdot\forall Y.V_i$, with $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot V_i$ and $Y\notin{\ensuremath{FV}(\Gamma)}\cup{\ensuremath{FV}(U)}$. By the induction hypothesis, $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:{\sum_{i=1}^{n}}\alpha_i\cdot V_i$. Then by rule $\forall_I$, $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:{\sum_{i=1}^{n}}\alpha_i\cdot\forall Y.V_i$. 6. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $\forall_E$, then $T={\sum_{i=1}^{n}}\alpha_i\cdot V_i[B/Y]$, with $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall Y.V_i$.
By the induction hypothesis, $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:{\sum_{i=1}^{n}}\alpha_i\cdot\forall Y.V_i$. By rule $\forall_E$, $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:{\sum_{i=1}^{n}}\alpha_i\cdot V_i[B/Y]$. 7. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $\alpha_I$. Then $T=\alpha\cdot R$ and ${\ensuremath{\mathbf{t}}}=\alpha\cdot{\ensuremath{\mathbf{r}}}$, with $\Gamma,x:U\vdash{\ensuremath{\mathbf{r}}}:R$. By the induction hypothesis $\Gamma\vdash{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:R$. Hence by rule $\alpha_I$, $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:\alpha\cdot R$. Notice that $\alpha\cdot{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]=(\alpha\cdot{\ensuremath{\mathbf{r}}})[{\ensuremath{\mathbf{b}}}/x]$. 8. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $+_I$. Then ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{u}}}$ and $T=R+S$, with $\Gamma,x:U\vdash{\ensuremath{\mathbf{r}}}:R$ and $\Gamma,x:U\vdash{\ensuremath{\mathbf{u}}}:S$. By the induction hypothesis, $\Gamma\vdash{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]:R$ and $\Gamma\vdash{\ensuremath{\mathbf{u}}}[{\ensuremath{\mathbf{b}}}/x]:S$. Then by rule $+_I$, $\Gamma\vdash{\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]+{\ensuremath{\mathbf{u}}}[{\ensuremath{\mathbf{b}}}/x]:R+S$. Notice that ${\ensuremath{\mathbf{r}}}[{\ensuremath{\mathbf{b}}}/x]+{\ensuremath{\mathbf{u}}}[{\ensuremath{\mathbf{b}}}/x]=({\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{u}}})[{\ensuremath{\mathbf{b}}}/x]$. 9. Let $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$ as a consequence of rule $\equiv$. Then $T\equiv R$ and $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:R$. By the induction hypothesis, $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:R$. 
Hence, by rule $\equiv$, $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:T$. Proof of Theorem \[thm:subjectreduction\] {#app:srproof} ----------------------------------------- [**Theorem \[thm:subjectreduction\]** (Weak subject reduction)**.**]{} For any terms ${\ensuremath{\mathbf{t}}}$, ${\ensuremath{\mathbf{t}}}'$, any context $\Gamma$ and any type $T$, if ${\ensuremath{\mathbf{t}}}\to_R{\ensuremath{\mathbf{t}}}'$ and $\Gamma\vdash {\ensuremath{\mathbf{t}}}: T$, then: 1. if $R\notin$ Group F, then $\Gamma\vdash{\ensuremath{\mathbf{t}}}': T$; 2. if $R\in$ Group F, then $\exists S\sqsupseteq T$ such that $\Gamma\vdash{\ensuremath{\mathbf{t}}}': S$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}: S$. Let ${\ensuremath{\mathbf{t}}}\to_R{\ensuremath{\mathbf{t}}}'$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$. We proceed by induction on the rewrite relation. #### Group E   $0\cdot{\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{0}}}$ : Consider $\Gamma\vdash 0\cdot{\ensuremath{\mathbf{t}}}:T$. By Lemma \[lem:scalars\], we have that $T\equiv 0\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$. Then, by rule $0_I$, $\Gamma\vdash{\ensuremath{\mathbf{0}}}:0\cdot R$. We conclude using rule $\equiv$. $1\cdot{\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{t}}}$ : Consider $\Gamma\vdash 1\cdot{\ensuremath{\mathbf{t}}}:T$, then by Lemma \[lem:scalars\], $T\equiv 1\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$. Notice that $R\equiv T$, so we conclude using rule $\equiv$. $\alpha\cdot{\ensuremath{\mathbf{0}}}\to{\ensuremath{\mathbf{0}}}$ : Consider $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{0}}}:T$, then by Lemma \[lem:termzero\], $T\equiv 0\cdot R$. Hence by rules $\equiv$ and $0_I$, $\Gamma\vdash{\ensuremath{\mathbf{0}}}:0\cdot 0\cdot R$ and so we conclude using rule $\equiv$. 
$\alpha\cdot(\beta\cdot{\ensuremath{\mathbf{t}}})\to(\alpha\times\beta)\cdot{\ensuremath{\mathbf{t}}}$ : Consider $\Gamma\vdash\alpha\cdot(\beta\cdot{\ensuremath{\mathbf{t}}}):T$. By Lemma \[lem:scalars\], $T\equiv\alpha\cdot R$ and $\Gamma\vdash\beta\cdot{\ensuremath{\mathbf{t}}}:R$. By Lemma \[lem:scalars\] again, $R\equiv\beta\cdot S$ with $\Gamma\vdash{\ensuremath{\mathbf{t}}}:S$. Notice that $(\alpha\times\beta)\cdot S\equiv\alpha\cdot(\beta\cdot S)\equiv T$, hence by rules $\alpha_I$ and $\equiv$, we obtain $\Gamma\vdash(\alpha\times\beta)\cdot{\ensuremath{\mathbf{t}}}:T$. $\alpha\cdot({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}})\to\alpha\cdot{\ensuremath{\mathbf{t}}}+\alpha\cdot{\ensuremath{\mathbf{r}}}$ : Consider $\Gamma\vdash\alpha\cdot({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}):T$. By Lemma \[lem:scalars\], $T\equiv\alpha\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}:R$. By Lemma \[lem:sums\] $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R_1$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R_2$, with $R_1+R_2\equiv R$. Then by rules $\alpha_I$ and $+_I$, $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}+\alpha\cdot{\ensuremath{\mathbf{r}}}:\alpha\cdot R_1+\alpha\cdot R_2$. Notice that $\alpha\cdot R_1+\alpha\cdot R_2\equiv\alpha\cdot(R_1+R_2)\equiv\alpha\cdot R\equiv T$. We conclude by rule $\equiv$. #### Group F   $\alpha\cdot{\ensuremath{\mathbf{t}}}+\beta\cdot{\ensuremath{\mathbf{t}}}\to(\alpha+\beta)\cdot{\ensuremath{\mathbf{t}}}$ : Consider $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}+\beta\cdot{\ensuremath{\mathbf{t}}}:T$, then by Lemma \[lem:sums\], $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}:T_1$ and $\Gamma\vdash\beta\cdot{\ensuremath{\mathbf{t}}}:T_2$ with $T_1+T_2\equiv T$. Then by Lemma \[lem:scalars\], $T_1\equiv\alpha\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$ and $T_2\equiv\beta\cdot S$. 
By rule $\alpha_I$, $\Gamma\vdash(\alpha+\beta)\cdot{\ensuremath{\mathbf{t}}}:(\alpha+\beta)\cdot R$. Notice that $(\alpha+\beta)\cdot R\sqsupseteq\alpha\cdot R+\beta\cdot S\equiv T_1+T_2\equiv T$. $\alpha\cdot{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{t}}}\to(\alpha+1)\cdot{\ensuremath{\mathbf{t}}}$ and ${\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{t}}}\to(1+1)\cdot{\ensuremath{\mathbf{t}}}$ : The proofs of these two cases are simplified versions of the previous case. ${\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{0}}}\to{\ensuremath{\mathbf{t}}}$ : Consider $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{0}}}:T$. By Lemma \[lem:sums\], $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$ and $\Gamma\vdash{\ensuremath{\mathbf{0}}}:S$ with $R+S\equiv T$. In addition, by Lemma \[lem:termzero\], $S\equiv 0\cdot S'$. Notice that $R + 0\cdot R\equiv R\sqsupseteq R+0\cdot S'\equiv R+S\equiv T$. #### Group B   $(\lambda x.{\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}\to{\ensuremath{\mathbf{t}}}{[{{\ensuremath{\mathbf{b}}}}/{x}]}$ : Consider $\Gamma\vdash(\lambda x.{\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}:T$, then by Lemma \[lem:app\], we have $\Gamma\vdash\lambda x.{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to R_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{b}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]$ where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot R_i[\vec A_j/\vec X]\succeq^{(\lambda x.{\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{b}}}}_{{\mathcal{V}},\Gamma} T$. However, we can simplify these types using Lemma \[lem:basevectors\], and so we have $\Gamma\vdash\lambda x.{\ensuremath{\mathbf{t}}}:\forall\vec X.(U\to R)$ and $\Gamma\vdash{\ensuremath{\mathbf{b}}}:U[\vec A/\vec X]$ with $R[\vec A/\vec X]\succeq^{(\lambda x.{\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{b}}}}_{{\mathcal{V}},\Gamma} T$. Note that $\vec{X}\not\in{\ensuremath{FV}(\Gamma)}$ (from the arrow introduction rule).
Hence, by Lemma \[lem:abs\], $\Gamma,x:V\vdash{\ensuremath{\mathbf{t}}}:S$, with $V\to S\succeq^{\lambda x.{\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma}\forall\vec X.(U\to R)$. Hence, by Lemma \[lem:arrowscomp\], $U\equiv V[\vec B/\vec Y]$ and $R\equiv S[\vec B/\vec Y]$ with $\vec Y\notin{\ensuremath{FV}(\Gamma)}$, so by Lemma \[lem:substitution\](\[it:substitutionTypes\]), $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:R$. Applying Lemma \[lem:substitution\](\[it:substitutionTypes\]) once more, we have $\Gamma[\vec A/\vec X],x:U[\vec A/\vec X]\vdash{\ensuremath{\mathbf{t}}}:R[\vec A/\vec X]$. Since $\vec X\not\in{\ensuremath{FV}(\Gamma)}$, $\Gamma[\vec A/\vec X] = \Gamma$ and we can apply Lemma \[lem:substitution\](\[it:substitutionTerms\]) to get $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:R[\vec A/\vec X]\succeq^{(\lambda x.{\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{b}}}}_{{\mathcal{V}},\Gamma} T$. So, by Lemma \[lem:subjectreductionofrelation\], $R[\vec A/\vec X]\succeq^{{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]}_{{\mathcal{V}},\Gamma} T$, which implies $\Gamma\vdash{\ensuremath{\mathbf{t}}}[{\ensuremath{\mathbf{b}}}/x]:T$. #### Group A   $({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}\to({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}+({\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}$ : Consider $\Gamma\vdash({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}:T$. Then by Lemma \[lem:app\], $\Gamma\vdash{\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{u}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]$ where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}){\ensuremath{\mathbf{u}}}}_{{\mathcal{V}},\Gamma}T$.
Then by Lemma \[lem:sums\], $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R_1$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R_2$, with $R_1+R_2\equiv{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)$. Hence, there exist $N_1, N_2\subseteq\{1,\dots,n\}$ with $N_1\cup N_2=\{1,\dots,n\}$ such that $$\begin{aligned} R_1\equiv\sum\limits_{i\in N_1\setminus N_2}\alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2}\alpha'_i\cdot\forall\vec X.(U\to T_i) & \mbox{\quad and}\\ R_2\equiv\sum\limits_{i\in N_2\setminus N_1}\alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2}\alpha''_i\cdot\forall\vec X.(U\to T_i) & \end{aligned}$$ where $\forall i\in N_1\cap N_2$, $\alpha'_i+\alpha''_i=\alpha_i$. Therefore, using $\equiv$ we get $$\begin{aligned} \Gamma\vdash{\ensuremath{\mathbf{t}}}:\sum\limits_{i\in N_1\setminus N_2}\alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2}\alpha'_i\cdot\forall\vec X.(U\to T_i) & \mbox{\quad and}\\ \Gamma\vdash{\ensuremath{\mathbf{r}}}:\sum\limits_{i\in N_2\setminus N_1}\alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2}\alpha''_i\cdot\forall\vec X.(U\to T_i) & \end{aligned}$$ So, using rule $\to_E$, we get $$\begin{aligned} \Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}:\sum\limits_{i\in N_1\setminus N_2}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]+ \sum\limits_{i\in N_1\cap N_2}{\sum_{j=1}^{m}}\alpha'_i\times\beta_j\cdot T_i[\vec A_j/\vec X] & \mbox{\quad and}\\ \Gamma\vdash({\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}:\sum\limits_{i\in N_2\setminus N_1}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]+ \sum\limits_{i\in N_1\cap N_2}{\sum_{j=1}^{m}}\alpha''_i\times\beta_j\cdot T_i[\vec A_j/\vec X] & \end{aligned}$$ Finally, by rule $+_I$ we can conclude
$\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}+({\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}}+{\ensuremath{\mathbf{r}}}){\ensuremath{\mathbf{u}}}}_{{\mathcal{V}},\Gamma} T$. Then by Lemma \[lem:subjectreductionofrelation\], ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{u}}}+({\ensuremath{\mathbf{r}}}){\ensuremath{\mathbf{u}}}}_{{\mathcal{V}},\Gamma} T$, so $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}+({\ensuremath{\mathbf{r}}})~{\ensuremath{\mathbf{u}}}:T$. $({\ensuremath{\mathbf{t}}})~({\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{u}}})\to({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}+({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}$ : Consider $\Gamma\vdash({\ensuremath{\mathbf{t}}})~({\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{u}}}):T$. By Lemma \[lem:app\], $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{u}}}:{\sum_{j=1}^{m}}\beta_j.U[\vec A_j/\vec X]$ where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}})({\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{u}}})}_{{\mathcal{V}},\Gamma} T$. Then by Lemma \[lem:sums\], $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R_1$ and $\Gamma\vdash{\ensuremath{\mathbf{u}}}:R_2$, with $R_1+R_2\equiv{\sum_{j=1}^{m}}\beta_j.U[\vec A_j/\vec X]$.
Hence, there exist $M_1, M_2\subseteq\{1,\dots,m\}$ with $M_1\cup M_2=\{1,\dots,m\}$ such that $$\begin{aligned} R_1\equiv\sum\limits_{j\in M_1\setminus M_2}\beta_j.U[\vec A_j/\vec X]+ \sum\limits_{j\in M_1\cap M_2}\beta'_j.U[\vec A_j/\vec X] & \mbox{\quad and}\\ R_2\equiv\sum\limits_{j\in M_2\setminus M_1}\beta_j.U[\vec A_j/\vec X]+ \sum\limits_{j\in M_1\cap M_2}\beta''_j.U[\vec A_j/\vec X] & \end{aligned}$$ where $\forall j\in M_1\cap M_2$, $\beta'_j+\beta''_j=\beta_j$. Therefore, using $\equiv$ we get $$\begin{aligned} \Gamma\vdash{\ensuremath{\mathbf{r}}}:\sum\limits_{j\in M_1\setminus M_2}\beta_j.U[\vec A_j/\vec X]+ \sum\limits_{j\in M_1\cap M_2}\beta'_j.U[\vec A_j/\vec X] & \mbox{\quad and}\\ \Gamma\vdash{\ensuremath{\mathbf{u}}}:\sum\limits_{j\in M_2\setminus M_1}\beta_j.U[\vec A_j/\vec X]+ \sum\limits_{j\in M_1\cap M_2}\beta''_j.U[\vec A_j/\vec X] & \end{aligned}$$ So, using rule $\to_E$, we get $$\begin{aligned} \Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}\sum\limits_{j\in M_1\setminus M_2}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]+ {\sum_{i=1}^{n}}\sum\limits_{j\in M_1\cap M_2}\alpha_i\times\beta'_j\cdot T_i[\vec A_j/\vec X] & \mbox{\quad and}\\ \Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}:{\sum_{i=1}^{n}}\sum\limits_{j\in M_2\setminus M_1}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]+ {\sum_{i=1}^{n}}\sum\limits_{j\in M_1\cap M_2}\alpha_i\times\beta''_j\cdot T_i[\vec A_j/\vec X] & \end{aligned}$$ Finally, by rule $+_I$ we can conclude $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}+({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{u}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]$. We finish the case with Lemma \[lem:subjectreductionofrelation\].
$(\alpha\cdot{\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}\to\alpha\cdot ({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}$ : Consider $\Gamma\vdash(\alpha\cdot{\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:T$. Then by Lemma \[lem:app\], $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]$, where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{(\alpha\cdot{\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{r}}}}_{{\mathcal{V}},\Gamma} T$. Then by Lemma \[lem:scalars\], ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)\equiv\alpha\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:R$. By Lemma \[lem:typecharact\], $R\equiv{\sum_{i=1}^{n'}}\gamma_i\cdot V_i+{\sum_{k=1}^{h}}\eta_k\cdot{\ensuremath{\mathbb{X}}}_k$; however, it is easy to see that $h=0$ because $R$ is equivalent to a sum in which no summand is an ${\ensuremath{\mathbb{X}}}$. So $R\equiv{\sum_{i=1}^{n'}}\gamma_i\cdot V_i$. Without loss of generality (cf. previous case), take $T_i\neq T_k$ for all $i\neq k$ and $h=0$, and notice that ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)\equiv{\sum_{i=1}^{n'}}\alpha\times\gamma_i\cdot V_i$. Then by Lemma \[lem:equivdistinctscalars\], there exists a permutation $p$ such that $\alpha_i=\alpha\times\gamma_{p(i)}$ and $\forall\vec X.(U\to T_i)\equiv V_{p(i)}$. Without loss of generality let $p$ be the trivial permutation, and so $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\gamma_i\cdot\forall\vec X.(U\to T_i)$. Hence, using rule $\to_E$, $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\gamma_i\times\beta_j\cdot T_i[\vec A_j/\vec X]$.
Therefore, by rule $\alpha_I$, $\Gamma\vdash\alpha\cdot({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:\alpha\cdot{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\gamma_i\times\beta_j\cdot T_i[\vec A_j/\vec X]$. Notice that $\alpha\cdot{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\gamma_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\equiv {\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]$. We finish the case with Lemma \[lem:subjectreductionofrelation\]. $({\ensuremath{\mathbf{t}}})~(\alpha\cdot{\ensuremath{\mathbf{r}}})\to\alpha\cdot ({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}$ : Consider $\Gamma\vdash({\ensuremath{\mathbf{t}}})~(\alpha\cdot{\ensuremath{\mathbf{r}}}):T$. Then by Lemma \[lem:app\], $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)$ and $\Gamma\vdash\alpha\cdot{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]$, where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}})(\alpha\cdot{\ensuremath{\mathbf{r}}})}_{{\mathcal{V}},\Gamma} T$. Then by Lemma \[lem:scalars\], ${\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]\equiv\alpha\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{r}}}:R$. By Lemma \[lem:typecharact\], $R\equiv{\sum_{j=1}^{m'}}\gamma_j\cdot V_j+{\sum_{k=1}^{h}}\eta_k\cdot{\ensuremath{\mathbb{X}}}_k$; however, it is easy to see that $h=0$ because $R$ is equivalent to a sum in which no summand is an ${\ensuremath{\mathbb{X}}}$. So $R\equiv{\sum_{j=1}^{m'}}\gamma_j\cdot V_j$. Without loss of generality (cf. previous case), take $A_j\neq A_k$ for all $j\neq k$, and notice that ${\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]\equiv{\sum_{j=1}^{m'}}\alpha\times\gamma_j\cdot V_j$. Then by Lemma \[lem:equivdistinctscalars\], there exists a permutation $p$ such that $\beta_j=\alpha\times\gamma_{p(j)}$ and $U[\vec A_j/\vec X]\equiv V_{p(j)}$.
Without loss of generality let $p$ be the trivial permutation, and so $\Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\gamma_j\cdot U[\vec A_j/\vec X]$. Hence, using rule $\to_E$, $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\gamma_j\cdot T_i[\vec A_j/\vec X]$. Therefore, by rule $\alpha_I$, $\Gamma\vdash\alpha\cdot({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:\alpha\cdot{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\gamma_j\cdot T_i[\vec A_j/\vec X]$. Notice that $\alpha\cdot{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\gamma_j\cdot T_i[\vec A_j/\vec X]\equiv {\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]$. We finish the case with Lemma \[lem:subjectreductionofrelation\]. $({\ensuremath{\mathbf{0}}})~{\ensuremath{\mathbf{t}}}\to {\ensuremath{\mathbf{0}}}$ : Consider $\Gamma\vdash({\ensuremath{\mathbf{0}}})~{\ensuremath{\mathbf{t}}}:T$. By Lemma \[lem:app\], $\Gamma\vdash{\ensuremath{\mathbf{0}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]$, where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{0}}}){\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} T$. Then by Lemma \[lem:termzero\], ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)\equiv 0\cdot R$. By Lemma \[lem:typecharact\], $R\equiv{\sum_{i=1}^{n'}}\gamma_i\cdot V_i+{\sum_{k=1}^{h}}\eta_k\cdot{\ensuremath{\mathbb{X}}}_k$; however, it is easy to see that $h=0$ and so $R\equiv{\sum_{i=1}^{n'}}\gamma_i\cdot V_i$. Without loss of generality, take $T_i\neq T_k$ for all $i\neq k$, and notice that ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)\equiv{\sum_{i=1}^{n'}}0\cdot V_i$. By Lemma \[lem:equivdistinctscalars\], $\alpha_i=0$.
Notice that by rule $\to_E$, $\Gamma\vdash({\ensuremath{\mathbf{0}}})~{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}0\cdot T_i[\vec A_j/\vec X]$, hence by rules $0_I$ and $\equiv$, $\Gamma\vdash{\ensuremath{\mathbf{0}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}0\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{0}}}){\ensuremath{\mathbf{t}}}}_{{\mathcal{V}},\Gamma} T$. By Lemma \[lem:subjectreductionofrelation\], ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}0\cdot T_i[\vec A_j/\vec X]\succeq^{{\ensuremath{\mathbf{0}}}}_{{\mathcal{V}},\Gamma} T$, so $\Gamma\vdash{\ensuremath{\mathbf{0}}}:T$. $({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{0}}}\to {\ensuremath{\mathbf{0}}}$ : Consider $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{0}}}:T$. By Lemma \[lem:app\], $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)$ and $\Gamma\vdash{\ensuremath{\mathbf{0}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]$, where ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{0}}}}_{{\mathcal{V}},\Gamma}T$. Then by Lemma \[lem:termzero\], ${\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]\equiv 0\cdot R$. By Lemma \[lem:typecharact\], $R\equiv{\sum_{j=1}^{m'}}\gamma_j\cdot V_j+{\sum_{k=1}^{h}}\eta_k\cdot{\ensuremath{\mathbb{X}}}_k$; however, it is easy to see that $h=0$ and so $R\equiv{\sum_{j=1}^{m'}}\gamma_j\cdot V_j$. Without loss of generality, take $A_j\neq A_k$ for all $j\neq k$, and notice that ${\sum_{j=1}^{m}}\beta_j\cdot U[\vec A_j/\vec X]\equiv{\sum_{j=1}^{m'}}0\cdot V_j$. By Lemma \[lem:equivdistinctscalars\], $\beta_j=0$.
Notice that by rule $\to_E$, $\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{0}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}0\cdot T_i[\vec A_j/\vec X]$, hence by rules $0_I$ and $\equiv$, $\Gamma\vdash{\ensuremath{\mathbf{0}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}0\cdot T_i[\vec A_j/\vec X]\succeq^{({\ensuremath{\mathbf{t}}}){\ensuremath{\mathbf{0}}}}_{{\mathcal{V}},\Gamma}T$. By Lemma \[lem:subjectreductionofrelation\], ${\sum_{i=1}^{n}}{\sum_{j=1}^{m}}0\cdot T_i[\vec A_j/\vec X]\succeq^{{\ensuremath{\mathbf{0}}}}_{{\mathcal{V}},\Gamma} T$. Hence, $\Gamma\vdash{\ensuremath{\mathbf{0}}}:T$. #### Contextual rules These follow from the generation lemmas, the induction hypothesis and the fact that $\sqsupseteq$ is congruent. Detailed proofs of lemmas and theorems in Section \[sec:SN\] {#app:SN} ============================================================ First lemmas {#app:SNf} ------------ [**Lemma \[lem:RCop\]**]{} If ${\mathsf{A}}$, ${\mathsf{B}}$ and all the ${\mathsf{A}}_i$’s are in ${\mathsf{RC}}$, then so are ${\mathsf{A}}\to{\mathsf{B}}$, $\sum_i{\mathsf{A}}_i$ and $\cap_i{\mathsf{A}}_i$. Before proving that these operators define reducibility candidates, we need the following result, which simplifies the proof: a linear combination of strongly normalising terms is strongly normalising. That is: [**Auxiliary Lemma** (AL)**.**]{} If $\{{\ensuremath{\mathbf{t}}}_i\}_i$ are strongly normalising, then so is $F(\vec{{\ensuremath{\mathbf{t}}}})$ for any algebraic context $F$. [*Proof.*]{} Let $\vec{{\ensuremath{\mathbf{t}}}}={\ensuremath{\mathbf{t}}}_1,\dots,{\ensuremath{\mathbf{t}}}_n$. We define two notions. - A measure $s$ on $\vec{{\ensuremath{\mathbf{t}}}}$ defined as the sum over $i$ of the sum of the lengths of all the possible rewrite sequences starting with ${\ensuremath{\mathbf{t}}}_i$.
- An algebraic measure $a$ over algebraic contexts $F(.)$ defined inductively by $a({\ensuremath{\mathbf{t}}}_i)=1$, $a(F(\vec{{\ensuremath{\mathbf{t}}}})+G(\vec{{\ensuremath{\mathbf{t}}}'}))=2+a(F(\vec{{\ensuremath{\mathbf{t}}}}))+a(G(\vec{{\ensuremath{\mathbf{t}}}'}))$, $a(\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))=1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))$, $a({\ensuremath{\mathbf{0}}})=0$. We claim that for all algebraic contexts $F(\cdot)$ and all strongly normalising terms ${{\ensuremath{\mathbf{t}}}}_i$ that are not linear combinations (that is, of the form $x$, $\lambda x.{\ensuremath{\mathbf{r}}}$ or $({\ensuremath{\mathbf{s}}})~{\ensuremath{\mathbf{r}}}$), the term $F(\vec{{\ensuremath{\mathbf{t}}}})$ is also strongly normalising. The claim is proven by induction on $s(\vec{{\ensuremath{\mathbf{t}}}})$ (this measure is finite because the ${\ensuremath{\mathbf{t}}}_i$ are SN, and because the rewrite system is finitely branching). - If $s(\vec{{\ensuremath{\mathbf{t}}}})=0$. Then none of the ${\ensuremath{\mathbf{t}}}_i$ reduces. We show by induction on $a(F(\vec{{\ensuremath{\mathbf{t}}}}))$ that $F(\vec{{\ensuremath{\mathbf{t}}}})$ is SN. - If $a(F(\vec{{\ensuremath{\mathbf{t}}}}))=0$, then $F(\vec{{\ensuremath{\mathbf{t}}}})={\ensuremath{\mathbf{0}}}$ which is SN. - Suppose it is true for all $F(\vec{{\ensuremath{\mathbf{t}}}})$ of algebraic measure less than or equal to $m$, and consider $F(\vec{{\ensuremath{\mathbf{t}}}})$ such that $a(F(\vec{{\ensuremath{\mathbf{t}}}}))=m+1$. Since the ${\ensuremath{\mathbf{t}}}_i$ are not linear combinations and, because $s(\vec{{\ensuremath{\mathbf{t}}}})=0$, they are in normal form, $F(\vec{{\ensuremath{\mathbf{t}}}})$ can only reduce with a rule from Group E or a rule from Group F. We show that those reductions are strictly decreasing on the algebraic measure, by a rule-by-rule analysis, and so we can conclude by the induction hypothesis. - $0\cdot F(\vec{{\ensuremath{\mathbf{t}}}})\to{\ensuremath{\mathbf{0}}}$.
Note that $a(0\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))=1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))> 0=a({\ensuremath{\mathbf{0}}})$. - $1\cdot F(\vec{{\ensuremath{\mathbf{t}}}})\to F(\vec{{\ensuremath{\mathbf{t}}}})$. Note that $a(1\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))=1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))> a(F(\vec{{\ensuremath{\mathbf{t}}}}))$. - $\alpha\cdot {\ensuremath{\mathbf{0}}}\to{\ensuremath{\mathbf{0}}}$. Note that $a(\alpha\cdot {\ensuremath{\mathbf{0}}})=1> 0=a({\ensuremath{\mathbf{0}}})$. - $\alpha\cdot (\beta\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))\to(\alpha\times\beta)\cdot F(\vec{{\ensuremath{\mathbf{t}}}})$. Note that $a(\alpha\cdot (\beta\cdot F(\vec{{\ensuremath{\mathbf{t}}}})))=1+2\cdot (1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}})))> 1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))=a((\alpha\times\beta)\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))$. - $\alpha\cdot (F(\vec{{\ensuremath{\mathbf{t}}}})+G(\vec{{\ensuremath{\mathbf{t}}}'}))\to\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}})+\alpha\cdot G(\vec{{\ensuremath{\mathbf{t}}}}')$. Note that $a(\alpha\cdot (F(\vec{{\ensuremath{\mathbf{t}}}})+G(\vec{{\ensuremath{\mathbf{t}}}}')))=5+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))+2\cdot a(G(\vec{{\ensuremath{\mathbf{t}}}}'))> 4+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))+2\cdot a(G(\vec{{\ensuremath{\mathbf{t}}}}'))=a(\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}})+\alpha\cdot G(\vec{{\ensuremath{\mathbf{t}}}}'))$. - $\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}})+\beta\cdot F(\vec{{\ensuremath{\mathbf{t}}}})\to(\alpha+\beta)\cdot F(\vec{{\ensuremath{\mathbf{t}}}})$. Note that $a(\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}})+\beta\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))=4+4\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))> 1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))=a((\alpha+\beta)\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))$. - $\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}})+F(\vec{{\ensuremath{\mathbf{t}}}})\to(\alpha+1)\cdot F(\vec{{\ensuremath{\mathbf{t}}}})$.
Note that $a(\alpha\cdot F(\vec{{\ensuremath{\mathbf{t}}}})+F(\vec{{\ensuremath{\mathbf{t}}}}))=3+3\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))>1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))=a((\alpha+1)\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))$. - $F(\vec{{\ensuremath{\mathbf{t}}}})+F(\vec{{\ensuremath{\mathbf{t}}}})\to (1+1)\cdot F(\vec{{\ensuremath{\mathbf{t}}}})$. Note that $a(F(\vec{{\ensuremath{\mathbf{t}}}})+F(\vec{{\ensuremath{\mathbf{t}}}}))=2+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))> 1+2\cdot a(F(\vec{{\ensuremath{\mathbf{t}}}}))=a((1+1)\cdot F(\vec{{\ensuremath{\mathbf{t}}}}))$. - $F(\vec{{\ensuremath{\mathbf{t}}}})+{\ensuremath{\mathbf{0}}}\to F(\vec{{\ensuremath{\mathbf{t}}}})$. Note that $a(F(\vec{{\ensuremath{\mathbf{t}}}})+{\ensuremath{\mathbf{0}}})=2+a(F(\vec{{\ensuremath{\mathbf{t}}}}))> a(F(\vec{{\ensuremath{\mathbf{t}}}}))$. - Contextual rules are trivial. - Suppose it is true for $n$, then consider $\vec{{\ensuremath{\mathbf{t}}}}$ such that $s(\vec{{\ensuremath{\mathbf{t}}}})=n+1$. Again, we show that $F(\vec{{\ensuremath{\mathbf{t}}}})$ is SN by induction on $a(F(\vec{{\ensuremath{\mathbf{t}}}}))$. - If $a(F(\vec{{\ensuremath{\mathbf{t}}}}))=0$, then $F(\vec{{\ensuremath{\mathbf{t}}}})={\ensuremath{\mathbf{0}}}$ which is SN. - Suppose it is true for all $F(\vec{{\ensuremath{\mathbf{t}}}})$ of algebraic measure less than or equal to $m$, and consider $F(\vec{{\ensuremath{\mathbf{t}}}})$ such that $a(F(\vec{{\ensuremath{\mathbf{t}}}}))=m+1$. Since the ${\ensuremath{\mathbf{t}}}_i$ are not linear combinations, $F(\vec{{\ensuremath{\mathbf{t}}}})$ can reduce in two ways: - $F({\ensuremath{\mathbf{t}}}_1,\ldots {\ensuremath{\mathbf{t}}}_i,\ldots {\ensuremath{\mathbf{t}}}_k)\to F({\ensuremath{\mathbf{t}}}_1,\ldots {\ensuremath{\mathbf{t}}}'_i,\ldots {\ensuremath{\mathbf{t}}}_k)$ with ${\ensuremath{\mathbf{t}}}_i\to{\ensuremath{\mathbf{t}}}'_i$.
Then ${\ensuremath{\mathbf{t}}}'_i$ can be written as $ H({\ensuremath{\mathbf{r}}}_1,\ldots{\ensuremath{\mathbf{r}}}_l) $ for some algebraic context $H$, where the ${\ensuremath{\mathbf{r}}}_j$’s are not linear combinations. Note that $$\sum_{j=1}^l s({\ensuremath{\mathbf{r}}}_j) \leq s({\ensuremath{\mathbf{t}}}'_i) < s({\ensuremath{\mathbf{t}}}_i).$$ Define the context $$\begin{gathered} G({\ensuremath{\mathbf{t}}}_1,\ldots,{\ensuremath{\mathbf{t}}}_{i-1},{\ensuremath{\mathbf{u}}}_1,\ldots {\ensuremath{\mathbf{u}}}_l,{\ensuremath{\mathbf{t}}}_{i+1},\ldots {\ensuremath{\mathbf{t}}}_k) =\hspace{10ex}\\ \hspace{10ex} F({\ensuremath{\mathbf{t}}}_1,\ldots,{\ensuremath{\mathbf{t}}}_{i-1},H({\ensuremath{\mathbf{u}}}_1,\ldots {\ensuremath{\mathbf{u}}}_l),{\ensuremath{\mathbf{t}}}_{i+1},\ldots {\ensuremath{\mathbf{t}}}_k). \end{gathered}$$ The term $F(\vec{{\ensuremath{\mathbf{t}}}})$ then reduces to the term $$G({\ensuremath{\mathbf{t}}}_1,\ldots,{\ensuremath{\mathbf{t}}}_{i-1},{\ensuremath{\mathbf{r}}}_1,\ldots{\ensuremath{\mathbf{r}}}_l,{\ensuremath{\mathbf{t}}}_{i+1}\ldots {\ensuremath{\mathbf{t}}}_k),$$ where $$s({\ensuremath{\mathbf{t}}}_1,\ldots,{\ensuremath{\mathbf{t}}}_{i-1},{\ensuremath{\mathbf{r}}}_1,\ldots{\ensuremath{\mathbf{r}}}_l,{\ensuremath{\mathbf{t}}}_{i+1}\ldots {\ensuremath{\mathbf{t}}}_k) < s(\vec{{\ensuremath{\mathbf{t}}}}).$$ Using the top induction hypothesis, we conclude that $F({\ensuremath{\mathbf{t}}}_1,\ldots {\ensuremath{\mathbf{t}}}'_i,\ldots {\ensuremath{\mathbf{t}}}_k)$ is SN. - $F(\vec{{\ensuremath{\mathbf{t}}}})\to G(\vec{{\ensuremath{\mathbf{t}}}})$, with $a(G(\vec{{\ensuremath{\mathbf{t}}}}))<a(F(\vec{{\ensuremath{\mathbf{t}}}}))$. Using the second induction hypothesis, we conclude that $G(\vec{{\ensuremath{\mathbf{t}}}})$ is SN. All the possible reducts of $F(\vec{{\ensuremath{\mathbf{t}}}})$ are SN; so is $F(\vec{{\ensuremath{\mathbf{t}}}})$. This closes the proof of the claim.
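As a sanity check on the rule-by-rule analysis above, the algebraic measure $a$ and its strict decrease under the Group E/F rules can be sketched in a few lines of Python (a hypothetical encoding of algebraic contexts chosen here for illustration; the tags `hole`, `zero`, `scal`, `sum` are not from the paper):

```python
# Algebraic contexts as nested tuples:
#   ("hole", i)        a hole t_i
#   ("zero",)          the null term 0
#   ("scal", alpha, F) the scaling alpha.F
#   ("sum", F, G)      the sum F + G

def a(ctx):
    """Algebraic measure: a(t_i)=1, a(F+G)=2+a(F)+a(G), a(alpha.F)=1+2a(F), a(0)=0."""
    tag = ctx[0]
    if tag == "hole":
        return 1
    if tag == "zero":
        return 0
    if tag == "scal":
        return 1 + 2 * a(ctx[2])
    if tag == "sum":
        return 2 + a(ctx[1]) + a(ctx[2])
    raise ValueError(tag)

F = ("hole", 0)
# (left-hand side, right-hand side) of several Group E/F rules
rules = [
    (("scal", 1, F), F),                                        # 1.F -> F
    (("scal", 2, ("zero",)), ("zero",)),                        # alpha.0 -> 0
    (("scal", 2, ("scal", 3, F)), ("scal", 6, F)),              # alpha.(beta.F) -> (alpha*beta).F
    (("sum", ("scal", 2, F), ("scal", 3, F)), ("scal", 5, F)),  # alpha.F+beta.F -> (alpha+beta).F
    (("sum", F, F), ("scal", 2, F)),                            # F+F -> (1+1).F
    (("sum", F, ("zero",)), F),                                 # F+0 -> F
]
decreases = [a(lhs) > a(rhs) for lhs, rhs in rules]
```

With this encoding, every pair in `rules` satisfies `a(lhs) > a(rhs)`, matching the strict decrease argued case by case in the proof.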
Now, consider any SN terms $\{{\ensuremath{\mathbf{t}}}_i\}_i$ and any algebraic context $G(\vec{{\ensuremath{\mathbf{t}}}})$. Each ${\ensuremath{\mathbf{t}}}_i$ can be written as an algebraic sum of $x$’s, $\lambda x.{\ensuremath{\mathbf{s}}}$’s and $({\ensuremath{\mathbf{r}}})\,{\ensuremath{\mathbf{s}}}$’s, so $G(\vec{{\ensuremath{\mathbf{t}}}})$ can be written as $F(\vec{{\ensuremath{\mathbf{t}}}'})$ for some algebraic context $F$ and some $\vec{{\ensuremath{\mathbf{t}}}'}$. The hypotheses of the claim are satisfied: $G(\vec{{\ensuremath{\mathbf{t}}}})$ is SN. Now, we can prove Lemma \[lem:RCop\]. First, we consider the case ${\mathsf{A}}\to{\mathsf{B}}$. ${{\bf RC}}_1$ : We must show that all ${\ensuremath{\mathbf{t}}}\in{\mathsf{A}}\to{\mathsf{B}}$ are in ${{\it SN}_0}$. We proceed by induction on the definition of ${\mathsf{A}}\to{\mathsf{B}}$. - Assume that ${\ensuremath{\mathbf{t}}}$ is such that for ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{0}}}$ and for ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{b}}}$ with ${\ensuremath{\mathbf{b}}}\in{\mathsf{A}}$, we have $({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{r}}}\in{\mathsf{B}}$. Hence by ${{\bf RC}}_1$ in ${\mathsf{B}}$, ${\ensuremath{\mathbf{t}}}\in{{\it SN}_0}$. - Assume that ${\ensuremath{\mathbf{t}}}$ is closed neutral and that ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\mathsf{A}}\to{\mathsf{B}}$. By induction hypothesis, all the elements of ${{\rm Red}}({\ensuremath{\mathbf{t}}})$ are strongly normalising: so is ${\ensuremath{\mathbf{t}}}$. - The last case is immediate: if ${\ensuremath{\mathbf{t}}}$ is the term ${\ensuremath{\mathbf{0}}}$, it is strongly normalising. ${{\bf RC}}_2$ : We must show that if ${\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{t}}}'$ and ${\ensuremath{\mathbf{t}}}\in{\mathsf{A}}\to{\mathsf{B}}$, then ${\ensuremath{\mathbf{t}}}'\in{\mathsf{A}}\to{\mathsf{B}}$. We again proceed by induction on the definition of ${\mathsf{A}}\to{\mathsf{B}}$.
- Let ${\ensuremath{\mathbf{t}}}$ be such that $({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{0}}}\in{\mathsf{B}}$ and such that for all ${\ensuremath{\mathbf{b}}}\in{\mathsf{A}}$, $({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}\in{\mathsf{B}}$. Then by ${{\bf RC}}_2$ in ${\mathsf{B}}$, $({\ensuremath{\mathbf{t}}}')~{\ensuremath{\mathbf{0}}}\in{\mathsf{B}}$ and $({\ensuremath{\mathbf{t}}}')~{\ensuremath{\mathbf{b}}}\in{\mathsf{B}}$, and so ${\ensuremath{\mathbf{t}}}'\in{\mathsf{A}}\to{\mathsf{B}}$. - If ${\ensuremath{\mathbf{t}}}$ is closed neutral and ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\mathsf{A}}\to{\mathsf{B}}$, then ${\ensuremath{\mathbf{t}}}'\in{\mathsf{A}}\to{\mathsf{B}}$ since ${\ensuremath{\mathbf{t}}}'\in{{\rm Red}}({\ensuremath{\mathbf{t}}})$. - If ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$, it does not reduce. ${{\bf RC}}_3$ and ${{\bf RC}}_4$ : Trivially true by definition. Then we analyze the case $\sum_i{\mathsf{A}}_i$. ${{\bf RC}}_1$ : If ${\ensuremath{\mathbf{t}}} = F(\vec{{\ensuremath{\mathbf{t}}}'})$ where $F$ is an algebraic context and ${\ensuremath{\mathbf{t}}}'_i\in{\mathsf{A}}_i$, the result is immediate using the auxiliary lemma (AL) and ${{\bf RC}}_1$ on the ${\mathsf{A}}_i$’s. If ${\ensuremath{\mathbf{t}}}$ is closed neutral and ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq\sum_i{\mathsf{A}}_i$, then ${\ensuremath{\mathbf{t}}}$ is strongly normalising since all elements of ${{\rm Red}}({\ensuremath{\mathbf{t}}})$ are strongly normalising. Finally, if ${\ensuremath{\mathbf{t}}}$ is equal to ${\ensuremath{\mathbf{0}}}$, there is nothing to do. ${{\bf RC}}_2$ and ${{\bf RC}}_3$ : Trivially true by definition. ${{\bf RC}}_4$ : Since ${\ensuremath{\mathbf{0}}}$ is an algebraic context, it is also in the set. Finally, we prove the case $\cap_i{\mathsf{A}}_i$. ${{\bf RC}}_1$ : Trivial since for all $i$, ${\mathsf{A}}_i\subseteq{{\it SN}_0}$.
${{\bf RC}}_2$ : Let ${\ensuremath{\mathbf{t}}}\in\cap_i{\mathsf{A}}_i$, then $\forall i,\,{\ensuremath{\mathbf{t}}}\in{\mathsf{A}}_i$ and so by ${{\bf RC}}_2$ in ${\mathsf{A}}_i$, ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\mathsf{A}}_i$. Thus ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq\cap_i{\mathsf{A}}_i$. ${{\bf RC}}_3$ : Let ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$ and ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq\cap_i{\mathsf{A}}_i$. Then $\forall i,\,{{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq {\mathsf{A}}_i$, and thus, by ${{\bf RC}}_3$ in ${\mathsf{A}}_i$, ${\ensuremath{\mathbf{t}}}\in{\mathsf{A}}_i$, which implies ${\ensuremath{\mathbf{t}}}\in\cap_i{\mathsf{A}}_i$. ${{\bf RC}}_4$ : By ${{\bf RC}}_4$, for all $i$, ${\ensuremath{\mathbf{0}}}\in{\mathsf{A}}_i$. Therefore, ${\ensuremath{\mathbf{0}}}\in\cap_i{\mathsf{A}}_i$. This concludes the proof of Lemma \[lem:RCop\]. [**Lemma \[lem:typedecomp\]**]{} Any type $T$ has a unique canonical decomposition $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$ such that for all $l\neq k$, ${\ensuremath{\mathbb{U}}}_l\not\equiv{\ensuremath{\mathbb{U}}}_k$. By Lemma \[lem:typecharact\], $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot U_i+{\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{X}}}_j$. Suppose that there exist $l,k$ such that $U_l\equiv U_k$. Then notice that $T\equiv (\alpha_l+\alpha_k)\cdot U_l+\sum_{i\neq l,k}\alpha_i\cdot U_i$. Repeat the process until there are no more $l,k$ such that $U_l\equiv U_k$. Proceed analogously to obtain a linear combination of different ${\ensuremath{\mathbb{X}}}_j$.
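The merging step in this proof is a simple coefficient aggregation; a minimal Python sketch (hypothetical encoding: unit types are modelled as plain strings standing for equivalence classes, not part of the paper) makes it concrete:

```python
from collections import Counter

def canonical(summands):
    """Merge coefficients of equal units, mirroring the proof of the lemma:
    whenever U_l = U_k, replace alpha_l.U_l + alpha_k.U_k by (alpha_l+alpha_k).U_l,
    until all units are pairwise distinct.
    `summands` is a list of (coefficient, unit) pairs."""
    acc = Counter()
    for alpha, u in summands:
        acc[u] += alpha
    # sort only to make the result deterministic; the decomposition itself
    # is unique up to reordering of the summands
    return sorted(acc.items())

# 2.U + 3.V + 1.U collapses to 3.U + 3.V
```

For instance, `canonical([(2, "U"), (3, "V"), (1, "U")])` yields `[("U", 3), ("V", 3)]`.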
[**Lemma \[lem:substRed\]**]{} For any types $T$ and $A$, variable $X$ and valuation $\rho$, we have ${\ensuremath{\llbracket{T[A/X]}\rrbracket}}_\rho = {\ensuremath{\llbracket{T}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$ and ${\ensuremath{\llbracket{T[A/X]}\rrbracket}}_{\bar\rho} = {\ensuremath{\llbracket{T}\rrbracket}}_{\bar\rho,(X_-,X_+)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho})}$. We proceed by structural induction on $T$. In each case we only show the case of $\rho$, since the $\bar\rho$ case follows analogously. - $T=X$. Then ${\ensuremath{\llbracket{X[A/X]}\rrbracket}}_\rho = {\ensuremath{\llbracket{A}\rrbracket}}_\rho = {\ensuremath{\llbracket{X}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$. - $T=Y$. Then ${\ensuremath{\llbracket{Y[A/X]}\rrbracket}}_\rho = {\ensuremath{\llbracket{Y}\rrbracket}}_\rho = \rho_+(Y) = {\ensuremath{\llbracket{Y}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$. - $T=U\to R$. Then ${\ensuremath{\llbracket{(U\to R)[A/X]}\rrbracket}}_\rho = {\ensuremath{\llbracket{U[A/X]}\rrbracket}}_{\bar\rho}\to{\ensuremath{\llbracket{R[A/X]}\rrbracket}}_\rho$.
By the induction hypothesis, we have ${\ensuremath{\llbracket{U[A/X]}\rrbracket}}_{\bar\rho}\to{\ensuremath{\llbracket{R[A/X]}\rrbracket}}_\rho= {\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho,(X_-,X_+)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho})} \to {\ensuremath{\llbracket{R}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})} = {\ensuremath{\llbracket{U\to R}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$. - $T=\forall Y.V$. Then ${\ensuremath{\llbracket{(\forall Y.V)[A/X]}\rrbracket}}_\rho = {\ensuremath{\llbracket{\forall Y.V[A/X]}\rrbracket}}_\rho$ which by definition is equal to $\cap_{B\subseteq C\in{\mathsf{RC}}} {\ensuremath{\llbracket{V[A/X]}\rrbracket}}_{\rho,(Y_+,Y_-)\mapsto(B,C)}$ and this, by the induction hypothesis, is equal to $\cap_{B\subseteq C\in{\mathsf{RC}}} {\ensuremath{\llbracket{V}\rrbracket}}_{\rho,(Y_+,Y_-)\mapsto(B,C),(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})} = {\ensuremath{\llbracket{\forall Y.V}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$. - $T$ with canonical decomposition $\sum_i\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$. Then ${\ensuremath{\llbracket{T}\rrbracket}}_\rho = \sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_\rho$, which by the induction hypothesis is equal to $\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})} ={\ensuremath{\llbracket{T}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\ensuremath{\llbracket{A}\rrbracket}}_{\bar\rho},{\ensuremath{\llbracket{A}\rrbracket}}_{\rho})}$.
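The induction above follows the plain structural substitution $T[A/X]$ underneath the interpretation; a toy Python rendering of that substitution (a hypothetical encoding of the type syntax, chosen here for illustration and not from the paper) makes the recursion explicit, including the case where the bound variable shadows $X$:

```python
def subst(T, X, A):
    """Structural substitution T[A/X] on a toy type syntax:
    ("var", Y), ("arrow", U, R), ("forall", Y, V), ("sum", [(alpha, U), ...])."""
    tag = T[0]
    if tag == "var":
        return A if T[1] == X else T
    if tag == "arrow":
        # substitution descends into both sides; only the *interpretation*
        # flips polarity on the left, the substitution itself does not
        return ("arrow", subst(T[1], X, A), subst(T[2], X, A))
    if tag == "forall":
        # bound variable shadows X; we assume bound variables are renamed
        # apart from free ones (Barendregt convention), so no capture check
        return T if T[1] == X else ("forall", T[1], subst(T[2], X, A))
    if tag == "sum":
        return ("sum", [(alpha, subst(U, X, A)) for alpha, U in T[1]])
    raise ValueError(tag)
```

For example, `subst(("arrow", ("var", "X"), ("var", "Y")), "X", ("var", "Z"))` gives `("arrow", ("var", "Z"), ("var", "Y"))`, while substitution under `("forall", "X", ...)` leaves the body untouched.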
Proof of the Adequacy Lemma (\[lem:SNadeq\]) {#app:adequacy} -------------------------------------------- We need the following results first. \[lem:polar1appendix\] For any type $T$, if $\rho=(\rho_+,\rho_-)$ is a valid valuation over ${\ensuremath{FV}(T)}$, then we have ${\ensuremath{\llbracket{T}\rrbracket}}_{\bar{\rho}}\subseteq{\ensuremath{\llbracket{T}\rrbracket}}_{\rho}$. Structural induction on $T$. - $T=X$. Then ${\ensuremath{\llbracket{T}\rrbracket}}_{\bar{\rho}}=\rho_-(X)\subseteq\rho_+(X)={\ensuremath{\llbracket{T}\rrbracket}}_\rho$. - $T=U\to R$. Then ${\ensuremath{\llbracket{U\to R}\rrbracket}}_{\bar{\rho}}={\ensuremath{\llbracket{U}\rrbracket}}_\rho\to{\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}$. By the induction hypothesis ${\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}}\subseteq{\ensuremath{\llbracket{U}\rrbracket}}_\rho$ and ${\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_\rho$. We must show that $\forall{\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U\to R}\rrbracket}}_{\bar{\rho}}$, ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho$. Let ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U\to R}\rrbracket}}_{\bar{\rho}}={\ensuremath{\llbracket{U}\rrbracket}}_\rho\to{\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}$. We proceed by induction on the definition of $\to$. - Let ${\ensuremath{\mathbf{t}}}\in\{{\ensuremath{\mathbf{t}}}\,|({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{0}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}$ and $\forall{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_\rho, ({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}\}$. 
Notice that $({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{0}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_{\rho}$ and for all ${\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}}$, ${\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_\rho$, and so $({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_{\bar{\rho}}\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_\rho$. Thus ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}}\to{\ensuremath{\llbracket{R}\rrbracket}}_\rho={\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho$. - Let ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\ensuremath{\llbracket{U\to R}\rrbracket}}_{\bar{\rho}}$ and ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$. By the induction hypothesis ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho$ and so, by ${{\bf RC}}_3$, ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho$. - Let ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$. By ${{\bf RC}}_4$, ${\ensuremath{\mathbf{0}}}$ is in any reducibility candidate, in particular it is in ${\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho$. - $T=\forall X.U$. Then ${\ensuremath{\llbracket{\forall X.U}\rrbracket}}_{\bar{\rho}}=\cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}, (X_-,X_+)\mapsto({\mathsf{B}},{\mathsf{A}})}$.
By the induction hypothesis $${\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho},(X_-,X_+)\mapsto({\mathsf{B}},{\mathsf{A}})}\subseteq{\ensuremath{\llbracket{U}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}.$$ So $\cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho},(X_-, X_+)\mapsto({\mathsf{B}},{\mathsf{A}})}\subseteq\cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}} {\ensuremath{\llbracket{U}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}$ which is ${\ensuremath{\llbracket{\forall X.U}\rrbracket}}_\rho$. - $T\equiv\sum_i\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$ and $T\not\equiv{\ensuremath{\mathbb{U}}}$. Then ${\ensuremath{\llbracket{T}\rrbracket}}_{\bar{\rho}}=\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar{\rho}}$. By the induction hypothesis ${\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar{\rho}}\subseteq{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_\rho$. We proceed by induction on the definition of $\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar\rho}$. - Let ${\ensuremath{\mathbf{t}}}=F(\vec{{\ensuremath{\mathbf{r}}}})$ where $F$ is an algebraic context and ${\ensuremath{\mathbf{r}}}_i\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar{\rho}}$. Note that by induction hypothesis $\forall {\ensuremath{\mathbf{r}}}\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar{\rho}}$, ${\ensuremath{\mathbf{r}}}\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$ and so the result holds. - Let ${\ensuremath{\mathbf{t}}}\in\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar{\rho}}$ and ${\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{t}}}'$. 
By the induction hypothesis ${\ensuremath{\mathbf{t}}}\in\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$, hence by ${{\bf RC}}_2$, ${\ensuremath{\mathbf{t}}}'\in\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$. - Let ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\bar{\rho}}$ and ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$. By the induction hypothesis ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_\rho$ and so, by ${{\bf RC}}_3$, ${\ensuremath{\mathbf{t}}}\in\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_\rho$. \[lem:polar2appendix\] For any type $T$, if $\rho=(\rho_+,\rho_-)$ and $\rho'=(\rho'_+,\rho'_-)$ are two valid valuations over ${\ensuremath{FV}(T)}$ such that $\forall X$, $\rho'_-(X)\subseteq\rho_-(X)$ and $\rho_+(X)\subseteq\rho'_+(X)$, then we have ${\ensuremath{\llbracket{T}\rrbracket}}_{\rho}\subseteq{\ensuremath{\llbracket{T}\rrbracket}}_{\rho'}$ and ${\ensuremath{\llbracket{T}\rrbracket}}_{\bar\rho'}\subseteq{\ensuremath{\llbracket{T}\rrbracket}}_{\bar\rho}$. Structural induction on $T$. - $T=X$. Then ${\ensuremath{\llbracket{X}\rrbracket}}_\rho=\rho_+(X)\subseteq\rho'_+(X)={\ensuremath{\llbracket{X}\rrbracket}}_{\rho'}$ and ${\ensuremath{\llbracket{X}\rrbracket}}_{\bar\rho'}=\rho'_-(X)\subseteq\rho_-(X)={\ensuremath{\llbracket{X}\rrbracket}}_{\bar\rho}$. - $T=U\to R$. Then ${\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho={\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}}\to{\ensuremath{\llbracket{R}\rrbracket}}_\rho$ and ${\ensuremath{\llbracket{U\to R}\rrbracket}}_{\bar\rho'}={\ensuremath{\llbracket{U}\rrbracket}}_{\rho'}\to{\ensuremath{\llbracket{R}\rrbracket}}_{\bar\rho'}$.
By the induction hypothesis ${\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}'}\subseteq{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}}$, ${\ensuremath{\llbracket{U}\rrbracket}}_\rho\subseteq{\ensuremath{\llbracket{U}\rrbracket}}_{\rho'}$, ${\ensuremath{\llbracket{R}\rrbracket}}_\rho\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_{\rho'}$ and ${\ensuremath{\llbracket{R}\rrbracket}}_{\bar\rho'}\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_{\bar\rho}$. We proceed by induction on the definition of $\to$ to show that for all ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}}\to{\ensuremath{\llbracket{R}\rrbracket}}_\rho$, we have ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar{\rho}'}\to{\ensuremath{\llbracket{R}\rrbracket}}_{\rho'}={\ensuremath{\llbracket{U\to R}\rrbracket}}_{\rho'}$. - Let ${\ensuremath{\mathbf{t}}}\in\{{\ensuremath{\mathbf{t}}}\,| ({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{0}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_{\rho}$ and $\forall{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho}, ({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_{\rho}\}$. Notice that $({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{0}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_\rho\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_{\rho'}$. Also, $\forall{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho'},\,{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho}$ and then $({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{R}\rrbracket}}_\rho\subseteq{\ensuremath{\llbracket{R}\rrbracket}}_{\rho'}$. - Let ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\ensuremath{\llbracket{U\to R}\rrbracket}}_{\rho}$ and ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$.
By the induction hypothesis ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\ensuremath{\llbracket{U\to R}\rrbracket}}_{\rho'}$ and so, by ${{\bf RC}}_3$, ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U\to R}\rrbracket}}_{\rho'}$. - Let ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$. By ${{\bf RC}}_4$, ${\ensuremath{\mathbf{0}}}$ is in any reducibility candidate, in particular it is in ${\ensuremath{\llbracket{U\to R}\rrbracket}}_{\rho'}$. Analogously, $\forall{\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\rho'}\to{\ensuremath{\llbracket{R}\rrbracket}}_{\bar\rho'}$, ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_\rho\to{\ensuremath{\llbracket{R}\rrbracket}}_{\bar\rho}={\ensuremath{\llbracket{U\to R}\rrbracket}}_\rho$. - $T=\forall X.U$. Then ${\ensuremath{\llbracket{\forall X.U}\rrbracket}}_{\rho} = \cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}$. By the induction hypothesis we have ${\ensuremath{\llbracket{U}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}\subseteq{\ensuremath{\llbracket{U}\rrbracket}}_{\rho',(X_+,X_-)\mapsto{({\mathsf{A}},{\mathsf{B}})}}$, hence we have that $\cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U}\rrbracket}}_{\rho,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}\subseteq\cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U}\rrbracket}}_{\rho',(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}={\ensuremath{\llbracket{\forall X.U}\rrbracket}}_{\rho'}$. The case ${\ensuremath{\llbracket{\forall X.U}\rrbracket}}_{\bar\rho'}\subseteq{\ensuremath{\llbracket{\forall X.U}\rrbracket}}_{\bar\rho}$ is analogous. - $T\equiv\sum_i\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$ and $T\not\equiv{\ensuremath{\mathbb{U}}}$.
Then ${\ensuremath{\llbracket{T}\rrbracket}}_{\rho}=\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$. By the induction hypothesis ${\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}\subseteq{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho'}$. We proceed by induction on the definition of $\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$ to show that $\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}\subseteq\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho'}$. - Let ${\ensuremath{\mathbf{t}}}=F(\vec{{\ensuremath{\mathbf{r}}}})$ where $F$ is an algebraic context and ${\ensuremath{\mathbf{r}}}_i\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$. Note that by the induction hypothesis $\forall {\ensuremath{\mathbf{r}}}_i\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho}$, ${\ensuremath{\mathbf{r}}}_i\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho'}$ and so $F(\vec{{\ensuremath{\mathbf{r}}}})\in\sum_i{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho'}={\ensuremath{\llbracket{T}\rrbracket}}_{\rho'}$. - Let ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho}$ and ${\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{t}}}'$. By the induction hypothesis ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho'}$, hence by ${{\bf RC}}_2$, ${\ensuremath{\mathbf{t}}}'\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho'}$. - Let ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\ensuremath{\llbracket{T}\rrbracket}}_{\rho}$ and ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$. By the induction hypothesis ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\ensuremath{\llbracket{T}\rrbracket}}_{\rho'}$ and so, by ${{\bf RC}}_3$, ${\ensuremath{\mathbf{t}}}\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho'}$.
The case ${\ensuremath{\llbracket{T}\rrbracket}}_{\bar\rho'}\subseteq{\ensuremath{\llbracket{T}\rrbracket}}_{\bar\rho}$ is analogous. \[lem:sumrcappendix\] Let $\{{\mathsf{A}}_{i}\}_{i=1\cdots n}$ be a family of reducibility candidates. If ${\ensuremath{\mathbf{s}}}$ and ${\ensuremath{\mathbf{t}}}$ both belong to ${\sum_{i=1}^{n}}{\mathsf{A}}_{i}$, then so does ${\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}$. By structural induction on $\sum_{i=1}^{n}{\mathsf{A}}_{i}$. - If ${\ensuremath{\mathbf{s}}}$ and ${\ensuremath{\mathbf{t}}}$ are respectively of the form $F(\vec{{\ensuremath{\mathbf{s}}}'})$ and $G(\vec{{\ensuremath{\mathbf{t}}}'})$, it is trivial. - If only ${\ensuremath{\mathbf{s}}}$ is of the form $F(\vec{{\ensuremath{\mathbf{s}}}'})$ and ${\ensuremath{\mathbf{t}}}$ is such that ${\ensuremath{\mathbf{t}}}'\to{\ensuremath{\mathbf{t}}}$, with ${\ensuremath{\mathbf{t}}}'\in\sum_i{\mathsf{A}}_{i}$, then by the induction hypothesis ${\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}'\in\sum_i{\mathsf{A}}_{i}$. We conclude by ${{\bf RC}}_2$. - If ${\ensuremath{\mathbf{s}}}$ is of the form $F(\vec{{\ensuremath{\mathbf{s}}}'})$ and ${\ensuremath{\mathbf{t}}}$ is neutral such that ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq\sum_i{\mathsf{A}}_{i}$, then we have to check that ${{\rm Red}}({\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}})\subseteq\sum_i{\mathsf{A}}_{i}$, so we can conclude with ${{\bf RC}}_3$. Let ${\ensuremath{\mathbf{r}}}\in{{\rm Red}}({\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}})$. The possible cases are: - ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}'$, with ${\ensuremath{\mathbf{t}}}'\in{{\rm Red}}({\ensuremath{\mathbf{t}}})$. Then we conclude by the induction hypothesis. - ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{s}}}'+{\ensuremath{\mathbf{t}}}$, with ${\ensuremath{\mathbf{s}}}'\in{{\rm Red}}({\ensuremath{\mathbf{s}}})$.
By ${{\bf RC}}_2$, ${\ensuremath{\mathbf{s}}}'\in{\sum_{i=1}^{n}}{\mathsf{A}}_{i}$, hence we conclude by the induction hypothesis. - ${\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{r}}}$ with a rule from Group F. Cases: - Let ${\ensuremath{\mathbf{s}}}=\alpha\cdot{\ensuremath{\mathbf{r}}}$ and ${\ensuremath{\mathbf{t}}}=\beta\cdot{\ensuremath{\mathbf{r}}}$, so ${\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}\to(\alpha+\beta)\cdot{\ensuremath{\mathbf{r}}}$. Since ${\ensuremath{\mathbf{s}}}=F(\vec{{\ensuremath{\mathbf{s}}}'})=\alpha\cdot{\ensuremath{\mathbf{r}}}$, the algebraic context $F(.)$ is of the form $\alpha\cdot G(.)$ and ${\ensuremath{\mathbf{r}}} = G(\vec{{\ensuremath{\mathbf{s}}}'})$. Therefore, since $(\alpha+\beta)\cdot {\ensuremath{\mathbf{r}}} = G'(\vec{{\ensuremath{\mathbf{s}}}'})$ where $G'(.) = (\alpha+\beta)\cdot G(.)$, we have that $(\alpha+\beta)\cdot{\ensuremath{\mathbf{r}}}\in {\sum_{i=1}^{n}}{\mathsf{A}}_{i}$. - Cases $\alpha\cdot{\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{r}}}\to(\alpha+1)\cdot{\ensuremath{\mathbf{r}}}$ and ${\ensuremath{\mathbf{r}}}+{\ensuremath{\mathbf{r}}}\to(1+1)\cdot{\ensuremath{\mathbf{r}}}$ are analogous. - Let ${\ensuremath{\mathbf{s}}}={\ensuremath{\mathbf{0}}}$ (notice that ${\ensuremath{\mathbf{t}}}$ cannot be ${\ensuremath{\mathbf{0}}}$ since it is neutral), so ${\ensuremath{\mathbf{s}}}+{\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{t}}}$. Since ${\ensuremath{\mathbf{t}}}\in{\sum_{i=1}^{n}}{\mathsf{A}}_{i}$, we are done. The other cases are similar. \[lem:alphatinsigma\] If ${\ensuremath{\mathbf{t}}}\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$, then for any $\alpha$, $\alpha\cdot{\ensuremath{\mathbf{t}}}\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$. Define the algebraic size of a term to be the sum of the absolute values of all the scalars appearing in the term (when there is no scalar, we consider the scalar $1$ to be present). We proceed by induction on the algebraic size of ${\ensuremath{\mathbf{t}}}$.
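The algebraic size used in this induction can be sketched concretely. The following Python model is our own illustration (the constructors `Zero`, `Var`, `Scal`, `Sum` are not the paper's notation), under one reasonable reading of the definition: every explicit scalar contributes its absolute value, and a bare occurrence carrying no scalar contributes the implicit scalar $1$.

```python
from dataclasses import dataclass
from typing import Union

# A hypothetical first-order representation of the algebraic fragment.
@dataclass(frozen=True)
class Zero:
    pass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Scal:
    coef: float
    body: "Term"

@dataclass(frozen=True)
class Sum:
    left: "Term"
    right: "Term"

Term = Union[Zero, Var, Scal, Sum]

def algebraic_size(t: Term) -> float:
    """Sum of |scalar| over the scalars of t; a bare occurrence counts as 1."""
    if isinstance(t, Zero):
        return 0
    if isinstance(t, Var):
        return 1  # implicit scalar 1
    if isinstance(t, Scal):
        inner = algebraic_size(t.body)
        # an explicit coefficient replaces the implicit 1 of a bare body
        if isinstance(t.body, Var):
            inner -= 1
        return abs(t.coef) + inner
    return algebraic_size(t.left) + algebraic_size(t.right)
```

With this reading, $2\cdot x + y$ has size $3$ and $3\cdot(2\cdot x)$ has size $5$, and the term ${\ensuremath{\mathbf{0}}}$ is the only term of size $0$, matching the base case of the proof.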
If the size is $0$, then the term ${\ensuremath{\mathbf{t}}}$ is ${\ensuremath{\mathbf{0}}}$: since ${\ensuremath{\mathbf{0}}}$ belongs to any of the ${\mathsf{A}}_i$, by definition $\alpha\cdot{\ensuremath{\mathbf{0}}}$ belongs to ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. Now, suppose that the result is true for any term of size at most $n$ and assume ${\ensuremath{\mathbf{t}}}$ is of size $n+1$. We proceed by structural induction on ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. - If ${\ensuremath{\mathbf{t}}}$ is of the form $F(\vec{{\ensuremath{\mathbf{t}}}'})$, it is trivial. - If $F(\vec{{\ensuremath{\mathbf{s}}}})\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$ and ${\ensuremath{\mathbf{s}}}_i={\ensuremath{\mathbf{t}}}$, then by the induction hypothesis $\alpha\cdot F(\vec{{\ensuremath{\mathbf{s}}}})\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$, hence we conclude with ${{\bf RC}}_2$ and ${{\bf CC}}$. - If ${\ensuremath{\mathbf{t}}}'\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$ and ${\ensuremath{\mathbf{t}}}'\to{\ensuremath{\mathbf{t}}}$, then by the induction hypothesis $\alpha\cdot{\ensuremath{\mathbf{t}}}'\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$, and hence we conclude with ${{\bf RC}}_2$. - If ${\ensuremath{\mathbf{t}}}\in{\mathcal{N}}$ and ${{\rm Red}}({\ensuremath{\mathbf{t}}})\subseteq{\sum_{i=1}^{n}}{\mathsf{A}}_i$, then we have to check that ${{\rm Red}}(\alpha\cdot{\ensuremath{\mathbf{t}}})\subseteq{\sum_{i=1}^{n}}{\mathsf{A}}_i$, so we can conclude with ${{\bf RC}}_3$. Let ${\ensuremath{\mathbf{r}}}\in{{\rm Red}}(\alpha\cdot{\ensuremath{\mathbf{t}}})$. The possible cases are: - ${\ensuremath{\mathbf{r}}}=\alpha\cdot{\ensuremath{\mathbf{t}}}'$ with ${\ensuremath{\mathbf{t}}}'\in{{\rm Red}}({\ensuremath{\mathbf{t}}})$. Then we conclude by the induction hypothesis. - $\alpha\cdot{\ensuremath{\mathbf{t}}}\to{\ensuremath{\mathbf{r}}}$ with a rule from Group E. Cases: - $\alpha=0$ and ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{0}}}$, notice that ${\ensuremath{\mathbf{0}}}\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$.
- $\alpha=1$ and ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{t}}}$, notice that ${\ensuremath{\mathbf{t}}}\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$. - ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$ and ${\ensuremath{\mathbf{r}}}={\ensuremath{\mathbf{0}}}$, notice that ${\ensuremath{\mathbf{0}}}\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$. - ${\ensuremath{\mathbf{t}}}=\beta\cdot{\ensuremath{\mathbf{s}}}$ and ${\ensuremath{\mathbf{r}}}=(\alpha\times\beta)\cdot{\ensuremath{\mathbf{s}}}$. By ${{\bf CC}}$, ${\ensuremath{\mathbf{s}}}$ is in ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. Since its algebraic size is strictly smaller than the one of ${\ensuremath{\mathbf{t}}}$, we can apply the induction hypothesis and deduce that $(\alpha\times\beta)\cdot{\ensuremath{\mathbf{s}}}$ belongs to ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. - ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{t}}}_1+{\ensuremath{\mathbf{t}}}_2$ and ${\ensuremath{\mathbf{r}}}=\alpha\cdot{\ensuremath{\mathbf{t}}}_1+\alpha\cdot{\ensuremath{\mathbf{t}}}_2$. By ${{\bf CC}}$, ${\ensuremath{\mathbf{t}}}_1\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$ and ${\ensuremath{\mathbf{t}}}_2\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$. Since their algebraic sizes are strictly smaller than the one of ${\ensuremath{\mathbf{t}}}$, we can apply the induction hypothesis and deduce that both $\alpha\cdot{\ensuremath{\mathbf{t}}}_1$ and $\alpha\cdot{\ensuremath{\mathbf{t}}}_2$ belong to ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. We can conclude with Lemma \[lem:sumrcappendix\]. \[lem:combinationsInSum\] Let $\vec{{\ensuremath{\mathbf{t}}}}=\{{\ensuremath{\mathbf{t}}}_j\}_j$ such that for all $j$, ${\ensuremath{\mathbf{t}}}_j\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$. Then $F(\vec{{\ensuremath{\mathbf{t}}}})\in{\sum_{i=1}^{n}}{\mathsf{A}}_i$. We proceed by induction on the structure of $F(\vec{{\ensuremath{\mathbf{t}}}})$. - ${\ensuremath{\mathbf{0}}}\in {\sum_{i=1}^{n}}{\mathsf{A}}_i$: by ${{\bf RC}}_4$. - ${\ensuremath{\mathbf{t}}}_j\in {\sum_{i=1}^{n}}{\mathsf{A}}_i$: by hypothesis.
- If $F(\vec{{\ensuremath{\mathbf{t}}}}) = F_1(\vec{{\ensuremath{\mathbf{t}}}}) + F_2(\vec{{\ensuremath{\mathbf{t}}}})$: by the induction hypothesis, both $F_1(\vec{{\ensuremath{\mathbf{t}}}})$ and $F_2(\vec{{\ensuremath{\mathbf{t}}}})$ are in ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. We conclude with Lemma \[lem:sumrcappendix\]. - If $F(\vec{{\ensuremath{\mathbf{t}}}}) = \alpha\cdot F'(\vec{{\ensuremath{\mathbf{t}}}})$: by the induction hypothesis, $F'(\vec{{\ensuremath{\mathbf{t}}}})$ is in ${\sum_{i=1}^{n}}{\mathsf{A}}_i$. We conclude with Lemma \[lem:alphatinsigma\]. \[lem:applircappendix\] If $\lambda x.{\ensuremath{\mathbf{s}}}\in{\mathsf{A}}\to{\mathsf{B}}$ and ${\ensuremath{\mathbf{b}}}\in{\mathsf{A}}$, then $(\lambda x.{\ensuremath{\mathbf{s}}})\,{\ensuremath{\mathbf{b}}}\in{\mathsf{B}}$. Induction on the definition of ${\mathsf{A}}\to{\mathsf{B}}$. - If $\lambda x.{\ensuremath{\mathbf{s}}}$ is in $\{{\ensuremath{\mathbf{t}}}~|~({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{0}}}\in{\mathsf{B}}$ and $\forall{\ensuremath{\mathbf{b}}}\in{\mathsf{A}}, ({\ensuremath{\mathbf{t}}})\,{\ensuremath{\mathbf{b}}} \in {\mathsf{B}}\}$, then it is trivial. - $\lambda x.{\ensuremath{\mathbf{s}}}$ cannot be in ${\mathsf{A}}\to{\mathsf{B}}$ by the closure under ${{\bf RC}}_3$, because it is not neutral, nor by the closure under ${{\bf RC}}_4$, because it is not the term ${\ensuremath{\mathbf{0}}}$. Now, we can prove the Adequacy Lemma. [**Lemma \[lem:SNadeq\]** (Adequacy Lemma)**.**]{} Every derivable typing judgement is valid: for every derivable sequent $\Gamma\vdash{\ensuremath{\mathbf{t}}}:T$, we have $\Gamma\models{\ensuremath{\mathbf{t}}}:T$. The proof of the Adequacy Lemma proceeds by induction on the size of the typing derivation of $\Gamma\vdash{\ensuremath{\mathbf{t}}}: T$.
We look at the last typing rule that is used, and show in each case that $\Gamma\models{\ensuremath{\mathbf{t}}}: T$, [[i.e.]{} ]{}if $T\equiv{\ensuremath{\mathbb{U}}}$, then ${\ensuremath{\mathbf{t}}}_\sigma\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}}\rrbracket}}_\rho$ or if $T\equiv{\sum_{i=1}^{n}}\alpha_i\cdot{\ensuremath{\mathbb{U}}}_i$ in the sense of Lemma \[lem:typedecomp\], then ${\ensuremath{\mathbf{t}}}_{\sigma}\in\sum_{i=1}^n{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho,\rho_i}$, for every valuation $\rho$, set of valuations $\{\rho_i\}_n$, and substitution $\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_\rho$ ([[i.e.]{} ]{}substitution $\sigma$ such that $(x:V)\in\Gamma$ implies $x_\sigma\in{\ensuremath{\llbracket{V}\rrbracket}}_{\bar\rho}$). Consider first the abstraction rule, which derives $\Gamma\vdash\lambda x.{\ensuremath{\mathbf{t}}}:U\to T$ from $\Gamma,x:U\vdash{\ensuremath{\mathbf{t}}}:T$. In this case, we must prove that $\forall\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_\rho$, $(\lambda x.{\ensuremath{\mathbf{t}}})_\sigma\in{\ensuremath{\llbracket{U\to T}\rrbracket}}_{\rho,\rho'}$, or equivalently $\lambda x.{\ensuremath{\mathbf{t}}}_\sigma\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho,\bar\rho'}\to{\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}$, where $\rho'$ does not act on $FV(\Gamma)$. If we can show that ${\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho,\bar\rho'}$ implies $(\lambda x.{\ensuremath{\mathbf{t}}}_\sigma)~{\ensuremath{\mathbf{b}}}\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}$, then we are done.
Notice that ${\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}=\sum_{i=1}^n{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho,\rho'}$, or ${\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}={\ensuremath{\llbracket{{\ensuremath{\mathbb{V}}}}\rrbracket}}_{\rho,\rho'}$. Since $(\lambda x.{\ensuremath{\mathbf{t}}}_\sigma)~{\ensuremath{\mathbf{b}}}$ is a neutral term, we just need to prove that every one-step reduct of it is in ${\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}$, which by ${{\bf RC}}_3$ closes the case. By ${{\bf RC}}_1$, ${\ensuremath{\mathbf{t}}}_\sigma$ and ${\ensuremath{\mathbf{b}}}$ are strongly normalising, and so is $\lambda x.{\ensuremath{\mathbf{t}}}_\sigma$. Then we proceed by induction on the sum of the lengths of all the reduction paths starting from $\lambda x.{\ensuremath{\mathbf{t}}}_{\sigma}$ plus the same sum starting from ${\ensuremath{\mathbf{b}}}$: $(\lambda x.{\ensuremath{\mathbf{t}}}_{\sigma})~{\ensuremath{\mathbf{b}}}\to(\lambda x.{\ensuremath{\mathbf{t}}}_{\sigma})~{\ensuremath{\mathbf{b}}}'$ : with ${\ensuremath{\mathbf{b}}}\to {\ensuremath{\mathbf{b}}}'$. Then ${\ensuremath{\mathbf{b}}}'\in{\ensuremath{\llbracket{U}\rrbracket}}_{\bar\rho,\bar\rho'}$ and we close by the induction hypothesis. $(\lambda x.{\ensuremath{\mathbf{t}}}_{\sigma})~{\ensuremath{\mathbf{b}}}\to(\lambda x.{\ensuremath{\mathbf{t}}}')~{\ensuremath{\mathbf{b}}}$ : with ${\ensuremath{\mathbf{t}}}_{\sigma}\to{\ensuremath{\mathbf{t}}}'$. If $T\equiv{\ensuremath{\mathbb{V}}}$, then ${\ensuremath{\mathbf{t}}}_\sigma\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{V}}}}\rrbracket}}_{\rho,\rho'}$, and by ${{\bf RC}}_2$ so is ${\ensuremath{\mathbf{t}}}'$.
Otherwise ${\ensuremath{\mathbf{t}}}_{\sigma}\in\sum_{i=1}^n{\ensuremath{\llbracket{{\ensuremath{\mathbb{U}}}_i}\rrbracket}}_{\rho,\rho_i}$ for any $\{\rho_i\}_n$ not acting on $FV(\Gamma)$; taking $\rho_i=\rho'$ for all $i$, we get ${\ensuremath{\mathbf{t}}}_{\sigma}\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}$ and so are its reducts, such as ${\ensuremath{\mathbf{t}}}'$. We close by the induction hypothesis. $(\lambda x.{\ensuremath{\mathbf{t}}}_{\sigma})~{\ensuremath{\mathbf{b}}}\to{\ensuremath{\mathbf{t}}}_{\sigma}{[{{\ensuremath{\mathbf{b}}}}/{x}]}$ : Let $\sigma'=\sigma;x\mapsto{\ensuremath{\mathbf{b}}}$. Then $\sigma'\in{\ensuremath{\llbracket{\Gamma,x: U}\rrbracket}}_{\rho,\rho'}$, so ${\ensuremath{\mathbf{t}}}_{\sigma'}\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}$. Notice that ${\ensuremath{\mathbf{t}}}_{\sigma}[{\ensuremath{\mathbf{b}}}/x]={\ensuremath{\mathbf{t}}}_{\sigma'}$, so this reduct is in ${\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho'}$. Consider now the application rule $$\frac{\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)\qquad \Gamma\vdash{\ensuremath{\mathbf{r}}}:{\sum_{j=1}^{m}}\beta_j\cdot U[\vec{A}_j/\vec{X}]}{\Gamma\vdash({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}}:{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec{A}_j/\vec{X}]}\ \to_E$$ Without loss of generality, assume that the $T_i$'s are different from each other (similarly for $\vec{A}_j$).
By the induction hypothesis, for any $\rho$, $\{\rho_{i,j}\}_{n,m}$ not acting on $FV(\Gamma)$, and $\forall\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_\rho$ we have ${\ensuremath{\mathbf{t}}}_\sigma\in\sum_{i=1}^n \cap_{\vec{{\mathsf{A}}}\subseteq\vec{{\mathsf{B}}}\in{\mathsf{RC}}} {\ensuremath{\llbracket{(U\to T_i)}\rrbracket}}_{\rho,\rho_i,{(\vec{X}_+,\vec{X}_-) \mapsto(\vec{{\mathsf{A}}},\vec{{\mathsf{B}}})}}$ and ${\ensuremath{\mathbf{r}}}_\sigma\in \sum_{j=1}^m{\ensuremath{\llbracket{U[\vec{A}_j/\vec{X}]}\rrbracket}}_{\rho,\rho_j}$, or if $n=\alpha_1=1$, ${\ensuremath{\mathbf{t}}}_\sigma\in \cap_{\vec{{\mathsf{A}}}\subseteq\vec{{\mathsf{B}}}\in{\mathsf{RC}}} {\ensuremath{\llbracket{(U\to T_1)}\rrbracket}}_{\rho,{(\vec{X}_+,\vec{X}_-) \mapsto(\vec{{\mathsf{A}}},\vec{{\mathsf{B}}})}}$ and if $m=1$ and $\beta_1=1$, ${\ensuremath{\mathbf{r}}}_\sigma\in{\ensuremath{\llbracket{U[\vec{A}_1/\vec{X}]}\rrbracket}}_{\rho}$. Notice that for any $\vec{A}_j$, if $U$ is a unit type, $U[\vec{A}_j/\vec{X}]$ is still unit. For every $i,j$, let $T_i[\vec{A}_j/\vec{X}]\equiv{\sum_{k=1}^{r^{ij}}}\delta^{ij}_k\cdot{\ensuremath{\mathbb{W}}}^{ij}_k$. We must show that for any $\rho$, sets $\{\rho_{i,j,k}\}$ not acting on $FV(\Gamma)$ and $\forall\sigma\in{\ensuremath{\llbracket{\Gamma}\rrbracket}}_\rho$, the term $(({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}})_\sigma$ is in the set $\sum_{i=1\cdots n, j=1\cdots m,k=1\cdots r^{ij}} {\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}$, or in case of $n=m=\alpha_1=\beta_1=r^{11}=1$, $(({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}})_\sigma\in{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{11}_1}\rrbracket}}_\rho$. Since both ${\ensuremath{\mathbf{t}}}_\sigma$ and ${\ensuremath{\mathbf{r}}}_\sigma$ are strongly normalising, we proceed by induction on the sum of the lengths of their rewrite sequences.
The set ${{\rm Red}}((({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}})_\sigma)$ contains: - $({\ensuremath{\mathbf{t}}}_\sigma)~{\ensuremath{\mathbf{r}}}'$ or $({\ensuremath{\mathbf{t}}}')~{\ensuremath{\mathbf{r}}}_\sigma$ when ${\ensuremath{\mathbf{t}}}_\sigma\to{\ensuremath{\mathbf{t}}}'$ or ${\ensuremath{\mathbf{r}}}_\sigma\to{\ensuremath{\mathbf{r}}}'$. By ${{\bf RC}}_2$, the term ${\ensuremath{\mathbf{t}}}'$ is in the set $\sum_{i=1}^n \cap_{\vec{{\mathsf{A}}}\subseteq\vec{{\mathsf{B}}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{(U\to T_i)}\rrbracket}}_{\rho,\rho_i,{(\vec{X}_+,\vec{X}_-) \mapsto(\vec{{\mathsf{A}}},\vec{{\mathsf{B}}})}}$ (or if $n=\alpha_1=1$, the term ${\ensuremath{\mathbf{t}}}'$ is in $\cap_{\vec{{\mathsf{A}}}\subseteq\vec{{\mathsf{B}}}\in{\mathsf{RC}}} {\ensuremath{\llbracket{(U\to T_1)}\rrbracket}}_{\rho,{(\vec{X}_+,\vec{X}_-) \mapsto(\vec{{\mathsf{A}}},\vec{{\mathsf{B}}})}}$), and ${\ensuremath{\mathbf{r}}}'\in\sum_{j=1}^m{\ensuremath{\llbracket{U[\vec{A}_j/\vec{X}]}\rrbracket}}_{\rho,\rho_j}$ (or in ${\ensuremath{\llbracket{U[\vec{A}_1/\vec{X}]}\rrbracket}}_\rho$ if $m=\beta_1=1$). In any case, we conclude by the induction hypothesis. - $({{\ensuremath{\mathbf{t}}}_1}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma+({{\ensuremath{\mathbf{t}}}_2}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma$ with ${\ensuremath{\mathbf{t}}}_\sigma={{\ensuremath{\mathbf{t}}}_1}_\sigma+{{\ensuremath{\mathbf{t}}}_2}_\sigma$, where ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{t}}}_1+{\ensuremath{\mathbf{t}}}_2$. Let $s$ be the size of the derivation of $\Gamma\vdash{\ensuremath{\mathbf{t}}}:{\sum_{i=1}^{n}}\alpha_i\cdot \forall\vec{X}.(U\to T_i)$.
By Lemma \[lem:sums\], there exists $R_1+R_2\equiv{\sum_{i=1}^{n}}\alpha_i\cdot \forall\vec{X}.(U\to T_i)$ such that $\Gamma\vdash{{\ensuremath{\mathbf{t}}}_1}:R_1$ and $\Gamma\vdash{{\ensuremath{\mathbf{t}}}_2}:R_2$ can be derived with a derivation tree of size $s-1$ if $R_1+R_2={\sum_{i=1}^{n}}\alpha_i\cdot \forall\vec{X}.(U\to T_i)$, or of size $s-2$ otherwise. In the latter case, there exist $N_1,N_2\subseteq\{1,\dots,n\}$ with $N_1\cup N_2=\{1,\dots,n\}$ such that $$\begin{aligned} R_1\equiv \sum\limits_{i\in N_1\setminus N_2} \alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2} \alpha'_i\cdot\forall\vec X.(U\to T_i) & \mbox{\quad and}\\ R_2\equiv \sum\limits_{i\in N_2\setminus N_1} \alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2} \alpha''_i\cdot\forall\vec X.(U\to T_i) & \end{aligned}$$ where $\forall i\in N_1\cap N_2$, $\alpha'_i+\alpha''_i=\alpha_i$. Therefore, using $\equiv$ we get $$\begin{aligned} \Gamma\vdash{{\ensuremath{\mathbf{t}}}_1}: \sum\limits_{i\in N_1\setminus N_2} \alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2} \alpha'_i\cdot\forall\vec X.(U\to T_i) & \mbox{\quad and}\\ \Gamma\vdash{{\ensuremath{\mathbf{t}}}_2}: \sum\limits_{i\in N_2\setminus N_1} \alpha_i\cdot\forall\vec X.(U\to T_i)+ \sum\limits_{i\in N_1\cap N_2} \alpha''_i\cdot\forall\vec X.(U\to T_i) & \end{aligned}$$ with a derivation tree of size $s-1$.
So, using rule $\to_E$, we get $$\begin{aligned} \Gamma\vdash({\ensuremath{\mathbf{t}}}_1)~{\ensuremath{\mathbf{r}}}: \sum\limits_{i\in N_1\setminus N_2} {\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]+ \sum\limits_{i\in N_1\cap N_2} {\sum_{j=1}^{m}}\alpha'_i\times\beta_j\cdot T_i[\vec A_j/\vec X] & \mbox{\qquad and}\\ \Gamma\vdash({\ensuremath{\mathbf{t}}}_2)~{\ensuremath{\mathbf{r}}}: \sum\limits_{i\in N_2\setminus N_1} {\sum_{j=1}^{m}}\alpha_i\times\beta_j\cdot T_i[\vec A_j/\vec X]+ \sum\limits_{i\in N_1\cap N_2} {\sum_{j=1}^{m}}\alpha''_i\times\beta_j\cdot T_i[\vec A_j/\vec X] & \end{aligned}$$ with a derivation tree of size $s$. Hence, by the induction hypothesis the term $({{\ensuremath{\mathbf{t}}}_1}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma$ is in the set $\sum_{i\in N_1, j=1\cdots m,k=1\cdots r^{ij}} {\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}$, and the term $({{\ensuremath{\mathbf{t}}}_2}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma$ is in $\sum_{i\in N_2, j=1\cdots m,k=1\cdots r^{ij}} {\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}$. Hence, by Lemma \[lem:sumrcappendix\] the term $({{\ensuremath{\mathbf{t}}}_1}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma+({{\ensuremath{\mathbf{t}}}_2}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma$ is in the set $\sum_{i=1,\dots,n, j=1\cdots m,k=1\cdots r^{ij}} {\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}$. The case where $m=\alpha_1=\beta_1=r^{11}=1$, and $card(N_1)$ or $card(N_2)$ is equal to $1$ follows analogously. - $({\ensuremath{\mathbf{t}}}_\sigma)~{{\ensuremath{\mathbf{r}}}_1}_\sigma+({\ensuremath{\mathbf{t}}}_\sigma)~{{\ensuremath{\mathbf{r}}}_2}_\sigma$ with ${\ensuremath{\mathbf{r}}}_\sigma={{\ensuremath{\mathbf{r}}}_1}_\sigma+{{\ensuremath{\mathbf{r}}}_2}_\sigma$. Analogous to the previous case.
- $\gamma\cdot({\ensuremath{\mathbf{t}}}'_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma$ with ${\ensuremath{\mathbf{t}}}_\sigma=\gamma\cdot{\ensuremath{\mathbf{t}}}'_\sigma$, where ${\ensuremath{\mathbf{t}}}=\gamma\cdot{\ensuremath{\mathbf{t}}}'$. Let $s$ be the size of the derivation of $\Gamma\vdash\gamma\cdot{\ensuremath{\mathbf{t}}}': {\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec{X}.(U\to T_i)$. Then by Lemma \[lem:scalars\], ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)\equiv\gamma\cdot R$ and $\Gamma\vdash{\ensuremath{\mathbf{t}}}':R$. If ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i)=\gamma\cdot R$, such a derivation is obtained with size $s-1$; otherwise it is obtained in size $s-2$ and by Lemma \[lem:typecharact\], $R\equiv{\sum_{i=1}^{n'}}\gamma_i\cdot V_i +{\sum_{k=1}^{h}}\eta_k\cdot{\ensuremath{\mathbb{X}}}_k$, however it is easy to see that $h=0$ because $R$ is equivalent to a sum of unit types, none of which is a variable ${\ensuremath{\mathbb{X}}}$. So $R\equiv{\sum_{i=1}^{n'}}\gamma_i\cdot V_i$. Notice that ${\sum_{i=1}^{n}}\alpha_i\cdot\forall\vec X.(U\to T_i) \equiv {\sum_{i=1}^{n'}}\gamma\times\gamma_i\cdot V_i$. Then by Lemma \[lem:equivdistinctscalars\], there exists a permutation $p$ such that $\alpha_i=\gamma\times\gamma_{p(i)}$ and $\forall\vec X.(U\to T_i)\equiv V_{p(i)}$. Then by rule $\equiv$, in size $s-1$ we can derive $\Gamma\vdash{\ensuremath{\mathbf{t}}}':{\sum_{i=1}^{n}}\gamma_i\cdot\forall\vec X.(U\to T_i)$. Using rule $\to_E$, we get $\Gamma\vdash({\ensuremath{\mathbf{t}}}')~{\ensuremath{\mathbf{r}}}: {\sum_{i=1}^{n}}{\sum_{j=1}^{m}}\gamma_i\times\beta_j\cdot T_i[\vec A_j/\vec X]$ in size $s$. Therefore, by the induction hypothesis, $({{\ensuremath{\mathbf{t}}}'}_\sigma)~{\ensuremath{\mathbf{r}}}_\sigma$ is in the set $\sum_{i=1,\dots,n, j=1\cdots m,k=1\cdots r^{ij}} {\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}$. We conclude with Lemma \[lem:alphatinsigma\].
- $\gamma\cdot({\ensuremath{\mathbf{t}}}_\sigma)~{\ensuremath{\mathbf{r}}}'_\sigma$ with ${\ensuremath{\mathbf{r}}}_\sigma=\gamma\cdot{\ensuremath{\mathbf{r}}}'_\sigma$. Analogous to previous case. - ${\ensuremath{\mathbf{0}}}$ with ${\ensuremath{\mathbf{t}}}_\sigma={\ensuremath{\mathbf{0}}}$, or ${\ensuremath{\mathbf{r}}}_\sigma={\ensuremath{\mathbf{0}}}$. By ${{\bf RC}}_4$, ${\ensuremath{\mathbf{0}}}$ is in every candidate. - The term ${\ensuremath{\mathbf{t}}}'_\sigma[{\ensuremath{\mathbf{r}}}_\sigma/x]$, when ${\ensuremath{\mathbf{t}}}_\sigma=\lambda x.{\ensuremath{\mathbf{t}}}'$ and ${\ensuremath{\mathbf{r}}}$ is a base term. Note that this term is of the form ${\ensuremath{\mathbf{t}}}'_{\sigma'}$ where $\sigma'=\sigma;x\mapsto{\ensuremath{\mathbf{r}}}$. We are in the situation where the types of ${\ensuremath{\mathbf{t}}}$ and ${\ensuremath{\mathbf{r}}}$ are respectively $\forall\vec{X}.(U\to T)$ and $U[\vec{A}/\vec{X}]$, and so $\sum_{i,j,k}{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}= \sum_{k=1}^{r}{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}_k}\rrbracket}}_{\rho,\rho_{k}}$, where we omit the index “${11}$” (or directly ${\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}}\rrbracket}}_\rho$ if $r=1$). Note that $$\lambda x.{\ensuremath{\mathbf{t}}}'_\sigma\in{\ensuremath{\llbracket{\forall\vec{X}.(U\to T)}\rrbracket}}_{\rho,\rho'}=\cap_{\vec{{\mathsf{A}}}\subseteq\vec{{\mathsf{B}}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U\to T}\rrbracket}}_{\rho,\rho',(\vec{X}_+,\vec{X}_-)\mapsto(\vec{A},\vec{B})}$$ for all possible $\rho'$ such that $|\rho'|$ does not intersect ${\ensuremath{FV}(\Gamma)}$. Choose $\vec{{\mathsf{A}}}$ and $\vec{{\mathsf{B}}}$ equal to ${\ensuremath{\llbracket{\vec A}\rrbracket}}_{\rho,\rho'}$ and choose $\rho'_-$ to send every $X$ in its domain to $\cap_{k}\rho_{k-}(X)$ and $\rho'_+$ to send all the $X$ in its domain to $\sum_{k}\rho_{k+}(X)$. 
Then by definition of $\to$ and Lemma \[lem:substRed\], $$\begin{aligned} \lambda x.{\ensuremath{\mathbf{t}}}'_\sigma & \in {\ensuremath{\llbracket{U\to T}\rrbracket}}_{\rho,\rho',(\vec X_+,\vec X_-)\mapsto ({\ensuremath{\llbracket{\vec A}\rrbracket}}_{\bar\rho,\bar\rho'}, {\ensuremath{\llbracket{\vec A}\rrbracket}}_{\rho,\rho'})} \\ &= {\ensuremath{\llbracket{U[\vec{A}/\vec{X}]}\rrbracket}}_{\bar\rho,\bar\rho'}\to {\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho',(\vec X_+,\vec X_-)\mapsto ({\ensuremath{\llbracket{\vec A}\rrbracket}}_{\bar\rho,\bar\rho'}, {\ensuremath{\llbracket{\vec A}\rrbracket}}_{\rho,\rho'})}. \end{aligned}$$ Since ${\ensuremath{\mathbf{r}}}\in{\ensuremath{\llbracket{U[\vec{A}/\vec{X}]}\rrbracket}}_{\bar\rho,\bar\rho'}$, using Lemmas \[lem:applircappendix\] and \[lem:substRed\], $$\begin{aligned} (\lambda x.{\ensuremath{\mathbf{t}}}_\sigma)~{\ensuremath{\mathbf{r}}} &\in{\ensuremath{\llbracket{T}\rrbracket}}_{\rho,\rho',(\vec X_+,\vec X_-)\mapsto ({\ensuremath{\llbracket{\vec A}\rrbracket}}_{\bar\rho,\bar\rho'}, {\ensuremath{\llbracket{\vec A}\rrbracket}}_{\rho,\rho'}) }\\ &= {\ensuremath{\llbracket{T[\vec{A}/\vec{X}]}\rrbracket}}_{\rho,\rho'}\\ &= \sum_{k=1}^n{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}_k}\rrbracket}}_{\rho,\rho'}\quad\mbox{ or just }\quad{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}_1}\rrbracket}}_{\rho,\rho'}\mbox{ if }n=1. \end{aligned}$$ Now, from Lemma \[lem:polar2appendix\], for all $k$ we have ${\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}_k}\rrbracket}}_{\rho,\rho'}\subseteq{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}_k}\rrbracket}}_{\rho,\rho_k}$. 
Therefore $$(\lambda x.{\ensuremath{\mathbf{t}}}_\sigma)~{\ensuremath{\mathbf{r}}}\in\sum_{k=1}^n{\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}_k}\rrbracket}}_{\rho,\rho_k}\enspace.$$ Since the set ${{\rm Red}}((({\ensuremath{\mathbf{t}}})~{\ensuremath{\mathbf{r}}})_\sigma)\subseteq{\sum_{i=1}^{n}}{\sum_{j=1}^{m}}{\sum_{k=1}^{r^{ij}}} {\ensuremath{\llbracket{{\ensuremath{\mathbb{W}}}^{ij}_k}\rrbracket}}_{\rho,\rho_{ijk}}$, we can conclude by ${{\bf RC}}_3$. [Since it is valid for any ${\mathsf{B}}\subseteq{\mathsf{A}}$, we can take all the intersections, thus we have ${\ensuremath{\mathbf{t}}}_\sigma\in\sum_{i=1}^n\cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U_i}\rrbracket}}_{\rho,\rho'_i,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})}=\sum_{i=1}^n{\ensuremath{\llbracket{\forall X.U_i}\rrbracket}}_{\rho,\rho'_i}$ (or if $n=\alpha_1=1$ simply ${\ensuremath{\mathbf{t}}}_\sigma\in \cap_{{\mathsf{B}}\subseteq{\mathsf{A}}\in{\mathsf{RC}}}{\ensuremath{\llbracket{U_1}\rrbracket}}_{\rho, \rho'_1,(X_+,X_-)\mapsto({\mathsf{A}},{\mathsf{B}})} ={\ensuremath{\llbracket{\forall X.U_1}\rrbracket}}_{\rho,\rho'_1}$). ]{} Detailed proofs of lemmas and theorems in Section \[sec:examples\] {#app:examples} ================================================================== [**Theorem \[thm:termcharact\]** (Characterisation of terms)**.**]{} Let $T$ be a generic type with canonical decomposition ${\sum_{i=1}^{n}}\alpha_i.{\ensuremath{\mathbb{U}}}_i$, in the sense of Lemma \[lem:typedecomp\]. If ${}\vdash{\ensuremath{\mathbf{t}}}:T$, then ${\ensuremath{\mathbf{t}}}\to^*{\sum_{i=1}^{n}}{\sum_{j=1}^{m_i}}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}$, where for all $i$, $\vdash{\ensuremath{\mathbf{b}}}_{ij}:{\ensuremath{\mathbb{U}}}_i$ and ${\sum_{j=1}^{m_i}}\beta_{ij}=\alpha_i$, and with the convention that ${\sum_{j=1}^{0}}\beta_{ij}=0$ and ${\sum_{j=1}^{0}}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}={\ensuremath{\mathbf{0}}}$. 
We proceed by induction on the maximal length of reduction from ${\ensuremath{\mathbf{t}}}$. - Let ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{b}}}$ or ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{0}}}$. Trivial using Lemma \[lem:basevectors\] or \[lem:termzero\], and Lemma \[lem:typedecomp\]. - Let ${\ensuremath{\mathbf{t}}}=({\ensuremath{\mathbf{t}}}_1)~{\ensuremath{\mathbf{t}}}_2$. Then by Lemma \[lem:app\], $\vdash{\ensuremath{\mathbf{t}}}_1:{\sum_{k=1}^{o}}\gamma_k\cdot\forall\vec{X}.(U\to T_k)$ and $\vdash{\ensuremath{\mathbf{t}}}_2:\sum_{l=1}^{p}\delta_l\cdot U[\vec{A}_l/\vec{X}]$, where ${\sum_{k=1}^{o}}\sum_{l=1}^p\gamma_k\times\delta_l\cdot T_k[\vec{A}_l/\vec{X}]\succeq^{({\ensuremath{\mathbf{t}}}_1){\ensuremath{\mathbf{t}}}_2}_{{\mathcal{V}},\emptyset}T$, for some ${\mathcal{V}}$. Without loss of generality, consider these two types to be already canonical decompositions, that is, for all $k_1,k_2$, $T_{k_1}\not\equiv T_{k_2}$ and for all $l_1,l_2$, $U[\vec{A}_{l_1}/\vec{X}]\not\equiv U[\vec{A}_{l_2}/\vec{X}]$ (otherwise, it suffices to sum up the equal types). Hence, by the induction hypothesis, ${\ensuremath{\mathbf{t}}}_1\to^*{\sum_{k=1}^{o}}\sum_{s=1}^{q_k}\psi_{ks}\cdot{\ensuremath{\mathbf{b}}}_{ks}$ and ${\ensuremath{\mathbf{t}}}_2\to^*\sum_{l=1}^p\sum_{r=1}^{t_l}\phi_{lr}\cdot{\ensuremath{\mathbf{b}}}'_{lr}$, where for all $k$, $\vdash{\ensuremath{\mathbf{b}}}_{ks}:\forall\vec{X}.(U\to T_k)$ and $\sum_{s=1}^{q_k}\psi_{ks}=\gamma_k$, and for all $l$, $\vdash{\ensuremath{\mathbf{b}}}'_{lr}: U[\vec{A}_l/\vec{X}]$ and $\sum_{r=1}^{t_l}\phi_{lr}=\delta_l$. 
By rule $\to_E$, for each $k,s,l,r$ we have $\vdash({\ensuremath{\mathbf{b}}}_{ks})~{\ensuremath{\mathbf{b}}}'_{lr}:T_k[\vec{A}_l/\vec{X}]$, to which the induction hypothesis also applies, and notice that $({\ensuremath{\mathbf{t}}}_1)~{\ensuremath{\mathbf{t}}}_2\to^*({\sum_{k=1}^{o}}\sum_{s=1}^{q_k}\psi_{ks}\cdot{\ensuremath{\mathbf{b}}}_{ks})~\sum_{l=1}^p\sum_{r=1}^{t_l}\phi_{lr}\cdot{\ensuremath{\mathbf{b}}}'_{lr} \to^* {\sum_{k=1}^{o}}\sum_{s=1}^{q_k}\sum_{l=1}^p\sum_{r=1}^{t_l}\psi_{ks}\times\phi_{lr}\cdot({\ensuremath{\mathbf{b}}}_{ks})~{\ensuremath{\mathbf{b}}}'_{lr}$. Therefore, we conclude with the induction hypothesis. - Let ${\ensuremath{\mathbf{t}}}=\alpha\cdot{\ensuremath{\mathbf{r}}}$. Then by Lemma \[lem:scalars\], $\vdash{\ensuremath{\mathbf{r}}}:R$, with $\alpha\cdot R\equiv T$. Hence, using Lemmas \[lem:typedecomp\] and \[lem:equivdistinctscalars\], $R$ has a type decomposition ${\sum_{i=1}^{n}}\gamma_i\cdot{\ensuremath{\mathbb{U}}}_i$, where $\alpha\times\gamma_i=\alpha_i$. Hence, by the induction hypothesis, ${\ensuremath{\mathbf{r}}}\to^*{\sum_{i=1}^{n}}{\sum_{j=1}^{m_i}}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}$, where for all $i$, $\vdash{\ensuremath{\mathbf{b}}}_{ij}:{\ensuremath{\mathbb{U}}}_i$ and ${\sum_{j=1}^{m_i}}\beta_{ij}=\gamma_i$. Notice that ${\ensuremath{\mathbf{t}}}=\alpha\cdot{\ensuremath{\mathbf{r}}}\to^*\alpha\cdot{\sum_{i=1}^{n}}{\sum_{j=1}^{m_i}}\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}\to^*{\sum_{i=1}^{n}}{\sum_{j=1}^{m_i}}\alpha\times\beta_{ij}\cdot{\ensuremath{\mathbf{b}}}_{ij}$, and $\alpha\cdot{\sum_{j=1}^{m_i}}\beta_{ij}={\sum_{j=1}^{m_i}}\alpha\times\beta_{ij}=\alpha\times\gamma_i=\alpha_i$. - Let ${\ensuremath{\mathbf{t}}}={\ensuremath{\mathbf{t}}}_1+{\ensuremath{\mathbf{t}}}_2$. Then by Lemma \[lem:sums\], $\vdash{\ensuremath{\mathbf{t}}}_1:T_1$ and $\vdash{\ensuremath{\mathbf{t}}}_2:T_2$, with $T_1+T_2\equiv T$. 
By Lemma \[lem:typedecomp\], $T_1$ has canonical decomposition ${\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{V}}}_j$ and $T_2$ has canonical decomposition ${\sum_{k=1}^{o}}\gamma_k\cdot{\ensuremath{\mathbb{W}}}_k$. Hence by the induction hypothesis ${\ensuremath{\mathbf{t}}}_1\to^*{\sum_{j=1}^{m}}\sum_{l=1}^{p_j}\delta_{jl}\cdot{\ensuremath{\mathbf{b}}}_{jl}$ and ${\ensuremath{\mathbf{t}}}_2\to^*{\sum_{k=1}^{o}}\sum_{s=1}^{q_k}\epsilon_{ks}\cdot{\ensuremath{\mathbf{b}}}'_{ks}$, where for all $j$, $\vdash{\ensuremath{\mathbf{b}}}_{jl}:{\ensuremath{\mathbb{V}}}_{j}$ and $\sum_{l=1}^{p_j}\delta_{jl}=\beta_j$, and for all $k$, $\vdash{\ensuremath{\mathbf{b}}}'_{ks}:{\ensuremath{\mathbb{W}}}_{k}$ and $\sum_{s=1}^{q_k}\epsilon_{ks}=\gamma_k$. If for all $j,k$ we have ${\ensuremath{\mathbb{V}}}_j\neq{\ensuremath{\mathbb{W}}}_k$, then we are done since the canonical decomposition of $T$ is ${\sum_{j=1}^{m}}\beta_j\cdot{\ensuremath{\mathbb{V}}}_j+{\sum_{k=1}^{o}}\gamma_k\cdot{\ensuremath{\mathbb{W}}}_k$. Otherwise, suppose there exist $j',k'$ such that ${\ensuremath{\mathbb{V}}}_{j'}={\ensuremath{\mathbb{W}}}_{k'}$; then the canonical decomposition of $T$ would be $\sum_{j=1,j\neq j'}^{m}\beta_j\cdot{\ensuremath{\mathbb{V}}}_j+\sum_{k=1,k\neq k'}^{o}\gamma_k\cdot{\ensuremath{\mathbb{W}}}_k+(\beta_{j'}+\gamma_{k'})\cdot{\ensuremath{\mathbb{V}}}_{j'}$. Notice that $\sum_{l=1}^{p_{j'}}\delta_{j'l}+\sum_{s=1}^{q_{k'}}\epsilon_{k's}=\beta_{j'}+\gamma_{k'}$. [^1]: Note that Barendregt’s original proof contains a mistake [@stackexchange]. We use the corrected proof proposed in [@ArrighiDiazcaroLMCS12].
--- abstract: 'We extend Bourgain’s return times theorem to arbitrary locally compact second countable amenable groups. The proof is based on a version of the Bourgain–Furstenberg–Katznelson–Ornstein orthogonality criterion.' address: | Institute of Mathematics\ Hebrew University, Givat Ram\ Jerusalem, 91904, Israel author: - 'Pavel Zorin-Kranich' bibliography: - 'pzorin-ergodic-MR.bib' title: Return times theorem for amenable groups --- Introduction ============ Bourgain’s return times theorem states that for every ergodic measure-preserving system $(X,T)$ and $f\in L^{\infty}(X)$, for a.e. $x\in X$ the sequence $c_{n}=f(T^{n}x)$ is a *good sequence of weights* for the pointwise ergodic theorem, i.e. for every measure-preserving system $(Y,S)$ and $g\in L^{\infty}(Y)$, for a.e. $y\in Y$ the averages $$\frac1N \sum_{n=1}^{N} c_{n} g(S^{n}y)$$ converge as $N\to\infty$. This has been extended by Ornstein and Weiss [@MR1195256] to discrete amenable groups for which an analog of the Vitali covering lemma holds. In this article we use the Lindenstrauss random covering lemma to extend this result to not necessarily discrete locally compact second countable (lcsc) amenable groups. It has been observed by Lindenstrauss that this is possible in the discrete case. In the non-discrete case we have to restrict ourselves to the class of *strong* Følner sequences (see Definition \[def:folner\]), but we will show that every lcsc amenable group admits such a sequence. A secondary motivation is to formulate and prove the Bourgain–Furstenberg–Katznelson–Ornstein (BFKO) orthogonality criterion [@MR1557098] at an appropriate level of generality. This criterion provides a sufficient condition for the values of a function along an orbit of an ergodic measure-preserving transformation to be good weights for convergence to zero in the pointwise ergodic theorem. 
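In the simplest case $G={\mathbb{Z}}$ the statement of the return times theorem can be sketched numerically (a toy illustration added here, not part of the original argument; all names and parameters are ours): weights $c_{n}=f(T^{n}x)$ sampled along an orbit of an irrational rotation are averaged against the orbit of a second, rationally independent rotation, and the weighted averages tend to zero.

```python
import math

# Toy sketch (assumption: G = Z with the standard Folner sequence
# F_N = {1, ..., N}).  Weights c_n = f(T^n x) come from the circle rotation
# T: x -> x + alpha; they are tested against the orbit of an independent
# rotation S: y -> y + beta.
alpha, beta = math.sqrt(2), math.sqrt(3)   # rationally independent angles
x, y = 0.3, 0.7                            # arbitrary starting points

def f(t):
    return math.cos(2 * math.pi * t)       # an observable bounded by 1

N = 10_000
avg = sum(f(x + n * alpha) * f(y + n * beta) for n in range(1, N + 1)) / N
print(abs(avg))  # O(1/N) by a geometric-sum estimate, far below 0.01 here
```

The smallness of this particular average follows from elementary exponential-sum bounds; the content of the theorem is that the same weight sequence works simultaneously for every system $(Y,S)$ and a.e. $y$.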
The original formulation of the orthogonality criterion is slightly artificial since it assumes something about the whole measure-preserving system but concludes something that only involves a single orbit. A more conceptual approach is to find a condition that identifies good weights and to prove that it is satisfied along almost all orbits of a measure-preserving system in a separate step. For ${\mathbb{Z}}$-actions this seems to have been first explicitly mentioned in [@MR1286798 4]. In order to state the appropriate condition for general lcsc amenable groups we need some notation. Throughout the article $G$ denotes a lcsc amenable group with left Haar measure $|\cdot|$ and $(F_{N})$ a Følner sequence in $G$ that is usually fixed. The *lower density* of a subset $S\subset G$ is defined by ${\underline{d}}(S) := \liminf_{N} |S\cap F_{N}| / |F_{N}|$ and the *upper density* is defined accordingly as ${\overline{d}}(S) := \limsup_{N} |S\cap F_{N}| / |F_{N}|$. All functions on $G$ that we consider are real-valued and bounded by $1$. We denote averages by ${\mathbb{E}}_{g\in F_{n}}:=\frac{1}{|F_{n}|} \int_{g\in F_{n}}$. For $c\in L^{\infty}(G)$ we let $$S_{\delta,L,R}(c) := \{a : \forall L\leq n\leq R\, | {\mathbb{E}}_{g\in F_{n}}c(g)c(ga) | < \delta\}.$$ Our orthogonality condition on the map $c$ is then the following. $$\tag{$\perp$} \label{eq:cond} \forall\delta>0\, \exists N_{\delta}\in{\mathbb{N}}\, \forall N_{\delta} \leq L \leq R\quad {\underline{d}}(S_{\delta,L,R}(c)) > 1-\delta.$$ A very rough approximation to this condition is that there is little correlation between $c$ and its translates. The significance of this condition is explained by the following statements. \[lem:ae-perp\] Let $(X,\mu,G)$ be an ergodic measure-preserving system and $f\in L^{\infty}(X)$ be orthogonal to the Kronecker factor. Then for a.e. $x\in X$ the map $g\mapsto f(gx)$ satisfies ($\perp$). 
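As a toy discrete check of the density notation above (an added sketch; it assumes $G={\mathbb{Z}}$ with counting measure and $F_{N}=\{1,\dots,N\}$, and the helper name `density_along` is ours):

```python
# Density along F_N = {1, ..., N} in G = Z with counting measure:
# d_lower(S) = liminf_N |S ∩ F_N| / |F_N|.  For the even integers the
# lower and upper densities coincide and equal 1/2.
def density_along(indicator, N):
    """Return |S ∩ F_N| / |F_N| for S given by its indicator function."""
    return sum(1 for g in range(1, N + 1) if indicator(g)) / N

evens = lambda g: g % 2 == 0
print(density_along(evens, 10_000))  # -> 0.5
```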
\[thm:good-weight-ptw-conv-to-zero\] Assume that $(F_{N})$ is a tempered strong Følner sequence and $c\in L^{\infty}(G)$ satisfies the condition ($\perp$). Then for every ergodic measure-preserving system $(X,G)$ and $f\in L^{\infty}(X)$ we have $$\lim_{N\to\infty} {\mathbb{E}}_{g\in F_{N}} c(g) f(gx) = 0 \quad \text{for a.e.\ } x\in X.$$ This, together with a Wiener–Wintner type result, leads to the following return times theorem. \[thm:rtt-amenable\] Let $G$ be a lcsc group with a tempered strong Følner sequence $(F_{n})$. Then for every ergodic measure-preserving system $(X,G)$ and every $f\in L^{\infty}(X)$ there exists a full measure set $\tilde X\subset X$ such that for every $x\in\tilde X$ the map $g\mapsto f(gx)$ is a good weight for the pointwise ergodic theorem along $(F_{n})$. Background and tools ==================== Følner sequences ---------------- A lcsc group is called *amenable* if it admits a weak Følner sequence in the sense of the following definition. \[def:folner\] Let $G$ be a lcsc group with left Haar measure $|\cdot|$. A sequence of nonnull compact sets $(F_{n})$ is called 1. a *weak Følner sequence* if for every compact set $K\subset G$ one has $|F_{n} \Delta KF_{n}| / |F_{n}| \to 0$ and 2. a *strong Følner sequence* if for every compact set $K\subset G$ one has $|\partial_{K}(F_{n})|/|F_{n}| \to 0$, where $\partial_{K}(F)= K{^{-1}}F \cap K{^{-1}}F^{\complement}$ is the *$K$-boundary* of $F$. 3. *($C$-)tempered* if there exists a constant $C$ such that $$|\cup_{i<j} F_{i}{^{-1}}F_{j}| < C |F_{j}| \quad\text{for every }j.$$ Every strong Følner sequence is also a weak Følner sequence. In countable groups the converse is also true, but already in ${\mathbb{R}}$ this is no longer the case: let, for example, $(F_{n})$ be a sequence of nowhere dense sets $F_{n}\subset [0,n]$ of Lebesgue measure $n-1/n$, say. This is a weak but not a strong Følner sequence (in fact, $\partial_{[-1,0]} F_{n}$ is basically $[0,n+1]$). 
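In additive notation for $G={\mathbb{Z}}$, $F_{i}{^{-1}}F_{j}$ is the difference set $F_{j}-F_{i}$, and the standard sequence $F_{n}=\{1,\dots,n\}$ is $C$-tempered with $C=2$, since $\cup_{i<j}(F_{j}-F_{i})=\{2-j,\dots,j-1\}$ has $2j-2<2|F_{j}|$ elements. A brute-force check of this (an added sketch; all names are ours):

```python
# Brute-force check that F_n = {1, ..., n} in Z is 2-tempered:
# |U_{i<j} (F_j - F_i)| < 2 |F_j| for every j.
def tempered_ratio(j):
    F = lambda n: range(1, n + 1)
    union = set()
    for i in range(1, j):
        union |= {b - a for a in F(i) for b in F(j)}
    return len(union) / j  # |F_j| = j

print(max(tempered_ratio(j) for j in range(2, 15)))  # stays below C = 2
```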
However, a weak Følner sequence can be used to construct a strong Følner sequence. Assume that $G$ admits a weak Følner sequence. Then $G$ also admits a strong Følner sequence. By the argument in [@2012arXiv1205.3649P Lemma 2.6] it suffices, given $\epsilon>0$ and a compact set $K\subset G$, to find a compact set $F$ with $|\partial_{K}(F)|/|F|<\epsilon$. Let $(F_{n})$ be a weak Følner sequence, then there exists $n$ such that $|K{^{-1}}K F_{n} \Delta F_{n}| < \epsilon |F_{n}|$. Set $F=KF_{n}$, then $$\partial_{K}F = K{^{-1}}KF_{n} \cap K{^{-1}}(KF_{n})^{\complement} \subset K{^{-1}}KF_{n}\cap F_{n}^{\complement},$$ and this has measure less than $\epsilon |F_{n}| \leq \epsilon |F|$. Since every weak (hence also every strong) Følner sequence has a tempered subsequence [@MR1865397 Proposition 1.4] this implies that every lcsc amenable group admits a tempered strong Følner sequence. Fully generic points -------------------- Let $(X,G)$ be an ergodic measure-preserving system and $f\in L^{\infty}(X)$. Recall that a point $x\in X$ is called *generic* for $f$ if $$\lim_{n} {\mathbb{E}}_{g\in F_{n}} f(gx) = \int_{X}f.$$ In the context of countable group actions fully generic points for $f$ are usually defined as points that are generic for every function in the closed $G$-invariant algebra spanned by $f$. For uncountable groups this is not a good definition since this algebra need not be separable. The natural substitute for shifts of a function $f\in L^{\infty}(X)$ is provided by convolutions $$c*f(x) = \int_{G} c(g) f(g{^{-1}}x) {\mathrm{d}}g, \quad c\in L^{1}(G).$$ Since $L^{1}(G)$ is separable and convolution is continuous as an operator $L^{1}(G)\times L^{\infty}(X) \to L^{\infty}(X)$, the closed convolution-invariant algebra generated by $f$ is separable. We call a point $x\in X$ *fully generic* for $f$ if it is generic for every function in this algebra. 
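To make the convolution formula concrete, here is a toy finite sketch (an addition; it assumes the finite group $G={\mathbb{Z}}/n{\mathbb{Z}}$, on which counting measure is a Haar measure, so the integral becomes a sum and $g{^{-1}}x$ becomes $x-g \bmod n$):

```python
# Discrete analogue of (c * f)(x) = ∫_G c(g) f(g^{-1} x) dg on G = Z/nZ,
# with counting measure playing the role of the Haar measure.
def convolve(c, f, n):
    return [sum(c[g] * f[(x - g) % n] for g in range(n)) for x in range(n)]

n = 4
delta = [1.0, 0.0, 0.0, 0.0]      # point mass at the identity element 0
f = [0.0, 1.0, 2.0, 3.0]
print(convolve(delta, f, n))      # -> [0.0, 1.0, 2.0, 3.0], i.e. delta * f = f
```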
In view of the Lindenstrauss pointwise ergodic theorem [@MR1865397 Theorem 1.2], if $(F_{n})$ is tempered, then for every $f\in L^{\infty}(X)$ a.e. $x\in X$ is fully generic. Now we verify that the BFKO condition implies ($\perp$). \[lem:to-zero-in-density\] Let $G$ be a lcsc group with a tempered Følner sequence $(F_{n})$. Let $(X,G)$ be an ergodic measure-preserving system and $f\in L^{\infty}(X)$ be bounded by $1$. Let $x\in X$ be a fully generic point for $f$ such that $$\label{eq:BFKO-cond} \lim_{n}{\mathbb{E}}_{g\in F_{n}} f(gx)f(g\xi) = 0 \quad\text{for a.e. }\xi\in X.$$ Then the map $g\mapsto f(gx)$ satisfies ($\perp$). Let $\delta>0$ be arbitrary. By Egorov’s theorem there exists an $N_{\delta}\in{\mathbb{N}}$ and a set $\Xi\subset X$ of measure $>1-\delta$ such that for every $n\geq N_{\delta}$ and $\xi\in\Xi$ the average in \[eq:BFKO-cond\] is bounded by $\delta/2$. Let $N_{\delta} \leq L \leq R$ be arbitrary and choose a continuous function $\eta : {\mathbb{R}}^{[L,R]} \to [0,1]$ that is $1$ when all its arguments are less than $\delta/2$ and $0$ when one of its arguments is greater than $\delta$ (here and later $[L,R]=\{L,L+1,\dots,R\}$). Then by the Stone–Weierstrass theorem the function $$h(\xi):=\eta(|{\mathbb{E}}_{g\in F_{L}}f(gx)f(g\xi)|,\dots,|{\mathbb{E}}_{g\in F_{R}}f(gx)f(g\xi)|)$$ lies in the closed convolution-invariant subalgebra of $L^{\infty}(X)$ spanned by $f$. By the assumption $x$ is generic for $h$. Since $h|_{\Xi} \equiv 1$, we have $\int_{X} h > 1-\delta$. Hence the set of $a$ such that $h(ax)>0$ has lower density $>1-\delta$. For every such $a$ we have $$| {\mathbb{E}}_{g\in F_{n}} f(gax)f(gx) | < \delta, \quad L\leq n\leq R. \qedhere$$ Let $f\in L^{\infty}(X)$ be orthogonal to the Kronecker factor. By [@MR0174705 Theorem 1] this implies that the ergodic averages of $f\otimes f$ converge to $0$ in $L^{2}(X\times X)$. By the Lindenstrauss pointwise ergodic theorem [@MR1865397 Theorem 1.2] this implies \[eq:BFKO-cond\] for a.e. $x\in X$. Since a.e. 
$x\in X$ is also fully generic for $f$, the conclusion follows from Lemma \[lem:to-zero-in-density\]. Lindenstrauss covering lemma ---------------------------- Given a collection of intervals, the classical Vitali covering lemma allows one to select a disjoint subcollection that covers a fixed fraction of the union of the full collection. The appropriate substitute in the setting of tempered Følner sequences is the Lindenstrauss random covering lemma. It allows one to select a *random* subcollection that is *expected* to cover a fixed fraction of the union and to be *almost* disjoint. The almost disjointness means that the expectation of the counting function of the subcollection is uniformly bounded by a constant. As such, the Vitali lemma is stronger whenever it applies, and the reader who is only interested in the standard Følner sequence in ${\mathbb{Z}}$ can skip this subsection. We use two features of Lindenstrauss’ proof of the random covering lemma that we emphasize in its formulation below. The first feature is that the second moment (and in fact all moments) of the counting function is also uniformly bounded (this follows from the bound for the moments of a Poisson distribution). The second feature is that the random covering depends measurably on the data. We choose to include the explicit construction of the covering in the statement of the lemma instead of formalizing this measurability statement. To free up symbols for subsequent use we replace the auxiliary parameter $\delta$ in Lindenstrauss’ statement of the lemma by $C{^{-1}}$ and expand the definition of $\gamma$. For completeness we recall that a *Poisson point process* with intensity $\alpha$ on a measure space $(X,\mu)$ is a counting (i.e. 
atomic, with at most countably many atoms and masses of atoms in ${\mathbb{N}}$) measure-valued map $\Upsilon:\Omega \to M(X)$ such that for every finite measure set $A\subset X$ the random variable $\omega\mapsto \Upsilon(\omega)(A)$ is Poisson with mean $\alpha\mu(A)$ and for any disjoint sets $A_{i}$ the random variables $\omega\mapsto \Upsilon(\omega)|_{A_{i}}$ are jointly independent (here and later $\Upsilon|_{A}$ is the measure $\Upsilon|_{A}(B)=\Upsilon(A\cap B)$). It is well-known that on every $\sigma$-finite measure space there exists a Poisson process. \[lem:lindenstrauss-covering\] Let $G$ be a lcsc group with left Haar measure $|\cdot|$. Let $(F_{N})_{N=L}^{R}$ be a $C$-tempered sequence. Let $\Upsilon_{N}:\Omega_{N}\to M(G)$ be independent Poisson point processes with intensity $\alpha_{N}=\delta/|F_{N}|$ w.r.t. the right Haar measure $\rho$ on $G$ and let $\Omega:=\prod_{N}\Omega_{N}$. Let $A_{N|R+1}\subset G$, $N=L,\dots,R$, be sets of finite measure. Define (dependent!) counting measure-valued random variables $\Sigma_{N} : \Omega \to M(G)$ in descending order for $N=R,\dots,L$ by 1. $\Sigma_{N} := \Upsilon_{N}|_{A_{N|N+1}}$, 2. $A_{i|N} := A_{i|N+1}\setminus F_{i}{^{-1}}F_{N}\Sigma_{N} = \{ a\in A_{i|N+1} : F_{i}a\cap F_{N}\Sigma_{N}=\emptyset\}$ for $i<N$. Then for the counting function $$\Lambda = \sum_{N}\Lambda_{N}, \quad \Lambda_{N}(g)(\omega) = \sum_{a\in\Sigma_{N}(\omega)}1_{F_{N}a}(g)$$ the following holds. 1. $\Lambda$ is a measurable, a.s. finite function on $\Omega\times G$, 2. ${\mathbb{E}}(\Lambda(g)|\Lambda(g)\geq 1) \leq 1+C{^{-1}}$ for every $g\in F$, 3. ${\mathbb{E}}(\Lambda^{2}(g)|\Lambda(g)\geq 1) \leq (1+C{^{-1}})^{2}$ for every $g\in F$, 4. ${\mathbb{E}}(\int\Lambda) \geq (2C){^{-1}}|\cup_{N=L}^{R}A_{N|R+1}|$. Orthogonality criterion for amenable groups =========================================== In our view, the BFKO orthogonality criterion is a statement about bounded measurable functions on $G$. 
We encapsulate it in the following lemma. \[lem:orth\] Let $(F_{N})$ be a $C$-tempered strong Følner sequence. Let $\epsilon>0$ and $K\in{\mathbb{N}}$ be given, and let $\delta>0$ be sufficiently small depending on $\epsilon,K$. Let $c\in L^{\infty}(G)$ be bounded by $1$ and $[L_{1},R_{1}],\dots,[L_{K},R_{K}]$ be a sequence of increasing intervals of natural numbers such that the following holds for any $j<k$ and any $N\in [L_{k},R_{k}]$. 1. $|\partial_{F_{(j)}}F_{N}|<\delta |F_{N}|$, where $F_{(j)}=\cup_{N=L_{j}}^{R_{j}} F_{N}$ 2. $S_{\delta,L_{j},R_{j}}(c)$ has density at least $1-\delta$ in $F_{N}$. Let $f\in L^{\infty}(G)$ be bounded by $1$ and consider the sets $$A_{N}:= \{a : |{\mathbb{E}}_{g\in F_{N}}c(g)f(ga)| \geq \epsilon\}, \quad A_{(j)} := \cup_{N=L_{j}}^{R_{j}}A_{N}.$$ Then for every compact set $I\subset G$ with $|I\cap F_{(j)}{^{-1}}I^{\complement}|<\delta |I|$ for every $j$ we have $$\frac1K \sum_{j=1}^{K} d_{I}(A_{(j)}) < \frac{5C}{\epsilon \sqrt{K}}.$$ Under the assumption ($\perp$), a sequence $[L_{1},R_{1}],\dots,[L_{K},R_{K}]$ with the requested properties can be constructed for any $K$. For $1\leq k\leq K$, $L_{k}\leq N \leq R_{k}$ let $\Upsilon_{N}:\Omega_{N}\to M(G)$ be independent Poisson point processes of intensity $\alpha_{N}=\delta |F_{N}|{^{-1}}$ w.r.t. the *right* Haar measure. Let $\Omega = \prod_{k=1}^{K}\prod_{N=L_{k}}^{R_{k}}\Omega_{N}$. We construct random variables $\Sigma_{N} : \Omega \to M(A_{N})$ that are in turn used to define functions $$c^{(k)} := \sum_{N=L_{k}}^{R_{k}}\sum_{a\in\Sigma_{N}} \pm c|_{F_{N}}(\cdot a{^{-1}}), \quad k=1,\dots,K,$$ where the sign is chosen according to whether ${\mathbb{E}}_{g\in F_{N}}c(g)f(ga)$ is positive or negative. These functions will be mutually nearly orthogonal on $I$ and correlate with $f$, from which the estimate will follow by a standard Hilbert space argument. We construct the random variables in reverse order, beginning with $k=K$. 
Let the set of “admissible origins” be $$O^{(j)}:=A_{(j)}\cap \Big(\big(I \setminus F_{(j)}{^{-1}}I^{\complement} \big) \setminus \cup_{k=j+1}^{K}\cup_{N=L_{k}}^{R_{k}}\cup_{a\in\Sigma_{N}} (\partial_{F_{(j)}}(F_{N}) \cup (S_{\delta,L_{j},R_{j}}(c)^{\complement} \cap F_{N}))a \Big).$$ This set consists of places where we could put copies of initial segments of $c$ in such a way that they would correlate with $f$ and would not correlate with the copies that were already used in the functions $c^{(k)}$ for $k>j$. Let $A_{N|R_{j}+1} := O^{(j)}\cap A_{N}$ and construct random coverings $\Sigma_{N}$, $N=L_{j},\dots,R_{j}$ as in Lemma \[lem:lindenstrauss-covering\] (if the Vitali lemma is available then one can use deterministic coverings that it provides instead). By Lemma \[lem:lindenstrauss-covering\] the counting function $$\Lambda^{(j)} = \sum_{N=L_{j}}^{R_{j}}\Lambda_{N}, \quad \Lambda_{N}(g)(\omega) = \sum_{a\in\Sigma_{N}(\omega)}1_{F_{N}a}(g)$$ satisfies 1. ${\mathbb{E}}(\Lambda^{(j)}(g)) \leq (1+C{^{-1}})$ for every $g\in G$ 2. ${\mathbb{E}}(\Lambda^{(j)}(g)^{2}) \leq (1+C{^{-1}})^{2}$ for every $g\in G$ 3. ${\mathbb{E}}(\int\Lambda^{(j)}) \geq (2C){^{-1}}|O^{(j)}|$. In particular, the last condition implies that $${\mathbb{E}}\int_{I} c^{(j)}f > \epsilon(2C){^{-1}}|O^{(j)}|,$$ while the second shows that $\|c^{(j)}\|_{L^{2}(\Omega\times I)}\leq (1+C{^{-1}})|I|^{1/2}$. Moreover, it follows from the definition of $O^{(j)}$ that $$|{\mathbb{E}}\int_{I} c^{(j)} c^{(k)} | \leq |I|\delta(1+C{^{-1}})$$ whenever $j<k$. 
Using the fact that $|c^{(j)}|\leq \Lambda^{(j)}$ and the Hölder inequality we obtain $$\begin{gathered} \sum_{j=1}^{K} \epsilon(2C){^{-1}}{\mathbb{E}}|O^{(j)}| < {\mathbb{E}}\int_{I} \sum_{j=1}^{K} c^{(j)}f\\ \leq \big( {\mathbb{E}}\int_{I} \big( \sum_{j=1}^{K} c^{(j)} \big)^{2} \big)^{1/2} |I|^{1/2} < |I| \sqrt{K(1+C{^{-1}})^{2} + K^{2}\delta(1+C{^{-1}})}.\end{gathered}$$ This can be written as $$\frac1K \sum_{j=1}^{K} {\mathbb{E}}|O^{(j)}| < \frac{2C|I|}{\epsilon} \sqrt{(1+C{^{-1}})^{2}/K+\delta(1+C{^{-1}})}.$$ Finally, the set $O^{(j)}$ has measure at least $$\begin{gathered} |I|(d_{I}(A_{(j)})-\delta) -\sum_{k=j+1}^{K}\sum_{N=L_{k}}^{R_{k}}\sum_{a\in\Sigma_{N}} (|\partial_{F_{(j)}}(F_{N}a)|+|(S_{\delta,L_{j},R_{j}}(c)^{\complement} \cap F_{N})a|)\\ \geq |I| (d_{I}(A_{(j)})-\delta) -2\delta \sum_{k=j+1}^{K}\sum_{N=L_{k}}^{R_{k}}\sum_{a\in\Sigma_{N}} |F_{N}a|,\end{gathered}$$ (here we have used the largeness assumptions on $L_{k}$), so $${\mathbb{E}}|O^{(j)}| \geq |I| (d_{I}(A_{(j)})-\delta) -2\delta (K-j) |I| (1+C{^{-1}}) > |I| (d_{I}(A_{(j)})-4\delta K)$$ and the conclusion follows provided that $\delta$ is sufficiently small. The BFKO criterion for measure-preserving systems follows by a transference argument. Assume that the conclusion fails for some measure-preserving system $(X,G)$ and $f\in L^{\infty}$. Then we obtain some $\epsilon>0$ and a set of positive measure $\Xi\subset X$ such that $$\limsup_{N\to\infty} | {\mathbb{E}}_{g\in F_{N}}c(g)f(gx) | > 2\epsilon \quad\text{for all } x\in\Xi.$$ We may assume $\mu(\Xi)>\epsilon$. Shrinking $\Xi$ slightly (so that $\mu(\Xi)>\epsilon$ still holds) we may assume that for every ${\underline{N}}\in{\mathbb{N}}$ there exists $F({\underline{N}})\in{\mathbb{N}}$ (independent of $x$) such that for every $x\in\Xi$ there exists ${\underline{N}}\leq N\leq F({\underline{N}})$ such that the above average is bounded below by $2\epsilon$. 
Let $K > 25 C^{2} \epsilon^{-4}$ and $[L_{1},R_{1}],\dots,[L_{K},R_{K}]$ be as in Lemma \[lem:orth\] with $R_{j}=F(L_{j})$. In this case that lemma says that at least one of the sets $\cup_{N=L_{j}}^{R_{j}}A_{N}$ has lower density less than $\epsilon$. Choose continuous functions $\eta_{j} : {\mathbb{R}}^{[L_{j},R_{j}]}\to [0,1]$ that are $1$ when at least one of their arguments is greater than $2\epsilon$ and $0$ if all their arguments are less than $\epsilon$. Let $$h(x) := \prod_{j=1}^{K} \eta_{j}(|{\mathbb{E}}_{g\in F_{L_{j}}}c(g)f(gx)|,\dots,|{\mathbb{E}}_{g\in F_{R_{j}}}c(g)f(gx)|).$$ By construction of $F$ we know that $h|_{\Xi}\equiv 1$, so that $\int_{X} h > \epsilon$. Let $x_{0}$ be a generic point for $h$ (e.g. any fully generic point for $f$), then ${\underline{d}}\{a : h(ax_{0})>0\} > \epsilon$. In other words, $${\underline{d}}\{ a : \forall j\leq K\, \exists N \in [L_{j},R_{j}]\, | {\mathbb{E}}_{g\in F_{N}} c(g)f(gax_{0}) | \geq \epsilon \} > \epsilon.$$ This contradicts Lemma \[lem:orth\] with $f(g)=f(gx_{0})$. For translations on compact groups we obtain the same conclusion everywhere. It is not clear to us whether an analogous statement holds for general uniquely ergodic systems. \[cor:good-weight-everywhere-conv-to-zero\] Let $G$ be a lcsc group with a $C$-tempered strong Følner sequence $(F_{n})$. Let $c\in L^{\infty}(G)$ be a function bounded by $1$ that satisfies the condition ($\perp$). Let also $\Omega$ be a compact group and $\chi : G\to\Omega$ a continuous homomorphism. Then for every $\phi\in C(\Omega)$ we have $$\lim_{N\to\infty} {\mathbb{E}}_{g\in F_{N}} c(g) \phi(\chi(g)\omega) = 0 \quad \text{for every } \omega\in\Omega.$$ We may assume that $\chi$ has dense image, so that the translation action by $\chi$ becomes ergodic. By Theorem \[thm:good-weight-ptw-conv-to-zero\] we obtain the conclusion a.e. and the claim follows by uniform continuity of $\phi$. 
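For a concrete instance of such a pair $(\Omega,\chi)$ (an added illustration, not from the original text): take $G={\mathbb{R}}$ and let $\chi(t)$ be the rotation by angle $t$, a continuous homomorphism into the compact group $SO(2)$. The sketch below checks the homomorphism property $\chi(s)\chi(t)=\chi(s+t)$ numerically.

```python
import math

# chi: R -> SO(2), a continuous homomorphism into a compact matrix group.
def chi(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s, t = 0.7, 1.9
P, Q = matmul(chi(s), chi(t)), chi(s + t)
err = max(abs(P[i][j] - Q[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)  # chi(s)chi(t) = chi(s+t) up to rounding
```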
Return times theorem ==================== We turn to the deduction of the return times theorem (Theorem \[thm:rtt-amenable\]). This will require two distinct applications of Theorem \[thm:good-weight-ptw-conv-to-zero\]. We begin with a Wiener–Wintner type result. Recall that the Kronecker factor of a measure-preserving dynamical system corresponds to the reversible part of the Jacobs–de Leeuw–Glicksberg decomposition of the associated Koopman representation. In particular, it is spanned by the finite-dimensional $G$-invariant subspaces of $L^{2}(X)$. We refer to [@isem-book] for a treatment of the JdLG decomposition. Let $F \subset L^{2}(X)$ be a $d$-dimensional $G$-invariant subspace and $f\in F$. We will show that for a.e. $x\in X$ we have $f(gx) = \phi(\chi(g)u)$ for some $\phi\in C(U(d))$, continuous representation $\chi : G \to U(d)$, and a.e. $g\in G$. To this end choose an orthonormal basis $(f_{i})_{i=1,\dots,d}$ of $F$. Then by the invariance assumption we have $f_{i}(g\cdot) = \sum_{j} c_{i,j} f_{j}(\cdot)$, and the matrix $(c_{i,j})=:\chi(g)$ is unitary since the $G$-action on $X$ is measure-preserving. This gives us a $d$-dimensional measurable representation $\chi$ that is automatically continuous [@0837.43002 Theorem 22.18]. The point $u=(u_{i})$ is given by the coordinate representation $f=\sum u_{i}f_{i}$. Thus we have $f(g\cdot) = \sum_{i}(\chi(g)u)_{i} f_{i}(\cdot)$ in $L^{2}(X)$ and hence, fixing some measurable representatives for $f_{i}$’s, a.e. on $X$. By Fubini’s theorem we obtain a full measure subset of $X$ such that the above identity holds for a.e. $g\in G$. For every $x$ from this set we obtain the claim with the continuous function $\phi(U) = \sum_{i} (Uu)_{i}f_{i}(x)$. \[cor:wiener-wintner\] Let $G$ be a lcsc group with a $C$-tempered strong Følner sequence $(F_{n})$. 
Then for every ergodic measure-preserving system $(X,G)$ and every $f\in L^{\infty}(X)$ there exists a full measure set $\tilde X \subset X$ such that the following holds. Let $\Omega$ be a compact group and $\chi : G\to\Omega$ a continuous homomorphism. Then for every $\phi\in C(\Omega)$, every $\omega\in\Omega$ and every $x\in \tilde X$ the limit $$\lim_{N\to\infty} {\mathbb{E}}_{g\in F_{N}} f(gx) \phi(\chi(g)\omega)$$ exists. By Lemma \[lem:ae-perp\] and Corollary \[cor:good-weight-everywhere-conv-to-zero\] we obtain the conclusion for $f$ orthogonal to the Kronecker factor. By linearity and in view of the Lindenstrauss maximal inequality [@MR1865397 Theorem 3.2] it remains to consider $f$ in a finite-dimensional invariant subspace of $L^{2}(X)$. In this case, for a.e. $x\in X$ we have $f(gx)=\phi'(\chi'(g)u_{0})$ for some finite-dimensional representation $\chi':G\to U(d)$, some $u_{0}\in U(d)$, some $\phi'\in C(U(d))$ and a.e. $g\in G$. The result now follows from uniqueness of the Haar measure on the closure of $\chi\times\chi'(G)$. A different proof using unique ergodicity of an ergodic group extension of a uniquely ergodic system can be found in [@MR1195256]. Finally, the return times theorem follows from a juxtaposition of previous results. By Lemma \[lem:ae-perp\] and Theorem \[thm:good-weight-ptw-conv-to-zero\] the conclusion holds for $f\in L^{\infty}(X)$ orthogonal to the Kronecker factor. By linearity and in view of the Lindenstrauss maximal inequality [@MR1865397 Theorem 3.2] it remains to consider $f$ in a finite-dimensional invariant subspace of $L^{2}(X)$. In this case, for a.e. $x\in X$ we have $f(gx)=\phi(\chi(g)u_{0})$ for some finite-dimensional representation $\chi:G\to U(d)$, some $u_{0}\in U(d)$, some $\phi\in C(U(d))$ and a.e. $g\in G$. The conclusion now follows from Corollary \[cor:wiener-wintner\].
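The simplest instance of such weighted averages, $G=\mathbb{Z}$ acting by an irrational rotation and $c(n)$ a character, can be checked numerically. The following sketch is purely illustrative and not part of the argument (all names and parameters are ours): the average converges to $f(x)$ for the resonant character and to $0$ for every other character.

```python
import cmath

def weighted_average(x, beta, theta, N):
    """(1/N) * sum_{n<N} e^{2*pi*i*n*theta} * f(x + n*beta) for f(y) = e^{2*pi*i*y},
    i.e. a Wiener-Wintner average for the irrational rotation y -> y + beta (mod 1)."""
    total = 0j
    for n in range(N):
        weight = cmath.exp(2j * cmath.pi * n * theta)      # character evaluated along the orbit
        orbit = cmath.exp(2j * cmath.pi * (x + n * beta))  # f sampled along the orbit
        total += weight * orbit
    return total / N

beta = 2 ** 0.5 - 1   # irrational rotation number
x0 = 0.3              # starting point

# Resonant character (theta = -beta): every summand equals f(x0),
# so the average is exactly f(x0) for all N.
resonant = weighted_average(x0, beta, -beta, 5000)

# Off-resonant character: a geometric sum, so the averages tend to 0.
off = weighted_average(x0, beta, 0.25, 5000)
```

The off-resonant average is bounded by $2/(N\,|1-e^{2\pi i(\beta+\theta)}|)$, which for $N=5000$ is already of order $10^{-4}$.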
--- abstract: 'Detecting manipulated images has become a significant emerging challenge. The advent of image sharing platforms and the easy availability of advanced photo editing software have resulted in large quantities of manipulated images being shared on the internet. While the intent behind such manipulations varies widely, concerns about the spread of fake news and misinformation are growing. Current state-of-the-art methods for detecting these manipulated images suffer from a lack of training data due to the laborious labeling process. We address this problem in this paper, for which we introduce a manipulated image generation process that creates true positives using currently available datasets. Drawing from traditional work on image blending, we propose a novel generator for creating such examples. In addition, we also propose to further create examples that force the algorithm to focus on boundary artifacts during training. Strong experimental results validate our proposal.' author: - | Peng Zhou[^1^]{}Bor-Chun Chen[^1^]{}Xintong Han[^2^]{}Mahyar Najibi[^1^]{}\ Abhinav Shrivastava[^1^]{}Ser Nam Lim[^3^]{}Larry S. Davis[^1^]{}\ [[^1^]{}University of Maryland, College Park ]{} [[^2^]{}Malong Technologies ]{} [[^3^]{}Facebook\ ]{} bibliography: - 'main.bib' title: 'Generate, Segment and Refine: Towards Generic Manipulation Segmentation' --- Introduction ============ ![**Examples of manipulated images across different datasets.** Columns from left to right are images in CASIA [@dong2013casia], COVER [@wen2016coverage], Carvalho [@de2013exposing], and In-The-Wild [@huh2018fighting]. The odd rows are manipulated images and the even rows are the ground truth masks. Different datasets contain different distributions (from animals to people), manipulation techniques (from copy-move (the second column) to splicing (the remaining columns)) and post-processing methods (from no post-processing to various processes including filtering, illumination, and blurring).
[]{data-label="fig:eg"}](./images/dataset.pdf){width="1\linewidth"} ![image](./images/framework_new.pdf){width="0.96\linewidth"} Manipulated photos are becoming ubiquitous on social media due to the availability of advanced editing software, including powerful generative adversarial models such as [@isola2017image; @yeh2017semantic]. While such images have been created for a variety of purposes, including memes, satires, etc., there are growing concerns about the abuse of manipulated images to spread fake news and misinformation. To this end, a variety of solutions have been developed towards detecting such manipulated images. While a number of proposed solutions pose the problem as a classification task [@cozzolino2018forensictransfer; @zhou2017two], where the goal is to classify whether a given image has been tampered with, there is great utility in solutions that are capable of detecting manipulated regions in a given image [@huh2018fighting; @zhou2017two; @park2018double; @salloum2018image]. In this paper, we similarly treat this problem as a semantic segmentation task and adapt GANs [@goodfellow2014generative] to generate samples that alleviate the lack of training data. The lack of training data has been an ongoing problem for training models to detect manipulated images. Scouring the internet for “real” tampered images [@moreira2018image] is a laborious process that often also leads to over-fitting in the training process. Alternatively, one could employ a self-supervised process, where detected objects in one image are spliced onto another, with the caveat that such a process often results in training images that are not realistic. Of course, the best approach for generating training samples is to employ professional labelers to create realistic-looking manipulated images, but this remains a very tedious process.
It is therefore not surprising that existing datasets [@huh2018fighting; @dong2010casia; @dong2013casia; @wen2016coverage; @de2013exposing] are often not comprehensive enough to train models that generalize well. Additionally, in contrast to standard semantic image segmentation, correctly segmenting manipulated regions depends more on visual artifacts that are often created at the boundaries of manipulated regions than on semantic content [@bappy2017exploiting; @zhou2018learning]. Several challenges exist in recognizing these boundary artifacts. First, the space of manipulations is very diverse. One can, for example, do a [*copy-move*]{}, which copies and pastes image regions within the same image (the second column in Figure \[fig:eg\]), or [*splice*]{}, which copies a region from one image and pastes it into another image (the remaining columns in Figure \[fig:eg\]). Second, a variety of post-processing operations such as compression, blurring, and various color transformations make it harder to detect boundary artifacts caused by tampering. See Figure \[fig:eg\] for some examples. Most existing methods [@huh2018fighting; @zhou2018learning; @park2018double; @salloum2018image] that utilize discriminative features like image metadata, noise models, or color artifacts due to, for example, Color Filter Array (CFA) inconsistencies, have failed to generalize well for these reasons. This paper introduces a two-pronged approach to (1) address the lack of comprehensive training data, as well as (2) focus the training process on learning to recognize boundary artifacts better. We adopt GANs for addressing (1), but instead of relying on prior GAN methods [@isola2017image; @CycleGAN2017; @karras2017progressive] that mainly explore image-level manipulation, we introduce a novel objective function (Section \[Generator\]) that optimizes for the realism of the manipulated regions by blending tampered regions in existing datasets to assist segmentation.
That is, given an annotated image from an existing dataset, our GAN takes the given annotated regions and optimizes via a blending-based objective function to enhance the realism of the regions. Blending has been shown to be effective in creating training images for the task of object detection in [@debi2017cutpaste], and this forms our main motivation in formulating our GAN. To address (2), we propose a segmentation and refinement procedure. The segmentation stage localizes manipulated regions by learning to spot boundary artifacts. To further prevent the network from just focusing on semantic content, the refinement stage refines the predicted manipulation boundaries with authentic background and feeds the new manipulated images back to the segmentation network. We will show empirically that the segmentation and refinement stages have the effect of focusing the model’s attention on boundary artifacts during learning (see Table \[tab:ablation\]). We design an architecture called GSR-Net which includes these three components: a generation stage, a segmentation stage and a refinement stage. The architecture of GSR-Net is shown in Figure \[fig:frame\]. During training, we alternately train the generation GAN, followed by the segmentation and refinement stages, which take as input the output of the generation stage as well as images from the training datasets. The additional varieties of manipulation artifacts provided by both the generation and refinement stages produce models that exhibit very good generalization ability. We evaluate GSR-Net on four public benchmarks and show that it performs better than or on par with state-of-the-art methods. Experiments with two different post-processing attacks further demonstrate the robustness of GSR-Net.
In summary, the contributions of this paper are 1) A framework that augments existing datasets in a way that specifically addresses the main weaknesses of current approaches without requiring new annotation efforts; 2) Introducing a generation stage with a novel objective function based on blending for generating images effective for training models to detect tampered regions; 3) Introducing a novel refinement stage that encourages the learning of boundary artifacts inherent in manipulated regions, which, to the best of our knowledge, no prior work in this field has utilized to help training. Related Work ============ **Image Manipulation Segmentation**. Park et al. [@park2018double] train a network to find JPEG compression discrepancies between manipulated and authentic regions. Zhou et al. [@zhou2017two; @zhou2018learning] harness noise features to find inconsistencies within a manipulated image. Huh et al. [@huh2018fighting] treat the problem as anomaly segmentation and use metadata to locate abnormal patches. The features used in these works are based on the assumption that manipulated regions come from a different image, which is not the case in copy-move manipulation. However, our method directly focuses on general artifacts in the RGB channel without specific feature extraction and thus can be applied to copy-move segmentation. The more closely related works of Salloum et al. [@salloum2018image] and Bappy et al. [@bappy2017exploiting] show the potential of boundary artifacts across different image manipulation techniques. Salloum et al. [@salloum2018image] adapt a Multi-task Fully Convolutional Network (MFCN) [@long2015fully] for manipulation segmentation by providing both segmentation and edge annotations. Bappy et al. [@bappy2017exploiting] design a Long Short-Term Memory (LSTM) [@hochreiter1997long] based network to identify RGB boundary artifacts at both the patch and pixel level.
These methods are sources of motivation for us to exploit boundary artifacts as a strong cue for detecting manipulations. **GAN Based Image Editing**. GAN based image editing approaches have witnessed a rapid emergence and impressive results have been demonstrated recently [@tsai2017deep; @lalonde2007using; @wang2017high; @karras2017progressive; @CycleGAN2017]. Prior and concurrent works force the output of the GAN to be conditioned on input images through extra regression losses (e.g., $\ell_2$ loss) or discrete labels. However, these methods manipulate whole images and do not fully explore region-based manipulation. In contrast, our GAN manipulates minor regions and helps a manipulation segmentation network generalize better. A more closely related work is Tsai et al. [@tsai2017deep], which generates natural composite images using both scene parsing and harmonized ground truth. Even though it targets region manipulation, Section \[ablate\] shows that our method performs better in terms of assisting segmentation. **Adversarial Training**. Discriminative feature learning has motivated recent research on adversarial training for several tasks. Shrivastava et al. [@shrivastava2017learning] propose a simulated and unsupervised learning approach which utilizes synthetic images to generate realistic images. Wang et al. [@wang2017fast] boost the performance on occluded and deformed objects through an online hard negative generation network. Wei et al. [@wei2017object] investigate an adversarial erasing approach to learn dense and complete semantic segmentation. Le et al. [@le2018a+] propose an adversarial shadow attenuation network to make correct predictions on hard shadow examples. However, these approaches are difficult to adapt to manipulation segmentation because they either generate whole synthetic images or introduce new manipulations such as erasing. In contrast, we refine manipulated regions with original ones to assist the segmentation network.
Approach ======== We describe GSR-Net in detail in the following sections. Key to the generation is the utilization of a GAN with a loss function centered around blending to optimize for producing realistic training images. The segmentation and refinement stages are specially designed to single out boundaries of the manipulated regions in order to guide the training process to pay extra attention to boundary artifacts. ![Generation stage. The generator $G$ learns to refine the manipulated regions to match with authentic regions given a simple copy-pasted image $M$ and a position mask where the manipulated regions are. $M$ is created using the tampered image $S$, ground truth mask $K$ and authentic target image $T$. The discriminator $D$ learns to classify manipulated images from authentic ones.[]{data-label="fig:g"}](./images/generate.pdf){width="50.00000%"} Generation {#Generator} ---------- **Generator.** Referring to Figure \[fig:frame\] (a), the generator is given as input both copy-pasted images and ground truth masks. To prepare the input images, we start with the training samples in manipulation datasets (e.g., CASIA 2.0 [@dong2013casia]). Given a training image $S$, the corresponding ground truth binary mask $K$ and an authentic target image $T$ from a clean dataset (e.g., COCO [@lin2014microsoft]), we first create a simple copy-pasted image $M$ by taking $S$ as foreground and $T$ as background: $$M = K \odot S + (1-K) \odot T,$$ where $\odot$ represents pointwise multiplication.
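The copy-paste composite $M = K \odot S + (1-K) \odot T$ can be sketched directly in NumPy; the toy arrays below are hypothetical stand-ins for the source image, target image and mask.

```python
import numpy as np

def composite(S, K, T):
    """M = K * S + (1 - K) * T: source S where the binary mask K is 1, target T elsewhere."""
    K = K.astype(S.dtype)
    return K * S + (1.0 - K) * T

# Hypothetical 4x4 single-channel example.
S = np.full((4, 4), 0.9)                   # tampered source image
T = np.full((4, 4), 0.1)                   # authentic target image
K = np.zeros((4, 4)); K[1:3, 1:3] = 1.0    # ground-truth binary mask

M = composite(S, K, T)                     # pasted block is 0.9, background 0.1
```

For $H \times W \times 3$ images the mask would need a trailing channel axis (e.g. `K[..., None]`) so broadcasting applies it to every channel.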
In Poisson blending [@perez2003poisson], the final value of pixel $i$ in the manipulated regions is $$\begin{aligned} b_i & = \displaystyle \operatorname*{arg\,min}_{b_i} \sum_{s_i\in S, \mathcal{N}_i \subset S }{||\nabla b_i-\nabla s_i||_2} \nonumber \\ & + \sum_{s_i\in S, \mathcal{N}_i \not \subset S}{||b_i-t_i||_2}, \end{aligned}$$ where $\nabla$ denotes the gradient, $\mathcal{N}_i$ is the neighborhood (i.e., up, down, left and right) of the pixel at position $i$, $b_i$ is the pixel in the blended image $B$, $s_i$ is the pixel in $S$ and $t_i$ is the pixel in $T$. Similar to Poisson blending, we optimize the generator to blend neighborhoods in the resulting image that now contains copy-pasted regions and background regions. A key part of our loss function enforces the shapes of the tampered regions, while maintaining the background regions. To maintain background regions, we utilize an $\ell_1$ loss as in [@isola2017image] to reconstruct the background: $$L_\text{bg}=\frac{1}{N_\text{bg}}\sum_{t_i\in T, k_i=0}||m_i-t_i||_1,$$ where $N_\text{bg}$ is the total number of pixels in the background, $m_i$ is the pixel in $M$ and $k_i$ is the value in mask $K$ at position $i$. To maintain the shape of manipulated regions, we apply a Laplacian operator to the pasted regions and reconstruct the gradient of this region to match the source region: $$L_\text{grad}=\frac{1}{N_\text{fg}}\sum_{s_i\in S, k_i=1}||\Delta m_i-\Delta s_i||_1,$$ where $\Delta$ denotes the Laplacian operator and $N_\text{fg}$ is the total number of pixels in pasted regions. To further constrain the shape of pasted regions, we add an additional edge loss denoted by $$L_\text{edge}=\frac{1}{N_\text{edge}}\sum_{s_i\in S, e_i=1}||m_i-s_i||_1,$$ where $N_\text{edge}$ is the number of boundary pixels and $e_i$ is the value of the edge mask at position $i$, which is obtained by the absolute difference between a dilation and an erosion on $K$.
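A minimal NumPy sketch of the three reconstruction terms above, assuming single-channel images, a 4-neighbor discrete Laplacian, and a 3x3 structuring element for the dilation/erosion edge mask. The function names are illustrative, not the paper's code.

```python
import numpy as np

def laplacian(img):
    """4-neighbor discrete Laplacian with replicated borders."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def edge_mask(K):
    """Dilation minus erosion of binary mask K with a 3x3 structuring element."""
    p = np.pad(K, 1, mode="constant")
    H, W = K.shape
    shifts = np.stack([p[i:i + H, j:j + W] for i in range(3) for j in range(3)])
    return shifts.max(axis=0) - shifts.min(axis=0)

def generator_losses(M, S, T, K):
    """L_bg, L_grad and L_edge from the text, as per-pixel means over the relevant regions."""
    E = edge_mask(K)
    L_bg = np.abs(M - T)[K == 0].mean()                          # keep the background
    L_grad = np.abs(laplacian(M) - laplacian(S))[K == 1].mean()  # match source gradients
    L_edge = np.abs(M - S)[E == 1].mean()                        # pin the boundary to the source
    return L_bg, L_grad, L_edge
```

For a simple copy-paste $M = K \odot S + (1-K) \odot T$, `L_bg` is exactly zero while `L_grad` and `L_edge` penalize the hard seam, which is what the generator then learns to smooth out.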
To generate realistic manipulated images, we add an adversarial loss $L_\text{adv}$, as explained below, that serves to encourage the generator to produce increasingly realistic images as training progresses. **Discriminator.** In our discriminator, a crucial detail to point out is that the manipulated regions typically occupy only a small area in the image. Hence, it is beneficial to restrict the GAN discriminator’s attention to the structure in local image patches. This is reminiscent of “PatchGAN” [@isola2017image] that only penalizes structure at the scale of patches. Similar to PatchGAN, our discriminator applies a final fully convolutional layer at a patch scale of $N \times N$. The discriminator distinguishes the authentic image $T$ as real and the generated image $G(K,M)$ as fake by maximizing: $$\begin{aligned} L_\text{adv} & =\mathbb{E}_{T}{[\log(D(K,T))]} \nonumber \\ & + \mathbb{E}_{M}{[\log(1-D(K,G(K,M)))]}, \end{aligned}$$ where $K$ is concatenated with $G(K,M)$ or $T$ as the input to the discriminator. The final loss function of the generator is given as $$L_G =L_\text{bg} + \lambda_\text{grad} L_\text{grad} + \lambda_\text{edge} L_\text{edge} + \lambda_\text{adv} L_\text{adv},$$ where $\lambda_\text{grad}$, $\lambda_\text{edge}$, and $\lambda_\text{adv}$ are parameters which control the importance of the corresponding loss terms. Under this objective, the generator preserves background and texture information of pasted regions while blending the manipulated regions with the background. ![Segmentation stage. We concatenate lower level features to predict boundary artifacts and then concatenate back the boundary feature to the segmentation branch to learn to fill the interior of boundaries. The backbone is DeepLab VGG-16.](./images/segment.pdf){width="50.00000%"} \[fig:seg\] Segmentation ------------ For segmentation, we simply adopt the publicly available VGG-16 [@simonyan2014very] based DeepLab model [@chen2018deeplab].
The network structure is depicted in Figure \[fig:frame\] (c), consisting of a boundary branch predicting the manipulated boundaries and a segmentation branch predicting the interior. In particular, to enhance attention on boundary artifacts, we introduce boundary information by subtracting the erosion from the dilation of the binary ground truth mask to obtain the boundary mask. We then predict this boundary mask by concatenating bi-linearly up-sampled intermediate features and passing them to a $1 \times 1$ convolutional layer to form the boundary branch. Finally, we concatenate the output features of the boundary branch with the up-sampled features of the segmentation branch. Empirically, we notice that such multi-task learning helps the generalization of the final model. Only the segmentation branch output is used for evaluation during inference. During training, we select the copy-pasted examples $M$, generated examples $G(M)$ and training samples $S$ in the dataset as input to the segmentation network, which provides a larger variety of manipulations. The loss function of the segmentation network is an averaged two-class softmax cross-entropy loss. Refinement ---------- The goal of the refinement stage is to draw attention to the boundary artifacts during training, taking into account the fact that boundary artifacts play a more pivotal role than semantic content in detecting manipulations [@bappy2017exploiting; @zhou2018learning]. While we might employ prior erasing-based adversarial mining methods [@wei2017object; @wang2017fast], they are not suitable for our purpose because they would introduce new erasing artifacts. Instead, the refinement stage utilizes the prediction of the segmentation stage to produce new boundary artifacts by replacing predicted boundaries with original regions.
As illustrated in Figure \[fig:frame\] (d), given the authentic target image $T$ into which the manipulated regions were inserted, the manipulated image $M$ (which could also be the generated image $G(M)$), and the manipulated boundary prediction $P$ from the segmentation stage, we replace the pixels in the predicted boundaries with the authentic regions in $T$ and create a novel manipulated image: $$M' = T \odot P + M \odot (1-P),$$ where $M'$ is the novel manipulated image with new boundary artifacts. The corresponding segmentation ground truth now becomes $$K' = K - K \odot P,$$ where $K'$ is the new manipulated mask for $M'$. The new boundary artifact mask can be extracted in the same way as in the previous step. Notice that the refinement stage utilizes the target images $T$ to help training, providing more side information to spot the artifacts. Taking as input the new manipulated images, the same segmentation network then learns to predict the new manipulated boundaries and interior regions. Similar to [@wei2017object], multiple refinement operations are possible and there is a tradeoff between training time and performance. The difference, however, is that the segmentation network in the refinement stage shares weights with that in the segmentation stage. The weight sharing enables us to use a single segmentation network at inference. As a result, the network learns to focus more attention on boundary artifacts with no additional cost at inference time. Experiments =========== We evaluate the performance of GSR-Net on four public benchmarks and compare it with state-of-the-art methods. We also analyze its robustness under several forms of post-processing. Implementation Details ---------------------- GSR-Net is implemented in Tensorflow [@abadi2016tensorflow] and trained on an NVIDIA GeForce TITAN P6000. The input to the generation network (both generator and discriminator) is resized to $256 \times 256$.
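The refinement step defined by $M' = T \odot P + M \odot (1-P)$ and $K' = K - K \odot P$ above amounts to a single masked replacement; this NumPy sketch uses a hand-made boundary prediction $P$ as a stand-in for the segmentation network's output.

```python
import numpy as np

def refine(M, T, K, P):
    """M' = T*P + M*(1-P): put authentic pixels back on the predicted boundary P;
    K' = K - K*P: drop those pixels from the ground-truth mask."""
    M_new = T * P + M * (1.0 - P)
    K_new = K - K * P
    return M_new, K_new

# Hypothetical toy data: pretend the top edge of a 2x2 tampered block was detected.
M = np.full((4, 4), 0.9); T = np.full((4, 4), 0.1)
K = np.zeros((4, 4)); K[1:3, 1:3] = 1.0
P = np.zeros((4, 4)); P[1, 1:3] = 1.0

M_new, K_new = refine(M, T, K, P)
# Detected boundary pixels now carry authentic content and leave the mask,
# so the segmentation network must find the newly exposed boundary.
```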
The generator is based on U-net [@ronneberger2015u] and the discriminator has the same structure as the $70 \times 70$ PatchGAN [@isola2017image] without the batch normalization layers. We add batch normalization [@ioffe2015batch] to the DeepLab VGG-16 model. The segmentation network is fine-tuned from ImageNet [@deng2009imagenet] pre-trained weights and the generation network is trained from scratch. We use the Adam [@kingma2014adam] optimizer with a fixed learning rate of $1 \times 10^{-4}$ for all the subnetworks. The optimizers of the generator, discriminator and segmentation network are updated in an alternating fashion. To avoid overfitting, weight decay with a factor of $5 \times 10^{-5}$ and $50\%$ dropout [@srivastava2014dropout] are applied to the segmentation network. The hyperparameters $(\lambda_\text{grad},\lambda_\text{edge},\lambda_\text{adv})$ are set to $(1,2,5)$ empirically to balance the loss values. Only random flipping augmentation is applied during training. We feed the copy-pasted images, generated images, and the training samples to the segmentation network. We train the whole network jointly for 50K iterations with a batch size of 4. Only the segmentation network is used for inference and we use the segmentation branch as the final prediction. Due to the small batch size used during training, we utilize instance normalization as in [@isola2017image] on every test image. After prediction, instead of using mean-shift as in [@huh2018fighting], we simply dilate and threshold connected components to remove small noisy particles.
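The final cleanup described above (thresholding connected components to remove small noisy particles) can be approximated with a plain flood fill; `drop_small_components` and `min_size` are hypothetical names, since the paper does not specify its exact dilation and threshold values.

```python
import numpy as np
from collections import deque

def drop_small_components(mask, min_size):
    """Zero out 4-connected components of a binary mask smaller than min_size pixels."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    H, W = mask.shape
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:                      # BFS flood fill of one component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:         # keep only sufficiently large components
                    for y, x in comp:
                        out[y, x] = True
    return out
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled flood fill; the sketch keeps to the standard library plus NumPy.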
| Method | Carvalho MCC | Carvalho F1 | In-The-Wild MCC | In-The-Wild F1 | COVER MCC | COVER F1 | CASIA MCC | CASIA F1 |
|--------|--------------|-------------|-----------------|----------------|-----------|-----------|-----------|----------|
| NOI [@mahdian2009using] | 0.255 | 0.343 | 0.159 | 0.278 | 0.172 | 0.269 | 0.180 | 0.263 |
| CFA [@ferrara2012image] | 0.164 | 0.292 | 0.144 | 0.270 | 0.050 | 0.190 | 0.108 | 0.207 |
| MFCN [@salloum2018image] | 0.408 | 0.480 | - | - | - | - | 0.520 | 0.541 |
| RGB-N [@zhou2018learning] | 0.261 | 0.383 | 0.290 | 0.424 | 0.334 | 0.379 | 0.364 | 0.408 |
| EXIF-consistency [@huh2018fighting]\* | 0.420 | 0.520 | 0.415 | 0.504 | 0.102 | 0.276 | 0.127 | 0.204 |
| DeepLab (baseline) | 0.343 | 0.420 | 0.352 | 0.472 | 0.304 | 0.376 | 0.435 | 0.474 |
| **GSR-Net (ours)** | **0.462** | **0.525** | **0.446** | **0.555** | **0.439** | **0.489** | **0.553** | **0.574** |

: **Pixel-level MCC and F1 comparison on four datasets.**[]{data-label="tab:f1"}

Datasets and Experiment Setting ------------------------------- **Datasets**. We evaluate our performance on the following four datasets: $\bullet$**In-The-Wild [@huh2018fighting]**: In-The-Wild is a splicing dataset with 201 spliced images collected online. The annotation is manually created by [@huh2018fighting] and the manipulated regions are usually people and animals. $\bullet$**COVER [@wen2016coverage]**: COVER focuses on copy-move manipulation and has 100 images. The manipulated objects are used to cover similar objects in the original authentic images and are thus challenging for humans to recognize visually without close inspection. $\bullet$**CASIA [@dong2013casia; @dong2010casia]**: CASIA has two versions. CASIA 1.0 contains 921 manipulated images including splicing and copy-move. The objects are carefully selected to match the context of the background. Cropped regions are subjected to post-processing including rotation, distortion, and scaling. CASIA 2.0 is a more complicated dataset with 5123 images. Manipulations include splicing and copy-move.
Post-processing like filtering and blurring is applied to make the regions visually realistic. The manipulated regions cover animals, textures, natural scenes, etc. We use CASIA 2.0 to train our network and test it on CASIA 1.0 in Section \[main\]. $\bullet$**Carvalho [@de2013exposing]**: Carvalho is a manipulation dataset designed to conceal illumination differences between manipulated regions and authentic regions. The dataset contains 100 images and all the manipulated objects are people. Contrast and illumination are adjusted in a post-processing step. **Evaluation Metrics**. We use the pixel-level F1 score and Matthews correlation coefficient (MCC) as the evaluation metrics when comparing to other approaches. For a fair comparison, following the same measurement as [@salloum2018image; @huh2018fighting; @zhou2018learning], we vary the prediction threshold to get a binary prediction mask and report the optimal score over the whole dataset.

| Dataset | Carvalho | In-the-Wild | COVER | CASIA |
|-------------|----------|-------------|-------|-------|
| DeepLab | 0.420 | 0.472 | 0.376 | 0.474 |
| DL + CP | 0.446 | 0.504 | 0.410 | 0.503 |
| DL + G | 0.460 | 0.524 | 0.434 | 0.506 |
| DL + DIH | 0.384 | 0.421 | 0.342 | 0.420 |
| DL + CP + G | 0.472 | 0.528 | 0.444 | 0.507 |
| GS-Net | 0.515 | 0.540 | 0.455 | 0.545 |
| GSR-Net | 0.525 | 0.555 | 0.489 | 0.574 |

: **Ablation analysis on four datasets.** Each entry is the F1 score on the corresponding dataset.[]{data-label="tab:ablation"}

Main Results {#main} ------------ In this section, we present our results for the task of manipulation segmentation. We fine-tune our model on CASIA 2.0 from the ImageNet pre-trained model and test the performance on the aforementioned four datasets. We compare with the methods described below: $\bullet$**NoI** [@mahdian2009using]: A noise inconsistency method which predicts regions as manipulated where the local noise is inconsistent with authentic regions. We use the code provided by Zampoglou et al. [@zampoglou2017large] for evaluation.
$\bullet$**CFA** [@ferrara2012image]: A CFA based method which estimates the internal CFA pattern of the camera for every patch in the image and segments out the regions with anomalous CFA features as manipulated regions. The evaluation code is based on Zampoglou et al. [@zampoglou2017large]. $\bullet$**RGB-N** [@zhou2018learning]: A two-stream Faster R-CNN based approach which combines features from the RGB and noise channels to make the final prediction. We train the model on CASIA 2.0 using the code provided by the authors [^1]. $\bullet$**MFCN** [@salloum2018image]: A multi-task FCN based method which harnesses both an edge mask and a segmentation mask for manipulation segmentation. Hole filling is applied to the edge branch to make the prediction. The final decision is the intersection of the two branches. We directly report the results from the paper since the code is not publicly available. $\bullet$**EXIF-consistency** [@huh2018fighting]: A self-consistency approach which utilizes metadata to learn features useful for manipulation localization. The prediction is made patch by patch and post-processing like mean-shift [@cheng1995mean] is used to obtain the pixel-level manipulation prediction. We use the code provided by the authors [^2] for evaluation. $\bullet$**DeepLab**: Our baseline model which directly adapts the DeepLab VGG-16 model to the manipulation segmentation task. No generation, boundary branch or refinement stage is added. $\bullet$**GSR-Net**: Our full model combining generation, segmentation and refinement for manipulation segmentation. The final results, presented in Table \[tab:f1\], highlight the advantage of GSR-Net. For supervised methods [@zhou2018learning; @salloum2018image], we train the model on CASIA 2.0 and evaluate on all four datasets. For the other, unsupervised methods [@mahdian2009using; @ferrara2012image; @huh2018fighting], we directly test the model on all datasets.
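The threshold-sweep protocol from the Evaluation Metrics paragraph can be sketched as follows. This is an illustrative per-image version; the function names and threshold grid are our assumptions, not the authors' evaluation code.

```python
import numpy as np

def f1_mcc(pred, gt):
    """Pixel-level F1 and Matthews correlation coefficient for boolean masks."""
    tp = np.sum(pred & gt); fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt); tn = np.sum(~pred & ~gt)
    f1 = 2.0 * tp / max(2 * tp + fp + fn, 1)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return f1, mcc

def best_scores(prob, gt, thresholds=np.linspace(0.05, 0.95, 19)):
    """Binarize the probability map at each threshold and keep the optimal scores."""
    scores = [f1_mcc(prob >= t, gt) for t in thresholds]
    return max(s[0] for s in scores), max(s[1] for s in scores)
```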
GSR-Net outperforms other approaches by a large margin on COVER, suggesting the advantage of our network on copy-move manipulation. GSR-Net also shows an improvement on CASIA 1.0 and Carvalho. Moreover, in terms of computation time, EXIF-consistency takes $160$ times more computation (80 seconds for an $800\times 1200$ image on average) than ours (0.5s per image). Compared to boundary artifact based methods, our GSR-Net outperforms MFCN by a large margin, indicating the effectiveness of the generation and refinement stages. In addition, no hole filling is required since our approach does not perform late fusion with the boundary branch, but instead utilizes boundary artifacts to guide the segmentation branch. Our method outperforms the baseline model by a large margin, showing the effectiveness of the proposed generation, segmentation and refinement stages. ![Qualitative visualization. The first row shows manipulated images from different datasets. The second row shows the final manipulation segmentation prediction. The third row illustrates the output of the boundary artifacts branch. The last row is the ground truth. []{data-label="fig:qr"}](./images/q_visualization_v.pdf){width="\linewidth"} Ablation Analysis {#ablate} ----------------- We quantitatively analyze the influence of each component in GSR-Net in terms of F1 score. $\bullet$**DL + CP**: DeepLab VGG-16 model with just the segmentation output, using simple copy-pasted (no generator) and CASIA 2.0 images during training. $\bullet$**DL + G**: DeepLab VGG-16 model with just the segmentation output, using generated and CASIA 2.0 images during training. $\bullet$**DL + [@tsai2017deep]** (DL + DIH in Table \[tab:ablation\]): DeepLab VGG-16 model with just the segmentation output, using the images generated from [@tsai2017deep] and CASIA 2.0 images during training. We adapt the deep image harmonization network for the generation stage as it also manipulates regions.
$\bullet$**DL + CP + G**: DeepLab VGG-16 model with just the segmentation output, using copy-pasted, generated and CASIA 2.0 images during training. $\bullet$**DL + GDA**: DeepLab VGG-16 model with only the segmentation output, using copy-pasted, generated and CASIA 2.0 images during training. We augment manipulated images from the beginning by replacing the boundary artifact regions with the regions in target images using ground truth masks. $\bullet$**DL + G edge**: Generation and DeepLab VGG-16 network only predict boundary masks. Hole filling is applied to generate the final pixel-level prediction. $\bullet$**DL + GR**: DeepLab VGG-16 model with only the segmentation output, using copy-pasted, generated and CASIA 2.0 images during training. The refinement stage is added. $\bullet$**DL + GR2**: Same as **DL + GR** except that the refinement stage is applied twice. $\bullet$**GS-Net**: Generation and segmentation network with boundary artifact guided manipulation segmentation. No refinement stage is incorporated. ![Qualitative visualization of the generation network. The first two columns show the authentic background and manipulation mask. As the number of epochs increases, the manipulated region matches better with the background and thus boundary artifacts are harder to identify.[]{data-label="fig:gan"}](./images/gan_cmp.pdf){width="1\linewidth"} The results are shown in Table \[tab:ablation\]. Starting from our baseline model, simply adding copy-pasted images (**DL + CP**) yields an improvement by broadening the manipulation distribution. In addition, replacing copy-pasted images with generated images (**DL + G**) also shows an improvement compared to **DL + CP** on most of the datasets, as it refines the boundaries left by naive copy-pasting. As expected, adding both copy-pasted images and generated hard examples (**DL + CP + G**) is more useful because the network has access to a larger distribution of manipulations.
Compared to applying the deep harmonization network (**DL + [@tsai2017deep]**), our generation approach (**DL + G**) performs better as it aligns well with the natural process of manipulation. The results also indicate the impact of the boundary guided segmentation network. Directly predicting segmentation (**DL + CP + G**) does not explicitly learn manipulation artifacts, and thus has limited generalization ability compared to **GS-Net**. Furthermore, **GSR-Net** boosts the performance over **GS-Net** since the refinement stage introduces new boundary artifacts. Robustness to Attacks {#robust} --------------------- We apply both JPEG compression and image scaling attacks to test images of the In-The-Wild and Carvalho datasets. We compare GSR-Net with RGB-N [@zhou2018learning] and EXIF-selfconsistency [@huh2018fighting] using their publicly available code, and MFCN [@salloum2018image] using the numbers reported in their paper. Figure \[fig:attack\] shows the results, which indicate that our approach yields more stable performance than prior methods. Segmentation with COCO Annotations ---------------------------------- This experiment shows how much gain our model achieves without using the manipulated images in CASIA 2.0. Instead of carefully manipulated training data, we only utilize the object annotations in COCO to create manipulated images. We compare the results of using different training data as follows: $\bullet$**CP + S**: Only using copy-pasted images to train the segmentation network. $\bullet$**CP + G + S**: Using both copy-pasted and generated images. $\bullet$**CP + G + SR**: Using copy-pasted and generated images. The refinement stage is applied. Results are presented in Table \[tab:weakly\]. The performance using only copy-pasted images (**CP + S**) on the four datasets indicates that our network truly learns boundary artifacts.
Also, the improvement after adding generated images (**CP + G + S**) shows that our generation network provides useful manipulation examples that increase generalization. Last, the refinement stage (**CP + G + SR**) boosts performance further by encouraging the network to spot new boundary artifacts.

  Dataset       Carvalho    In-The-Wild   COVER       CASIA
  ------------- ----------- ------------- ----------- -----------
  CP + S        0.343       0.430         0.351       0.242
  CP + G + S    0.354       0.441         0.355       0.270
  CP + G + SR   **0.418**   **0.479**     **0.381**   **0.331**

  : **F1 score manipulation segmentation comparison trained with COCO annotations.**[]{data-label="tab:weakly"}

Qualitative Results ------------------- **Generation Visualization**. We illustrate some visualizations of the generation network in Figure \[fig:gan\]. It is clear that the generation network learns to match the pasted region with the background during training. As a result, the boundary artifacts become more subtle and the generation network produces harder examples for the segmentation network. **Segmentation Results**. We present qualitative segmentation results on four datasets in Figure \[fig:qr\]. Unsurprisingly, the boundary branch outputs the potential boundary artifacts in manipulated images and the other branch fills in the interior based on the predicted manipulated boundaries. The examples indicate that our approach deals well with both splicing and copy-move manipulation based on the manipulation clues from the boundaries. Conclusion ========== We propose a novel segmentation framework that first utilizes a generation network to enable generalization across a variety of manipulations. Starting from copy-pasted examples, the generation network generates harder examples during training. We also design a boundary artifact guided segmentation and refinement network to focus on manipulation artifacts rather than semantic content.
Furthermore, the segmentation and refinement stages share the same weights, allowing for much faster inference. Extensive experiments demonstrate the generalization ability and effectiveness of GSR-Net on four standard datasets and show state-of-the-art performance. The manipulation segmentation problem is still far from being solved due to the large variation of manipulations and post-processing methods. Including more manipulation techniques in the generation network could potentially boost the generalization ability of the existing model and is part of our future research. Acknowledgement {#acknowledgement .unnumbered} =============== This work was supported by the DARPA MediFor program under cooperative agreement FA87501620191, “Physical and Semantic Integrity Measures for Media Forensics”. [^1]: <https://github.com/pengzhou1108/RGB-N> [^2]: <https://github.com/minyoungg/selfconsistency>
--- abstract: 'We present a detailed analysis and implementation of a splitting strategy to identify simultaneously the local-volatility surface and the jump-size distribution from quoted European prices. The underlying model consists of a jump-diffusion driven asset with time and price dependent volatility. Our approach uses a forward Dupire-type partial-integro-differential equation for the option prices to produce a parameter-to-solution map. The ill-posed inverse problem for such a map is then solved by means of a Tikhonov-type convex regularization. The proofs of convergence and stability of the algorithm are provided together with numerical examples that substantiate the robustness of the method both for synthetic and real data.' author: - 'Vinicius Albani[^1] and Jorge P. Zubelli[^2]' title: '**A Splitting Strategy for the Calibration of Jump-Diffusion Models**' --- [**keywords:**]{} Jump-Diffusion Simulation, Partial Integro-Differential Equations, Finite Difference Schemes, Inverse Problems, Tikhonov-type regularization. Introduction {#sec:introduction} ============ Model selection and calibration is still one of the crucial problems in derivative trading and hedging. From a mathematical viewpoint it should be treated as an ill-posed inverse problem and handled by suitable regularization as in the work of [@ern] and [@schervar]. The subject is deeply connected to nonparametric statistics as described by [@somersalo]. The problem of model selection and calibration has thus attracted the attention of a number of authors as can be seen in the book of [@ConTan2003] and references therein. Amongst the most successful nonparametric approaches, the local volatility model of [@dupire] has become one of the market’s standards.
It consists in assuming that the underlying price $S_t$ satisfies a stochastic dynamics of the form $$dS_t = r S_t dt + \sigma(t,S_t) S_t dW_t \mbox{, }$$ where $W_t$ is the Wiener process under the risk-neutral measure and $\sigma$ is the so-called local volatility. Besides its intrinsic elegance and simplicity, the success of Dupire’s model is due to at least two factors: Firstly, the existence of a forward partial differential equation (PDE) satisfied by the price of call (or put) options when considered as functions of the strike price $K$ and the time to expiration $\tau$. Secondly, the importance of having a backwards pricing PDE to compute other (perhaps exotic) derivatives. Yet, one of the main shortcomings of local volatility models is the fact that such models are still diffusive ones. Thus, well-known stylized facts such as fat tails and jumps in the log-returns become awkward to fit and justify ([@ConTan2003]). The present article is concerned with the calibration of jump-diffusion models with local volatility. We make use of a fairly recent contribution to the literature, namely the existence of a forward equation of Dupire’s type for such models presented in [@BenCon2015]. The availability of a forward equation allows us, for each fixed time and underlying price, to look at the option prices as a function of the time to expiration and the strike price. Furthermore, by considering collected data from past underlying and derivative prices, we can enrich our observed data and strive for better calibration results. Efforts to calibrate jump-diffusion models from option prices have been undertaken by a number of authors from both parametric and nonparametric perspectives. See [@AndAnd2000], [@ConTan2003] and [@volguide].
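As a concrete illustration of the first factor, Dupire's forward PDE can be marched explicitly in the variables $(\tau,K)$. The sketch below is a toy, not the implementation of any of the cited works: it assumes a constant volatility $\sigma=0.2$, $r=0$ and a truncated strike grid, so the result can be checked against the Black–Scholes at-the-money value.

```python
import numpy as np

# Toy explicit finite-difference march of Dupire's forward PDE
#   C_tau = 0.5*sigma(tau,K)^2 * K^2 * C_KK - r*K*C_K,  C(0,K) = max(S0-K, 0).
# Constant sigma, r = 0 and the grid bounds are illustrative assumptions.
S0, r, T, sig = 1.0, 0.0, 0.5, 0.2
K = np.linspace(0.0, 3.0, 151)            # strike grid, dK = 0.02
dK = K[1] - K[0]
dt = 2.0e-4                               # satisfies dt <= dK^2/(sig^2*K_max^2)
C = np.maximum(S0 - K, 0.0)               # payoff at zero time to expiration
for _ in range(int(round(T / dt))):
    C_KK = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK**2
    C_K = (C[2:] - C[:-2]) / (2.0 * dK)
    C[1:-1] += dt * (0.5 * sig**2 * K[1:-1]**2 * C_KK - r * K[1:-1] * C_K)
    C[0], C[-1] = S0, 0.0                 # boundary values (r = 0)
atm_price = np.interp(S0, K, C)           # C(T, K = S0)
```

With $r=0$ the Black–Scholes at-the-money price is $S_0(2\Phi(\sigma\sqrt T/2)-1)\approx 0.0564\,S_0$, which the marched surface reproduces to grid accuracy; one forward sweep prices calls at all strikes and maturities simultaneously, which is exactly what makes the forward equation attractive for calibration.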
In this work, differently from previous efforts, we focus on using Dupire’s forward equation, as generalized in the work of [@BenCon2015], and propose a splitting calibration methodology to recover simultaneously the local volatility surface and the jump-size distribution. For a fixed dataset of European vanilla option prices we calibrate, for example, the volatility surface for some fixed jump-size distribution. Then, we find a new reconstruction of the jump-size distribution for the volatility surface previously calibrated. We repeat these steps until a stopping criterion for convergence is satisfied. The resulting pair of functional parameters is indeed a stable approximation of the true local volatility surface and the jump-size distribution, whenever they exist. It is important to mention that the dataset used to identify this pair of functional parameters is the same one used in Dupire’s local volatility calibration problem. No additional data is required, as would be necessary if we wanted to calibrate both parameters at the same time using standard regularization techniques ([@ern]). The resulting methodology is amenable to regularization techniques as those studied in [@AlbZub2014] and [@AlbAscZub2016]. In particular, different a priori distributions could be used. As a byproduct, we prove convergence estimates for the calibration of the jump-diffusion models as the data noise decreases. We also obtain stable and robust calibration algorithms which perform well under both real and synthetic data. The plan for this work is the following: In Section \[sec:preliminaries\] we set the notation and review some basic facts, including the fundamental forward equation for jump-diffusion processes. In Section \[sec:par2sol\] we discuss the main functional-analytic properties of the parameter-to-solution map. Section \[sec:splitting\] is concerned with the splitting strategy and the regularization of inverse problems.
In particular, we review the tangential cone condition and prove its validity under certain assumptions in our context. This condition, in turn, ensures the convergence of Landweber-type methods. The results in this section are not specific to the jump-diffusion model under consideration. Indeed, they apply to more general inverse problems, although, to the best of our knowledge, they have not been presented in this form. In Section \[sec:calibration\] we compute the gradient of the nonlinear parameter-to-solution map, which is crucial for the iterative methods. Section \[sec:numerics\] is concerned with the numerical methods for the solution of the calibration problem and its validation. Differently from [@ConVol2005a; @ConVol2005b], we consider also the case where the jump activity may be infinite. Section \[sec:examples\] presents a number of numerical examples that validate the theoretical results and display the effectiveness of our methodology. We close in Section \[sec:conclusion\] with some final remarks and suggestions for further work. Preliminaries {#sec:preliminaries} ============= Let us consider the probability space $(\Omega,\mathcal{G},\mathbb{P})$ with a filtration $\{\mathscr{F}_t\}_{t\geq 0}$. Denote by $S_t$ the price at time $t \geq 0$ of our underlying asset and assume that it satisfies $$\begin{gathered} S_t = S_0 + \int_0^t rS_{t^\prime-}dt^\prime + \int_0^t\sigma(t^\prime,S_{t^\prime-})S_{t^\prime-}dW_{t^\prime} + \\ \int_0^t \int_{{\mathbb{R}}}S_{t^\prime-}(\text{e}^y-1)\tilde N(dt^\prime dy),~ ~ ~0\leq t\leq T, \label{ito1}\end{gathered}$$ where $W$ is a Brownian motion, and $\tilde N$ is the compensated version of the Poisson random measure on $[0,T]\times {\mathbb{R}}$, denoted by $N$, with compensator $\nu(dy)dt$. See @ConTan2003.
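A quick sanity check of the dynamics above is that the discounted price is a martingale, $\mathbb{E}[\text{e}^{-rT}S_T]=S_0$, precisely because the jump term is compensated. The Euler-scheme sketch below assumes, purely for illustration, a finite-activity Merton-style jump law $\nu(dy)=\lambda\,\mathcal N(\mu_J,\sigma_J^2)(dy)$ and a constant local volatility; none of these parameter choices come from the paper.

```python
import numpy as np

# Euler Monte Carlo for the jump-diffusion of the preliminaries.
# Assumed jump law: Gaussian log-jumps with intensity lam (illustrative only).
rng = np.random.default_rng(0)
S0, r, T = 1.0, 0.05, 1.0
sigma = lambda t, S: 0.2                      # local volatility (constant here)
lam, mu_J, sig_J = 0.5, -0.05, 0.1            # jump intensity and log-jump law
kappa = np.exp(mu_J + 0.5 * sig_J**2) - 1.0   # E[e^Y - 1]: the compensator drift

n_paths, n_steps = 200_000, 100
dt = T / n_steps
S = np.full(n_paths, S0)
for n in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    jump = rng.random(n_paths) < lam * dt     # at most one jump per step
    Y = rng.normal(mu_J, sig_J, n_paths)
    S = S * (1.0 + (r - lam * kappa) * dt     # compensated drift
             + sigma(n * dt, S) * dW
             + jump * (np.exp(Y) - 1.0))

disc_mean = np.exp(-r * T) * S.mean()          # should be close to S0
```

The term $-\lambda\kappa\,dt$ is the discretized compensator: without it the simulated discounted mean would drift away from $S_0$ at rate $\lambda\kappa$.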
Assume also that $\sigma$ is positive and bounded from below and above by positive constants, and the compensator $\nu$ satisfies $$\int_{|y|>1} \text{e}^{2y}\nu(dy) < \infty.$$ Since $\sigma$ is uniformly bounded and nonnegative, by setting $t=0$ and denoting by $\tau$ the time to maturity and by $K$ the strike price, the seminal work of @BenCon2015 shows that the price of a European call option on the asset in , defined by $$C(\tau,K) = \text{e}^{-r\tau}\mathbb{E}[\max\{0,S_\tau-K\}|\mathscr F_{0}],$$ is the unique solution in the sense of distributions of the partial integro-differential equation (PIDE): $$\begin{gathered} C_\tau(\tau,K) - \frac{1}{2}K^2\sigma(\tau,K)^2 C_{KK}(\tau,K) + rKC_K(\tau,K)=\\ \int_{{\mathbb{R}}}\nu(dz)\text{e}^z\left(C(\tau,K\text{e}^{-z}) - C(\tau,K) - (\text{e}^{-z}-1)KC_K(\tau,K)\right), \label{pide1}\end{gathered}$$ with $\tau\geq 0$, $K>0$, and the initial condition $$C(0,K) = \max\{0,S_0-K\}, ~K>0. \label{pide1b}$$ Since the diffusion coefficient in Equation is unbounded and goes to zero as $K\rightarrow 0$, let us perform the change of variable $y = \log(K/S_0)$ and define $$a(\tau,y) = \frac{1}{2}\sigma(\tau,S_0\text{e}^y)^2 \qquad \text{and} \qquad u(\tau,y) = C(\tau,S_0\text{e}^y)/S_0.$$ So, denoting $D = [0,T]\times{\mathbb{R}}$, the PIDE problem - becomes $$\begin{gathered} u_\tau(\tau,y) - a(\tau,y) \left(u_{yy}(\tau,y) - u_{y}(\tau,y)\right) + ru_y(\tau,y)=\\ \int_{{\mathbb{R}}}\nu(dz)\text{e}^z\left(u(\tau,y-z) - u(\tau,y) - (\text{e}^{-z}-1)u_y(\tau,y)\right), \label{eq:pide2}\end{gathered}$$ with $(\tau,y)\in D$, and the initial condition $$u(0,y) = \max\{0,1-\text{e}^y\}, ~y\in{\mathbb{R}}.
\label{eq:pide2b}$$ Instead of using $\nu$ directly in the PIDE problem -, we consider, as in @KinMay2011, the double-exponential tail of $\nu$ $$\varphi(y) = \varphi(\nu;y) = \left\{ \begin{array}{ll} \int_{-\infty}^y(\text{e}^y-\text{e}^x)\nu(dx), & y < 0\\ \int_{y}^\infty(\text{e}^x-\text{e}^y)\nu(dx), & y > 0, \end{array} \right. \label{eq:tail}$$ and the convolution operator $$I_\varphi f (y) := \varphi * f (y) = \int_{{\mathbb{R}}}\varphi(y-x)f(x)dx.$$ Applying Lemma 2.6 in @BenCon2015 to the integral part of the PIDE , $$\begin{gathered} \int_{{\mathbb{R}}}\nu(dz)\text{e}^z\left(u(\tau,y-z) - u(\tau,y) - (\text{e}^{-z}-1)u_y(\tau,y)\right)\\ = \int_{{\mathbb{R}}}\varphi(y-z)(u_{yy}(\tau,z) - u_y(\tau,z))dz. \label{eq:integral_part}\end{gathered}$$ In what follows, we replace the integral part of the PIDE by the right-hand side of . Define $g(\tau,y) := \max\{0,1-\text{e}^y\}$, so, by the definition of $u$, it follows that $u(\tau,y) = {{\tilde{v}}}(\tau,y) + g(\tau,y)$, where ${{\tilde{v}}}$ is the solution of the PIDE: $${{\tilde{v}}}_\tau = a\left({{\tilde{v}}}_{yy}-{{\tilde{v}}}_{y}\right) - r{{\tilde{v}}}_{y} + I_\varphi \left({{\tilde{v}}}_{yy}-{{\tilde{v}}}_{y}\right) + G \label{eq:pide3}$$ with homogeneous boundary and initial conditions, where $$G = a\left(g_{yy}-g_{y}\right) - rg_{y} + I_\varphi \left(g_{yy}-g_{y}\right),$$ with $g_{yy}$ and $g_{y}$ weak derivatives of $g$. By assuming that $a \in C_B(D)$ with $a_y \in L^\infty\left([0,T],L^2({\mathbb{R}})\right)$ and $a(\tau,y) \geq c > 0$, for every $(\tau,y)\in D$, and $\nu$ is a Lévy measure satisfying $\displaystyle\int_{x\geq 1}x\text{e}^x\nu(dx) < \infty$, Theorem 3.9 in @KinMay2011 states the existence and uniqueness of ${{\tilde{v}}}$. The proof that $u$ is a weak solution of the PIDE problem - is an easy adaptation of the proof of Theorem 3.9 in @KinMay2011.
To see that, just replace the test functions in $H^1({\mathbb{R}})$ by test functions with compact support in $C_0^\infty(D)$, as in @BenCon2015, and replace ${{\tilde{v}}}$ by $u-g$. Uniqueness of solution can also be proved by analytical methods as in @BarImb2008 [@GarMen2002] or by probabilistic arguments as in Theorem 2.8 in @BenCon2015. An alternative proof is to consider the difference between two different solutions of the PIDE problem -. The resulting function is the solution of the PIDE with $G \equiv 0$. By Theorem 3.7 in @KinMay2011, the norm of the solution of the PIDE is dominated by the norm of $G$, which is zero. So, the difference is also zero and uniqueness holds. Since ${{\tilde{v}}}$ and $g$ are continuous in $D$, it follows that $u$ is also a continuous function. In Section \[sec:par2sol\] we give an alternative proof for the existence and uniqueness of a solution of the PIDE problem - based on the classical theory of parabolic partial differential equations. See [@lady]. The Parameter to Solution Map and its Properties {#sec:par2sol} ================================================ The goal of the present section is to show the well-posedness of the PIDE problem - and some regularity properties of the parameter-to-solution map. We make the following additional assumption: The restrictions of the double-exponential tail $\varphi$ to the sets $(-\infty,0)$ and $(0,+\infty)$ are in the Sobolev spaces $W^{2,1}(-\infty,0)$ and $W^{2,1}(0,+\infty)$, respectively. \[ass:4\] $W^{2,1}(-\infty,0)$ (respectively $W^{2,1}(0,+\infty)$) is the Sobolev space of $L^1(-\infty,0)$ (respectively $L^1(0,+\infty)$) functions whose first and second weak derivatives are in $L^1(-\infty,0)$ (respectively $L^1(0,+\infty)$). The above assumption holds, for example, if we assume that the measure $\nu$ is such that the functions $x \in (-\infty,0) \mapsto \nu((-\infty,x])$ and $x \in (0,+\infty) \mapsto \nu([x,+\infty))$ are continuous.
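A concrete finite-activity example is the positive part of Kou's double-exponential jump law, $\nu(dx)=\lambda p\eta_1\text{e}^{-\eta_1 x}dx$ on $x>0$ with $\eta_1>1$ (so that $\text{e}^x$ is $\nu$-integrable at $+\infty$), for which the tail has the closed form $\varphi(y)=\lambda p\,\text{e}^{(1-\eta_1)y}/(\eta_1-1)$ for $y>0$. The sketch below uses illustrative parameters, not values from the paper, and checks the closed form against numerical quadrature.

```python
import numpy as np

# Double-exponential tail phi for the positive part of Kou's jump measure
#   nu(dx) = lam*p*eta1*exp(-eta1*x) dx on x > 0, with eta1 > 1.
# All parameter values are illustrative assumptions.
lam, p, eta1 = 0.3, 0.6, 10.0

def phi_analytic(y):
    # phi(y) = int_y^inf (e^x - e^y) nu(dx) = lam*p*exp((1-eta1)*y)/(eta1-1)
    return lam * p * np.exp((1.0 - eta1) * y) / (eta1 - 1.0)

def phi_numeric(y, x_max=6.0, n=200_001):
    # Trapezoid rule on [y, y + x_max]; the truncated tail is
    # O(exp(-(eta1-1)*(y + x_max))) and hence negligible here.
    x = np.linspace(y, y + x_max, n)
    f = lam * p * eta1 * (np.exp(x) - np.exp(y)) * np.exp(-eta1 * x)
    h = x[1] - x[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

rel_err = abs(phi_numeric(0.5) - phi_analytic(0.5)) / phi_analytic(0.5)
```

An analogous computation on $x<0$ gives $\varphi(y)=\lambda q\,\text{e}^{(1+\eta_2)y}/(1+\eta_2)$ for $y<0$; both branches are smooth and exponentially decaying, so the assumption above is satisfied.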
We recall that the set of non-negative non-increasing functions has a nonempty interior in $W^{2,1}(0,+\infty)$, as does the set of non-negative non-decreasing functions in $W^{2,1}(-\infty,0)$. This is of particular importance since we need to show that the direct operator has a Fréchet derivative. In order to define the domain of the direct operator in the Banach space $$X = H^{1+\varepsilon}(D)\times W^{2,1}(-\infty,0) \times W^{2,1}(0,+\infty),$$ let $0<\underline{a}\leq\overline{a}<\infty$ be fixed constants and $a_0:D\rightarrow (\underline{a},\overline{a})$ be a fixed continuous function such that its weak derivatives with respect to $\tau$ and $y$ are in $L^2(D)$. $$\begin{gathered} \mathcal{D}(F) = \left\{ (\tilde a,\varphi_-,\varphi_+)\in X ~:~\mbox{let}~ a = \tilde a + a_0,~\mbox{be s.t.}~ \underline{a}\leq a \leq\overline{a}, \right.\\ \left. ~\mbox{let}~\varphi ~\mbox{be s.t.},~\varphi = \varphi_- ~\mbox{in}~ (-\infty,0) ~\mbox{and}~ \varphi = \varphi_+ ~\mbox{in}~(0,+\infty)\right\}\end{gathered}$$ It is easy to see that $\varphi \in L^1({\mathbb{R}})$. For simplicity, in what follows we shall write $(a,\varphi) \in \mathcal{D}(F)$, meaning that $a$ and $\varphi$ are given as in the definition of $\mathcal{D}(F)$. Let $(a,\varphi)$ be in $\mathcal{D}(F)$ and, in addition, assume that $\|\varphi\|_{L^1({\mathbb{R}})} < C^{-1}$, where the constant $C$ depends on $\underline{a}$, $\overline{a}$ and $r$. Then, there exists a unique solution of the PIDE problem - in $W^{1,2}_{2,loc}(D)$. \[prop:existence\] The existence and uniqueness proof follows by a fixed point argument. Given $(a,\varphi)$ in $\mathcal{D}(F)$ and $f\in L^2(D)$, define the operator $G$ that associates each $v\in W^{1,2}_2(D)$ to $w\in W^{1,2}_2(D)$, solution of $$w_\tau = a(w_{yy}-w_y) - rw_y + I_\varphi(v_{yy}-v_y) + f \label{eq:pde1}$$ with homogeneous boundary conditions. By Young’s inequality, $I_\varphi(v_{yy}-v_y) \in L^2(D)$.
So, by Proposition A.1 in [@EggEng2005], it follows that the PDE problem has a unique solution and $\|w\|_{W^{1,2}_2(D)}\leq C\|I_\varphi(v_{yy}-v_y)+f\|_{L^2(D)}$. Again, by Young’s inequality, $\|I_\varphi(v_{yy}-v_y)\|_{L^2(D)}\leq \|\varphi\|_{L^1({\mathbb{R}})}\|v_{yy}-v_y\|_{L^2(D)} \leq \|\varphi\|_{L^1({\mathbb{R}})}\|v\|_{W^{1,2}_2(D)}$. Since $\|\varphi\|_{L^1({\mathbb{R}})}< C^{-1}$, it follows that, for any $v\in W^{1,2}_2(D)$ with $v\not=0$, $\|w\|_{W^{1,2}_2(D)} < \|v\|_{W^{1,2}_2(D)} + C\|f\|_{L^2(D)}$. Let us see that $G$ is a contraction. For any $v_1,v_2 \in W^{1,2}_2(D)$, set $w_1 = G(v_1)$, $w_2 = G(v_2)$ and $w=w_1-w_2$. It follows that $w$ is the solution of with $f=0$ and $\|w_1-w_2\|_{W^{1,2}_2(D)} < \|v_1-v_2\|_{W^{1,2}_2(D)}$. So, $G$ is indeed a contraction in $W^{1,2}_2(D)$ and has a unique fixed point $\tilde w$, which is the unique solution of $$\tilde w_\tau = a(\tilde w_{yy}-\tilde w_y) - r\tilde w_y + I_\varphi(\tilde w_{yy}-\tilde w_y) + f, \label{eq:aux_pide}$$ with homogeneous boundary conditions. Any solution $u$ of the PIDE problem - can be written as $u = \tilde w + \tilde u$, where $\tilde w$ is the solution of the PIDE problem with $f = -I_{\varphi}(\tilde u_{yy} - \tilde u_y)$ and $\tilde u$ the solution of with $\varphi =0$, $f=0$ and the same boundary and initial conditions as the PIDE problem -. The existence and uniqueness of $\tilde u \in W^{1,2}_{2,loc}(D)$ is guaranteed by Corollary A.1 in [@EggEng2005]. Therefore, the assertion follows. Since $\tilde w$ in the above proof is a fixed point of $G$, it satisfies the inequality $$\|\tilde w\|_{W^{1,2}_{p}(D)} \leq C\left(\|\varphi\|_{L^1({\mathbb{R}})}\|\tilde w\|_{W^{1,2}_{p}(D)}+\|f\|_{L^2(D)}\right) \mbox{ .}$$ Assuming further that $\|\varphi\|_{L^1({\mathbb{R}})} \leq K/C$ with the constant $0<K<1$, we have that $$\|\tilde w\|_{W^{1,2}_{p}(D)} \leq \frac{C}{1-K}\|f\|_{L^2(D)}. 
\label{eq:aux_estimate}$$ The direct operator $F : \mathcal{D}(F) \rightarrow W^{1,2}_2(D)$ associates $(\tilde a,\varphi_-,\varphi_+)$ to $u(a,\varphi) - u(a_0,0)$, where $u(a,\varphi)$ is the solution of the PIDE problem -, with $(a,\varphi)$ in $\mathcal{D}(F)$. In other words, $F(\tilde a,\varphi_-,\varphi_+)$ is the solution of the PIDE problem with homogeneous boundary condition and $f = -I_{\varphi}( u(a_0,0)_{yy} - u(a_0,0)_y)$. \[def:direct\_op\] For any $(a,\varphi)$ given by $\mathcal{D}(F)$, $u(a,\varphi)$ solution of the PIDE problem - satisfies $$\|u(a,\varphi)_{yy} - u(a,\varphi)_y\|_{L^2(D)} \leq \displaystyle\frac{C}{1-K}, \label{eq:unif_estimate}$$ with $C$ and $K$ depending on the bounds of the coefficients $a$ and $\varphi$. By Corollary A.1 in [@EggEng2005], $\|u(a_0,0)_{yy} - u(a_0,0)_y\|_{L^2(D)} < C$ for some constant $C$, and since $u(a,\varphi)-u(a_0,0) \in W^{1,2}_2(D)$ for any $(a,\varphi) \in \mathcal{D}(F)$, it follows that $$\begin{gathered} \|u(a,\varphi)_{yy} - u(a,\varphi)_y\|_{L^2(D)} = \|u(a,\varphi)_{yy} - u(a,\varphi)_y \pm (u(a_0,0)_{yy} - u(a_0,0)_y)\|_{L^2(D)}\\ \leq \|(u(a,\varphi)-u(a_0,0))_{yy} - (u(a,\varphi)-u(a_0,0))_y\|_{L^2(D)} +\| u(a_0,0)_{yy} - u(a_0,0)_y\|_{L^2(D)}. \end{gathered}$$ By Equation  and Corollary A.1 in [@EggEng2005], $$\begin{gathered} \|(u(a,\varphi)-u(a_0,0))_{yy} - (u(a,\varphi)-u(a_0,0))_y\|_{L^2(D)}\leq \|u(a_0,\varphi)-u(a_0,0)\|_{W^{1,2}_2(D)} \\ \leq \frac{C}{1-K}\|\varphi\|_{L^1({\mathbb{R}})}\|u(a_0,0)_{yy} - u(a_0,0)_y\|_{L^2(D)}. \end{gathered}$$ Since $\|\varphi\|_{L^1({\mathbb{R}})} \leq K/C$, it follows that $$\|u(a,\varphi)_{yy} - u(a,\varphi)_y\|_{L^2(D)} \leq \displaystyle\frac{C}{1-K},$$ for any $(a,\varphi)$ given by $\mathcal{D}(F)$. We can now state the following: The map $F : \mathcal{D}(F) \rightarrow W^{1,2}_2(D)$ is continuous. 
\[prop:continuity\] Let the sequence $\{(\tilde a_n,\varphi_{-,n},\varphi_{+,n})\}_{n\in{\mathbb{N}}}$ in $\mathcal{D}(F)$ converge to $(\tilde a,\varphi_{-},\varphi_{+})$. We must show that $\|F(\tilde a_n,\varphi_{-,n},\varphi_{+,n}) - F(\tilde a,\varphi_{-},\varphi_{+})\|\rightarrow 0$. Define $$w_n := F(\tilde a_n,\varphi_{-,n},\varphi_{+,n}) - F(\tilde a,\varphi_{-},\varphi_{+}) = u(a_n,\varphi_n) - u(a,\varphi).$$ By the linearity of the PIDE problem -, $w_n$ is the solution of the PIDE problem with $a$ and $\varphi$ replaced by $a_n$ and $\varphi_n$, respectively, homogeneous boundary conditions and $$f_n = (a_n-a)(u(a,\varphi)_{yy} - u(a,\varphi)_y) - I_{\varphi_n-\varphi}(u(a,\varphi)_{yy} - u(a,\varphi)_y).$$ So, by the estimate and Young’s inequality for convolutions, $$\begin{gathered} \|w_n\|_{W^{1,2}_2(D)}\leq \displaystyle\frac{C}{1-K}\|(a_n-a)(u(a,\varphi)_{yy} - u(a,\varphi)_y) - I_{\varphi_n-\varphi}(u(a,\varphi)_{yy} - u(a,\varphi)_y)\|_{L^2(D)}\\ \leq \displaystyle\frac{C}{1-K}\left(\|(a_n-a)(u(a,\varphi)_{yy} - u(a,\varphi)_y)\|_{L^2(D)} +\right.\\ \left.
\|\varphi_n-\varphi\|_{L^1({\mathbb{R}})}\|u(a,\varphi)_{yy} - u(a,\varphi)_y\|_{L^2(D)}\right) \end{gathered}$$ By the Sobolev embedding (see Theorem 7.75 in [@iorio]), it follows that $$\begin{gathered} \|(a_n-a)(u(a,\varphi)_{yy} - u(a,\varphi)_y)\|_{L^2(D)} \leq \|a_n-a\|_{L^\infty(D)}\|u(a,\varphi)_{yy} - u(a,\varphi)_y\|_{L^2(D)}\\ \leq c\|a_n-a\|_{H^{1+\varepsilon}(D)}\|u(a,\varphi)_{yy} - u(a,\varphi)_y\|_{L^2(D)}\end{gathered}$$ The above estimate and Equation  imply that $$\begin{gathered} \|(a_n-a)(u_{yy} - u_y)\|_{L^2(D)} + \|\varphi_n-\varphi\|_{L^1({\mathbb{R}})}\|u_{yy} - u_y\|_{L^2(D)}\\ \leq \displaystyle\frac{\tilde C}{1-K}\left(\|a_n-a\|_{H^{1+\varepsilon}(D)} + \|\varphi_n-\varphi\|_{L^1({\mathbb{R}})}\right)\end{gathered}$$ Summarizing, $$\|w_n\|_{W^{1,2}_2(D)} \leq \left(\displaystyle\frac{\tilde C}{1-K}\right)^2\left(\|a_n-a\|_{H^{1+\varepsilon}(D)} + \|\varphi_n-\varphi\|_{L^1({\mathbb{R}})}\right),$$ and the assertion follows. The map $F : \mathcal{D}(F) \rightarrow W^{1,2}_2(D)$ is weakly continuous and compact. \[prop:compactness\] Let the sequence $\{(\tilde{a}_n,\varphi_{-,n},\varphi_{+,n})\}_{n\in{\mathbb{N}}}$ in $\mathcal{D}(F)$ converge weakly to $(\tilde a,\varphi_{-},\varphi_{+})$. Proceeding as in the proof of Proposition \[prop:continuity\], define $$w_n := F(\tilde{a}_n,\varphi_{-,n},\varphi_{+,n}) - F(\tilde a,\varphi_{-},\varphi_{+}) = u(a_n,\varphi_n) - u(a,\varphi).$$ So it satisfies the PIDE  with $a$ and $\varphi$ replaced by $a_n$ and $\varphi_n$, respectively, and homogeneous boundary conditions. Furthermore, it satisfies $$\begin{gathered} \|w_n\|_{W^{1,2}_2(D)} \leq \displaystyle\frac{C}{1-K}\left(\|(a_n-a)(u(a,\varphi)_{yy} - u(a,\varphi)_y)\|_{L^2(D)}\right. \\ +\left. \|I_{\varphi_n-\varphi}(u(a,\varphi)_{yy} - u(a,\varphi)_y)\|_{L^2(D)}\right). \label{sum2}\end{gathered}$$ We shall prove that each of the two terms on the RHS of Equation  goes to zero as $n\rightarrow \infty$.
In either case, we decompose the set $D$ as the disjoint union $D = D_M\cup D_M^c$, where $$D_M = [0,T]\times [-M,M],$$ with $M>0$. Concerning the first term on the RHS of Equation , we have by Sobolev’s embedding that $$\begin{gathered} \|(a_n-a)(u_{yy} - u_y)\|_{L^2(D)} = \|(a_n-a)(u_{yy} - u_y)\|_{L^2(D_M)} + \|(a_n-a)(u_{yy} - u_y)\|_{L^2(D_M^c)}\\ \leq \|a_n-a\|_{H^{1+\varepsilon/2}(D_M)}\|u_{yy} - u_y\|_{L^2(D_M)} + \|a_n-a\|_{H^{1+\varepsilon/2}(D_M^c)}\|u_{yy} - u_y\|_{L^2(D_M^c)} \label{sum3}\end{gathered}$$ By the compact immersion of $H^{1+\varepsilon}(D_M)$ into $H^{1+\varepsilon/2}(D_M)$ we have that weakly convergent sequences of $H^{1+\varepsilon}(D_M)$ are sent into norm convergent ones in $H^{1+\varepsilon/2}(D_M)$ (Proposition IV.4.4 in [@Tay2011]). Thus, $\|a_n-a\|_{H^{1+\varepsilon/2}(D_M)}\rightarrow 0$. Now, we recall that $\|u_{yy} - u_y\|_{L^2(D_M^c)}\rightarrow 0$ as $M\rightarrow +\infty$. To see that the RHS of Inequality  goes to zero, note that, given $\eta>0$, for a sufficiently large $M$, $\|a_n-a\|_{H^{1+\varepsilon/2}(D_M^c)}\|u_{yy} - u_y\|_{L^2(D_M^c)}<\eta/2$, since $\|a_n-a\|_{H^{1+\varepsilon/2}(D_M^c)}$ is dominated by $\|a_n-a\|_{H^{1+\varepsilon/2}(D)}$, which is uniformly bounded. In addition, $\|u_{yy} - u_y\|_{L^2(D_M)}$ is bounded by $\|u_{yy} - u_y\|_{L^2(D)}$, which is finite. Thus, for all sufficiently large $n\in {\mathbb{N}}$, and with the same $M$ of the previous estimate, $\|a_n-a\|_{H^{1+\varepsilon/2}(D_M)}\|u_{yy} - u_y\|_{L^2(D_M)} < \eta/2$. 
Concerning the convergence of the second term in Equation , by Jensen’s inequality, we have that $$\begin{gathered} \|I_{\varphi_n-\varphi}(u(a,\varphi)_{yy} - u(a,\varphi)_y)\|^2_{L^2(D)}\\ \leq \|\varphi_n-\varphi\|_{L^1({\mathbb{R}})}\displaystyle\int_{D}\int_{{\mathbb{R}}}|\varphi_n(x-y)-\varphi(x-y)|(u_{yy}(\tau,y) - u_y(\tau,y))^2dy d\tau dx.\end{gathered}$$ So, breaking it into the following two integrals, we get $$\begin{gathered} \displaystyle\int_{D}\int_{{\mathbb{R}}}|\varphi_n(x-y)-\varphi(x-y)|(u_{yy}(\tau,y) - u_y(\tau,y))^2dy d\tau dx \\ = \displaystyle\int_{D_M^c}\int_{{\mathbb{R}}}|\varphi_n(y)-\varphi(y)|(u_{yy}(\tau,x-y) - u_y(\tau,x-y))^2dy d\tau dx\\ + \displaystyle\int_{D_M}\int_{{\mathbb{R}}}|\varphi_n(x-y)-\varphi(x-y)|(u_{yy}(\tau,y) - u_y(\tau,y))^2dy d\tau dx =: I_1 + I_2.\end{gathered}$$ The integral $I_1$ goes to zero by the dominated convergence theorem as $M\rightarrow \infty$ (Theorem 1.50 in [@AdaFou2003]). By Fubini’s Theorem, it follows that $$I_2 = \displaystyle\int_{D}(u_{yy}(\tau,y) - u_y(\tau,y))^2\int_{|x|\leq M}|\varphi_n(x-y)-\varphi(x-y)|dxd\tau dy$$ For almost every $y\in {\mathbb{R}}$, the Rellich-Kondrachov theorem (Part II of Theorem 6.3 in [@AdaFou2003]) implies that $\int_{|x|\leq M}|\varphi_n(x-y)-\varphi(x-y)|dx$ goes to zero. Just recall that $\varphi|_{(-\infty,0)}\in W^{2,1}(-\infty,0)$ and $\varphi|_{(0,+\infty)}\in W^{2,1}(0,+\infty)$. By the estimate $$I_2 \leq \|\varphi_n-\varphi\|_{L^1({\mathbb{R}})}\|u_{yy}-u_y\|^2_{L^2(D)} \leq \frac{2KC}{(1-K)^2} \mbox{, }$$ we can apply the dominated convergence theorem to get that $I_2$ goes to zero as $n\rightarrow \infty$, for each fixed $M$. Therefore, $\|w_n\|_{W^{1,2}_2(D)}\rightarrow 0$ and the assertion follows. We formally define the derivative of $F$ and then we show that it is in fact the Fréchet derivative of $F$.
The derivative of $F$ at $(a,\varphi)$ in the direction $h = (h_1,h_2) \in X$, such that $(a+h_1,\varphi+h_2)\in \mathcal{D}(F)$, is the solution of the PIDE problem with homogeneous boundary condition and $$f = h_1(u(a,\varphi)_{yy} - u(a,\varphi)_y) - I_{h_2}(u(a,\varphi)_{yy} - u(a,\varphi)_y),$$ where $u(a,\varphi)$ denotes the solution of the PIDE problem -. This derivative is denoted by $F^\prime(a,\varphi)h$ or $u^\prime(a,\varphi)h$, and is in $W^{1,2}_2(D)$. By the proof of Proposition \[prop:existence\], for any $h_1 \in H^{1+\varepsilon}(D)$ and any $h_2 \in L^1({\mathbb{R}})$ the PIDE problem of the definition above still has a solution in $W^{1,2}_2(D)$. In addition, such a PIDE problem is linear with respect to $h = (h_1,h_2) \in X$. So, for every $(a,\varphi) \in \mathcal{D}(F)$, $h \mapsto F^\prime(a,\varphi)h$ is a linear and bounded map from $X$ to $W^{1,2}_2(D)$, satisfying $$\|F^\prime(a,\varphi)h\| \leq \left(\displaystyle\frac{C}{1-K}\right)^2\|h\|_X.$$ The map $F : \mathcal{D}(F) \rightarrow W^{1,2}_2(D)$ is Fréchet differentiable and satisfies $$\begin{gathered} \|F(a+h_1,\varphi+h_2) - F(a,\varphi) - F^\prime(a,\varphi)h\|_{W^{1,2}_2(D)} \\ \leq \displaystyle\frac{C}{1-K}\|h\|_X\|F(a+h_1,\varphi+h_2) - F(a,\varphi)\|_{W^{1,2}_2(D)}, \label{eq:tangential_pide} \end{gathered}$$ for any $(a,\varphi) \in \mathcal{D}(F)$ and any $h = (h_1,h_2)\in X$, such that $(a+h_1,\varphi+h_2) \in \mathcal{D}(F)$. \[prop:frechet\] Let $(a,\varphi) \in \mathcal{D}(F)$ be fixed and $h = (h_1,h_2)\in X$ be such that $(a+h_1,\varphi+h_2)\in \mathcal{D}(F)$. Define $$w = F(a+h_1,\varphi+h_2) - F(a,\varphi) - F^\prime(a,\varphi)h$$ and $v = F(a+h_1,\varphi+h_2) - F(a,\varphi)$.
By the linearity of the PIDE problems - and , $w$ is the solution of the PIDE problem , with homogeneous boundary conditions and $$f = -h_1(v_{yy} - v_y) + I_{h_2}(v_{yy} - v_y).$$ So, $w$ satisfies the estimate $$\|w\|_{W^{1,2}_2(D)}\leq \displaystyle\frac{C}{1-K}\|-h_1(v_{yy} - v_y) + I_{h_2}(v_{yy} - v_y)\|_{L^2(D)} \mbox{ .}$$ By the triangle inequality, Young’s inequality for the convolution and Sobolev’s embedding theorem, it follows that $$\begin{gathered} \|-h_1(v_{yy} - v_y) + I_{h_2}(v_{yy} - v_y)\|_{L^2(D)} \leq \|v_{yy}-v_y\|_{L^2(D)}\left(\|h_1\|_{H^{1+\varepsilon}(D)} + \|h_2\|_{L^1({\mathbb{R}})}\right)\\ \leq \|v\|_{W^{1,2}_2(D)}\|h\|_{X},\end{gathered}$$ and the asserted estimate holds. The set $\mathcal{D}(F)$ has a nonempty interior, $h\mapsto F^\prime(a,\varphi)h$ is a bounded linear map from $X$ to $W^{1,2}_2(D)$, and the estimate implies that $$\displaystyle\lim_{\|h\|_X\rightarrow 0}\frac{\|F(a+h_1,\varphi+h_2) - F(a,\varphi) - F^\prime(a,\varphi)h\|_{W^{1,2}_2(D)}}{\|h\|_X}=0 \mbox{ .}$$ Thus, $F$ is Fréchet differentiable. Splitting Strategy and Regularization {#sec:splitting} ===================================== In this section, under an abstract setting, we consider a Tikhonov-type regularization of the simultaneous calibration of two parameters from a set of observations. A splitting strategy is used to solve the resulting minimization problem. Results concerning the convergence of this approach to an approximate solution of the inverse problem are provided. They rely on certain assumptions which will be shown to hold for the calibration problem at hand, that of jump-diffusion local volatility models. Tikhonov-type Regularization {#sec:tikhonov} ---------------------------- Firstly, let us introduce some basic notions of Tikhonov-type regularization. This methodology has been used extensively for the solution of ill-posed inverse problems. See [@schervar] and [@ern] for more details.
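Before the abstract setting, a minimal finite-dimensional sketch may help fix ideas: for a linear, severely ill-conditioned forward map (a Hilbert matrix here, an illustrative stand-in for $F$), the unregularized solve amplifies the data noise enormously, while the Tikhonov minimizer with penalty $f_{x_0}(x)=\|x-x_0\|^2$, $x_0=0$, and exponent $p=2$ stays close to the true parameter.

```python
import numpy as np

# Toy Tikhonov regularization of a noisy, ill-conditioned linear problem.
# The 10x10 Hilbert matrix, noise level and alpha are illustrative choices.
n = 10
F = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(1)
y_delta = F @ x_true + 1e-3 * rng.normal(size=n)                 # noisy data

x_naive = np.linalg.solve(F, y_delta)                            # unregularized
alpha = 1e-4                                                     # reg. parameter
# Minimizer of ||F x - y_delta||^2 + alpha*||x||^2 in closed form:
x_tik = np.linalg.solve(F.T @ F + alpha * np.eye(n), F.T @ y_delta)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

The closed-form normal-equation solve is possible only because this toy $F$ is linear; for the nonlinear parameter-to-solution map of the previous section the Tikhonov functional must be minimized iteratively.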
Consider the map $F:\mathcal{D}(F)\subset X \rightarrow Y$ between two Banach spaces $X$ and $Y$. Given $\tilde y$ in the range of $F$, $\mathcal R(F)$, find some $x \in \mathcal{D}(F)$ that solves the equation: $$\tilde y = F(x). \label{eq:inverse_problem1}$$ Since there may be more than one element in $\mathcal{D}(F)$ solving , it is common to search for a solution $x^\dagger$ that minimizes some convex functional $f_{x_0}:\mathcal{D}(f_{x_0})\subset X\rightarrow {\mathbb{R}}_+$, which is related to some [*a priori*]{} information. So, $$x^\dagger \in \argmin\left\{f_{x_0}(x) ~:~x\in\mathcal{D}(F)\,\mbox{ and }\, F(x) = \tilde y\right\},$$ and such an $x^\dagger$ is called an $f_{x_0}$-minimizing solution. In general, it is not possible to have access to the data $\tilde y$ in $\mathcal R(F)$, but only to some imperfect approximation $y^\delta \in Y$ satisfying $$\|P\tilde y-y^\delta\|_Y \leq \delta, \label{eq:data}$$ where $\delta > 0$ is the noise level and $P:Y\rightarrow Y$ is a projection onto the subspace of $Y$ where $y^\delta$ is defined. For example, $P$ can represent the observation of $y$ on some discrete mesh. Since the inverse problem can be ill-posed, Tikhonov-type regularization is applied, i.e., we must find an element of $\mathcal{D}(F)$ that minimizes the (Tikhonov-type) functional: $$\mathcal F(x) = \phi(x) + \alpha f_{x_0}(x), \label{eq:tikhonov1}$$ where $$\phi(x) = \|F(x) - y^\delta\|^p_Y \label{eq:data_misfit}$$ is the [*data misfit*]{} or [*merit function*]{}, and $\alpha > 0$ is a constant called the [*regularization parameter*]{}. The penalization $f_{x_0}$ is called the [*regularization functional*]{}. The minimizers of in $\mathcal{D}:=\mathcal{D}(F)\cap\mathcal{D}(f_{x_0})$ are called Tikhonov minimizers or reconstructions, and are denoted by $x^{\delta}_{\alpha}$. The framework of convex regularization will now be used. See [@schervar] for more information. In what follows, we shall need the following assumptions. Let us assume that:

1. The topologies $T_X$ and $T_Y$ associated to $X$ and $Y$, respectively, are weaker than the corresponding norm topologies.

2. The exponent in Equation  satisfies $p \geq 1$.

3. The norm of $Y$ is sequentially lower semi-continuous with respect to $T_Y$.

4. $f_{x_0}$ is convex and continuous with respect to $T_X$.

5. The objective set satisfies $\mathcal{D}\not= \emptyset$, and $\mathcal D$ has a nonempty interior.

6. For every $\alpha > 0$ and $M > 0$ the level set $$M_{\alpha}(M) := \left\{x\in \mathcal D~:~ \mathcal F(x) \leq M\right\}$$ is sequentially pre-compact with respect to $T_X$.

7. For every $\alpha > 0$ and $M > 0$ the level set $M_{\alpha}(M)$ is sequentially closed w.r.t. $T_X$ and the restriction of $F$ to $M_{\alpha}(M)$ is sequentially continuous w.r.t. $T_X$ and $T_Y$.

\[ass:2\]

By Assumption \[ass:2\], the existence of stable Tikhonov minimizers is guaranteed by Theorems 3.22 and 3.23 in [@schervar]. If the inverse problem in has a solution, then, also based on Assumption \[ass:2\], Theorem 3.25 in [@schervar] says that there exists an $f_{x_0}$-minimizing solution of , and Theorem 3.26 states the convergence of a sequence of Tikhonov minimizers to an $f_{x_0}$-minimizing solution whenever $\delta\rightarrow 0$ and $\alpha = \alpha(\delta)$ satisfies the limits: $$\lim_{\delta\rightarrow 0}\alpha(\delta) = 0 ~\mbox{ and }~ \lim_{\delta\rightarrow 0}\frac{\delta^p}{\alpha(\delta)}=0. \label{eq:limits_alpha}$$

A Splitting Strategy Algorithm
------------------------------

The presence of jumps together with the diffusive part motivates separating the regularization into two parts. In this section we describe this approach in the general framework of convex regularization. Let $X$ be given by $X:= W \times Z$, where $W$ and $Z$ are Banach spaces. Consider $T_W$ and $T_Z$, two topologies on $W$ and $Z$, respectively, which are assumed to be weaker than the norm topologies of the corresponding spaces.
So, $X$ will be endowed with two natural topologies: the norm topology induced by $\|(w,z)\|_X = \|w\|_W+\|z\|_Z$ and the product topology $T_X:=T_W\times T_Z$, which is weaker than the norm topology. Consider again the operator $F:\mathcal{D}(F)\subset X \rightarrow Y$, where $F(x) = F(w,z)$. The penalty term in can be rewritten as $$\alpha f_{x_0}(x) = \alpha f_{x_0}(w,z) = \alpha \beta_1g_{w_0}(w) + \alpha \beta_2 h_{z_0}(z) = \alpha_1g_{w_0}(w) + \alpha_2 h_{z_0}(z), \label{eq:fx0}$$ where $\alpha_j = \alpha\cdot\beta_j$ with $\beta_j \geq 0$, $j=1,2$, and the functionals $g_{w_0}$ and $h_{z_0}$ are convex and continuous w.r.t. $T_W$ and $T_Z$, respectively. So, the Tikhonov-type functional now reads: $$\mathcal F(w,z) = \phi(w,z) + \alpha_1g_{w_0}(w) + \alpha_2 h_{z_0}(z). \label{eq:tikhonov2}$$ Let us assume that Assumption \[ass:2\] holds. Thus, if $\alpha_1,\alpha_2>0$, $\mathcal F(w,z)$ has minimizers in $\mathcal D$. Since the norm topology of $X$ and $T_X$ are defined by the products of the norm topologies of $W$ and $Z$, and $T_W$ and $T_Z$, respectively, the projection operators $P_W:(w,z)\mapsto w$ and $P_Z:(w,z)\mapsto z$ are continuous with respect to the norm topologies of $X$, $W$ and $Z$ and to $T_X$, $T_W$ and $T_Z$. For each $z \in Z$, define the operator $F_z:P_W(\mathcal D) \subset W \rightarrow Y$ as $F_z(w) = F(w,z)$, the Tikhonov-type functional $\mathcal F_z(w) = \mathcal F(w,z)$, and the set $\mathcal D_z = P_W(\mathcal D)\times\{z\}$. Similarly $F_w$, $\mathcal F_w$ and $\mathcal D_w$ are defined. Assume also that Items 5, 6 and 7 in Assumption \[ass:2\] remain valid whenever $\mathcal D$ is replaced by $P_W(\mathcal D)$ or $P_Z(\mathcal D)$, $F$ by $F_w$ or $F_z$ and $\mathcal F$ by $\mathcal F_w$ or $\mathcal F_z$. In this case, Theorems 3.22 and 3.23 in [@schervar] guarantee the existence of stable Tikhonov minimizers of $\mathcal F_w$ and $\mathcal F_z$, for each $w\in P_W(\mathcal D)$ and $z\in P_Z(\mathcal D)$.
Our approach is to split the iteration so that at each step the jump and the diffusive component are updated successively. More precisely, for any $w\in P_W(\mathcal D)$ (or $z\in P_Z(\mathcal D)$), set $w^0 = w$ ($z^0 = z$) and consider the iterations with $n\in{\mathbb{N}}$: $$\begin{aligned} z^n \in \argmin\left\{\mathcal F_{w^{n-1}}(z) ~:~ z\in P_Z(\mathcal D)\right\}\nonumber\\ w^n \in \argmin\left\{\mathcal F_{z^n}(w) ~:~ w\in P_W(\mathcal D)\right\}. \label{algorithm1}\end{aligned}$$ Repeat the iterations until some termination criterion is met. If the algorithm starts with $z$ instead of $w$, the order of the two iterations must be reversed. A stationary point of the functional $\mathcal F$ is some point $\hat x = (\hat w, \hat z) \in \mathcal D$ such that $$\hat w\in \argmin\{\mathcal F_{\hat z}(w)~:~ w \in P_W(\mathcal D)\} \quad\mbox{and}\quad \hat z\in \argmin\{\mathcal F_{\hat w}(z)~:~ z \in P_Z(\mathcal D)\}.$$ In what follows we shall assume the continuity of $\mathcal F$ with respect to $T_X$. This holds, for example, if $F$ and $f_{x_0}$ are $T_X$-continuous. This hypothesis is needed in the proof of the following proposition. For every initializing pair $(w,z) \in \mathcal{D}$, any convergent subsequence produced by the algorithm of Equation  converges to some stationary point of $\mathcal F$. \[prop:splitting1\] Consider the sequence $\{(w^n,z^n)\}_{n\in{\mathbb{N}}}$ defined by the iterations in . By construction, the sequence $\{\mathcal F (w^n,z^n)\}_{n\in{\mathbb{N}}}$ is non-increasing and bounded, and thus it converges. In addition, $\{(w^n,z^n)\}_{n\in{\mathbb{N}}}$ is a subset of some level set $\mathcal{M}_\alpha(M)$, which is $T_X$-pre-compact by Item 6 in Assumption \[ass:2\]. For every cluster point $(\overline w, \overline z)$ of $\{(w^n,z^n)\}_{n\in{\mathbb{N}}}$, $\mathcal F(\overline w,\overline z) \leq \mathcal F(w^n,z^n)$ for all $n\in{\mathbb{N}}$.
Given $w \in P_W(\mathcal D)$, it follows that $\mathcal F(w,\overline z) = \lim_{k\rightarrow \infty} \mathcal F (w,z^{n_k})$ by the $T_X$-continuity of $\mathcal F$, since the subsequence $\{(w^{n_k},z^{n_k})\}_{k\in{\mathbb{N}}}$ converges to $(\overline w,\overline z)$ w.r.t. $T_X$. So, for each $k\in{\mathbb{N}}$, $$\mathcal F(w,z^{n_k}) \geq \mathcal F(w^{n_{k}+1},z^{n_k}),$$ because $w^{n_k+1}$ is a minimizer of $\mathcal F_{z^{n_k}}$. Applying more steps of the algorithm of Equation , it follows that $$\mathcal F(w^{n_k+1},z^{n_k}) \geq \mathcal F(w^{n_k+1},z^{n_k+1})\geq \cdots \geq \mathcal F(w^{n_{k+1}},z^{n_{k+1}}).$$ So, $\mathcal F(w,z^{n_k})\geq \mathcal F(w^{n_{k+1}},z^{n_{k+1}})$. In addition, for every $k\in{\mathbb{N}}$, $$\mathcal F(w^{n_{k}},z^{n_k}) \geq \mathcal F(\overline w,\overline z).$$ Hence, $\overline w$ is a minimizer of $\mathcal F_{\overline z}$. To see that $\overline z$ is a minimizer of $\mathcal F_{\overline w}$, note that, for any $z \in P_Z(\mathcal{D})$, $ \mathcal{F}(\overline w,z) = \lim_{k\rightarrow \infty}\mathcal F(w^{n_k},z). $ Since $z^{n_k}$ is a minimizer of $\mathcal F_{w^{n_k}}$, it follows that $\mathcal F(w^{n_k},z)\geq \mathcal F(w^{n_k},z^{n_k})$. By the fact that $\mathcal F(w^{n_k},z^{n_k})\geq \mathcal F(\overline w, \overline z)$, for every $k\in{\mathbb{N}}$, the assertion follows. Denote the stationary point obtained with Algorithm  by $(\overline w^\delta_{\alpha},\overline z^\delta_{\alpha})$. Note that a stationary point need not be a Tikhonov minimizer, since, in principle, it can be a saddle point. However, we shall see in Proposition \[pr:convergence\] that such a stationary point is indeed an approximation of the inverse problem solution.
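The monotone decrease of $\mathcal F(w^n,z^n)$ used in the proof above can be observed on a toy quadratic functional. In the sketch below, the misfit $(w+z-1)^2$, the quadratic penalties, and the closed-form partial minimizers are illustrative assumptions, not the paper's calibration functional.

```python
# Alternating (splitting) minimization of a toy Tikhonov-type functional:
# F(w, z) = (w + z - 1)^2 + a1*w^2 + a2*z^2, minimized exactly in z, then w.

def F(w, z, a1=0.1, a2=0.2):
    return (w + z - 1.0) ** 2 + a1 * w ** 2 + a2 * z ** 2

def alternate(w0=0.0, z0=0.0, a1=0.1, a2=0.2, n_iter=100):
    w, z = w0, z0
    values = [F(w, z, a1, a2)]
    for _ in range(n_iter):
        z = (1.0 - w) / (1.0 + a2)   # exact argmin of z -> F(w, z)
        w = (1.0 - z) / (1.0 + a1)   # exact argmin of w -> F(w, z)
        values.append(F(w, z, a1, a2))
    return w, z, values

w_bar, z_bar, values = alternate()
```

Each half-step cannot increase $\mathcal F$, so the sequence of values is non-increasing; here the iterates converge to the unique stationary point $(\overline w,\overline z) = (0.625, 0.3125)$, since this toy $\mathcal F$ is convex.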
Recall the definition of the sub-differential of a convex function $f: \mathcal{D}(f) \subset X \rightarrow {\mathbb{R}}$ at the point $\overline x \in \mathcal{D}(f)$, with $X$ a Banach space, which is the set $\partial f(\overline x)$ of elements $x^*$ in the dual space $X^*$ satisfying $$f(x) - f(\overline x) - \langle x^*,x-\overline x\rangle \geq 0 ~\forall x \in \mathcal{D}(f).$$ If $w\mapsto\phi(w,z)$ and $z\mapsto\phi(w,z)$ are Fréchet differentiable, it follows that, for each $w$ and $z$, $$\partial \mathcal F_z(w) = \left\{\frac{\partial }{\partial w}\phi( w, z)\right\} + \alpha_1 \partial g_{w_0}( w) \quad\mbox{and}\quad \partial \mathcal F_w(z) = \left\{\frac{\partial }{\partial z}\phi( w, z)\right\} + \alpha_2 \partial h_{z_0}( z).$$ Moreover, $\phi$ is also Fréchet differentiable and $$\partial \mathcal F(w,z) = \left\{\left(\frac{\partial }{\partial w}\phi(w,z),\frac{\partial }{\partial z}\phi(w,z)\right)\right\} + \{\alpha_1 \partial g_{w_0}(w)\} \times\{\alpha_2 \partial h_{z_0}(z)\}.$$ \[rem:1\] For a proof of Remark \[rem:1\], see Item (c) of Exercise 8.8 and Proposition 10.5 in [@RocWet2009]. So, if $0 \in \partial \mathcal F_w(z)$ and $0 \in \partial \mathcal F_z(w)$, then $0 \in \partial \mathcal F(w,z)$. Let $(\overline w, \overline z)$ denote the stationary point obtained with the algorithm of Equation ; this means that $\overline w$ is a local minimum of $\mathcal F_{\overline z}$ and $\overline z$ is a local minimum of $\mathcal F_{\overline w}$. By Theorem 10.1 in [@RocWet2009], $0 \in \partial \mathcal F_{\overline w}(\overline z)$ and $0 \in \partial \mathcal F_{\overline z}(\overline w)$. So, $0 \in \partial \mathcal F(\overline w, \overline z)$. If, in addition, $\mathcal{F}$ is convex, then $(\overline w, \overline z)$ is a Tikhonov minimizer.
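The product structure of $\partial\mathcal F$ quoted above can be checked by hand on a separable toy functional with nonsmooth penalties; the functional and parameters below are hypothetical, chosen so that the minimizers are available in closed form.

```python
# Toy check that 0 lies in the sub-differential at a stationary point of
# F(w, z) = (w - 1)^2 + (z - 2)^2 + a1*|w| + a2*|z|  (hypothetical example).
a1, a2 = 0.5, 0.5
w_bar = 1.0 - a1 / 2.0      # closed-form minimizer in w; note w_bar > 0
z_bar = 2.0 - a2 / 2.0      # closed-form minimizer in z; note z_bar > 0

# For w_bar > 0 the sub-differential of |w| is the singleton {1}, so the
# component-wise optimality conditions read d_w phi + a1 = 0 and d_z phi + a2 = 0:
res_w = 2.0 * (w_bar - 1.0) + a1
res_z = 2.0 * (z_bar - 2.0) + a2
```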
A stationary point $(\overline w,\overline z)$ with data $y^\delta$ is stable if, for every sequence $\{y_k\}_{k\in{\mathbb{N}}} \subset Y$ such that $y_k\rightarrow y^\delta$ in norm, the sequence $\{(\overline w^k,\overline z^k)\} \subset \mathcal D$ of solutions obtained with the algorithm of Equation , considering the data $y_k$ for each $k\in{\mathbb{N}}$, has a $T_X$-convergent subsequence. In addition, the limit of every $T_X$-convergent subsequence $\{(\overline w^{k_l},\overline z^{k_l})\}$ is a stationary point of the Tikhonov functional $\mathcal F$ with data $y^\delta$. \[def:stability\] The stationary point obtained by the algorithm of Equation  is stable. \[prop:splitting2\] Consider the sequences $\{y_k\}_{k\in{\mathbb{N}}} \subset Y$ and $\{(\overline w^k,\overline z^k)\} \subset \mathcal D$ as in Definition \[def:stability\]. Firstly, it is necessary to prove that $\{(\overline w^k,\overline z^k)\} \subset \mathcal D$ has a convergent subsequence. By Lemma 3.21 in [@schervar], $$\mathcal F(\overline w^k,\overline z^k;y^\delta) \leq 2^{p-1}\mathcal F(\overline w^k,\overline z^k;y_k) + 2^{p-1}\|y_k-y^\delta\|^p.$$ The sequence $\{y_k\}_{k\in{\mathbb{N}}}$ converges to $y^\delta$, so $\|y_k-y^\delta\|^p$ is uniformly bounded in $k$. In addition, if we assume further that, for each $y_k$, the algorithm of Equation  is initialized with the same $(w^0,z^0)$, it follows that $$\mathcal F(\overline w^k,\overline z^k;y_k) \leq \mathcal F( w^0,z^0;y_k),$$ and applying Lemma 3.21 in [@schervar] again, $$\mathcal F( w^0,z^0;y_k) \leq 2^{p-1}\mathcal F( w^0,z^0;y^\delta)+2^{p-1}\|y_k-y^\delta\|^p.$$ By the estimates above, $$\mathcal F(\overline w^k,\overline z^k;y^\delta) \leq 4^{p-1}\mathcal F( w^0,z^0;y^\delta) + (4^{p-1}+2^{p-1})\|y_k-y^\delta\|^p,$$ which implies that $\{(\overline w^k,\overline z^k)\}_{k\in{\mathbb{N}}}$ is a subset of some level set of $\mathcal F(\cdot,\cdot;y^\delta)$.
Item 6 in Assumption \[ass:2\] implies that such level set is $T_X$-pre-compact, and the assertion follows. Suppose, without loss of generality, that $\{(\overline w^k,\overline z^k)\}$ converges to $(\tilde w, \tilde z)$ w.r.t. $T_X$. For every $w \in P_W(\mathcal D)$, since $\overline w^k$ is in $\argmin \mathcal F_{\overline z^k;y_k}(w)$, $$\mathcal F(\tilde w,\tilde z;y^\delta) \leq \liminf_{k\rightarrow \infty}\mathcal F(\overline w^k,\overline z^k;y_k) \leq \lim_{k\rightarrow \infty}\mathcal F(w,\overline z^k;y_k) = \mathcal F(w,\tilde z;y^\delta).$$ So, $\tilde w$ is in $\argmin \mathcal F_{\tilde z;y^\delta}(w)$. Similarly, it follows that $\tilde z$ is in $\argmin \mathcal F_{\tilde w;y^\delta}(z)$ and the assertion follows. Since the stationary point obtained by the algorithm in Equation  is determined w.r.t. $y^\delta$ and the regularization parameters $\alpha_1$ and $\alpha_2$, let us denote it by $(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2})$. Let us also denote by $x^\dagger = (w^\dagger,z^\dagger)$ an $f_{x_0}$-minimizing solution of the inverse problem in , and by $\tilde y$ the noiseless data in .

#### Tangential Cone Condition

Now, we show that the tangential cone condition is a sufficient condition for the splitting strategy algorithm of Equation  to converge to some approximation of an $f_{x_0}$-minimizing solution of the inverse problem .
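The tangential cone condition bounds the linearization error of $F$ by its increment on a ball. A toy scalar check, in which the operator $F(x) = x + cx^2$, the radius, and the constants are all illustrative assumptions:

```python
# Numerical check of a tangential-cone-type bound for F(x) = x + c*x^2:
# the Taylor remainder |F(x2) - F(x1) - F'(x1)(x2 - x1)| = c*(x2 - x1)^2
# is controlled by eta*|F(x2) - F(x1)| on a small ball around 0.
import random

c, r, eta = 0.1, 0.5, 0.2

def F(x):
    return x + c * x * x

def dF(x):
    return 1.0 + 2.0 * c * x

random.seed(0)
ok = True
for _ in range(1000):
    x1 = random.uniform(-r, r)
    x2 = random.uniform(-r, r)
    lhs = abs(F(x2) - F(x1) - dF(x1) * (x2 - x1))
    rhs = eta * abs(F(x2) - F(x1))
    ok = ok and (lhs <= rhs + 1e-15)
```

For this operator the ratio of the two sides is bounded by $c\,|x_2-x_1| / (\eta\,|1+c(x_1+x_2)|)$, which stays below one on the chosen ball; shrinking the radius shrinks the attainable $\eta$.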
Let the operator $F$ be Fréchet differentiable in each of the variables $w$ and $z$; then it is Fréchet differentiable and its Fréchet derivative satisfies $$F^\prime(x) = (\partial_w F(w,z), \partial_z F(w,z)).$$ In addition, there exist constants $r>0$ and $0 \leq \eta < 1/2$ such that, if $x,\tilde x$ are in the ball $B(x^*;r)$, centered at $x^*$ with radius $r$, then the [*tangential cone condition*]{} is satisfied: $$\|F(\tilde x) - F(x) - F^\prime(x)(\tilde x -x)\| \leq \eta\|F(\tilde x) - F(x)\|.$$ \[ass:3\] So, we can state the following result: Let Assumptions \[ass:2\] and \[ass:3\] hold. If the initializing pair and $x^\dagger$ are inside the ball $B(x^*;r)$ and $\lambda > (1+\eta)/(1-\eta)$ is fixed, then, for any pair of regularization parameters $(\alpha_1,\alpha_2)$ with sufficiently small entries, there exists some finite $n = n(\alpha_1,\alpha_2)$ such that the iterates of the splitting algorithm satisfy $$\|F(w^n,z^n) - y^\delta\|_Y \geq \lambda \delta > \|F(w^{n+1},z^{n+1}) - y^\delta\|_Y. \label{eq:discrepancy}$$ \[pr:saddle\_point\] By Proposition \[prop:splitting1\], the splitting algorithm converges to $(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2})$, a stationary point of the functional in . Since the operator $F$ is Fréchet differentiable, by Remark \[rem:1\], zero is in the sub-differential of $\mathcal F$ at $(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2})$.
In other words, there exist $\gamma \in \partial g_{w_0}(\overline w^\delta_{\alpha_1,\alpha_2})$ and $\beta\in \partial h_{z_0}(\overline z^\delta_{\alpha_1,\alpha_2})$, such that $$\begin{gathered} 0 = \left(\frac{\partial }{\partial w}\phi(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2}),\frac{\partial }{\partial z}\phi(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2})\right) + (\alpha_1 \gamma, \alpha_2 \beta) =\\ F^\prime(\overline x^\delta_{\alpha_1,\alpha_2})^*J(F(\overline x^\delta_{\alpha_1,\alpha_2}) - y^\delta) + (\alpha_1 \gamma, \alpha_2 \beta), \end{gathered}$$ where $J:Y\rightarrow Y^*$ is the duality map, and $\overline x^\delta_{\alpha_1,\alpha_2} = (\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2})$. See [@MarRie2014] and Chapter II in [@Cio1990] for more details on duality maps. Applying $\overline x^\delta_{\alpha_1,\alpha_2} - x^\dagger$ to both sides of the above equality, we have: $$\alpha_1\langle \gamma,w^\dagger - \overline w^\delta_{\alpha_1,\alpha_2}\rangle + \alpha_2\langle \beta, z^\dagger - \overline z^\delta_{\alpha_1,\alpha_2}\rangle = \langle J(F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta), F^\prime(\overline x^\delta_{\alpha_1,\alpha_2})(\overline x^\delta_{\alpha_1,\alpha_2} - x^\dagger)\rangle.$$ Note that $$\begin{gathered} \langle J(F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta), F^\prime(\overline x^\delta_{\alpha_1,\alpha_2})(\overline x^\delta_{\alpha_1,\alpha_2} - x^\dagger)\rangle =\\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p - \langle J(y^\delta-F(\overline x^\delta_{\alpha_1,\alpha_2})), y^\delta - F(\overline x^\delta_{\alpha_1,\alpha_2}) - F^\prime(\overline x^\delta_{\alpha_1,\alpha_2})(x^\dagger -\overline x^\delta_{\alpha_1,\alpha_2})\rangle \geq\\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p - \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^{p-1}\|y^\delta - F(\overline x^\delta_{\alpha_1,\alpha_2}) - F^\prime(\overline
x^\delta_{\alpha_1,\alpha_2})(x^\dagger-\overline x^\delta_{\alpha_1,\alpha_2})\| \geq \\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p \\- \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^{p-1}\left(\delta + \|F(x^\dagger) - F(\overline x^\delta_{\alpha_1,\alpha_2}) - F^\prime(\overline x^\delta_{\alpha_1,\alpha_2})(x^\dagger - \overline x^\delta_{\alpha_1,\alpha_2})\|\right)\geq \\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p - \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^{p-1}\left(\delta + \eta\|F(x^\dagger) - F(\overline x^\delta_{\alpha_1,\alpha_2})\|\right)\geq \\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p - \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^{p-1}\left((1+\eta)\delta + \eta\|y^\delta - F(\overline x^\delta_{\alpha_1,\alpha_2})\|\right). \end{gathered}$$ Let us assume, by contradiction, that there are no $\alpha_1,\alpha_2 > 0$ such that $\phi(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2}) < \lambda^p\delta^p$.
So, by the above estimates and assuming that $\lambda > (1+\eta)/(1-\eta)$, it follows that $$\begin{gathered} \alpha_1\langle \gamma,w^\dagger - \overline w^\delta_{\alpha_1,\alpha_2}\rangle + \alpha_2\langle \beta, z^\dagger - \overline z^\delta_{\alpha_1,\alpha_2}\rangle \geq\\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p - \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^{p-1}\left((1+\eta)\delta + \eta\|y^\delta - F(\overline x^\delta_{\alpha_1,\alpha_2})\|\right) \geq \\ \|F(\overline x^\delta_{\alpha_1,\alpha_2})-y^\delta\|^p\left(1 - \eta - \displaystyle\frac{1+\eta}{\lambda}\right) \geq (\lambda\delta)^p\left(1 - \eta - \displaystyle\frac{1+\eta}{\lambda}\right).\end{gathered}$$ Since $g_{w_0}$ and $h_{z_0}$ are convex, $$\alpha_1 (g_{w_0}(w^\dagger)-g_{w_0}(\overline w^\delta_{\alpha_1,\alpha_2})) + \alpha_2(h_{z_0}(z^\dagger)-h_{z_0}(\overline z^\delta_{\alpha_1,\alpha_2})) \geq \alpha_1\langle \gamma, w^\dagger - \overline w^\delta_{\alpha_1,\alpha_2}\rangle + \alpha_2\langle \beta, z^\dagger - \overline z^\delta_{\alpha_1,\alpha_2}\rangle.$$ Summarizing, $$\alpha_1 (g_{w_0}(w^\dagger)-g_{w_0}(\overline w^\delta_{\alpha_1,\alpha_2})) + \alpha_2(h_{z_0}(z^\dagger)-h_{z_0}(\overline z^\delta_{\alpha_1,\alpha_2})) \geq (\lambda\delta)^p\left(1 - \eta - \displaystyle\frac{1+\eta}{\lambda}\right), \label{eq:4}$$ and the right-hand side of the inequality  is positive. Since $\alpha_1,\alpha_2>0$, by the estimate above, $g_{w_0}(w^\dagger)-g_{w_0}(\overline w^\delta_{\alpha_1,\alpha_2})\geq 0$ and $h_{z_0}(z^\dagger)-h_{z_0}(\overline z^\delta_{\alpha_1,\alpha_2}) \geq 0$. If $K = \max\{g_{w_0}(w^\dagger),h_{z_0}(z^\dagger)\}$, then $$2(\alpha_1 + \alpha_2)K \geq \alpha_1 (g_{w_0}(w^\dagger)-g_{w_0}(\overline w^\delta_{\alpha_1,\alpha_2})) + \alpha_2(h_{z_0}(z^\dagger)-h_{z_0}(\overline z^\delta_{\alpha_1,\alpha_2})).$$ Hence, we can find $\alpha_1,\alpha_2>0$ such that the left-hand side of becomes smaller than the right-hand side, which is a contradiction.
Therefore, there must exist $\alpha_1^+,\alpha_2^+>0$ such that $$\phi(\overline w^\delta_{\alpha_1^+,\alpha_2^+},\overline z^\delta_{\alpha_1^+,\alpha_2^+}) < (\lambda\delta)^p,$$ for each fixed $\lambda>(1+\eta)/(1-\eta)$. By the continuity of $\phi(\cdot,\cdot)$ with respect to the norm topology of $X$ and to the topology $T_X$, the existence of some finite iterate number $n$ holds. To see that this also holds for any sufficiently small regularization parameters $\alpha_1,\alpha_2$, just note that, by the same arguments above, there is no sequence $\{\alpha_1^n,\alpha_2^n\}_{n\in{\mathbb{N}}}$ of regularization parameters with $\alpha_1^n,\alpha_2^n\searrow 0$, such that $\phi(\overline w^\delta_{\alpha_1^n,\alpha_2^n},\overline z^\delta_{\alpha_1^n,\alpha_2^n}) \geq \lambda^p \delta^p$ for every $n\in{\mathbb{N}}$. So, there must be $\alpha_1^+,\alpha_2^+>0$ such that, for any $\alpha_1\in (0,\alpha_1^+)$ and $\alpha_2\in(0,\alpha_2^+)$, it follows that $\phi(\overline w^\delta_{\alpha_1,\alpha_2},\overline z^\delta_{\alpha_1,\alpha_2}) < \lambda^p \delta^p$. As a corollary of the proof above, we have the following estimate: $$\begin{gathered} (1-\eta)\|F(\overline w_{\alpha_1,\alpha_2}^\delta,\overline z_{\alpha_1,\alpha_2}^\delta)-y^\delta\|^{p} - (1+\eta)\delta\|F(\overline w_{\alpha_1,\alpha_2}^\delta,\overline z_{\alpha_1,\alpha_2}^\delta)-y^\delta\|^{p-1}\\ \leq \alpha_1\left[g_{w_{0}}(w^\dagger)-g_{w_{0}}(\overline w_{\alpha_1,\alpha_2}^\delta)\right] +\alpha_2\left[h_{z_0}(z^\dagger) -h_{z_0}(\overline z_{\alpha_1,\alpha_2}^\delta)\right]. \label{eq:aux_ineq1} \end{gathered}$$ The following proposition states that the algorithm of Equation  produces a stable approximation of the solution of the inverse problem in .
If Assumptions \[ass:2\] and \[ass:3\] hold and the regularization parameters satisfy $$\lim_{\delta\rightarrow 0}\alpha_j(\delta)=0~ \mbox{ and }~ \lim_{\delta\rightarrow 0}\frac{\delta^p}{\alpha_j(\delta)}=0, ~j=1,2, \label{eq:alpha_delta}$$ then, every sequence of solutions obtained by the algorithm of Equation , satisfying the discrepancy , when $\delta\searrow 0$, has a $T_X$-convergent subsequence converging to some $f_{x_0}$-minimizing solution of the Inverse Problem , with $f_{x_0}$ as in Equation . \[pr:convergence\] Consider $\{\delta_k\}_{k\in{\mathbb{N}}}$ such that $\delta_k\searrow 0$ and, for each $k\in {\mathbb{N}}$, choose $\alpha_1 = \alpha_1(\delta_k)$ and $\alpha_2=\alpha_2(\delta_k)>0$ such that the discrepancy principle in Equation  and the estimates in Equation  hold with data $y^{\delta_k}$ and noise level $\delta_k$. Consider also the sequence $\{(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})\}_{k\in{\mathbb{N}}}$ of the corresponding stationary points generated by the algorithm of Equation . We need to show that this sequence has a $T_X$-convergent subsequence. Assume that the algorithm in Equation  always initializes with the same pair $(w^0,z^0)$. So, for every $k$, $$\mathcal F(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2};y^{\delta_k})\leq \lambda^p\delta_k^p.$$ In addition, by the relation in Equation  and the estimate in , $$\limsup_{k\rightarrow\infty} \left[\beta_1 g_{w_0}(\overline w^{\delta_k}_{\alpha_1,\alpha_2}) + \beta_2 h_{z_0}(\overline z^{\delta_k}_{\alpha_1,\alpha_2})\right]\leq \beta_1 g_{w_0}(w^\dagger) +\beta_2 h_{z_0}(z^\dagger).
\label{eq:estimate_gh}$$ So, taking $\alpha^+ = \max_{k\in{\mathbb{N}}}\max\{\alpha_1(\delta_k),\alpha_2(\delta_k)\}$, and since $$\|F(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2}) - \tilde y\| \leq \|F(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2}) - y^{\delta_k}\| + \delta_k \leq (\lambda+1)\delta_k,$$ it follows that $$\begin{gathered} \limsup_{k\rightarrow \infty}\left[\|F(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2}) - \tilde y\|^p + \alpha^+g_{w_0}(\overline w^{\delta_k}_{\alpha_1,\alpha_2}) + \alpha^+h_{z_0}(\overline z^{\delta_k}_{\alpha_1,\alpha_2})\right]\\ \leq \alpha^+g_{w_0}(w^\dagger) + \alpha^+h_{z_0}(z^\dagger),\end{gathered}$$ i.e., there exists a constant $K>0$ such that the sequence $\{(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})\}_{k\in{\mathbb{N}}}$ is in the level set $M_{\alpha^+}(K)$, which is pre-compact w.r.t. $T_X$. Hence, it has a $T_X$-convergent subsequence, which is denoted again by $\{(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})\}_{k\in{\mathbb{N}}}$, converging to $(\tilde w,\tilde z)$ w.r.t. $T_X$. Since, for each $k\in {\mathbb{N}}$, $\|F(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2}) - y^{\delta_k}\|_Y \leq \lambda \delta_k$, by the weak lower semi-continuity of $\phi$, $$\|F(\tilde w,\tilde z) - \tilde y\|^p \leq \displaystyle\liminf_{k\rightarrow \infty}\|F(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2}) - y^{\delta_k}\|^p \leq \lim_{k\rightarrow \infty}\lambda \delta_k= 0. $$ This means that $(\tilde w,\tilde z)$ is a solution of the Inverse Problem .
Note that, by the estimate in Equation , $$\beta_1 g_{w_0}(\tilde w) + \beta_2 h_{z_0}(\tilde z)\leq \beta_1 g_{w_0}(w^\dagger) + \beta_2 h_{z_0}(z^\dagger).$$ So, $(\tilde w,\tilde z)$ is an $f_{x_0}$-minimizing solution. The following proposition states the convergence of [*inexact solutions*]{} to some solution of the inverse problem in Equation . By an inexact solution we mean the iterate $(w^{n+1},z^{n+1})$ satisfying the discrepancy in Equation . Let the hypotheses of Proposition \[pr:saddle\_point\] be satisfied. Assume further that the functionals $g_{w_0}(w)$ and $h_{z_0}(z)$ are uniformly bounded for $(w,z) \in \mathcal{D}$. Then, when $\delta\searrow 0$, every sequence of inexact solutions satisfying the discrepancy in Equation  has a $T_X$-convergent subsequence converging to a solution of the inverse problem in Equation . As in the proof of Proposition , let us consider $\{\delta_k\}_{k\in{\mathbb{N}}}$ such that $\delta_k\searrow 0$ and, for each $k\in {\mathbb{N}}$, choose $\alpha_1 = \alpha_1(\delta_k)$ and $\alpha_2=\alpha_2(\delta_k)>0$ such that the discrepancy principle in Equation  is satisfied, and assume that $\max\{\alpha_1(\delta_k),\alpha_2(\delta_k)\}\leq \alpha^+$ for some finite constant $\alpha^+$. Consider also the iterates $(w^{n+1,\delta_k},z^{n+1,\delta_k})$ corresponding to $\alpha_1,\alpha_2$ and satisfying the discrepancy in Equation .
Since $\|F(w^{n+1,\delta_k},z^{n+1,\delta_k}) - y^{\delta_k}\| \leq \lambda\delta_k$ and by Lemma 3.21 in [@schervar], $$\begin{gathered} \mathcal F(w^{n+1,\delta_k},z^{n+1,\delta_k};\tilde y,\alpha^+)\leq 2^{p-1}\mathcal F(w^{n+1,\delta_k},z^{n+1,\delta_k};y^{\delta_k},\alpha^+) + 2^{p-1}\delta_k^p\\ \leq 2^{p-1}(\lambda^p+1)\delta_k^p + \alpha^+g_{w_0}(w^{n+1,\delta_k}) + \alpha^+h_{z_0}(z^{n+1,\delta_k}).\end{gathered}$$ Since $g_{w_0}(w)$ and $h_{z_0}(z)$ are uniformly bounded for $(w,z) \in \mathcal{D}$, there exists some constant $K>0$ such that $(w^{n+1,\delta_k},z^{n+1,\delta_k})\in M_{\alpha^+}(K)$, where $y^\delta$ is replaced by $\tilde y$ in the Tikhonov-type functional. Since $M_{\alpha^+}(K)$ is pre-compact w.r.t. $T_X$, the sequence of iterates $\{(w^{n+1,\delta_k},z^{n+1,\delta_k})\}_{k\in{\mathbb{N}}}$ has a $T_X$-convergent subsequence, which is also denoted by $\{(w^{n+1,\delta_k},z^{n+1,\delta_k})\}_{k\in{\mathbb{N}}}$ and converges to $(\tilde w,\tilde z)$ w.r.t. $T_X$. So, by the $T_X$-continuity of $F$ and the lower semi-continuity of the norm of $Y$, it follows that $$\|F(\tilde w,\tilde z)-\tilde y\| \leq \liminf_{k\rightarrow \infty}\|F(w^{n+1,\delta_k},z^{n+1,\delta_k})-y^{\delta_k}\| \leq \lim_{k\rightarrow \infty}\lambda\delta_k = 0,$$ and the assertion follows. It is not difficult to prove that, under the hypotheses of Proposition \[pr:convergence\], there exists a sequence of finite iterates or inexact solutions that converges w.r.t. $T_X$ to some $f_{x_0}$-minimizing solution of the inverse problem in Equation , when $\delta\searrow 0$. Let us consider $\{\delta_k\}_{k\in{\mathbb{N}}}$, such that $\delta_k\searrow 0$ and assume that for each $k\in{\mathbb{N}}$, the solution $(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})$ provided by Algorithm \[algorithm1\] satisfies the discrepancy in Equation . Also, for each $k\in{\mathbb{N}}$, find a subsequence of iterates converging w.r.t.
$T_X$ to $(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})$ and select one iterate that also satisfies the discrepancy and is close to $(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})$ w.r.t. $T_X$, getting arbitrarily close as $k$ increases. By Proposition \[pr:convergence\], the sequence $\{(\overline w^{\delta_k}_{\alpha_1,\alpha_2},\overline z^{\delta_k}_{\alpha_1,\alpha_2})\}$ has a $T_X$-convergent subsequence, converging to $(\tilde w,\tilde z)$, an $f_{x_0}$-minimizing solution of the inverse problem in Equation . It is easy to see that the corresponding subsequence of iterates also converges to $(\tilde w,\tilde z)$ w.r.t. $T_X$.

The Calibration {#sec:calibration}
===============

This section is devoted to the calibration, from quoted European vanilla option prices, of the local volatility surface $\left\{ a(\tau,y) | (\tau,y)\in D \right\}$ and the double exponential tail $\varphi$, by the splitting technique presented in Section \[sec:splitting\]. From the double-exponential tail, we estimate the jump-size distribution $\nu$.

Calibration of Local Volatility Surface and Double Exponential Tail {#sec:calibexptail}
-------------------------------------------------------------------

The inverse problem can be stated as: [ *Given a set of European call option prices ${{\tilde{u}}}$, such that ${{\tilde{u}}}- u(a_0,0)$ is in the range of $F$, $\mathcal R(F)$, find $(a^\dagger,\varphi^\dagger)$ in $\mathcal D(F)$ satisfying the equation*]{} $${{\tilde{u}}}= u(a^\dagger,\varphi^\dagger), \label{eq:inverseproblem}$$ [*where $u$ is the solution of the PIDE problem in -, using the integral representation .*]{} In practice, it is only possible to observe noisy option data given on a sparse mesh of strikes. Such data is denoted by $u^\delta$, where $$\|{{\tilde{u}}}- u^\delta\| \leq \delta,$$ and $\delta>0$ is the noise level.
To use the results from Section \[sec:splitting\] in this context, we introduce the following notation: $$\begin{gathered} w := \tilde a = a-a_0,~ z := (\varphi_-,\varphi_+),~w_0 := a_0, z_0 := 0,\\ \tilde y := {{\tilde{u}}}- u(a_0,0),~\mbox{and}~ y^\delta := u^\delta - Pu(a_0,0),\end{gathered}$$ where $P$ projects the solution of - onto the sparse mesh where $u^\delta$ is given. Since $X = W\times Z$, $$W := H^{1+\varepsilon}(D),\quad\mbox{and}\quad Z := W^{2,1}(-\infty,0)\times W^{2,1}(0,+\infty). $$ Let $T_X$ and $T_Y$ be the weak topologies of $X$ and $Y=W^{1,2}_2(D)$, respectively. By assuming that $g_{a_0} = g_{w_0}$ and $h_{\varphi_0} = h_{z_0}$ are convex, proper and weakly lower semi-continuous functionals, Propositions \[prop:continuity\]-\[prop:frechet\] imply that Assumptions \[ass:2\]-\[ass:3\] hold true. Note that the tangential cone condition in Assumption \[ass:3\] is an easy consequence of the inequality  in Proposition \[prop:frechet\]. Hence, given the data $u^\delta$, the splitting algorithm applied to the simultaneous calibration of $a$ and $\varphi$ converges to some approximation of the true solution of the inverse problem , if the latter exists. Since the inclusion of $W^{1,2}_2(D)$ into $L^2(D)$ is continuous, the existence and stability of solutions given by the splitting algorithm, as well as its convergence to the true solution, also hold whenever $Y=W^{1,2}_2(D)$ is replaced by $Y=L^2(D)$ in the Tikhonov regularization functional in and .
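As a discrete illustration of one such convex penalty, the sketch below evaluates a Sobolev-type quadratic on a uniform grid, using the $H^1$ norm as a surrogate for the $H^{1+\varepsilon}$ norm; the grid, the prior $a_0$, and the perturbation are hypothetical.

```python
# Discrete H^1-type surrogate for a penalty g_{a0}(a) = ||a - a0||^2,
# evaluated on a uniform grid (grid, prior, and perturbation hypothetical).
import numpy as np

def sobolev_penalty(a, a0, dx):
    d = a - a0
    grad = np.diff(d) / dx                      # forward differences
    return dx * np.sum(d ** 2) + dx * np.sum(grad ** 2)

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
a0 = 0.2 * np.ones_like(x)                      # prior volatility level
a = a0 + 0.05 * np.sin(2.0 * np.pi * x)         # smooth perturbation
val = sobolev_penalty(a, a0, dx)
```

Being quadratic, this penalty is convex and coercive on the discrete space; the gradient term is what penalizes rough, oscillatory updates of the surface.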
A possible choice of the penalization term to fulfill the weak pre-compactness of the level sets of the Tikhonov functional is $g_{a_0}(a) = \|a-a_0\|^2_{H^{1+\varepsilon}(D)}$ for the variable $a$, while for the variable $\varphi$ we take $$\begin{gathered} h_{\varphi_0}(\varphi) = KL(\varphi_+|\varphi_{+,0}) + KL(\varphi^\prime_+|\varphi^\prime_{+,0}) + KL(\varphi^{\prime\prime}_+|\varphi^{\prime\prime}_{+,0})\\ + KL(\varphi_-|\varphi_{-,0}) + KL(\varphi^\prime_-|\varphi^\prime_{-,0}) + KL(\varphi^{\prime\prime}_-|\varphi^{\prime\prime}_{-,0})\end{gathered}$$ where $KL$ stands for the Kullback-Leibler divergence $$KL(\varphi_+|\varphi_{+,0}) = \displaystyle\int_{0}^{+\infty}\left[\varphi_+\ln\left(\frac{\varphi_+}{\varphi_{+,0}}\right)+ (\varphi_{+,0}-\varphi_+)\right]dx,$$ with $\varphi_0> 0$ given. In this case, $g_{a_0}$ and $h_{\varphi_0}$ are convex, weakly continuous and coercive. In addition, the level sets of the Kullback-Leibler divergence $$\{\varphi \in L^1({\mathbb{R}}) ~:~ KL(\varphi|\varphi_0) \leq C\}$$ are weakly pre-compact in $ L^1({\mathbb{R}})$. See Lemma 3.4 in [@resa]. Calibration of Jump-Size Distribution from Double Exponential Tail ------------------------------------------------------------------ One possible, though not recommended, way to obtain the jump-size distribution $\nu$ is by differentiating the double exponential tail $\varphi$, since $\nu$ is such that $\varphi_{+}:=\varphi|_{(0,+\infty)} \in W^{2,1}(0,+\infty)$ and $\varphi_{-}:=\varphi|_{(-\infty,0)}\in W^{2,1}(-\infty,0)$. By Sobolev’s embedding (see Theorem 4.12 in [@AdaFou2003]), $\varphi^\prime_{\pm}$ are continuous functions and $$\varphi^\prime(z) = \left\{ \begin{array}{ll} \text{e}^z\displaystyle\int_{-\infty}^{z}\nu(dx) = \text{e}^z\nu((-\infty,z]),& z<0\\ -\text{e}^z\displaystyle\int^{+\infty}_{z}\nu(dx)= -\text{e}^z\nu([z,+\infty)),& z>0.
\end{array} \right.$$ So, $z\mapsto \nu((-\infty,z])$ and $z\mapsto \nu([z,+\infty))$ are continuous functions. By Proposition 5.2 in [@KinMay2011], $\nu$ can be represented as $$\nu(dx) = \left\{ \begin{array}{ll} \displaystyle\frac{1+x^2}{x^2}\mu_-(dx),& x<0\\ \displaystyle\frac{1+x^2}{x^2(1+x\text{e}^x)}\mu_+(dx),& x>0, \end{array} \right.$$ where $\mu_+$ and $\mu_-$ are finite measures, defined on $(0,+\infty)$ and $(-\infty,0)$, respectively. This implies that $z\mapsto \mu_-((-\infty,z])$ and $z\mapsto \mu_+([z,+\infty))$ are continuous functions, and hence they are absolutely continuous with respect to the Lebesgue measure. See Lemma III.4.13 in [@DunSch1958]. So, there exist integrable functions $h_{\pm}$, such that $$h_-(x)dx = \mu_-(dx) \quad \mbox{ and }\quad h_+(x)dx = \mu_+(dx).$$ Define $h \in L^1({\mathbb{R}})$, such that $h|_{(-\infty,0)} = h_-$ and $h|_{(0,+\infty)} = h_+$. The map $h \in L^1({\mathbb{R}}) \longmapsto \varphi \in L^2({\mathbb{R}})$ is compact. Let the sequence $\{(h_{-,n},h_{+,n})\}_{n\in{\mathbb{N}}}$ converge weakly to some $(h_{-},h_{+})$ in $ L^1(-\infty,0)\times L^1(0,+\infty)$. Define $$\varphi_n(z) = \left\{ \begin{array}{ll} \displaystyle\int_{-\infty}^z(\text{e}^z-\text{e}^x)\frac{1+x^2}{x^2}h_{-,n}(x)dx,& z<0\\ \\ \displaystyle\int^{+\infty}_z(\text{e}^x-\text{e}^z)\frac{1+x^2}{x^2(1+x\text{e}^x)}h_{+,n}(x)dx,& z>0, \end{array} \right.$$ for each $n\in {\mathbb{N}}$ and $\varphi = \varphi(h_{-},h_{+})$ in the same way. It is easy to see that $\varphi_+ = \varphi|_{(0,+\infty)} \in W^{1,1}(0,+\infty)$ and $\varphi_- = \varphi|_{(-\infty,0)} \in W^{1,1}(-\infty,0)$. Note also that, by hypothesis, $\varphi_{+,n} \in W^{2,1}(0,+\infty)$ and $\varphi_{-,n}\in W^{2,1}(-\infty,0)$. So, by Sobolev’s embedding (see Theorem 4.12 in [@AdaFou2003]), $\varphi_{+,n}, \varphi_+ \in L^2(0,+\infty)$ and $\varphi_{-,n},\varphi_-\in L^2(-\infty,0)$.
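The Kullback-Leibler penalty $KL(\varphi_+|\varphi_{+,0})$ introduced earlier can be approximated on a uniform grid; a minimal sketch, in which the toy density values and grid spacing are illustrative assumptions:

```python
import math

def kl_divergence(p, q, dx):
    """Discrete approximation of KL(p|q) = ∫ [p ln(p/q) + (q - p)] dx
    on a uniform grid with spacing dx; all entries must be positive."""
    return dx * sum(pi * math.log(pi / qi) + (qi - pi) for pi, qi in zip(p, q))

dx = 0.01
xs = [(i + 1) * dx for i in range(500)]
p = [math.exp(-x) for x in xs]                    # toy density values on the grid
same = kl_divergence(p, p, dx)                    # zero when the arguments coincide
shifted = kl_divergence(p, [2.0 * v for v in p], dx)
print(same, shifted > 0.0)
```

As expected for a divergence, the value vanishes when the two arguments coincide and is strictly positive otherwise.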
The estimate $$|\varphi_{-,n}(z) - \varphi_-(z)|^2 = \left|\displaystyle\int^z_{-\infty}(\text{e}^z-\text{e}^x)\frac{1+x^2}{x^2}(h_{-,n}-h_-)(x)dx\right|^2\rightarrow 0,$$ holds almost everywhere in $(-\infty,0)$, since, for each $z<0$, the kernel $(\text{e}^z-\text{e}^x)\frac{1+x^2}{x^2}$ is bounded on the integration domain $(-\infty,z]$. Similarly, $|\varphi_{+,n}(z) - \varphi_+(z)|\rightarrow 0$ almost everywhere. By the monotone convergence theorem the assertion follows. Since the map that associates $h$ to $\varphi$ is compact, it follows that the corresponding inverse problem is ill-posed. So, the procedure of obtaining $h$ by differentiating $\varphi$ is not stable. The inverse problem of finding the jump-size distribution from the double exponential tail is: [*Given the output of the splitting algorithm $\tilde \varphi \in L^2({\mathbb{R}})$, find $(h_-,h_+) \in \mathcal{V}_-\times \mathcal{V}_+$ satisfying*]{} $$\varphi(h_-,h_+) = \tilde\varphi, \label{eq:ip_phi}$$ where $\mathcal{V}_-\times \mathcal{V}_+$ is some subset of $L^1(-\infty,0)\times L^1(0,+\infty)$. If we apply Tikhonov-type regularization to this inverse problem, it can be rewritten as: find $(h_-,h_+) \in \mathcal{V}_-\times \mathcal{V}_+$ minimizing $$\mathcal G(h_-,h_+) = \| \varphi(h_-,h_+) - \tilde\varphi\|^2_{L^2({\mathbb{R}})} + \alpha f_{h_{-,0},h_{+,0}}(h_-,h_+),$$ with $(h_{-,0},h_{+,0})$ in $L^1(-\infty,0)\times L^1(0,+\infty)$ given. Let us assume that $ f_{h_{-,0},h_{+,0}}$ is weakly lower-semi-continuous and convex. If the level sets of $\mathcal G(h_-,h_+) $ are weakly compact in $\mathcal{V}_-\times \mathcal{V}_+$ (or $\mathcal{V}_-\times \mathcal{V}_+$ is weakly compact), then, as in Section \[sec:tikhonov\], there exist stable minimizers of $\mathcal G(h_-,h_+)$ in $\mathcal{V}_-\times \mathcal{V}_+$. Summing up, Tikhonov-type regularization provides a stable approximation for the jump-size distribution $\nu$.
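For intuition about the forward map $h \mapsto \varphi$ discussed above, its positive part can be discretized with the trapezoidal rule; the Gaussian toy density, the grid, and the truncation point are illustrative assumptions:

```python
import math

def tail_plus(z, nu, xs, dx):
    """phi_+(z) = ∫_z^∞ (e^x - e^z) nu(x) dx, approximated by the
    trapezoidal rule on a truncated grid xs with spacing dx."""
    ez = math.exp(z)
    total = 0.0
    for i in range(len(xs) - 1):
        if xs[i] < z:
            continue
        f0 = (math.exp(xs[i]) - ez) * nu(xs[i])
        f1 = (math.exp(xs[i + 1]) - ez) * nu(xs[i + 1])
        total += 0.5 * (f0 + f1) * dx
    return total

# toy Gaussian jump-size density (an assumption, for illustration only)
nu = lambda x: 0.1 / math.sqrt(2.0 * math.pi) * math.exp(-0.5 * x * x)
dx = 0.001
xs = [i * dx for i in range(8001)]   # truncate the integral at x = 8
print(tail_plus(0.5, nu, xs, dx) > tail_plus(1.0, nu, xs, dx) > 0.0)
```

The tail is positive and decreasing in $z$, as the integrand and the integration domain both shrink when $z$ grows.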
Gradient Evaluation {#sec:gradient} ------------------- To implement a numerical gradient descent algorithm to minimize the Tikhonov-type functional with respect to each variable, as in @AlbAscZub2016, it is necessary to evaluate the directional derivatives of a numerical approximation of the data misfit function $\phi = \phi(a,\varphi)$. If $a^k$ and $\varphi^k$ denote the iterates of $a$ and $\varphi$, respectively, in the gradient descent algorithm, we evaluate $$\begin{aligned} a^k = a^{k-1} - \theta_k \frac{\partial}{\partial a} \mathcal F(a^{k-1},\tilde\varphi)\\ \varphi^k = \varphi^{k-1} - \beta_k \frac{\partial}{\partial \varphi} \mathcal F(\tilde a,\varphi^{k-1}),\end{aligned}$$ until some tolerance is reached, with $\tilde a$ and $\tilde\varphi$ fixed. To perform this task, we present the evaluation of such derivatives in the continuous setting. Since $$\frac{\partial}{\partial a} \mathcal F(a,\varphi) = \frac{\partial}{\partial a} \phi(a,\varphi) + \alpha_1 \partial g_{a_0}(a) ~\text{ and }~ \frac{\partial}{\partial \varphi} \mathcal F(a,\varphi) = \frac{\partial}{\partial \varphi} \phi(a,\varphi) + \alpha_2 \partial h_{\varphi_0}(\varphi),$$ to evaluate $\frac{\partial}{\partial a} \phi$ and $\frac{\partial}{\partial \varphi} \phi$, recall that the directional derivative of $F$ at $(a,\varphi)$ in the direction $(h,\gamma)$, with $(a+h,\varphi+\gamma) \in \mathcal{D}(F)$, is denoted by $v$ and is the unique solution of the PIDE $$\begin{gathered} v_\tau(\tau,y) - a(\tau,y) \left(v_{yy}(\tau,y) - v_{y}(\tau,y)\right) + rv_y(\tau,y) -I_\varphi(v_{yy} - v_y)(\tau,y)\\ = h\left(u_{yy}(\tau,y) - u_y(\tau,y)\right) + I_{\gamma}\left(u_{yy} - u_y\right)(\tau,y), \label{eq:gradient1}\end{gathered}$$ with homogeneous boundary and initial conditions, where $u$ is the solution of the PIDE problem -.
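When implementing directional derivatives like the one above together with their adjoints (needed to assemble gradients), a standard sanity check is the dot-product test $\langle Jh, v\rangle = \langle h, J^*v\rangle$; here a small dense matrix stands in for the linearized operator, purely for illustration:

```python
# Dot-product test for a derivative/adjoint pair: for a linear map J and its
# adjoint J*, <J h, v> must equal <h, J* v> up to round-off.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, v):
    # apply the transpose (the adjoint in the Euclidean inner product)
    n = len(A[0])
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # toy stand-in for dF/da
h = [0.7, -1.3]
v = [0.2, 0.5, -0.4]
lhs = dot(matvec(A, h), v)
rhs = dot(h, matvec_T(A, v))
print(abs(lhs - rhs) < 1e-12)
```

In the PDE setting, $J$ would be the solution operator of the linearized PIDE and $J^*$ the solution operator of the adjoint PIDE; the identity to verify is the same.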
Note that $\frac{\partial}{\partial a} F(a,\varphi) h = v(h,0)$, and $\frac{\partial}{\partial a} \phi(a,\varphi) = \frac{\partial}{\partial a} F(a,\varphi)^*P^*(Pu(a,\varphi)-u^\delta)$, so, for every $h\in W$, $$\left\langle \frac{\partial}{\partial a} \phi,h\right\rangle = \left\langle \frac{\partial}{\partial a} F(a,\varphi)^*P^*(Pu(a,\varphi) - u^\delta),h\right\rangle = \left\langle Pu(a,\varphi) - u^\delta, P\frac{\partial}{\partial a} F(a,\varphi)h\right\rangle.$$ Since $P\frac{\partial}{\partial a} F(a,\varphi)h = P\mathcal{L}M_{u_{yy}-u_y}h$, where $M_{u_{yy}-u_y}$ is the multiplication by $u_{yy}-u_y$ operator and $\mathcal{L}$ is the operator that maps the source $h(u_{yy} - u_y)$ onto the solution of the PIDE  with homogeneous boundary and initial conditions (with $\gamma = 0$), it follows that $$\left\langle Pu(a,\varphi) - u^\delta, P\frac{\partial}{\partial a} F(a,\varphi)h\right\rangle = \langle M_{u_{yy}-u_y} \mathcal L^*P^*(Pu(a,\varphi) - u^\delta),h \rangle = \langle (u_{yy}-u_y)w,h\rangle,$$ where $w$ is the solution of the adjoint PIDE: $$\begin{gathered} w_\tau(\tau,y) + \left(a w\right)_{yy}(\tau,y) + (aw)_y(\tau,y) - rw_y(\tau,y) = \\ \int_{{\mathbb{R}}}\varphi(x)\left(w_{yy}(\tau,x+y)+ w_y(\tau,x+y)\right)dx + P^*Pu(a,\varphi) - P^*u^\delta, \label{eq:adjoint}\end{gathered}$$ with homogeneous boundary and terminal conditions. In a similar way, we evaluate $\frac{\partial}{\partial \varphi}\phi$ and find $$\frac{\partial}{\partial \varphi}\phi(z) = [H^*_u w] (z) := \int_0^T\int_{{\mathbb{R}}}\left(u_{yy}(\tau,y-z) - u_y(\tau,y-z)\right)w(\tau,y)dyd\tau,$$ where $u$ is the solution of the PIDE problem -. A Numerical Scheme {#sec:numerics} ================== Unlike [@ConVol2005a], we directly consider the case where the jump activity can be infinite. This is possible because we use the representation  for the integral term in the PIDE problem -.
First, we restrict the log-moneyness range where the PIDE problem - is defined to $[y_{\min},y_{\max}]$, with $y_{\min} < 0 < y_{\max}$, so that $D = [0,\tau_{\max}]\times[y_{\min},y_{\max}]$. Outside $[y_{\min},y_{\max}]$, the numerical solution is set to the value of the payoff function. Let $I,J \in {\mathbb{N}}$ be fixed. We consider the discretization $\tau_i = i \Delta \tau$, with $i = 0,1,2,...,I$, and $y_j = j \Delta y$, with $j = -J,-J+1,...,0,1,...,J$. Denote $u^{i}_{j}:= u(\tau_i,y_j)$, $a^i_j := a(\tau_i,y_j)$, $\beta:= \Delta \tau/\Delta y$ and $\eta = \Delta \tau/\Delta y^2$. Define also: $$\varphi_j = \left\{ \begin{array}{ll} \displaystyle\int_{y_{\min}-\frac{\Delta y}{2}}^{y_j}(\text{e}^{y_j}-\text{e}^x)\nu(dx), & y_j < 0\\ \\ \displaystyle\int_{y_j}^{y_{\max}+\frac{\Delta y}{2}}(\text{e}^x-\text{e}^{y_j})\nu(dx), & y_j > 0, \end{array} \right.$$ where these integrals are approximated by the trapezoidal rule. The differential part of the PIDE problem - is approximated by the Crank-Nicolson scheme and the integral operator by the trapezoidal rule, leading to: $$\begin{gathered} u^{i}_{j} - \displaystyle\frac{1}{2}\eta a^{i}_{j}(u^{i}_{j+1} - 2u^{i}_{j} +u^{i}_{j-1}) + \frac{1}{4}\beta a^{i}_{j}(u^{i}_{j+1} - u^{i}_{j-1}) =\\ u^{i-1}_{j} + \displaystyle\frac{1}{2}\eta a^{i-1}_{j}(u^{i-1}_{j+1} - 2u^{i-1}_{j} +u^{i-1}_{j-1}) - \frac{1}{4}\beta a^{i-1}_{j}(u^{i-1}_{j+1} - u^{i-1}_{j-1}) + M^{i-1}_j, \label{cns}\end{gathered}$$ where $$M^{i-1}_j = \sum_{k=-J}^{J}\varphi_k\text{e}^{y_k}\left[\beta(u^{i-1}_{j+1-k} - 2u^{i-1}_{j-k} +u^{i-1}_{j-1-k})-\frac{1}{2}\Delta \tau(u^{i-1}_{j+1-k} - u^{i-1}_{j-1-k})\right].$$ In @KinMayAlbEng2008 a Crank-Nicolson-type algorithm was also used to solve the so-called direct problem. There, the authors were interested in the calibration of the local speed function, which here is set constant and equal to $1$.
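A minimal sketch of one time step of the Crank-Nicolson scheme above, with the explicit integral term $M^{i-1}_j$ passed in precomputed, homogeneous Dirichlet boundaries, and the Thomas algorithm for the tridiagonal solve; grid sizes and data are illustrative:

```python
def thomas(sub, main, sup, d):
    """Solve a tridiagonal system; sub[0] and sup[-1] are ignored."""
    n = len(main)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / main[0], d[0] / main[0]
    for k in range(1, n):
        m = main[k] - sub[k] * cp[k - 1]
        cp[k] = sup[k] / m
        dp[k] = (d[k] - sub[k] * dp[k - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        x[k] = dp[k] - cp[k] * x[k + 1]
    return x

def cn_step(u, a_new, a_old, eta, beta, M):
    """One step of the Crank-Nicolson scheme above, interior nodes only,
    homogeneous Dirichlet boundaries, integral term M precomputed."""
    n = len(u)
    d = [0.0] * n
    for j in range(n):                       # explicit right-hand side
        um = u[j - 1] if j > 0 else 0.0
        up = u[j + 1] if j < n - 1 else 0.0
        d[j] = (u[j] + 0.5 * eta * a_old[j] * (up - 2.0 * u[j] + um)
                - 0.25 * beta * a_old[j] * (up - um) + M[j])
    sub = [-0.5 * eta * a - 0.25 * beta * a for a in a_new]   # coeff of u_{j-1}
    main = [1.0 + eta * a for a in a_new]                     # coeff of u_j
    sup = [-0.5 * eta * a + 0.25 * beta * a for a in a_new]   # coeff of u_{j+1}
    return thomas(sub, main, sup, d)

u_new = cn_step([0.0, 1.0, 0.0], [1.0] * 3, [1.0] * 3, 0.4, 0.0, [0.0] * 3)
print(u_new[1] < 1.0)  # diffusion damps the peak
```

With $a \equiv 0$ and $M \equiv 0$ the step reduces to the identity, which is a convenient consistency check for the implementation.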
The numerical scheme for solving the adjoint PIDE  with homogeneous boundary and terminal conditions is quite similar to the one in Equation . Following the same ideas presented in Section  \[sec:gradient\], we find the discrete version of the gradients of the data misfit function $\phi$. Numerical Validation {#sec:validation} -------------------- The purpose of this example is to illustrate the accuracy of the scheme in  by comparing it with other techniques. Assume that $S_0=1$, $y_{\max} = 5$, $y_{\min} = -5$, $\tau_{\max} = 1$, $\Delta \tau = 0.005$, $\Delta y = 0.025$, $r = 0$ and the local volatility surface is constant with $a \equiv 0.0113$. Then, we evaluate European call prices in three different ways: the scheme of Section \[sec:numerics\], the implicit-explicit scheme from @ConVol2005a, and the Fourier transform method from @TanVol2009, which is based on the pricing formula presented in @CarMad1999. In the following synthetic examples, the measure $\nu$ is assumed to be absolutely continuous w.r.t. the Lebesgue measure and given by $$\nu(dx) = \displaystyle\frac{0.1}{\sqrt{2\pi}}\text{e}^{-\frac{x^2}{2}}dx. \label{jumpsize}$$ We also consider a functional local volatility surface, given by $$\sigma(\tau,y) = \left\{ \begin{array}{ll} \displaystyle\frac{2}{5}-\frac{4}{25}\text{e}^{-\tau/2}\cos\left(\displaystyle\frac{4\pi y}{5}\right),& \text{ if } -2/5 \leq y \leq 2/5\\ \\ 2/5,& \text{ otherwise.} \end{array} \right. \label{vol}$$ and compare the results given by the scheme  with the one presented in @ConVol2005a. To measure the accuracy, we consider implied volatilities instead of prices. Let us denote by: - $\Sigma_{CN}$ the set of implied volatilities corresponding to the prices evaluated with the scheme from Equation . - $\Sigma_{CV}$ the set of implied volatilities corresponding to the prices evaluated with the scheme from @ConVol2005a.
- $\Sigma_{Fourier}$ the set of implied volatilities corresponding to the prices evaluated with the scheme from @TanVol2009. We estimate the normalized $\ell_2$-distance between them as follows: $$\|\Sigma_{CN} - \Sigma_{CV}\|/\|\Sigma_{CV}\| ~\text{ or }~ \|\Sigma_{CN} - \Sigma_{Fourier}\|/\|\Sigma_{Fourier}\|.$$ We also estimate the mean and standard deviation of the absolute relative error (abs. rel. error), which is evaluated at each node as follows: $$|\Sigma_{CN}(\tau_i,y_j) - \Sigma_{CV}(\tau_i,y_j)|/|\Sigma_{CV}(\tau_i,y_j)| ~\text{ or }~ |\Sigma_{CN}(\tau_i,y_j) - \Sigma_{Fourier}(\tau_i,y_j)|/|\Sigma_{Fourier}(\tau_i,y_j)|.$$ Such results can be seen in Table \[tab:example1\]. A comparison between implied volatilities with constant and non-constant local volatility surface can be found in Figures \[fig:impvol1a\] and \[fig:impvol1b\], respectively.

|   | Norm. $\ell_2$-distance | Mean     | Std. Dev. |
|---|-------------------------|----------|-----------|
|   | 0.0064                  | 0.0070   | 0.0072    |
|   | 0.0862                  | 0.0923   | 0.0699    |
|   | 0.0064                  | 0.0064   | 0.0038    |

: Normalized distance and absolute relative error.[]{data-label="tab:example1"}

![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T1 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T2 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T3 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T4 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T5 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T6 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is
constant.[]{data-label="fig:impvol1a"}](a_impvol_T7 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T8 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T9 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is constant.[]{data-label="fig:impvol1a"}](a_impvol_T10 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T1 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T2 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T3 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T4 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T5 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T6 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T7 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T8 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T9 "fig:"){width="24.90000%"} ![Implied volatility when local volatility surface is not constant.[]{data-label="fig:impvol1b"}](b_impvol_T10 "fig:"){width="24.90000%"} As we can see, CN implied volatilities matched the CV ones with constant and a non-constant local volatility surface. 
When comparing with implied volatilities corresponding to the Fourier prices, the adherence of CN volatilities was not exact, but the result was satisfactory, since the relative error and the normalized distance are below $10\%$ of the norm of $\Sigma_{Fourier}$. These results illustrate the accuracy of the present scheme. Numerical Examples {#sec:examples} ================== We shall now perform a set of illustrative numerical examples. Local Volatility Calibration {#sec:vol_estimation} ---------------------------- This example aims to illustrate that, if $\nu$ is known, it is possible to calibrate the local volatility surface, as in [@AndAnd2000]. The European call prices are generated by the difference scheme , with local volatility surface  and jump-size density , at the nodes $(\tau_i,y_j) = (i\cdot 0.1,j\cdot0.05)$, with $i=1,...,10$ and $j=-10,-9,...,0,1,...,10$. This is a sparse grid in comparison with the one where the direct problem is solved; see the beginning of Section \[sec:validation\]. In the discrete setting, we set the parameters in the functional  as $\alpha_2 = 0$, $\alpha_1 = 10^{-4}$, and define the penalty functional $$f_{a_0}(a) = \|a-a_0\|^2 + \|\partial_{\tau,\Delta} a\|^2 + 100\|\partial_{y,\Delta} a\|^2,$$ where $\|\cdot\|$ denotes the $\ell_2$-norm and the operators $\partial_{\tau,\Delta}$ and $\partial_{y,\Delta}$ denote the forward finite difference approximation of the first derivatives w.r.t. $\tau$ and $y$, respectively. The choice of the weights in the penalty functional is made heuristically and some hints about this choice can be found in @AlbAscZub2016. The minimization problem is solved by a gradient-descent method; the step sizes are chosen by a rule inspired by the steepest descent method, and the iterations cease whenever the normalized $\ell_2$-residual $$\|F(a)-u^\delta\|/\|u^\delta\|$$ is less than $0.01$. For more details, see @AlbAscZub2016.
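The descent loop with the normalized-residual stopping rule can be sketched as follows; the two-parameter linear forward map is a toy stand-in for the pricing operator, and the exact steepest-descent step size is valid only because this toy map is linear:

```python
def norm(v):
    return sum(x * x for x in v) ** 0.5

def forward(a):
    # toy linear forward map standing in for the pricing operator F
    return [2.0 * a[0], 3.0 * a[1]]

def grad_misfit(r):
    # adjoint (here simply the transpose) applied to the residual
    return [2.0 * r[0], 3.0 * r[1]]

def descend(a, data, tol=0.01, max_iter=100):
    for _ in range(max_iter):
        r = [f - d for f, d in zip(forward(a), data)]
        if norm(r) / norm(data) < tol:   # normalized l2-residual stopping rule
            break
        g = grad_misfit(r)
        jg = forward(g)                  # J g; valid because the toy map is linear
        step = sum(x * x for x in g) / sum(x * x for x in jg)  # steepest-descent step
        a = [ai - step * gi for ai, gi in zip(a, g)]
    return a

a = descend([0.0, 0.0], data=[4.0, 9.0])
print(a)
```

For the real problem the residual and the gradient come from the PIDE and adjoint-PIDE solves described in Section \[sec:gradient\], but the control flow is the same.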
The mesh step sizes used to evaluate the local volatility $a$ are the same as those used to generate the data. So, we use the following rule to evaluate the local volatility surface in the whole domain $$a(\tau,y) = \begin{cases} a(\tau,-0.5) & {\rm if~} \tau > 0.1, \ y \leq -0.5, (\rm{deep~in~the~money}) \\ a(\tau,0.5) & {\rm if~} \tau > 0.1, \ y\geq 0.5, (\rm{deep~out~of~the~money}) \\ a(0.1,y) & {\rm if~} \tau \leq 0.1 , \end{cases}$$ combined with bilinear interpolation. ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T1 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T2 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T3 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T4 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T5 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T6 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T7 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T8 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T9 "fig:"){width="25.00000%"} ![Implied volatilities of data and the calibrated local volatility.[]{data-label="fig:impvol2"}](c_implvol_T10 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T1 "fig:"){width="25.00000%"} ![Original and 
Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T2 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T3 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T4 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T5 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T6 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T7 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T8 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T9 "fig:"){width="25.00000%"} ![Original and Calibrated Local volatility surfaces.[]{data-label="fig:localvol2"}](c_localvol_T10 "fig:"){width="25.00000%"} The normalized $\ell_2$-distance between the implied volatilities of the data and those of the prices obtained with the calibrated local volatility was $0.0065$; the mean and standard deviation of the associated absolute relative error at each node were $0.0043$ and $0.0036$, respectively. With respect to the original and the calibrated local volatility surfaces, the normalized $\ell_2$-distance was $0.0701$. The mean and the standard deviation of the corresponding absolute relative error at each node were $0.0583$ and $0.0399$, respectively. The accuracy of our methodology can also be observed in Figures \[fig:impvol2\]-\[fig:localvol2\], where the implied volatilities of the model matched the data ones, and the reconstructed local volatility was quite similar to the original one.
In both figures, “Calib.” stands for the calibrated local volatility and “Data” stands for the original one. Note that the calibration was not perfect, since the data are collected on a sparse grid. Calibration of Jump-Size Distribution {#sec:nu_calibbration} ------------------------------------- Assuming that the local volatility surface is given, the double-exponential tail and the jump-size distribution are calibrated from observed prices. For this example, the same synthetic data and parameters presented in Section \[sec:vol\_estimation\] are used. Define $$\nu_j = \int_{y_j-\frac{\Delta y}{2}}^{y_j+\frac{\Delta y}{2}}\nu(dy).$$ First, we calibrate $\varphi$; then $\nu$ is reconstructed from $\varphi$ by minimizing the functional: $$\sum^{M}_{j=-M}(\varphi_j - \varphi(\nu)_j)^2 + \alpha\sum^{M}_{j=-M}\left[\nu_j\log(\nu_j/\nu_{j,0}) - (\nu_{j,0} - \nu_j)\right],$$ where $\varphi(\nu)_j$ is given by $$\varphi(\nu)_j = \left\{ \begin{array}{ll} \displaystyle\sum_{l=-M}^{j}(\text{e}^{y_j}-\text{e}^{y_l})\nu_l, & y_j < 0\\ \displaystyle\sum_{l=j}^{M}(\text{e}^{y_l}-\text{e}^{y_j})\nu_l, & y_j > 0. \end{array} \right.$$ The regularization parameter is set as $\alpha = 1\times 10^{-5}$. ![Left: true (line with crosses) and reconstructed (line with squares) double-exponential tail functions. Right: true (line with crosses) and reconstructed (line with squares) jump-size distributions.[]{data-label="fig:psinu"}](sint_psi "fig:"){width="40.00000%"} ![Left: true (line with crosses) and reconstructed (line with squares) double-exponential tail functions. Right: true (line with crosses) and reconstructed (line with squares) jump-size distributions.[]{data-label="fig:psinu"}](sint_nu "fig:"){width="40.00000%"} As we can see in Figure \[fig:psinu\], the reconstructed double-exponential tail $\varphi$ matched the true one.
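The discrete objective above can be coded directly; a minimal sketch, in which the grid, the Gaussian-bump values of $\nu$, and the regularization weight are illustrative assumptions:

```python
import math

def phi_of_nu(nu, ys):
    """Discrete double-exponential tail phi(nu)_j, following the sums above."""
    M = len(ys)
    out = [0.0] * M
    for j, yj in enumerate(ys):
        if yj < 0:
            out[j] = sum((math.exp(yj) - math.exp(ys[l])) * nu[l] for l in range(j + 1))
        elif yj > 0:
            out[j] = sum((math.exp(ys[l]) - math.exp(yj)) * nu[l] for l in range(j, M))
    return out

def objective(nu, target, nu0, ys, alpha):
    """Least-squares misfit plus the discrete Kullback-Leibler penalty."""
    phi = phi_of_nu(nu, ys)
    ls = sum((p - t) ** 2 for p, t in zip(phi, target))
    kl = sum(n * math.log(n / n0) - (n0 - n) for n, n0 in zip(nu, nu0))
    return ls + alpha * kl

ys = [j * 0.05 for j in range(-4, 5) if j != 0]            # toy grid, no node at 0
nu_true = [0.1 * math.exp(-0.5 * (y / 0.1) ** 2) for y in ys]
target = phi_of_nu(nu_true, ys)
perturbed = [1.5 * n for n in nu_true]
print(objective(nu_true, target, nu_true, ys, 1e-5)
      < objective(perturbed, target, nu_true, ys, 1e-5))
```

With exact data the true $\nu$ attains the minimum value zero; any perturbation raises both the misfit and the penalty, which is the behavior a minimizer exploits.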
The calibrated jump-size distribution $\nu$ also closely matches the original one, except around zero, probably due to the discontinuity of $\varphi$ at zero. The normalized distance between the true and reconstructed double-exponential tail functions was $2.14\times 10^{-4}$, and the mean and standard deviation of the associated absolute relative error at each node were $0.002$ and $0.0059$, respectively. The normalized $\ell_2$-distance between the true and the calibrated jump-size distributions was $0.59$, and the mean and standard deviation of the associated absolute relative error at each node were $0.0946$ and $0.2369$, respectively. If we exclude the points $y = 0,~0.05$, the values of the normalized distance, the mean and standard deviation become $2.73\times 10^{-5}$, $0.0022$ and $0.0061$, respectively. So, excluding these two points, the calibration was essentially exact; the normalized residual was $1.34 \times 10^{-10}$. The mismatch at these points is probably due to the discontinuity of $\varphi$ at zero, which introduces some noise into the reconstruction. So, if the local volatility surface is given, the calibrations of $\varphi$ and $\nu$ are quite satisfactory even with scarce data. These results are comparable to the ones obtained in [@ConTan2004; @ConTan2006]. Testing the Splitting Algorithm {#sec:splitting_ex} ------------------------------- The goal of the present example is to illustrate that the splitting algorithm is able to simultaneously calibrate the local volatility function and the double exponential tail. The call prices are given at the nodes $(\tau_i,y_j) = (i\cdot 0.1,j\cdot 0.05)$, with $i= 1,...,10$ and $j= -90,-89,...,0,...,10$. This represents $2.5\%$ of the mesh where the direct problem is solved. The algorithm was initialized with the minimization of the Tikhonov functional w.r.t. the volatility parameter.
The initial states of the local volatility surface and double exponential tail, as well as $a_0$ and $\varphi_0$ in the penalty functional, were set as $a_0(\tau,x) = 0.08$ and $$\nu_0(dx) = \left(0.5\exp(-0.5x^2-0.5x)\mathcal{X}_{[0,5]} + 0.5\exp(-0.5x^2 -0.5|x|)\mathcal{X}_{[-5,0)}\right)dx,$$ respectively. Here, $\mathcal{X}_{[0,5]}$ is the characteristic or indicator function of the set $[0,5]$. The minimization w.r.t. the local volatility surface was performed as in Section \[sec:vol\_estimation\]. However, to proceed with the minimization w.r.t. the double exponential tail, we first made the change of variable $\Gamma = \log(\varphi)$ and considered the decomposition $\Gamma(y) = \Gamma(y)\mathcal{X}_{(-5,0)} + \Gamma(y)\mathcal{X}_{(0,5)}$. Since the $y$-domain is now bounded, $\Gamma_-(y) = \Gamma(y)\mathcal{X}_{(-5,0)}$ and $\Gamma_+(y) = \Gamma(y)\mathcal{X}_{(0,5)}$ can be expressed in terms of Fourier series. So, we truncate these series at the third term and minimize the Tikhonov functional w.r.t. the Fourier coefficients. In this example, the Kullback-Leibler divergence in the definition of the penalty functional in Section \[sec:calibexptail\] was replaced by the square of the $\ell_2$-norm. After two steps of the splitting algorithm, the normalized $\ell_2$-residual was $0.0017$, below the tolerance which was set as $0.002$. The normalized $\ell_2$-distances between the reconstructed and true parameters were $0.165$ for the local volatility surface and $0.641$ for the double exponential tail. Figure \[fig:lvol\] presents the true and the reconstructed local volatility surfaces at the first and second steps of the splitting algorithm. The comparison between the double-exponential tails is done in Figure \[fig:detail\].
![Reconstruction of the local volatility surface: original (left), after one step (center) and after two steps (right).[]{data-label="fig:lvol"}](sint_orig_vol "fig:"){width="32.00000%"} ![Reconstruction of the local volatility surface: original (left), after one step (center) and after two steps (right).[]{data-label="fig:lvol"}](sint_reconst_vol_1 "fig:"){width="32.00000%"} ![Reconstruction of the local volatility surface: original (left), after one step (center) and after two steps (right).[]{data-label="fig:lvol"}](sint_reconst_vol_2 "fig:"){width="32.00000%"} ![Reconstruction of the double exponential tail: after one step (left) and after two steps (right). Continuous line: true. Dashed line: reconstruction.[]{data-label="fig:detail"}](sint_psi_1 "fig:"){width="40.00000%"} ![Reconstruction of the double exponential tail: after one step (left) and after two steps (right). Continuous line: true. Dashed line: reconstruction.[]{data-label="fig:detail"}](sint_psi_2 "fig:"){width="40.00000%"} If the reconstruction of each parameter is analyzed separately, it seems that the results were not as accurate as in the previous examples. However, this was expected, since the number of unknowns in this test is much larger than before. In addition, it is well known that the distribution of small jump-sizes and the volatility are closely related. See [@ConTan2003]. This means that in simultaneous reconstructions, it is difficult to separate one from the other. So, based on such observations, the results were satisfactorily accurate, since the main features of both parameters were incorporated by the reconstructions, as illustrated in Figures \[fig:lvol\]-\[fig:detail\].
Pricing Exotic Options ---------------------- To provide another illustration of the accuracy of the splitting algorithm, we evaluate the so-called Lookback call and put options, which have the following payoff functions: $$LB_{\mbox{call}}(\tau_i) = \max\left\{0,S_{\tau_i}-\min_{0\leq k \leq N}S_{t_k}\right\} ~~\mbox{and}~~ LB_{\mbox{put}}(\tau_i) = \max\left\{0,\max_{0\leq k \leq N}S_{t_k}-S_{\tau_i}\right\},$$ respectively, where the times to maturity of the options are $\tau_i = 0.1, 0.2, 0.3, 0.5$, the monitoring times are $t_k = k\cdot \Delta t$, with $ k = 0,1,...,N$, $\Delta t = \tau_i/N$, and $N$ is the number of time steps, set to $N=100$. The prices of the options are approximated by Monte Carlo integration as in $$LB_{\mbox{call}}(0) = \mbox{e}^{-r\tau_i}\mathbb{E}\left[LB_{\mbox{call}}(\tau_i)\right] \approx \mbox{e}^{-r\tau_i}\frac{1}{N_r}\sum_{l = 1}^{N_r}LB_{\mbox{call}}(\tau_i)^{(l)},$$ where $LB_{\mbox{call}}(\tau_i)^{(l)}$ is the $l$-th realization of the random variable $LB_{\mbox{call}}(\tau_i)$, and $N_r$ is the total number of realizations, which is set to $N_r = 10\,000$. The realizations of $LB_{\mbox{call}}(\tau_i)$ and $LB_{\mbox{put}}(\tau_i)$ are generated by the Dupire model and the jump-diffusion model in . The Dupire model is solved by the Euler-Maruyama method with local volatility calibrated from the European call price dataset of Section \[sec:splitting\_ex\]. The normalized residual in local volatility calibration is approximately the same as that achieved by the jump-diffusion calibration in Section \[sec:splitting\_ex\]. The jump-diffusion model is solved by the method in [@GieTenWei2017] with the local volatility and the jump-size distribution calibrated in Section \[sec:splitting\_ex\]. The samples of the jump-sizes are given by inverse transform sampling, where the inverse of the cumulative distribution of jump-sizes was evaluated by least-squares.
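For illustration, a bare-bones Monte Carlo pricer for the floating-strike lookback call under a plain lognormal diffusion; the jump part and the calibrated local volatility are omitted here, and the constant volatility, path count, and seed are assumptions:

```python
import math
import random

def lookback_call_mc(s0, sigma, r, T, n_steps, n_paths, seed=0):
    """Price max(0, S_T - min_k S_{t_k}) by Monte Carlo under a lognormal
    diffusion with constant volatility (jumps omitted for brevity)."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, s_min = s0, s0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            s_min = min(s_min, s)            # track the running minimum
        total += max(0.0, s - s_min)         # floating-strike lookback payoff
    return math.exp(-r * T) * total / n_paths

price = lookback_call_mc(s0=1.0, sigma=0.2, r=0.0, T=0.5, n_steps=100, n_paths=2000)
print(price)
```

In the paper's setting, the Gaussian increment would be replaced by an Euler-Maruyama step of the calibrated model plus jump increments drawn by inverse transform sampling; the payoff accumulation is unchanged.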
The ground truth prices are given by the jump-diffusion model with the true local volatility and true jump-size distribution of Section \[sec:splitting\_ex\].

  $\tau$   $0.1$      $0.2$      $0.3$      $0.4$
  -------- ---------- ---------- ---------- ----------
  Jumps    $0.0509$   $0.0692$   $0.0855$   $0.1059$
  Dupire   $0.0728$   $0.1019$   $0.1270$   $0.1620$
  True     $0.0577$   $0.0828$   $0.1040$   $0.1363$

  : Lookback call prices[]{data-label="tab:exotic1"}

  $\tau$   $0.1$      $0.2$      $0.3$      $0.4$
  -------- ---------- ---------- ---------- ----------
  Jumps    $0.0690$   $0.1060$   $0.1357$   $0.1907$
  Dupire   $0.0776$   $0.1112$   $0.1368$   $0.1821$
  True     $0.0662$   $0.0993$   $0.1269$   $0.1780$

  : Lookback put prices[]{data-label="tab:exotic2"}

  $\tau$   $0.1$      $0.2$      $0.3$      $0.4$
  -------- ---------- ---------- ---------- ----------
  Jumps    $0.1185$   $0.1494$   $0.1640$   $0.1919$
  Dupire   $0.2618$   $0.2409$   $0.2309$   $0.2112$

  : Normalized error in lookback call prices[]{data-label="tab:exotic3"}

  $\tau$   $0.1$      $0.2$      $0.3$      $0.4$
  -------- ---------- ---------- ---------- ----------
  Jumps    $0.0425$   $0.0596$   $0.0648$   $0.0680$
  Dupire   $0.1725$   $0.1360$   $0.1053$   $0.0630$

  : Normalized error in lookback put prices[]{data-label="tab:exotic4"}

Tables \[tab:exotic1\] and \[tab:exotic2\] present the prices of the lookback call and put options, respectively. The errors in the prices can be found in Tables \[tab:exotic3\] and \[tab:exotic4\]. In these tables, [*Jumps*]{} stands for the jump-diffusion model, [*Dupire*]{} stands for the Dupire model, and [*True*]{} stands for the ground truth prices. Based on these results, we can see that the jump-diffusion model with parameters calibrated by the splitting algorithm is more precise than the Dupire model with calibrated local volatility. 
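The inverse transform sampling of jump-sizes mentioned above has a closed form when the jump-size distribution is double exponential (in the paper the inverse CDF of the nonparametric calibrated distribution is instead fitted by least-squares). A sketch with illustrative, not calibrated, parameters:

```python
import math
import random

def sample_jump(u, p=0.3, lam_plus=10.0, lam_minus=5.0):
    """Inverse-transform sample of a double-exponential jump size.

    Density: p*lam_plus*exp(-lam_plus*x) for x >= 0 (up-jumps) and
    (1-p)*lam_minus*exp(lam_minus*x) for x < 0 (down-jumps), so the CDF
    is (1-p)*exp(lam_minus*x) on x < 0 and 1 - p*exp(-lam_plus*x) on
    x >= 0, both invertible in closed form.
    """
    if u < 1.0 - p:
        return math.log(u / (1.0 - p)) / lam_minus   # negative tail
    return -math.log((1.0 - u) / p) / lam_plus       # positive tail

rng = random.Random(0)
jumps = [sample_jump(rng.random()) for _ in range(10000)]
```

By construction, a fraction `p` of the samples land in the positive tail, which gives a quick sanity check on the sampler.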
The Splitting Algorithm with DAX Options
----------------------------------------

This experiment aims to illustrate that the splitting algorithm can be used with market data. The tests are performed with end-of-day DAX European call prices traded on 20-Jun-2017, and maturing on 21-Jun-2017, 18-Aug-2017, 15-Sep-2017, 15-Dec-2017, and 16-Mar-2018. The mesh step lengths used here were $\Delta y = 0.05$ and $\Delta \tau \approx 0.003$. The penalty term of the Tikhonov functional was the same used in Section \[sec:splitting\_ex\], with $\alpha = 10^{-5}$. We used the same initial states for the local volatility surface and the double exponential distribution, as well as the [*a priori*]{} parameters of Section \[sec:splitting\_ex\]. The interest rate was taken as 0, and $S_0 = 12814.79$ USD. The data were given on the sparse mesh defined by transforming the market strikes into log-moneyness and considering the time to maturity in years. Only three iterations of the splitting algorithm were needed until the data misfit function fell below the tolerance, set as $tol = 0.0069$. To reconstruct the jump-size distribution and the local volatility surface, we used the same parameters of Section \[sec:splitting\_ex\]. 
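The mapping of market quotes to the sparse mesh can be sketched as follows. The forward log-moneyness convention $y = \ln(K/(S_0 e^{r\tau}))$ is an assumption on our part; the text only states that strikes are transformed into log-moneyness with time to maturity in years, and the strike values below are illustrative.

```python
import math

def to_log_moneyness(strikes, s0, r, tau):
    """Map market strikes K to forward log-moneyness y = ln(K / (S0*e^{r*tau})).

    With r = 0 this reduces to plain log-moneyness ln(K/S0), so y = 0
    corresponds to the at-the-money strike K = S0.
    """
    fwd = s0 * math.exp(r * tau)
    return [math.log(k / fwd) for k in strikes]

# Illustrative strikes around the quoted spot S0 = 12814.79, with r = 0
ys = to_log_moneyness([12000.0, 12814.79, 13600.0], 12814.79, 0.0, 0.25)
```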
![Reconstructions from Dax options of local volatility surface (left), double exponential tail (center) and jump-size density function (right).[]{data-label="fig:dax1"}](dax_lvol_1 "fig:"){width="32.00000%"} ![Reconstructions from Dax options of local volatility surface (left), double exponential tail (center) and jump-size density function (right).[]{data-label="fig:dax1"}](dax_psi_1 "fig:"){width="32.00000%"} ![Reconstructions from Dax options of local volatility surface (left), double exponential tail (center) and jump-size density function (right).[]{data-label="fig:dax1"}](dax_nu_1 "fig:"){width="32.00000%"} ![Market (squares) and model (continuous line) implied volatility for DAX European call prices traded on 20-Jun-2017, and maturing on 21-Jun-2017, 18-Aug-2017, 15-Sep-2017, 15-Dec-2017, and 16-Mar-2018 (from left to right). []{data-label="fig:dax2"}](dax_ivolb_T1_1 "fig:"){width="32.00000%"} ![Market (squares) and model (continuous line) implied volatility for DAX European call prices traded on 20-Jun-2017, and maturing on 21-Jun-2017, 18-Aug-2017, 15-Sep-2017, 15-Dec-2017, and 16-Mar-2018 (from left to right). []{data-label="fig:dax2"}](dax_ivolb_T2_1 "fig:"){width="32.00000%"} ![Market (squares) and model (continuous line) implied volatility for DAX European call prices traded on 20-Jun-2017, and maturing on 21-Jun-2017, 18-Aug-2017, 15-Sep-2017, 15-Dec-2017, and 16-Mar-2018 (from left to right). []{data-label="fig:dax2"}](dax_ivolb_T3_1 "fig:"){width="32.00000%"} ![Market (squares) and model (continuous line) implied volatility for DAX European call prices traded on 20-Jun-2017, and maturing on 21-Jun-2017, 18-Aug-2017, 15-Sep-2017, 15-Dec-2017, and 16-Mar-2018 (from left to right). 
[]{data-label="fig:dax2"}](dax_ivolb_T4_1 "fig:"){width="32.00000%"} ![Market (squares) and model (continuous line) implied volatility for DAX European call prices traded on 20-Jun-2017, and maturing on 21-Jun-2017, 18-Aug-2017, 15-Sep-2017, 15-Dec-2017, and 16-Mar-2018 (from left to right). []{data-label="fig:dax2"}](dax_ivolb_T5_1 "fig:"){width="32.00000%"} Figure \[fig:dax1\] presents the calibrated local volatility surface, double exponential tail and jump-size density function. The corresponding implied volatilities of the market data and of the model can be found in Figure \[fig:dax2\]. As can be observed from these figures, the calibrated model adheres nicely to the implied volatility smile, especially close to the at-the-money strikes ($y=0$).

Conclusion {#sec:conclusion}
==========

In the present paper, we have explored the inverse problem of simultaneously calibrating the local volatility surface and the jump-size distribution from quoted European vanilla options when stock prices are modeled as jump-diffusion processes. This is a difficult task, since its complexity is higher than that of the calibration problem involving purely diffusive prices, as in the local volatility calibration studied by [@Cre2003a], [@Cre2003b], [@EggEng2005], [@AlbAscYanZub2015], and others. Tikhonov-type regularization combined with a splitting strategy was applied to solve this inverse problem. We provided theoretical results showing that this methodology is well founded and can be applied to the specific problem under consideration. Numerical examples illustrated the effectiveness of this technique and provided stable approximations to the true local volatility and jump-size distribution with synthetic and real data. R. Adams and J. Fournier. *Sobolev Spaces*. Elsevier, second edition, 2003. V. Albani and J. P. Zubelli. *Appl. Anal. Discrete Math.*, 8, 2014. [doi: ]{}[10.2298/AADM140811012A]{}. V. Albani, U. 
Ascher, X. Yang, and J. Zubelli. *Inverse Problems and Imaging*, 11 (5): 799–823, 2017. [doi: ]{}[10.3934/ipi.2017038]{}. URL <http://arxiv.org/abs/1512.07660>. V. Albani, U. Ascher, and J. Zubelli. *Journal of Computational Finance*, 21: 1–33, 2018. [doi: ]{}[10.21314/JCF.2018.345]{}. URL <http://arxiv.org/abs/1602.04372>. L. Andersen and J. Andreasen. *Review of Derivatives Research*, 4 (3): 231–262, 2000. [doi: ]{}[10.1023/A:1011354913068]{}. URL <http://link.springer.com/article/10.1023/A:1011354913068>. G. Barles and C. Imbert. *Ann. Inst. H. Poincar[é]{} - Anal. Non Lin[é]{}aire*, 25 (3): 567–585, 2008. [doi: ]{}[10.1016/j.anihpc.2007.02.007]{}. URL <http://archive.numdam.org/ARCHIVE/AIHPC/AIHPC_2008__25_3/AIHPC_2008__25_3_567_0/AIHPC_2008__25_3_567_0.pdf>. A. Bentata and R. Cont. *Finance Stoch*, 19: 617–651, 2015. [doi: ]{}[10.1007/s00780-015-0265-z]{}. URL <http://link.springer.com/article/10.1007/s00780-015-0265-z>. P. Carr and D. Madan. *Journal of Computational Finance*, 2 (4): 61–73, 1999. URL <http://portal.tugraz.at/portal/page/portal/Files/i5060/files/staff/mueller/FinanzSeminar2012/CarrMadan_OptionValuationUsingtheFastFourierTransform_1999.pdf>. I. Cioranescu. *Geometry of Banach spaces, duality mappings and nonlinear problems*, volume 62 of *Mathematics and its Applications*. Kluwer Academic Publishers Group, Dordrecht, 1990. R. Cont and P. Tankov. *[Financial Modelling with Jump Processes]{}*. Chapman and Hall, 2003. R. Cont and P. Tankov. *J. Comput. Finance*, 7 (3): 1–49, 2004. URL <https://hal.archives-ouvertes.fr/hal-00002694/>. R. Cont and P. Tankov. *SIAM J. Control Optim.*, 45 (1): 1–25, 2006. [doi: ]{}[10.1137/040616267]{}. URL <http://epubs.siam.org/doi/abs/10.1137/040616267>. R. Cont and E. Voltchkova. *SIAM J. Numer. Anal.*, 43 (4): 1596–1626, 2005. [doi: ]{}[10.1137/S0036142903436186]{}. URL <http://epubs.siam.org/doi/abs/10.1137/S0036142903436186>. R. Cont and E. Voltchkova. 
*Finance Stoch*, 9 (3): 299–325, 2005. [doi: ]{}[10.1007/s00780-005-0153-z]{}. URL <http://link.springer.com/article/10.1007/s00780-005-0153-z>. S. Cr[é]{}pey. *SIAM J. Math. Anal.*, 34: 1183–1206, 2003. [doi: ]{}[10.1137/S0036141001400202]{}. S. Cr[é]{}pey. *Inverse Problems*, 19 (1): 91–127, 2003. ISSN 0266-5611. [doi: ]{}[10.1137/S0036141001400202]{}. N. Dunford and J. T. Schwartz. *Linear Operators Part I: General Theory*. Interscience Publishers, 1958. B. Dupire. *Risk Magazine*, 7: 18–20, 1994. H. Egger and H. Engl. *Inverse Problems*, 21: 1027–1045, 2005. H. Engl, M. Hanke, and A. Neubauer. *[Regularization of [I]{}nverse [P]{}roblems]{}*, volume 375 of *[Mathematics and its Applications]{}*. Kluwer Academic Publishers Group, Dordrecht, 1996. M.-G. Garroni and J.-L. Menaldi. *[Second order elliptic integro-differential problems]{}*. CRC Press, 2002. J. Gatheral. *[[T]{}he [V]{}olatility [S]{}urface: [A]{} [P]{}ractitioner’s [G]{}uide]{}*. John Wiley & Sons, 2006. K. Giesecke, G. Teng, and Y. Wei. Numerical solution of jump-diffusion SDEs. 2017. R. Iorio and V. Iorio. *[Fourier [A]{}nalysis and [P]{}artial [D]{}ifferential [E]{}quations]{}*, volume 70 of *[Cambridge Studies in Advanced Mathematics]{}*. Cambridge University Press, 2001. S. Kindermann and P. Mayer. *Finance Stoch*, 15 (4): 685–724, 2011. [doi: ]{}[10.1007/s00780-011-0159-7]{}. URL <http://link.springer.com/article/10.1007/s00780-011-0159-7>. S. Kindermann, P. Mayer, H. Albrecher, and H. Engl. *J. Integral Equations Applications*, 20 (2): 161–200, 2008. [doi: ]{}[10.1216/JIE-2008-20-2-161]{}. URL <http://projecteuclid.org/euclid.jiea/1212765417>. O. Ladyzenskaja, V. Solonnikov, and N. Ural’ceva. *[Linear and Quasi-linear Equations of Parabolic Type]{}*. AMS, 1968. F. Margotti and A. Rieder. *Journal of Inverse and Ill-posed Problems*, 23 (4): 373–392, 2014. [doi: ]{}[10.1515/jiip-2014-0035]{}. E. Resmerita and R. Anderssen. *Math. Methods Appl. 
Sci.*, 30: 1527–1544, 2007. [doi: ]{}[10.1002/mma.855]{}. R. T. Rockafellar and R. J.-B. Wets. *Variational Analysis*. Springer, 2009. O. Scherzer, M. Grasmair, H. Grossauer, M. Haltmeier, and F. Lenzen. *[Variational [M]{}ethods in [I]{}maging]{}*, volume 167 of *[Applied Mathematical Sciences]{}*. Springer, New York, 2008. E. Somersalo and J. Kaipio. *[[S]{}tatistical and [C]{}omputational [I]{}nverse [P]{}roblems]{}*, volume 160 of *[Applied Mathematical Sciences]{}*. Springer, 2004. P. Tankov and E. Voltchkova. *Banques et March[é]{}s*, 2009. URL <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.543.6669&rep=rep1&type=pdf>. M. E. Taylor. *Partial Differential Equations I: Basic Theory*. Springer, second edition, 2011. [^1]: Federal University of Santa Catarina, 88.040-900 Florianopolis, Brazil, <v.albani@ufsc.br> [^2]: IMPA, Rio de Janeiro, RJ 22460-320, Brazil, <zubelli@impa.br>
--- abstract: 'To quantitatively assess the impact of an eV-mass sterile neutrino on the neutrinoless double-beta ($0\nu \beta \beta$) decays, we calculate the posterior probability distribution of the relevant effective neutrino mass $|m^\prime_{ee}|$ in the (3+1)$\nu$ mixing scenario, following the Bayesian statistical approach. The latest global-fit analysis of neutrino oscillation data, the cosmological bound on the sum of three active neutrino masses from [*Planck*]{}, and the constraints from current $0\nu\beta\beta$ decay experiments are taken into account in our calculations. Based on the resultant posterior distributions, we find that the average value of the effective neutrino mass is shifted from $\overline{|m^{}_{ee}|} = 3.37\times 10^{-3}~{\rm eV}$ (or $7.71\times 10^{-3}~{\rm eV}$) in the standard 3$\nu$ mixing scenario to $\overline{|m^{\prime}_{ee}|}=2.54\times 10^{-2}~{\rm eV}$ (or $2.56\times 10^{-2}~{\rm eV}$) in the (3+1)$\nu$ mixing scenario, with the logarithmically uniform prior on the lightest neutrino mass (or on the sum of three active neutrino masses). Therefore, a null signal from the future $0\nu\beta\beta$ decay experiment with a sensitivity to $|m^{}_{ee}| \approx \mathcal{O}(10^{-2}_{})~{\rm eV}$ will be able to set a very stringent constraint on the sterile neutrino mass and the active-sterile mixing angle.' --- [**Impact of an eV-mass sterile neutrino on the neutrinoless double-beta decays: a Bayesian analysis**]{} [**Guo-yuan Huang**]{} [^1], [^2]\ Introduction ============ Whether massive neutrinos are Majorana or Dirac particles is one of the most important problems in particle physics [@Majorana:1937vz; @Racah:1937qq; @Tanabashi:2018oca]. Quite a number of neutrinoless double-beta ($0\nu\beta\beta$) decay experiments are devoted to answering this question [@Furry:1939qr; @Rodejohann:2011mu; @Bilenky:2012qi; @Rodejohann:2012xd; @Bilenky:2014uka; @Pas:2015eia; @DellOro:2016tmg]. 
If massive neutrinos are Majorana particles and thus lepton number violation exists in nature, then the $0\nu\beta\beta$ decays $A(Z, N) \to A(Z+2, N-2) + 2e^-$ could take place in some even-even nuclei, i.e., nuclei for which both the proton number $Z$ and the neutron number $N$ of the isotope $A(Z, N)$ are even. Assuming the exchange of light Majorana neutrinos to be responsible for the $0\nu\beta\beta$ decays, one can find that the half-life of the relevant nuclear isotope is given by [@Rodejohann:2011mu] $$\begin{aligned} (T^{0\nu}_{1/2})^{-1} = G^{}_{0\nu}|\mathcal{M}^{}_{0\nu}|^2 \frac{|m^{}_{ee}|^2}{m^{2}_{e}}\;, \label{eq:halflife} \end{aligned}$$ where $G^{}_{0\nu}$ is the phase-space factor, $\mathcal{M}^{}_{0\nu}$ is the nuclear matrix element (NME), and $m^{}_{e}$ is the electron mass. In Eq. (\[eq:halflife\]), the effective neutrino mass $|m^{}_{ee}|$ collects the contributions from light Majorana neutrinos involved in the $0\nu\beta\beta$ decays. In the standard three neutrino ($3\nu$) mixing scenario, the effective neutrino mass is defined as $|m^{}_{ee}| \equiv |m^{}_{1} U^2_{e1}+m^{}_{2}U^2_{e2}+m^{}_{3}U^2_{e3}|$, where the absolute neutrino masses $m^{}_i$ and the lepton flavor mixing matrix elements $U^{}_{ei}$ (for $i = 1, 2, 3$) appear. When the conventional parametrization of the flavor mixing matrix $U$ is adopted [@Tanabashi:2018oca], i.e., $U^{}_{e1} = \cos \theta^{}_{13} \cos \theta^{}_{12} e^{{\rm i}\rho/2}$, $U^{}_{e2} = \cos \theta^{}_{13} \sin \theta^{}_{12}$ and $U^{}_{e3} = \sin \theta^{}_{13} e^{{\rm i}\sigma/2}$, we have $$\begin{aligned} m^{}_{ee} \equiv m^{}_{1} \cos^2\theta^{}_{13} \cos^2\theta^{}_{12} e^{{\rm i} \rho} + m^{}_{2} \cos^2\theta^{}_{13} \sin^2\theta^{}_{12} + m^{}_{3} \sin^2\theta^{}_{13} e^{{\rm i} \sigma} \;, \label{eq:mee} \end{aligned}$$ where $\{\theta^{}_{12}, \theta^{}_{13}\}$ are two of the three neutrino mixing angles, and $\{\rho, \sigma\}$ are the Majorana-type CP-violating phases. 
Note that $m^{}_2$ is nonzero no matter whether the normal neutrino mass ordering (NO) with $m^{}_1 < m^{}_2 < m^{}_3$ or the inverted neutrino mass ordering (IO) with $m^{}_3 < m^{}_1 < m^{}_2$ is considered. Therefore, such a parametrization is convenient for discussing the limiting case of $m^{}_1 \to 0$ (for NO) or $m^{}_3 \to 0$ (for IO), in which one of the two Majorana-type CP-violating phases disappears together with the lightest neutrino mass. However, if the eV-mass sterile neutrino indeed exists as a solution to the anomalies in the short-baseline neutrino experiments [@Giunti:2019aiy; @Aguilar:2001ty; @AguilarArevalo:2008rc; @Aguilar-Arevalo:2018gpe; @Giunti:2010zu; @Mention:2011rk; @Abdurashitov:2009tn; @Kaether:2010ag], it will contribute as well to the $0\nu \beta \beta$ decays. In this case, the effective neutrino mass is given by $|m^{\prime}_{ee}| \equiv |m^{}_{1} V^2_{e1} + m^{}_{2} V^2_{e2} + m^{}_{3} V^2_{e3} + m^{}_{4} V^2_{e4}|$ with $m^{}_{4}$ being the mass of the sterile neutrino and $V^{}_{ei}$ (for $i=1,2,3,4$) being the first-row elements of the mixing matrix in the (3+1)$\nu$ mixing scenario. Adopting the standard parametrization of the mixing matrix, one can express the effective neutrino mass as $$\begin{aligned} |m^{\prime}_{ee}| & \equiv & | m^{}_{ee} \cos^2\theta^{}_{14} + m^{}_{4}\sin^2\theta^{}_{14} e^{i \omega}|\;, \label{eq:meeprime} \end{aligned}$$ where $m^{}_{ee}$ takes the same form as in Eq. (\[eq:mee\]), $\theta^{}_{14}$ is the active-sterile neutrino mixing angle, and $\omega$ is the additional Majorana-type CP-violating phase. 
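As a numerical illustration of the expressions for $m^{}_{ee}$ and $|m^{\prime}_{ee}|$ above, the sketch below evaluates the effective mass in the NO case. The default oscillation parameters are the NO best-fit values quoted later in the text and the short-baseline best-fit values for the sterile sector; the function is illustrative, not part of the analysis code.

```python
import cmath
import math

def m_ee_prime(m1, rho, sigma, omega,
               dm2_sol=7.34e-5, dm2_atm=2.455e-3,
               s2_12=0.304, s2_13=0.0214,
               dm2_41=1.7, s2_14=0.019):
    """|m'_ee| in eV for normal ordering.

    Masses follow from m1 via dm2_sol = m2^2 - m1^2,
    dm2_atm = m3^2 - (m1^2 + m2^2)/2 and dm2_41 = m4^2 - m1^2;
    rho, sigma, omega are the Majorana-type CP phases.
    """
    m2 = math.sqrt(m1**2 + dm2_sol)
    m3 = math.sqrt(dm2_atm + (m1**2 + m2**2) / 2.0)
    m4 = math.sqrt(m1**2 + dm2_41)
    c2_13 = 1.0 - s2_13
    mee = (m1 * c2_13 * (1.0 - s2_12) * cmath.exp(1j * rho)
           + m2 * c2_13 * s2_12
           + m3 * s2_13 * cmath.exp(1j * sigma))
    return abs(mee * (1.0 - s2_14) + m4 * s2_14 * cmath.exp(1j * omega))
```

Varying $\omega$ between $0$ and $\pi$ at $m^{}_1 = 0$ shows that the sterile term shifts $|m^{\prime}_{ee}|$ to the few $\times 10^{-2}~{\rm eV}$ range, in line with the estimates below.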
Using the best-fit values $\Delta m^2_{41} \equiv m^2_4 - m^2_1 = 1.7~{\rm eV}^2$ and $\sin^2 \theta^{}_{14} = 0.019$ from the global-fit analysis of the short-baseline neutrino oscillation data [@Gariazzo:2017fdh; @Dentler:2018sju], one can find that the contribution from the sterile neutrino $|m^{}_4 \sin^2 \theta^{}_{14}| \approx 2.5\times 10^{-2}~{\rm eV}$ can be comparable to that from active neutrinos $|m^{}_{ee}| \lesssim 0.1~{\rm eV}$, which is constrained by the cosmological observations [@Aghanim:2018eyx] and current $0\nu\beta\beta$ decay experiments [@Albert:2014awa; @KamLAND-Zen:2016pfg; @Agostini:2017iyd; @Alduino:2017ehq; @Aalseth:2017btx; @Agostini:2018tnm]. With a ton-scale target mass, the future $0\nu\beta\beta$ experiments will be able to probe $|m^{}_{ee}|$ to the $\mathcal{O}(10^{-2})~{\rm eV}$ level [@Agostini:2017jim], covering the whole range of $|m^{}_{ee}|$ in the IO case. However, in the NO case, the effective neutrino mass can be as small as $|m^{}_{ee}| \approx (1.6 \cdots 3.6) \times 10^{-3} ~{\rm eV}$ when the lightest neutrino mass $m^{}_{1}$ is vanishing, or can even vanish in a contrived region of parameter space where the contributions from different neutrino mass eigenstates cancel [@Xing:2003jf; @Xing:2015zha; @Xing:2016ymd]. Moreover, since the latest global-fit analysis of neutrino oscillation data [@deSalas:2017kay; @Capozzi:2018ubv; @Esteban:2018azc] does show a preference for the NO at the $3\sigma$ confidence level (C.L.), it is worrisome that $|m^{}_{ee}|$ may be out of the reach of the next generation $0\nu \beta\beta$ decay experiments. To quantitatively assess how likely a small $|m^{}_{ee}|$ is, the authors of Refs. [@Agostini:2017jim; @Caldwell:2017mqu] have carried out a Bayesian analysis and obtained the posterior distribution of $|m^{}_{ee}|$, given the neutrino oscillation data, current experimental upper bounds on $|m^{}_{ee}|$ and the cosmological bound on the sum of three neutrino masses. 
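The orders of magnitude quoted above can be checked with a short back-of-the-envelope computation; the oscillation parameters used here are the NO best-fit values listed in Section 2, so the resulting band differs slightly from the quoted $(1.6 \cdots 3.6)\times 10^{-3}~{\rm eV}$.

```python
import math

# Sterile-neutrino term |m4 sin^2(theta14)| at the short-baseline best fit
m4 = math.sqrt(1.7)                      # eV, from dm2_41 = 1.7 eV^2 at m1 ~ 0
sterile = m4 * 0.019                     # ~ 2.5e-2 eV

# Extremes of |m_ee| at m1 = 0 (NO): the two surviving terms add or
# cancel depending on the Majorana phase sigma
m2 = math.sqrt(7.34e-5)                  # from dm2_sol
m3 = math.sqrt(2.455e-3 + 7.34e-5 / 2)   # from dm2_atm = m3^2 - (m1^2+m2^2)/2
a = m2 * (1.0 - 0.0214) * 0.304          # m2 cos^2(theta13) sin^2(theta12)
b = m3 * 0.0214                          # m3 sin^2(theta13)
band = (abs(a - b), a + b)               # roughly (1.5, 3.6) x 10^-3 eV
```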
For the earlier relevant works, see Refs. [@Benato:2015via; @Zhang:2015kaa; @Ge:2016tfx]. Although the impact of an eV-mass sterile neutrino on the effective neutrino mass $|m^\prime_{ee}|$ has been considered in Refs.  [@Goswami:2005ng; @Goswami:2007kv; @Barry:2011wb; @Li:2011ss; @Girardi:2013zra; @Guzowski:2015saa; @Giunti:2015kza; @Ge:2017erv; @Liu:2017ago], a statistical assessment is still lacking. Therefore, we are motivated to perform a Bayesian analysis of $|m^\prime_{ee}|$ in this work by using the global-fit results of neutrino oscillation data and other available information on the absolute neutrino masses. The rest of the present paper is organized as follows. In Section 2, we describe the necessary information for the Bayesian analysis. The prior information can be extracted from the global-fit analysis of neutrino oscillation data [@Capozzi:2018ubv; @Gariazzo:2017fdh], the cosmological observations [@Aghanim:2018eyx] and the existing $0\nu\beta\beta$ decay experiments [@Albert:2014awa; @KamLAND-Zen:2016pfg; @Agostini:2017iyd; @Alduino:2017ehq]. Then, the posterior distribution of the standard effective neutrino mass $|m^{}_{ee}|$ and that of $|m^\prime_{ee}|$ are presented in Section 3. Two-dimensional posterior probability densities in the $|m^{\prime}_{ee}|$-$m^{}_{\rm L}$ plane and those in the $|m^{\prime}_{ee}|$-$\rho$ plane have also been given, where $m^{}_{\rm L}$ denotes the lightest neutrino mass. Finally, we make some concluding remarks in Section 4. The Bayesian Analysis ===================== The Bayesian analysis provides us with a reasonable statistical framework to update the probability distribution of model parameters in light of the new experimental data. 
The posterior distribution of model parameters can be obtained according to Bayes’ theorem [@Skilling:book] $$\begin{aligned} \label{eq:Bayesian} P(\Theta,\mathcal{H}^{}_{i}|\mathcal{D}) = \frac{\mathcal{L}(\mathcal{D}|\Theta, \mathcal{H}^{}_{i})\mathcal{\pi}(\Theta,\mathcal{H}^{}_{i})}{\sum^{}_{i}\mathcal{Z}^{}_{i}}\;, \end{aligned}$$ where $\Theta$ denotes the set of model parameters, $\mathcal{D}$ stands for the available experimental data, and $\{\mathcal{H}^{}_{i}\}$ are the hypotheses or models with $i$ being the model index. Here $\mathcal{L}(\mathcal{D}|\Theta, \mathcal{H}^{}_{i})$ is the likelihood of the data $\mathcal{D}$, assuming the model $\mathcal{H}^{}_{i}$ with the parameters $\Theta$, $\mathcal{\pi}(\Theta,\mathcal{H}^{}_{i})$ is the prior distribution of $\Theta$, and $\mathcal{Z}^{}_{i}$ is the evidence. The evidence $\mathcal{Z}^{}_{i}$ is given by $$\begin{aligned} \label{eq:ZEvidence} \mathcal{Z}^{}_{i} = \int \mathcal{L}(\mathcal{D}|\Theta, \mathcal{H}^{}_{i})\mathcal{\pi}(\Theta,\mathcal{H}^{}_{i}) d^{N}\Theta\;, \end{aligned}$$ which measures the compatibility of the model with the data, and $N$ is just the dimension of the parameter space. The hypotheses relevant for our analysis are $\mathcal{H}^{}_{\rm NO}$ for the NO and $\mathcal{H}^{}_{\rm IO}$ for the IO in the $3\nu$ or (3+1)$\nu$ mixing scenario. 
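The evidence integral in Eq. (\[eq:ZEvidence\]) can be illustrated on a toy one-dimensional model (the actual multi-dimensional integral is evaluated with MultiNest later in the paper); the Gaussian likelihood and flat prior below are purely illustrative.

```python
import math

def evidence(loglike, prior, grid):
    """Evidence Z = int L(D|theta) pi(theta) dtheta on a 1-D grid,
    computed with the trapezoidal rule. The posterior is then
    L * pi / Z, as in Bayes' theorem.
    """
    vals = [math.exp(loglike(t)) * prior(t) for t in grid]
    z = 0.0
    for i in range(len(grid) - 1):
        z += 0.5 * (vals[i] + vals[i + 1]) * (grid[i + 1] - grid[i])
    return z

# Toy model: Gaussian likelihood centered at 0.5 (width 0.1), flat prior on [0, 1]
grid = [i / 1000.0 for i in range(1001)]
z = evidence(lambda t: -0.5 * ((t - 0.5) / 0.1) ** 2, lambda t: 1.0, grid)
# Essentially all of the Gaussian mass lies inside [0, 1], so
# z ~ 0.1 * sqrt(2*pi) ~ 0.2507
```

Comparing the evidences of two hypotheses on the same data yields the Bayes factor used later for the mass-ordering discrimination.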
The model parameters in the (3+1)$\nu$ mixing scenario include: (i) the involved neutrino oscillation parameters $\{ \sin^2\theta^{}_{13}, \sin^2\theta^{}_{12}, \sin^2\theta^{}_{14},\Delta m^{2}_{\rm sol},\Delta m^{2}_{\rm atm},\Delta m^{2}_{41} \}$, where $\Delta m^2_{\rm sol} \equiv m^2_2 - m^2_1$ and $\Delta m^2_{\rm atm} \equiv m^2_3 - (m^2_2 + m^2_1)/2$ are two mass-squared differences of ordinary neutrinos; (ii) the lightest neutrino mass $m^{}_{\rm L}$, which is $m^{}_{1}$ for $\mathcal{H}^{}_{\rm NO}$ and $m^{}_{3}$ for $\mathcal{H}^{}_{\rm IO}$; (iii) the Majorana-type CP-violating phases $\{\rho,\sigma,\omega\}$; (iv) the phase-space factor and the nuclear matrix element $\{ G^{}_{0\nu}, |\mathcal{M}^{}_{0\nu}| \}$ for the $0\nu\beta\beta$ decays. The overall likelihood function can be constructed as $\mathcal{L} = \mathcal{L}^{}_{\rm 3\nu} \times \mathcal{L}^{}_{\rm cosmo} \times \mathcal{L}^{}_{\rm 0\nu\beta\beta} \times \mathcal{L}^{}_{\rm sterile}$, and the details of the individual likelihood functions are summarized as follows. - $\mathcal{L}^{}_{\rm 3\nu}$: the likelihood function of the $3\nu$ mixing parameters $\{ \sin^2\theta^{}_{13}, \sin^2\theta^{}_{12}, \Delta m^{2}_{\rm sol}, \Delta m^{2}_{\rm atm} \}$. Given the $\Delta \chi^2$ function from the global-fit analysis in Ref. [@Capozzi:2018ubv], we can fix the likelihood function $\mathcal{L}^{}_{\rm 3\nu} = \exp(-\Delta \chi^2/2)$, where $\Delta \chi^2$ is defined as $$\begin{aligned} \Delta \chi^2 \equiv \sum^{}_{i} \frac{(\Theta^{}_{i}-\Theta^{\rm bf}_{i})^2}{\sigma^{2}_{i}}\;,\end{aligned}$$ with $\Theta^{}_{i}$ running over $\{ \sin^2\theta^{}_{13}, \sin^2 \theta^{}_{12}, \Delta m^{2}_{\rm sol}, \Delta m^{2}_{\rm atm} \}$, $\Theta^{\rm bf}_{i}$ the corresponding best-fit value from the global analysis, and $\sigma^{}_{i}$ the symmetrized $1\sigma$ error. See Table 1 of Ref. [@Capozzi:2018ubv] for more details about the global-fit results of neutrino oscillation data. 
To be explicit, we list the best-fit values and the corresponding symmetrized $1\sigma$ errors below: $$\begin{aligned} && \sin^2\theta^{}_{12} = (3.04\pm 0.14) \times 10^{-1} \; , \quad \Delta m^2_{\rm sol} = (7.34 \pm 0.16) \times 10^{-5} ~{\rm eV}^2 \; , \nonumber \\ && \sin^2\theta^{}_{13} = (2.14 \pm 0.08) \times 10^{-2}\; , ~~\quad \Delta m^2_{\rm atm} = (2.455 \pm 0.034) \times 10^{-3} ~{\rm eV}^2 \; ,\hspace{0.6cm} \end{aligned}$$ for $\mathcal{H}^{}_{\rm NO}$; and $$\begin{aligned} && \sin^2\theta^{}_{12} = (3.03\pm 0.14) \times 10^{-1} \; , \quad \Delta m^2_{\rm sol} = (7.34 \pm 0.16) \times 10^{-5} ~{\rm eV}^2 \; , \nonumber \\ && \sin^2\theta^{}_{13} = (2.18 \pm 0.08)\times 10^{-2} \; , \quad \Delta m^2_{\rm atm} = (-2.441 \pm 0.034) \times 10^{-3} ~{\rm eV}^2 \; ,\hspace{0.6cm} \end{aligned}$$ for $\mathcal{H}^{}_{\rm IO}$. The latest neutrino oscillation data favor the NO over the IO at the $3\sigma$ level, i.e., the difference between the minima of $\chi^2$ in these two cases is $\Delta \chi^2_{\rm min} \equiv \chi^{\rm IO}_{\rm min} - \chi^{\rm NO}_{\rm min} \approx 9$. The preference for the NO arises mainly from two different data sets. First, the excess of $\nu^{}_e$-like events in the multi-GeV energy range in Super-Kamiokande atmospheric neutrino data can be accommodated by the resonant enhancement of the oscillation probability in the $\nu^{}_\mu \to \nu^{}_e$ channel, leading to $\Delta \chi^2_{\rm min} \approx 4$. Second, the running long-baseline accelerator experiments T2K and NO$\nu$A prefer the value of $\theta^{}_{13}$ that is slightly larger than the precisely measured value from reactor neutrino experiments. Such a tension between accelerator and reactor neutrino experiments will be relieved in the NO case, contributing another $\Delta \chi^2_{\rm min} \approx 4$ to the mass ordering discrimination. To be conservative, we will take $\Delta\chi^2_{\rm min} = 4$ as the preference for the NO over the IO from neutrino oscillation data. 
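With independent Gaussian pulls, $\mathcal{L}^{}_{3\nu} = \exp(-\Delta\chi^2/2)$ reduces to a few lines of code; the sketch below uses the NO best-fit values and symmetrized errors listed above, with parameter names chosen for illustration.

```python
# NO best-fit values and symmetrized 1-sigma errors (Section 2)
BF = {"s2_12": (0.304, 0.014), "s2_13": (0.0214, 0.0008),
      "dm2_sol": (7.34e-5, 0.16e-5), "dm2_atm": (2.455e-3, 0.034e-3)}

def loglike_3nu(theta):
    """log L_3nu = -Delta chi^2 / 2, with Delta chi^2 the sum of
    independent Gaussian pulls (theta_i - theta_bf)^2 / sigma_i^2,
    approximating the global-fit chi^2 profile."""
    chi2 = sum(((theta[k] - bf) / err) ** 2 for k, (bf, err) in BF.items())
    return -0.5 * chi2

best = {k: bf for k, (bf, _) in BF.items()}   # log-likelihood 0 at the best fit
```

Shifting any single parameter by its $1\sigma$ error lowers the log-likelihood by $1/2$, which is the expected behavior of a Gaussian pull.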
![The likelihood function $\mathcal{L}^{}_{\rm cosmo}$ for the sum of three neutrino masses $\Sigma \equiv m^{}_1 + m^{}_2 + m^{}_3$ from cosmological observations, which has been derived by combining the $Planck ~{\rm TT}, {\rm TE}, {\rm EE} + {\rm lowE} + {\rm lensing} + {\rm BAO}$ data sets [@Aghanim:2018eyx].[]{data-label="fig:1"}](lh_sum.pdf){width="48.00000%"} - $\mathcal{L}^{}_{\rm cosmo}$: the likelihood function for the cosmological observations on the sum of three neutrino masses $\Sigma \equiv m^{}_{1}+m^{}_{2}+m^{}_{3}$. After combining several different sets of cosmological data ($Planck ~{\rm TT}, {\rm TE}, {\rm EE} + {\rm lowE} + {\rm lensing} + {\rm BAO}$), the [*Planck*]{} Collaboration has recently updated the upper limit on the sum of neutrino masses as $\Sigma < 0.12~{\rm eV}$ at the $95\%$ C.L. [@Aghanim:2018eyx]. We obtain the likelihood information by making use of the Markov chain file available from the Planck Legacy Archive (PLA) [^3]. The likelihood function of $\Sigma$ is produced and shown in Fig. \[fig:1\] by marginalizing over the other cosmological parameters. Although the sampling file given by PLA has assumed a degenerate mass spectrum of neutrinos, a more solid analysis with the realistic neutrino mass spectrum should not change the result much [@Hannestad:2016fog]. For this reason, the likelihood shown in Fig.  \[fig:1\] will be used in the following discussions. - $\mathcal{L}^{}_{\rm 0\nu\beta\beta}$: the likelihood function derived from the experimental constraints on the effective neutrino mass $|m^{}_{ee}|$ or $|m^{\prime}_{ee}|$ due to the existing searches for $0\nu\beta\beta$ decays. For simplicity, we implement the likelihood function available from Refs. [@Caldwell:2017mqu; @Alduino:2017ehq] in our analysis. 
Although both $\mathcal{L}^{}_{\rm 0\nu\beta\beta}$ and $\mathcal{L}^{}_{\rm cosmo}$ contain the information about the absolute scale of neutrino masses, the constraint on $|m^{}_{ee}|$ from the $0\nu\beta\beta$ decays suffers from a large theoretical uncertainty in the prediction for the NME. For instance, the tightest bound comes from the KamLAND-Zen experiment [@KamLAND-Zen:2016pfg], namely, $|m^{}_{ee}| \lesssim (61\cdots 165)~{\rm meV}$. Given further uncertainties from the mixing parameters and the unknown Majorana CP-violating phases, the $0\nu\beta\beta$ decays are not so informative about the absolute scale of neutrino masses when compared to the cosmological observations. - $\mathcal{L}^{}_{\rm sterile}$: the likelihood function encoding the global-fit analysis of sterile neutrino mass and mixing parameters $\{ \theta^{}_{14}, \Delta m^{2}_{41}\}$. In practice, we determine the likelihood function as $\mathcal{L}^{}_{\rm sterile} = \exp[ -\Delta \chi^2_{\rm sterile}(\theta^{}_{14}, \Delta m^{2}_{41}) /2]$ by using the $\Delta \chi^2$ distribution in Fig. 9 of Ref. [@Gariazzo:2017fdh]. The result of the so-called pragmatic 3+1 global fit “PrGlo17” will be utilized [@Gariazzo:2017fdh], where the tension between appearance and disappearance oscillation data can be somewhat relaxed by ignoring the excess of low-energy ${\nu}^{}_{e}$-like events observed in the MiniBooNE experiment. After having the likelihood functions constructed from various experimental observations, we need to make clear the prior probability distributions of the model parameters, which reflect our knowledge about them prior to any experimental data. First, neutrino mass-squared differences and mixing angles $\{ \sin^2\theta^{}_{13}, \sin^2\theta^{}_{12}, \sin^2\theta^{}_{14}, \Delta m^{2}_{\rm sol}, \Delta m^{2}_{\rm atm}, \Delta m^{2}_{41} \}$ are assumed to be uniformly distributed in their allowed ranges that are wide enough to cover their global fit results. 
Since the oscillation data are rather informative, different choices of prior distributions of these parameters do not have much impact on the final posterior distributions. Second, the Majorana CP-violating phases are completely unknown, so it is reasonable to adopt the flat priors in the range of $[0\cdots 2\pi)$. In addition, we have to mention that the prior distributions for the following relevant parameters are by no means unique but will be incorporated into our calculations for practical purposes. - As indicated in Eq. (\[eq:halflife\]), the phase-space factor $G^{}_{0\nu}$ and the NME $|\mathcal{M}^{}_{0\nu}|$ are needed when we try to translate the experimental constraint on the half-life into that on the effective neutrino mass. The phase-space factors for different nuclear isotopes have been computed in Refs. [@Rodejohann:2011mu; @Suhonen:1998ck; @Kotila:2012zza], and we use the central values from Ref. [@Kotila:2012zza], e.g., $G^{}_{0\nu}({}^{76}{\rm Ge}) = 6.15 \times 10^{-15}~{\rm yr}^{-1}$, $G^{}_{0\nu}({}^{130}{\rm Te}) = 3.70 \times 10^{-14}~{\rm yr}^{-1}$ and $G^{}_{0\nu}({}^{136}{\rm Xe}) = 3.79 \times 10^{-14}~{\rm yr}^{-1}$, which have been obtained with the axial vector coupling constant $g^{}_{\rm A} = 1.27$. We assume that $G^{}_{0\nu}$ can be described by the Gaussian distribution with the aforementioned central value and a relative error of $7\%$. On the other hand, the NME for a specific nuclear isotope encoding the information about the nuclear structure has been theoretically calculated in a variety of nuclear models. The differences among these calculations can be treated as the theoretical uncertainty. 
We define this uncertainty as $\sigma^{}_{\rm NME} \equiv \big[\sum^{}_{i}(|\mathcal{M}^{i}_{0\nu}|-\overline{|\mathcal{M}^{}_{0\nu}|})^2/n^{}_{\rm NME}\big]^{1/2}$, where $|\mathcal{M}^{i}_{0\nu}|$ is the NME value of the $i$th model, $\overline{|\mathcal{M}^{}_{0\nu}|}$ is the averaged NME value of all models, and $n^{}_{\rm NME}$ is the total number of models. Using the tabulated NME values in Ref. [@Guzowski:2015saa], we find that $\overline{|\mathcal{M}^{}_{0\nu}|}({}^{76}{\rm Ge},{}^{130}{\rm Te},{}^{136}{\rm Xe}) = (4.88,3.94,2.73) $ and $\sigma^{}_{\rm NME}({}^{76}{\rm Ge},{}^{130}{\rm Te},{}^{136}{\rm Xe}) = (1.14,0.90,0.80)$. Then the Gaussian distribution with the central value $\overline{|\mathcal{M}^{}_{0\nu}|}$ and the standard deviation $\sigma^{}_{\rm NME}$ is assumed for each nuclear isotope. - For the prior of the lightest neutrino mass $m^{}_{\rm L}$, a more careful study should be performed. Four kinds of prior distributions for $m^{}_{\rm L}$ are usually considered: (i) a logarithmic prior on $m^{}_{\rm L}$ with an adjustable lower cutoff that we choose to be $10^{-4}~{\rm eV}$; (ii) a logarithmic prior on $\Sigma$ with a natural lower cutoff at $0.06~{\rm eV}$ for NO or at $0.1~{\rm eV}$ for IO, as required by neutrino oscillation experiments; (iii) a flat prior on $m^{}_{\rm L}$; (iv) a flat prior on $\Sigma$. The prior probability distributions have been plotted with respect to $\log_{10}(m^{}_{\rm L}/{\rm eV})$ in the left panel of Fig. \[fig:2\], where one can see that the flat priors on $m^{}_{\rm L}$ (gray solid curve) and $\Sigma$ (gray dashed curve) lead to nearly the same distribution. After incorporating the experimental limits from [*Planck*]{} 2018 and the $0\nu\beta\beta$ decays, as shown in the right panel of Fig. \[fig:2\], we observe that the logarithmic prior on $m^{}_{\rm L}$ (red solid curve) gives rise to a posterior distribution that is very different from those in the other scenarios. 
This is because a large weight has been given to very small neutrino masses in the former case. In the following discussions, we focus only on two different prior distributions, i.e., the logarithmic prior on $m^{}_{\rm L}$ and the logarithmic prior on $\Sigma$, both of which are scale invariant. Since the posterior distribution of $m^{}_{\rm L}$ with logarithmic prior on $\Sigma$ is very similar to those with two flat priors, the posterior distribution of the effective neutrino mass in the former case should also be roughly applicable to those in the latter two cases. Finally, we make some comments on the current experimental hint on neutrino mass ordering by combining the data sets of neutrino oscillation experiments, cosmological observations and the $0\nu\beta\beta$ decays, for which the likelihood functions are given by $\mathcal{L}^{}_{\rm 3\nu}$, $\mathcal{L}^{}_{\rm cosmo}$ and $\mathcal{L}^{}_{\rm 0\nu\beta\beta}$, respectively. The preference odds for NO over IO can be represented by the Bayes factor, i.e., $\mathcal{B} \equiv \mathcal{Z}^{}_{\rm NO}/\mathcal{Z}^{}_{\rm IO}$. With the help of Eq. (\[eq:ZEvidence\]), one can calculate the evidences for NO and IO and thus their ratio. The dependence of $\mathcal{B}$ on the choice of the $m^{}_{\rm L}$ prior distribution is found to be very weak. Given identical prior information on both mass orderings, we consider only the cosmological observations $\mathcal{L}^{}_{\rm cosmo}$ and obtain the logarithm of the Bayes factor as $\log(\mathcal{B}^{}_{\rm cosmo}) \approx 0.85$ [^4], corresponding to ${\cal B}^{}_{\rm cosmo} \approx 2.34$, which is in concordance with the results from Refs. [@Hannestad:2016fog; @Gerbino:2016ehw; @Vagnozzi:2017ovm; @Capozzi:2017ipn]. If only the $0\nu\beta\beta$ decay experiments are considered, then we get $\log(\mathcal{B}^{}_{ 0\nu\beta\beta}) \approx 0.2$. 
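Since the data sets are independent, their log-Bayes factors simply add. The arithmetic behind the figures quoted in the surrounding text can be cross-checked with a short sketch (using only the rounded values quoted here, so the combined numbers agree to within rounding):

```python
import math

# Log-Bayes factors (NO over IO) quoted for the individual data sets.
log_b_cosmo = 0.85   # Planck 2018 cosmological data
log_b_bb    = 0.2    # current 0vbb-decay limits
log_b_3nu   = 2.0    # conservative oscillation choice, Delta chi^2 ~ 4

# The Bayes factor itself for the cosmological data alone:
b_cosmo = math.exp(log_b_cosmo)        # ~2.34, as quoted

# Combining all three independent data sets (log-factors add):
b_tot = math.exp(log_b_cosmo + log_b_bb + log_b_3nu)   # ~21 with these
                                                       # rounded inputs

# Replacing the conservative oscillation preference by the stronger
# 3-sigma one (B_3nu ~ 90):
b_tot_strong = math.exp(log_b_cosmo + log_b_bb) * 90.0  # ~260 with these
                                                        # rounded inputs
```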
A combination of the cosmological observations and $0\nu\beta\beta$ decay data leads to $\log(\mathcal{B}^{}_{ {\rm cosmo} + 0\nu\beta\beta}) \approx 1.1$. Regarding the three-flavor neutrino oscillation data, if we take the conservative choice of $\Delta \chi^2_{\rm min} \approx 4$ for two neutrino mass orderings, which has been used to construct ${\cal L}^{}_{3\nu}$, the logarithm of the Bayes factor turns out to be $\log(\mathcal{B}^{}_{3\nu}) = 2$. Combining ${\cal L}^{}_{\rm cosmo}$, ${\cal L}^{}_{0\nu\beta\beta}$ and ${\cal L}^{}_{3\nu}$ together, one can find the total Bayes factor $\mathcal{B}^{}_{\rm tot} \approx 22$. As we have mentioned before, the global-fit analysis of all the neutrino oscillation data gives rise to a $3\sigma$ preference for the NO, corresponding to ${\cal B}^{}_{3\nu} \approx 90$. If this stronger preference for the NO is adopted instead of the conservative one, the total Bayes factor from all the data sets becomes $\mathcal{B}^{}_{\rm tot} \approx 270$, showing strong evidence for the NO according to the Jeffreys scale [@Trotta:2008qt]. The addition of $\mathcal{L}^{}_{\rm sterile}$ to the analysis does not alter the above conclusions, since the short-baseline neutrino oscillation experiments are insensitive to the mass ordering of three ordinary neutrinos. Posterior Distributions ======================= After specifying the likelihood functions for the relevant experimental data and fixing the prior probability distributions of model parameters in the previous section, we are ready to compute the posterior distributions of the derived parameters $|m^{}_{ee}|$ and $|m^{\prime}_{ee}|$ by using Eq. (\[eq:Bayesian\]). In fact, the posterior probability distribution in Eq. (\[eq:Bayesian\]) for the model parameters is calculated via Monte Carlo sampling, which has been done with the help of the MultiNest routine [@Feroz:2007kg; @Feroz:2008xx; @Feroz:2013hea]. In Fig. 
\[fig:3\], we present the posterior sampling distributions in the $|m^{}_{ee}|$-$m^{}_{\rm L}$ plane for the standard 3$\nu$ mixing scenario (the upper row) or in the $|m^{\prime}_{ee}|$-$m^{}_{\rm L}$ plane for the (3+1)$\nu$ mixing scenario (the lower row). The scattered points stand for the sampling data, and one can read off the corresponding posterior probabilities from their colors. Now we explain how to practically do so. For a given point, one can first look at the color legend and find the value of its posterior density, which is denoted as $p$. Then, the posterior probability $P$ can be calculated by definition as the product of $p$ and the area $\mathcal{A}$ of a small region, in which the point is located. For instance, take a small square in the $|m^{}_{ee}|$-$m^{}_{\rm L}$ plane, and its area is thus given by $\mathrm{d}\mathcal{A} \equiv \mathrm{d} \left[\log^{}_{10}(|m^{}_{ee}|/{\rm eV})\right] \times \mathrm{d}\left[\log^{}_{10}(m^{}_{\rm L}/{\rm eV})\right]$. Notice that the total posterior probability is normalized to one for each plot. Several comments on the numerical results in Fig. \[fig:3\] are helpful. 1. In the upper-left panel, the posterior distribution in the $|m^{}_{ee}|$-$m^{}_{\rm L}$ plane is shown for the standard 3$\nu$ mixing scenario, where the logarithmic prior on $m^{}_{\rm L}$ is assumed. The results for the logarithmic prior on $\Sigma$ are plotted in the upper-right panel. In both panels, the thin dot-dashed (or dashed) curves indicate the boundaries of the effective neutrino mass $|m^{}_{ee}|$ in the IO (or NO) case, where the best-fit values of neutrino mixing angles and mass-squared differences are input. Moreover, the current limit (taken from Ref. [@KamLAND-Zen:2016pfg] for the tightest one) on or the future sensitivity (of a ton-scale $0\nu\beta\beta$ decay experiment like nEXO [@Gerbino:2016ehw]) to $|m^{}_{ee}|$ is represented by three horizontal dotted lines. 
The wide range between the upper and lower lines can be ascribed to the NME uncertainty. Comparing the distributions in the left and right panels, one can observe that a larger weight has been given to smaller values of $m^{}_{\rm L}$ under the assumption of a logarithmic prior on $m^{}_{\rm L}$, as already emphasized in the previous section. 2. A pressing question is how likely it is that $|m^{}_{ee}|$ is vanishingly small in the NO case, which has been quantitatively addressed in Refs. [@Agostini:2017jim; @Caldwell:2017mqu]. In order to draw a prior-independent conclusion from the posterior distributions, we treat the scenarios with different values of $m^{}_{\rm L}$ as different models. For each fixed $m^{}_{\rm L}$, the posterior distribution of $|m^{}_{ee}|$ can be derived with the help of the likelihood $\mathcal{L}^{}_{ 3\nu}$. Then, one can calculate the probability for the true value of the effective neutrino mass to be above a certain $|m^{}_{ee}|$. The probability contours are plotted as the blue curves in Fig. \[fig:3\], where several representative values, i.e., $68\%$, $95\%$, $99\%$ and $99.7\%$, are shown. It is evident that the probability for $|m^{}_{ee}|$ to be vanishingly small, e.g., $|m^{}_{ee}| < 10^{-4}~{\rm eV}$, is tiny (less than $0.3\%$). This conclusion is independent of the priors on $m^{}_{\rm L}$, as it should be. In particular, the probability for $|m^{}_{ee}| > 10^{-3}~{\rm eV}$ is larger than $95\%$ even when $m^{}_{\rm L}$ is located in the regime where the destructive cancellation caused by the unknown Majorana CP phases occurs. 3. In the two panels in the lower row of Fig. \[fig:3\], the posterior probability distributions in the (3+1)$\nu$ mixing scenario have been presented, where the notations and conventions for the curves are the same as those in the plots in the upper row. It is straightforward to observe that the presence of the eV-mass sterile neutrino shifts the effective neutrino mass to higher values. 
As the future ton-scale $0\nu\beta\beta$ decay experiments will be able to explore the effective neutrino mass to the level of $\mathcal{O}(10^{-2})~{\rm eV}$, the inclusion of the sterile neutrino can raise the effective mass to a level within the reach of the next-generation experiments even for a very small $m^{}_{\rm L}$. If the sensitivity at the $\mathcal{O}(10^{-2})~{\rm eV}$ level is achieved, more than $99.7\%$ of the region of $|m^{\prime}_{ee}|$ can be covered for $m^{}_{\rm L} \lesssim 10^{-2}~{\rm eV}$. When $m^{}_{\rm L} \gtrsim 10^{-2}~{\rm eV}$, the chance for $|m^{\prime}_{ee}|$ to fall into the cancellation region increases. However, even in this case, at least $95\%$ of the $|m^\prime_{ee}|$ range can be probed. Therefore, in the statistical sense, it is quite promising to check the (3+1)$\nu$ mixing scenario with an eV-mass sterile neutrino in the future $0\nu\beta\beta$ decay experiments. In Fig. \[fig:4\], we present the posterior distributions in the $|m^{}_{ee}|$-$\rho$ (the upper row) or $|m^\prime_{ee}|$-$\rho$ plane (the lower row) by marginalizing over the lightest neutrino mass $m^{}_{\rm L}$ instead of the Majorana CP phase $\rho$. The notations and conventions are the same as those in Fig. \[fig:3\]. The area in the $|m^{}_{ee}|$-$\rho$ plane is defined as $\mathrm{d}\mathcal{A} \equiv \mathrm{d} \left[\log^{}_{10}(|m^{}_{ee}|/{\rm eV})\right] \times \mathrm{d}\left[\rho/{\rm rad}\right]$ in the $3\nu$ mixing scenario, and likewise for the (3+1)$\nu$ mixing scenario. Now the blue solid curves in Fig. \[fig:4\] stand for the contours of the probability for the effective neutrino mass to be above a certain $|m^{}_{ee}|$ or $|m^\prime_{ee}|$. These contours become dependent on the $m^{}_{\rm L}$ priors, because the prior information on $m^{}_{\rm L}$ has been integrated into the posterior distribution. 
It is worth noticing that the dependence of posterior distributions on $\rho$ is very weak for the (3+1)$\nu$ mixing scenario. In the $3\nu$ mixing scenario, the fine structure around $\rho \approx \pi$ due to the cancellation can be observed. Therefore, it seems difficult to determine the Majorana CP phase $\rho$ if $|m^{}_{ee}|$ takes a value far away from the cancellation region. As the effective neutrino mass can be directly extracted from the experimental data on $0\nu\beta\beta$ decays, it is interesting to see the posterior distribution of $|m^{}_{ee}|$ or $|m^\prime_{ee}|$, which can be obtained by marginalizing over both $m^{}_{\rm L}$ and $\rho$. The final results can be found in Fig. \[fig:5\]. For the standard $3\nu$ case in the left panel, if we choose the logarithmic prior on $m^{}_{\rm L}$ for NO (red solid curve), a large fraction (about $92\%$) of the probable range of $|m^{}_{ee}|$ is unreachable for the future ton-scale $0\nu \beta\beta$ decay experiments. With a logarithmic prior on $\Sigma$ (blue solid curve), the next-generation experiments can cover about $ 41\%$ of the range. As we have observed before, adding an eV-mass sterile neutrino can greatly shift the effective neutrino mass $|m^\prime_{ee}|$ toward larger values. The future $0\nu\beta\beta$ decay experiments with a sensitivity to the effective neutrino mass of $\mathcal{O}(10^{-2})~{\rm eV}$ can cover around $99.4\%$ ($97.4\%$) of the posterior space for the logarithmic prior on $m^{}_{\rm L}$ (the logarithmic prior on $\Sigma$) in the (3+1)$\nu$ mixing scenario. According to the posterior distributions in Fig. 
\[fig:5\], we find that the average value of the effective neutrino mass is shifted from $\overline{|m^{}_{ee}|} = 3.37\times 10^{-3}~{\rm eV}$ (or $7.71\times 10^{-3}~{\rm eV}$) in the standard $3\nu$ mixing scenario to $\overline{|m^{\prime}_{ee}|}=2.54\times 10^{-2}~{\rm eV}$ (or $2.56\times 10^{-2}~{\rm eV}$) in the (3+1)$\nu$ mixing scenario, with the logarithmic prior on $m^{}_{\rm L}$ (or on $\Sigma$). Therefore, a null signal from the future $0\nu\beta\beta$ decay experiments will be able to set a very stringent constraint on the sterile neutrino mass and mixing angle. Concluding Remarks ================== In this short note, we have carried out a Bayesian analysis of the effective neutrino mass in the $0\nu\beta\beta$ decays in both the standard $3\nu$ mixing scenario and the (3+1)$\nu$ mixing scenario. With the latest experimental information, including the global-fit analysis of neutrino oscillation data, the cosmological observations from the [*Planck*]{} satellite and the current limits from the $0\nu\beta\beta$ decay experiments, the posterior probability distributions of the effective neutrino mass $|m^{}_{ee}|$ in the standard $3\nu$ mixing scenario and $|m^\prime_{ee}|$ in the (3+1)$\nu$ mixing scenario have been updated. Our main results of the posterior distributions have been summarized in Fig. \[fig:3\] and Fig. \[fig:5\]. Adding an eV-mass sterile neutrino slightly mixing with ordinary neutrinos is likely to enhance the effective neutrino mass to the level of ${\cal O}(10^{-2})~{\rm eV}$, which is within the reach of the next generation $0\nu\beta\beta$ decay experiments, regardless of the prior information on the absolute mass scale of ordinary neutrinos. In other words, if a null signal is observed in future ton-scale $0\nu\beta\beta$ decay experiments, we can place very strong limits on the parameter space of the (3+1)$\nu$ mixing scenario, assuming that massive neutrinos are of Majorana nature. 
The sensitivity of future $0\nu\beta\beta$ decay experiments to the sterile neutrino mass and mixing angle deserves a dedicated study, which we leave for future work. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported in part by the National Natural Science Foundation of China under grants No. 11775232 and No. 11835013, and by the CAS Center for Excellence in Particle Physics. [99]{} E. Majorana, “Teoria simmetrica dell’elettrone e del positrone,” Nuovo Cim.  [**14**]{}, 171 (1937). G. Racah, “On the symmetry of particle and antiparticle,” Nuovo Cim.  [**14**]{}, 322 (1937). M. Tanabashi [*et al.*]{} \[Particle Data Group\], “Review of Particle Physics,” Phys. Rev. D [**98**]{}, no. 3, 030001 (2018). W. H. Furry, “On transition probabilities in double beta-disintegration,” Phys. Rev.  [**56**]{}, 1184 (1939). W. Rodejohann, “Neutrino-less Double Beta Decay and Particle Physics,” Int. J. Mod. Phys. E [**20**]{}, 1833 (2011) \[arXiv:1106.1334\]. S. M. Bilenky and C. Giunti, “Neutrinoless double-beta decay: A brief review,” Mod. Phys. Lett. A [**27**]{}, 1230015 (2012) \[arXiv:1203.5250\]. W. Rodejohann, “Neutrinoless double beta decay and neutrino physics,” J. Phys. G [**39**]{}, 124008 (2012) \[arXiv:1206.2560\]. S. M. Bilenky and C. Giunti, “Neutrinoless Double-Beta Decay: a Probe of Physics Beyond the Standard Model,” Int. J. Mod. Phys. A [**30**]{}, no. 04n05, 1530001 (2015) \[arXiv:1411.4791\]. H. Päs and W. Rodejohann, “Neutrinoless Double Beta Decay,” New J. Phys.  [**17**]{}, no. 11, 115010 (2015) \[arXiv:1507.00170\]. S. Dell’Oro, S. Marcocci, M. Viel and F. Vissani, “Neutrinoless double beta decay: 2015 review,” Adv. High Energy Phys.  [**2016**]{}, 2162659 (2016) \[arXiv:1601.07512\]. C. Giunti and T. Lasserre, “eV-scale Sterile Neutrinos,” arXiv:1901.08330. A. 
Aguilar-Arevalo [*et al.*]{} \[LSND Collaboration\], “Evidence for neutrino oscillations from the observation of anti-neutrino(electron) appearance in a anti-neutrino(muon) beam,” Phys. Rev. D [**64**]{}, 112007 (2001) \[hep-ex/0104049\]. A. A. Aguilar-Arevalo [*et al.*]{} \[MiniBooNE Collaboration\], “Unexplained Excess of Electron-Like Events From a 1-GeV Neutrino Beam,” Phys. Rev. Lett.  [**102**]{}, 101802 (2009). A. A. Aguilar-Arevalo [*et al.*]{} \[MiniBooNE Collaboration\], “Significant Excess of ElectronLike Events in the MiniBooNE Short-Baseline Neutrino Experiment,” arXiv:1805.12028. C. Giunti and M. Laveder, “Statistical Significance of the Gallium Anomaly,” Phys. Rev. C [**83**]{}, 065504 (2011) \[arXiv:1006.3244\]. G. Mention, M. Fechner, T. Lasserre, T. A. Mueller, D. Lhuillier, M. Cribier and A. Letourneau, “The Reactor Antineutrino Anomaly,” Phys. Rev. D [**83**]{}, 073006 (2011) \[arXiv:1101.2755\]. J. N. Abdurashitov [*et al.*]{} \[SAGE Collaboration\], “Measurement of the solar neutrino capture rate with gallium metal. III: Results for the 2002–2007 data-taking period,” Phys. Rev. C [**80**]{}, 015807 (2009) \[arXiv:0901.2200\]. F. Kaether, W. Hampel, G. Heusser, J. Kiko and T. Kirsten, “Reanalysis of the GALLEX solar neutrino flux and source experiments,” Phys. Lett. B [**685**]{}, 47 (2010) \[arXiv:1001.2731\]. S. Gariazzo, C. Giunti, M. Laveder and Y. F. Li, “Updated Global 3+1 Analysis of Short-BaseLine Neutrino Oscillations,” JHEP [**1706**]{}, 135 (2017) \[arXiv:1703.00860\]. M. Dentler, Á. Hernández-Cabezudo, J. Kopp, P. A. N. Machado, M. Maltoni, I. Martinez-Soler and T. Schwetz, “Updated Global Analysis of Neutrino Oscillations in the Presence of eV-Scale Sterile Neutrinos,” JHEP [**1808**]{}, 010 (2018) \[arXiv:1803.10661\]. N. Aghanim [*et al.*]{} \[Planck Collaboration\], “Planck 2018 results. VI. Cosmological parameters,” arXiv:1807.06209. J. B. 
Albert [*et al.*]{} \[EXO-200 Collaboration\], “Search for Majorana neutrinos with the first two years of EXO-200 data,” Nature [**510**]{}, 229 (2014) \[arXiv:1402.6956\]. A. Gando [*et al.*]{} \[KamLAND-Zen Collaboration\], “Search for Majorana Neutrinos near the Inverted Mass Hierarchy Region with KamLAND-Zen,” Phys. Rev. Lett.  [**117**]{}, no. 8, 082503 (2016) Addendum: \[Phys. Rev. Lett.  [**117**]{}, no. 10, 109903 (2016)\] \[arXiv:1605.02889\]. M. Agostini [*et al.*]{}, “Background-free search for neutrinoless double-$\beta$ decay of $^{76}$Ge with GERDA,” Nature [**544**]{}, 47 (2017) \[arXiv:1703.00570\]. C. Alduino [*et al.*]{} \[CUORE Collaboration\], “First Results from CUORE: A Search for Lepton Number Violation via $0\nu\beta\beta$ Decay of $^{130}$Te,” Phys. Rev. Lett.  [**120**]{}, no. 13, 132501 (2018) \[arXiv:1710.07988\]. C. E. Aalseth [*et al.*]{} \[Majorana Collaboration\], “Search for Neutrinoless Double-β Decay in $^{76}$Ge with the Majorana Demonstrator,” Phys. Rev. Lett.  [**120**]{}, no. 13, 132502 (2018) \[arXiv:1710.11608\]. M. Agostini [*et al.*]{} \[GERDA Collaboration\], “Improved Limit on Neutrinoless Double-$\beta$ Decay of $^{76}$Ge from GERDA Phase II,” Phys. Rev. Lett.  [**120**]{}, no. 13, 132503 (2018) \[arXiv:1803.11100\]. M. Agostini, G. Benato and J. Detwiler, “Discovery probability of next-generation neutrinoless double- β decay experiments,” Phys. Rev. D [**96**]{}, no. 5, 053001 (2017) \[arXiv:1705.02996\]. Z. z. Xing, “Vanishing effective mass of the neutrinoless double beta decay?,” Phys. Rev. D [**68**]{}, 053002 (2003) \[hep-ph/0305195\]. Z. z. Xing, Z. h. Zhao and Y. L. Zhou, “How to interpret a discovery or null result of the $0\nu 2\beta$ decay,” Eur. Phys. J. C [**75**]{}, no. 9, 423 (2015) \[arXiv:1504.05820\]. Z. z. Xing and Z. h. Zhao, “The effective neutrino mass of neutrinoless double-beta decays: how possible to fall into a well,” Eur. Phys. J. C [**77**]{}, no. 3, 192 (2017) \[arXiv:1612.08538\]. P. F. 
de Salas, D. V. Forero, C. A. Ternes, M. Tortola and J. W. F. Valle, “Status of neutrino oscillations 2018: 3$\sigma$ hint for normal mass ordering and improved CP sensitivity,” Phys. Lett. B [**782**]{}, 633 (2018) \[arXiv:1708.01186\]. F. Capozzi, E. Lisi, A. Marrone and A. Palazzo, “Current unknowns in the three neutrino framework,” Prog. Part. Nucl. Phys.  [**102**]{}, 48 (2018) \[arXiv:1804.09678\]. I. Esteban, M. C. Gonzalez-Garcia, A. Hernandez-Cabezudo, M. Maltoni and T. Schwetz, “Global analysis of three-flavour neutrino oscillations: synergies and tensions in the determination of $\theta^{}_{23}, \delta^{}_{\rm CP}$, and the mass ordering,” arXiv:1811.05487. A. Caldwell, A. Merle, O. Schulz and M. Totzauer, “Global Bayesian analysis of neutrino mass data,” Phys. Rev. D [**96**]{}, no. 7, 073001 (2017) \[arXiv:1705.01945\]. G. Benato, “Effective Majorana Mass and Neutrinoless Double Beta Decay,” Eur. Phys. J. C [**75**]{}, no. 11, 563 (2015) \[arXiv:1510.01089\]. J. Zhang and S. Zhou, “Determination of neutrino mass ordering in future $^{76}$Ge-based neutrinoless double-beta decay experiments,” Phys. Rev. D [**93**]{}, no. 1, 016008 (2016) \[arXiv:1508.05472\]. S. F. Ge and M. Lindner, “Extracting Majorana properties from strong bounds on neutrinoless double beta decay,” Phys. Rev. D [**95**]{}, no. 3, 033003 (2017) \[arXiv:1608.01618\]. S. Goswami and W. Rodejohann, “Constraining mass spectra with sterile neutrinos from neutrinoless double beta decay, tritium beta decay and cosmology,” Phys. Rev. D [**73**]{}, 113003 (2006) \[hep-ph/0512234\]. S. Goswami and W. Rodejohann, “MiniBooNE results and neutrino schemes with 2 sterile neutrinos: Possible mass orderings and observables related to neutrino masses,” JHEP [**0710**]{}, 073 (2007) \[arXiv:0706.1462\]. J. Barry, W. Rodejohann and H. Zhang, “Light Sterile Neutrinos: Models and Phenomenology,” JHEP [**1107**]{}, 091 (2011) \[arXiv:1105.3911\]. Y. F. Li and S. s. 
Liu, “Vanishing effective mass of the neutrinoless double beta decay including light sterile neutrinos,” Phys. Lett. B [**706**]{}, 406 (2012) \[arXiv:1110.5795\]. I. Girardi, A. Meroni and S. T. Petcov, “Neutrinoless Double Beta Decay in the Presence of Light Sterile Neutrinos,” JHEP [**1311**]{}, 146 (2013) \[arXiv:1308.5802\]. P. Guzowski, L. Barnes, J. Evans, G. Karagiorgi, N. McCabe and S. Soldner-Rembold, “Combined limit on the neutrino mass from neutrinoless double-β decay and constraints on sterile Majorana neutrinos,” Phys. Rev. D [**92**]{}, no. 1, 012002 (2015) \[arXiv:1504.03600\]. C. Giunti and E. M. Zavanin, “Predictions for Neutrinoless Double-Beta Decay in the 3+1 Sterile Neutrino Scenario,” JHEP [**1507**]{}, 171 (2015) \[arXiv:1505.00978\]. S. F. Ge, W. Rodejohann and K. Zuber, “Half-life Expectations for Neutrinoless Double Beta Decay in Standard and Non-Standard Scenarios,” Phys. Rev. D [**96**]{}, no. 5, 055019 (2017) \[arXiv:1707.07904\]. J. H. Liu and S. Zhou, “Another look at the impact of an eV-mass sterile neutrino on the effective neutrino mass of neutrinoless double-beta decays,” Int. J. Mod. Phys. A [**33**]{}, no. 02, 1850014 (2018) \[arXiv:1710.10359\]. D. S. Sivia and J. Skilling, “Data Analysis: A Bayesian Tutorial,” Oxford University Press, Oxford, 2006. S. Hannestad and T. Schwetz, “Cosmology and the neutrino mass ordering,” JCAP [**1611**]{}, no. 11, 035 (2016) \[arXiv:1606.04691\]. J. Suhonen and O. Civitarese, “Weak-interaction and nuclear-structure aspects of nuclear double beta decay,” Phys. Rept.  [**300**]{}, 123 (1998). J. Kotila and F. Iachello, “Phase space factors for double-$\beta$ decay,” Phys. Rev. C [**85**]{}, 034316 (2012) \[arXiv:1209.5722\]. M. Gerbino, M. Lattanzi, O. Mena and K. Freese, “A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling,” Phys. Lett. B [**775**]{}, 239 (2017) \[arXiv:1611.07847\]. S. 
Vagnozzi, E. Giusarma, O. Mena, K. Freese, M. Gerbino, S. Ho and M. Lattanzi, “Unveiling $\nu$ secrets with cosmological data: neutrino masses and mass hierarchy,” Phys. Rev. D [**96**]{}, no. 12, 123503 (2017) \[arXiv:1701.08172\]. F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri and A. Palazzo, “Global constraints on absolute neutrino masses and their ordering,” Phys. Rev. D [**95**]{}, no. 9, 096014 (2017) \[arXiv:1703.04471\]. R. Trotta, “Bayes in the sky: Bayesian inference and model selection in cosmology,” Contemp. Phys.  [**49**]{}, 71 (2008) \[arXiv:0803.4089\]. F. Feroz and M. P. Hobson, “Multimodal nested sampling: an efficient and robust alternative to MCMC methods for astronomical data analysis,” Mon. Not. Roy. Astron. Soc.  [**384**]{}, 449 (2008) \[arXiv:0704.3704\]. F. Feroz, M. P. Hobson and M. Bridges, “MultiNest: an efficient and robust Bayesian inference tool for cosmology and particle physics,” Mon. Not. Roy. Astron. Soc.  [**398**]{}, 1601 (2009) \[arXiv:0809.3437\]. F. Feroz, M. P. Hobson, E. Cameron and A. N. Pettitt, “Importance Nested Sampling and the MultiNest Algorithm,” arXiv:1306.2144. [^1]: E-mail: huanggy@ihep.ac.cn [^2]: E-mail: zhoush@ihep.ac.cn [^3]: This is based on the observations with [*Planck*]{} (<http://www.esa.int/Planck>), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. [^4]: Note that the subscript of the Bayes factor ${\cal B}^{}_{\rm cosmo}$ herein refers to the cosmological data that have been used in the calculations, and likewise for the Bayes factors from other data sets and their combinations.
--- abstract: 'In this paper, several variants of two-stream architectures for temporal action proposal generation in long, untrimmed videos are presented. Inspired by the recent advances in the field of human action recognition utilizing 3D convolutions in combination with two-stream networks and based on the Single-Stream Temporal Action Proposals (SST) architecture [@sst], four different two-stream architectures utilizing sequences of images on one stream and sequences of images of optical flow on the other stream are subsequently investigated. The four architectures fuse the two separate streams at different depths in the model; for each of them, a broad range of parameters is investigated systematically and an optimal parametrization is determined empirically. The experiments on the THUMOS’14 [@THUMOS14] dataset show that all four two-stream architectures are able to outperform the original single-stream SST and achieve state-of-the-art results. Additional experiments, in which the formerly used optical flow method of Brox [@brox2004high] was exchanged with FlowNet2 [@ilg2017flownet], revealed that the improvements are not restricted to a single method of calculating optical flow.' author: - | Patrick Schlosser\ Fraunhofer IOSB\ - | David Münch\ Fraunhofer IOSB\ [david.muench@iosb.fraunhofer.de]{} - | Michael Arens\ Fraunhofer IOSB\ bibliography: - 'egbib.bib' title: | Investigation on Combining 3D Convolution of Image Data and\ Optical Flow to Generate Temporal Action Proposals --- Introduction ============ Computer vision plays a major role in sports, ranging from automatic semantic annotation of the observed scene to an enhanced viewing experience. One major research field in computer vision is the recognition of actions and activities in videos, with a special interest in actions and activities performed by humans. 
The recognition of specific actions usually takes place in videos of limited length, called trimmed videos, by assigning a single action class to each video. It is well known that short videos containing only a single action are rather an artificial construct, produced either by deliberately recording only a single action or by cutting the short video out of a larger one. More natural are untrimmed videos, which feature no specific or limited length and contain more than a single action – a video of the Summer Olympics may contain, for example, the sporting activities ‘high jump’, ‘hammer throwing’, and ‘fencing’, but also non-sporting parts, such as the commentators talking, interviews, and shots of the crowd. Figure \[fig:eyecatcher\] depicts an example. Another application field is the area of video surveillance, where often only a few time segments of long videos contain actions of interest, such as theft. Independent of the concrete application, time segments containing actions have to be identified in the whole video as accurately as possible, in addition to the classification of the different actions taking place. As traditional approaches for solving this problem mostly use an expensive combination of sliding-window mechanisms and classification, temporal action proposal generation was introduced as a preprocessing step, searching first for high-quality time segments that are thought to contain an action of interest with both high probability and good temporal localization. Thus, classification has to be performed only on the temporal action proposals. A recent state-of-the-art approach based on deep neural networks is the ‘Single-Stream Temporal Action Proposal’ (SST) model [@sst], which processes videos utilizing 3D convolutions and a recurrent architecture. 
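SST-style models consume a video as non-overlapping blocks of 16 consecutive frames (the C3D input size used throughout this work). A minimal sketch of this block indexing; dropping a trailing incomplete block is our assumption here, not a detail taken from the SST paper:

```python
def make_blocks(num_frames, block_size=16):
    """(start, end) frame indices of non-overlapping blocks of
    `block_size` consecutive frames; a trailing remainder shorter
    than `block_size` is dropped (assumed behavior)."""
    return [(start, start + block_size)
            for start in range(0, num_frames - block_size + 1, block_size)]

# A 100-frame video yields 6 complete blocks covering frames 0..95;
# the last 4 frames do not fill a block and are discarded.
blocks = make_blocks(100)
```

Each such block is the unit on which features are extracted and, in the two-stream variants below, the unit that is paired with its 16 corresponding optical-flow images.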
To the best of our knowledge, we are the first to investigate different positions and ways of fusion in two-stream architectures that utilize 3D convolutions on optical flow and image data for temporal action proposal generation. Our main contributions are: (1) The development of four two-stream model architectures for temporal action proposal generation originating from the SST model [@sst]. (2) Investigation and fine-tuning of the hyperparametrizations of the models. (3) Quantitative evaluation on the THUMOS’14 [@THUMOS14] dataset. (4) Showing that the improvements are independent of a specific optical flow calculation method. Related Work ============ *Action recognition* is the task of associating a single action class with a video. From this field, a lot of relevant innovation emerged. Two-stream convolutional neural networks [@simonyan2014two] were designed to process image data on the first stream and stacked optical flow fields on the second stream. The additional usage of stacked optical flow fields contributes temporal dynamic information about motion. Another approach was the extension of the two-dimensional kernels used by classical CNNs into the third dimension, thus operating on 3D volumes defined by consecutive frames. The prominent C3D (Convolutional 3D) network [@c3d] employs this approach by processing videos divided into blocks of 16 consecutive frames. This is another way of utilizing temporal information. More recent approaches combine the two previous ideas: temporal information is utilized by applying 3D convolution on two streams, one using image data and the other using optical flow. Among others [@khong2018improving; @varol2018long], the I3D (Inflated 3D ConvNet) network [@carreira2017quo] is a prominent example of this approach, coming to the conclusion that 3D convolutional neural networks also profit from a two-stream architecture. 
This insight from the field of action recognition serves as the inspiration for this work to transfer the approach to the field of temporal action proposal generation. The need for *temporal action proposals* arises from the task of temporally localizing actions in long, untrimmed videos and classifying said actions. Before temporal action proposals, this problem was tackled with sliding-window approaches: overlapping time segments of varying length were extracted, and each time segment was subsequently classified to localize the action in time. As this process was very time-consuming, with a lot of time segments to be classified, temporal action proposals were introduced to reduce the number of time segments that have to be classified. There exists early work [@caba2016fast] on temporal action proposals relying on traditional approaches. Among recent successful work [@escorcia2016daps; @sst; @gao2017cascaded; @gao2017turn; @lin2017temporal] it is instead common to take advantage of deep neural networks. Several works [@escorcia2016daps; @sst; @gao2017cascaded; @gao2017turn] utilize 3D convolutional neural networks (3D ConvNets) for the generation of temporal action proposals – an approach already known from the field of action recognition, see above. Being another prominent approach from the field of action recognition, two-stream networks with 2D kernels are used as well [@lin2017temporal; @gao2017cascaded], taking advantage of optical flow on the second stream. Despite being successfully used in action recognition, the combination of 3D convolutions with a two-stream network has not yet become common practice in the field of temporal action proposal generation. In the field of *temporal action localization* – both the temporal localization and classification of actions in long, untrimmed videos – the combination of 3D convolutions with two-stream networks has recently found use in [@chao2018rethinking; @nguyen2018weakly]. 
In most works, the temporal action proposal generation is a sub-task of the overall approach. However, there also exist end-to-end approaches [@buch2017end].

Methodic approach
=================

In this work, we follow the general approach presented by Buch [@sst] for the SST model. Just like there, each video is divided into non-overlapping blocks of 16 frames and features are extracted using the C3D network [@c3d] as a feature extractor. Those features serve as input for a recurrent neural network, producing confidence scores for 32 possible time segments in each step. After post-processing with a score threshold and non-maxima suppression, a reduced set of temporal action proposals is generated. We stick to this approach and utilize the existing architectures, but extend them to a two-stream model architecture by introducing a second stream working on the corresponding images of optical flow, with the optical flow corresponding to image $j$ being calculated from images $j-1$ and $j$. Applying 3D convolutions on the optical flow allows us to make efficient use of the dynamics of motion. We design four variants of this new architecture, differing in the position and way the separate streams are fused before continuing in a common stream. In the following, we will have a closer look at the four designed two-stream model architectures. All of them have in common that they process videos by dividing them into subsequent blocks of 16 images without overlap and processing them sequentially. For each block of 16 original images on the first stream – called video stream – there are the 16 corresponding images of optical flow on the second stream – called flow stream. These blocks of 16 images are first processed in parallel before the streams get fused later in the architecture. The position and the way of the fusion differ in the four model architectures. In the following, we depict the C3D network separately from the SST network that uses it as a feature extractor.
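The block preparation shared by all four variants can be sketched as follows. This is a minimal illustration in NumPy, not the authors' code: the function names are ours, `flow_fn` stands for any optical flow method (such as the one of Brox used later), and the handling of the first frame, which has no predecessor, is our own assumption.

```python
import numpy as np

def make_blocks(frames, block_size=16):
    """Split a video (array of shape (T, H, W, C)) into non-overlapping
    blocks of `block_size` frames; a shorter trailing remainder is dropped."""
    n_blocks = len(frames) // block_size
    return [frames[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

def flow_images(frames, flow_fn):
    """Compute the optical flow image for frame j from frames j-1 and j.
    Frame 0 has no predecessor and is paired with itself here (assumption)."""
    return np.stack([flow_fn(frames[max(j - 1, 0)], frames[j])
                     for j in range(len(frames))])
```

The video stream then consumes `make_blocks(frames)` while the flow stream consumes `make_blocks(flow_images(frames, flow_fn))`, keeping the two streams block-aligned.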
#### \#1: Mid fusion by concatenation (2S-Mid+)

The first of the designed two-stream variants fuses the two separate streams by concatenating features extracted by two separate C3D networks before they are used as input into the SST network. This approach is inspired by Khong [@khong2018improving] from the field of human action recognition. One of their investigated two-stream models utilizes C3D features extracted from the *fc6*-layer of two separate C3D networks, one of them operating on the original images and the other one operating on optical flow. Among other processing steps, the two separate C3D feature vectors get concatenated there before being fed into a linear support vector machine (SVM) for classification. The idea of fusing two streams by concatenating C3D features serves as a basis for our variant 2S-Mid+ of the designed two-stream networks. Two separate C3D networks get employed: one operating on the original images and one operating on the corresponding images of optical flow. The two streams stay separate until the end of the C3D networks, where separate feature vectors are extracted. Optional processing of these feature vectors, like applying $L2$-normalization or principal component analysis, takes place after the extraction. The next performed step is the concatenation of the separate feature vectors. For block $i$ of a video, $f_{\text{\textit{v,i}}}$ denotes the feature vector from the video stream and $f_{\text{\textit{f,i}}}$ denotes the feature vector from the flow stream, which are concatenated and result in the concatenated feature vector $f_{\text{\textit{c,i}}}$.

$$f_{\text{\textit{c,i}}} = [f_{\text{\textit{v,i}}}^T, f_{\text{\textit{f,i}}}^T]^T$$

The concatenated feature vector $f_{\text{\textit{c,i}}}$ then serves as input for the SST network, which determines confidence scores for temporal windows. A schematic representation of the resulting network is shown in Figure \[fig:var\_1\_and\_2\_scheme\].
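The concatenation above amounts to a one-line operation per block; a minimal sketch (function names are ours, and the optional *L2*-normalization mirrors the preprocessing mentioned in the text):

```python
import numpy as np

def l2_normalize(f, eps=1e-12):
    """Optional preprocessing: scale a feature vector to unit L2 norm."""
    return f / max(np.linalg.norm(f), eps)

def fuse_mid_concat(f_v, f_f, normalize=True):
    """2S-Mid+ fusion: concatenate the video-stream and flow-stream C3D
    feature vectors (4096 elements each in the paper) into the single
    vector f_c that is fed to the SST network."""
    if normalize:
        f_v, f_f = l2_normalize(f_v), l2_normalize(f_f)
    return np.concatenate([f_v, f_f])
```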
Training: Just as with the original combination of C3D and SST network, the C3D networks are trained separately from the SST network on the task of action recognition. The SST network is trained afterward on the task of temporal action proposal generation, based upon the extracted and concatenated C3D feature vectors of the pretrained C3D networks.

#### \#2: Mid fusion by ***fc*** layer (2S-Mid***fc***) {#chap:var2_model}

The first variant 2S-Mid+ fuses the streams ‘by hand’, as the fusion is not learned by the neural network but performed by concatenation instead. A possible logical consequence is therefore to let the neural network learn how to fuse the two streams by combining the separate C3D networks in one of their later, fully connected layers, as is done in 2S-Mid*fc*. This idea is supported by the work of Varol [@varol2018long] in the field of action recognition, which uses two separate C3D networks for original images and optical flow that are fused using a shared *fc6* layer. 2S-Mid*fc* uses – just as 2S-Mid+ – two separate C3D networks, one operating on the original images and one operating on the corresponding images of optical flow. In contrast, the two streams stay separate only up to the *fc6* layers. For block $i$ of a video, $a_{\text{\textit{v-fc6,i}}}$ denotes the activations put out by the *fc6* layer of the video stream and $a_{\text{\textit{f-fc6,i}}}$ denotes the activations put out by the *fc6* layer of the flow stream accordingly. Both have $4096$ elements and together form the $8192$-element input to the shared *fc7* layer. The shared *fc7* layer fuses both streams, producing the output $a_{\text{\textit{c-fc7,i}}}$ with 4096 elements. In equation \[eq:var2\_activation\], $R$ denotes the ReLU activation function.
$$\label{eq:var2_activation} a_{\text{\textit{c-fc7,i}}} = R(W_{\text{\textit{fc7}}} \cdot [a_{\text{\textit{v-fc6,i}}}^T, a_{\text{\textit{f-fc6,i}}}^T]^T + b_{\text{\textit{fc7}}})$$

The activation $a_{\text{\textit{c-fc7,i}}}$ is used as the feature representation, and optional post-processing can be applied before it is used as input to the SST network. A schematic representation of the resulting network can be seen in Figure \[fig:var\_1\_and\_2\_scheme\]. Training: Two single-stream C3D networks are to be trained up front. The estimated weights are used to initialize the layers up to *fc6*. As the dimension of the *fc7* layer changed, it cannot be trained up front with a single-stream C3D network, so the two-stream C3D network with preinitialized weights up to *fc6* has to be trained again on the task of action recognition. The network trained that way is then used to extract features, which are used to train the SST network on the task of temporal action proposals generation.

#### \#3: Late fusion by weighted average (2S-LateAvg)

In the third variant, 2S-LateAvg, the fusion is moved to the very end of the network by forming a weighted average of two separate confidence score vectors. The idea is inspired by the temporal segment network (TSN) from Wang [@tsn] for action recognition, which fuses the separate streams by a weighted average of class scores. 2S-LateAvg utilizes two separate streams, each consisting of a full C3D and SST network. One stream operates on the original images, the other one on the corresponding images of optical flow. Both separate C3D networks extract separate C3D feature vectors, which are used as input into two separate SST networks. The SST networks are then used to generate separate vectors with confidence scores for the same time windows. For block $i$, the confidence score vectors of the video stream and the flow stream are called $c_{\text{\textit{v,i}}}$ and $c_{\text{\textit{f,i}}}$.
The streams get fused by calculating the weighted average over these separate confidence scores with the weight factor $\alpha$, $0 \leq \alpha \leq 1$, resulting in the common confidence score vector $c_{\text{\textit{c,i}}}$.

$$c_{\text{\textit{c,i}}} = (1 - \alpha) \cdot c_{\text{\textit{v,i}}} + \alpha \cdot c_{\text{\textit{f,i}}}$$

A schematic representation of the resulting network architecture can be seen in Figure \[fig:var\_3\_and\_4\_scheme\]. Training: The two separate C3D networks are pretrained on the task of action recognition. The two separate SST networks are then trained on the basis of the extracted C3D feature vectors, one of them on C3D feature vectors extracted from the original images and the other one on C3D feature vectors extracted from images of optical flow. Training the separate SST networks jointly, based on the performance of the weighted average of the confidence score vectors, is possible but not mandatory.

#### \#4: Late fusion by ***fc*** layer (2S-Late***fc***) {#chap:var4_architecture}

For 2S-LateAvg, the fusion of the separate streams is – just as with 2S-Mid+ – done ‘by hand’, as the fusion is not learned by the network but done by calculating the weighted average over the confidence score vectors. Therefore, it seems logical to let the network learn how to fuse the two separate streams, which is done in 2S-Late*fc* by utilizing the fully connected layer at the end of the SST network. 2S-Mid*fc*, where the second fully connected layer *fc7* of the C3D network was used for the fusion, serves as inspiration. 2S-Late*fc* utilizes two separate C3D networks, one operating on the original images and one on the corresponding images of optical flow. Both are used to extract separate C3D feature vectors. They serve as input into two separate SST networks, one for the C3D feature vectors derived from the original images and one for the C3D feature vectors derived from the images of optical flow.
Both SST networks stay separate until the end of the sequence encoders – the recurrent part before the fully connected layer. The output vectors of the separate sequence encoders – for block $i$ denoted as $s_{\text{\textit{v,i}}}$ for the video stream and $s_{\text{\textit{f,i}}}$ for the flow stream – are used as input for a shared fully connected layer, which utilizes a logistic sigmoid function $\sigma$ to calculate the common confidence vector $c_{\text{\textit{c,i}}}$ in each step.

$$c_{\text{\textit{c,i}}} = \sigma(W_{\text{\textit{fc}}} \cdot [s_{\text{\textit{v,i}}}^T, s_{\text{\textit{f,i}}}^T]^T + b_{\text{\textit{fc}}})$$

An outline of the resulting network is shown in Figure \[fig:var\_3\_and\_4\_scheme\]. Training: The separate C3D networks are to be pretrained just as in 2S-LateAvg; the same applies to the two separate SST networks. In contrast to 2S-LateAvg, the weights determined for the separate SST networks can only be used to initialize the two fused SST networks up to the end of the sequence encoder, as the dimension of the shared fully connected layer has changed. Therefore, the fused SST networks have to be trained again to calculate the weights for the fully connected layer before they can be used for confidence score calculation.

Evaluation
==========

In this section, a quantitative evaluation will be performed. First of all, we will present experiments regarding the hyperparametrization of the flow stream, followed by the evaluation of the four designed two-stream model architectures in comparison to the single-stream variants. The best fusion configurations will be determined, as well as the improvement over the single-stream networks. Evaluation and training for temporal action proposal generation will be performed on the THUMOS’14 [@THUMOS14] dataset. The validation split will be used for the training, as is common practice on this dataset, while the test split remains for the evaluation.
If the training of the C3D network is necessary, the UCF101 [@UCF101] dataset will be utilized. We are building upon an implementation[^1] of the SST network in TensorFlow, coming with already extracted features for the original video data of THUMOS’14. If not stated otherwise, the method of Brox [@brox2004high] is used for optical flow calculation.

Flow stream experiments
-----------------------

As an initial step for the evaluation of the designed two-stream models, the hyperparametrization of the flow stream, which works on images of optical flow, was investigated. The parameters of the C3D network used for feature extraction remain untouched. For the SST network, several parameter changes are investigated. C3D features from the *fc6* layer of the C3D network are compared with features from the *fc7* layer. In one setting, the training of the C3D network is stopped early; the features from that C3D network are referred to as early C3D features and are compared with features from a C3D network whose training is not stopped early, referred to as late C3D features. Two different preprocessing steps of the C3D features are investigated: *L2*-normalization and principal component analysis for reducing the size of the feature vector from 4096 to 500 elements. Apart from these different inputs into the SST network, the parameters of the network itself are investigated: different learning rates, different numbers of neurons per GRU layer, different numbers of GRU layers and different dropout rates. For the initial configuration, the parameter values of the video stream delivered with the used implementation are utilized. First, each parameter value was altered independently; afterward, combinations of parameter changes based on previous experiments were investigated. The initial parameter values, as well as the best configuration in the conducted experiments, can be seen in Table \[tab:param\_flow\_stream\].
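Of the two preprocessing options, PCA is the less obvious one. A minimal SVD-based sketch of projecting a matrix of C3D feature vectors onto its first principal components (our own illustration; the experiments presumably relied on a standard library implementation):

```python
import numpy as np

def pca_reduce(features, n_components=500):
    """Project feature vectors (rows of `features`) onto their first
    `n_components` principal components, e.g. reducing C3D features
    from 4096 to 500 elements."""
    centered = features - features.mean(axis=0)
    # Rows of vt are the principal directions, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```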
Not all parameter changes worked well; Figure \[fig:flow\_stream\_res\_comp\] compares the best results of the best configurations with the best results of the worst.

  Parameter               Initial Value   Best Value
  ----------------------- --------------- --------------------
  C3D features            late, *fc7*     early, *L2*, *fc7*
  Learning rate           1e-3            1e-2
  Dropout rate            0.3             0.3
  Neurons per GRU layer   128             256
  Number of GRU layers    2               1

  : Configuration of the SST network operating on features from images of optical flow. The initial (left) and experimentally determined best (right) values are displayed. *L2* stands for *L2* feature normalization prior to usage.[]{data-label="tab:param_flow_stream"}

2S-Mid+ evaluation
------------------

For these experiments, the already extracted features for the original images delivered with the TensorFlow implementation and the features for the images of optical flow extracted during the flow stream experiments are used. If used, preprocessing steps are applied before those features are concatenated. Experiments are conducted similarly to the flow stream experiments, with the difference that a reduced set of parameter values is explored, based upon successful parameter values from the flow stream experiments. For each parameter setup, a new SST network is trained and evaluated on the concatenated features, as no pretrained model exists for the concatenated C3D feature vectors. Per configuration, two SST networks were trained and evaluated. The best results were achieved with the configuration in Table \[tab:param\_var1\_stream\]. In Figure \[fig:var\_1\_and\_2\_res\_comp\], the comparison with the single-stream networks – the original SST network and the TensorFlow implementation of the SST network – shows that the additional usage of optical flow leads to improvements for major parts of both metrics, while results of a comparable level are achieved for the remaining parts.
  Parameter               Shared Stream
  ----------------------- --------------------
  C3D features (images)   *L2*, *fc6*
  C3D features (flow)     *L2*, early, *fc7*
  Learning rate           1e-2
  Dropout rate            0.3
  Neurons per GRU layer   256
  Number of GRU layers    2

  : Parameters and their experimentally determined best values for the SST network of 2S-Mid+ that operates on the concatenated feature vectors.[]{data-label="tab:param_var1_stream"}

2S-Mid***fc*** evaluation
-------------------------

The two streams are fused inside the feature extractor; thus, already extracted features cannot be used for the experiments. Instead, the existing weights for the feature extraction on image data and the weights determined during the flow stream experiments for feature extraction on images of optical flow are used for initialization. Training is done as described above, using the UCF101 dataset for training the fused C3D networks. The weights used for initialization remain fixed in a first training phase used to determine the not preinitialized weights; an optional subsequent fine-tuning in which no weights remain fixed is investigated as well. Solely the *fc7* layer is investigated for feature extraction, as the streams are separate before that layer. After the extraction of the C3D feature vectors, the training and subsequent evaluation of the SST network takes place. As with the experiments for the previous variant, different parameter configurations derived from successful parameter values of the flow stream experiments are investigated, with two SST networks being trained and evaluated per configuration. Among all experiments, the configuration in Table \[tab:param\_var2\_stream\] produced the best results. The parametrization is, apart from the obvious deviation in the used C3D features caused by the design of the two-stream network, identical to the one which produced the best results for 2S-Mid+.
The comparison of the performance concerning the two metrics can be found in Figure \[fig:var\_1\_and\_2\_res\_comp\]. Concerning both metrics, 2S-Mid*fc* likewise achieves improvements over the single-stream networks for major parts, but produces slightly worse results than 2S-Mid+.

  Parameter               Shared Stream
  ----------------------- ----------------------------
  C3D features            *L2*, no finetuning, *fc7*
  Learning rate           1e-2
  Dropout rate            0.3
  Neurons per GRU layer   256
  Number of GRU layers    2

  : Parameters and their experimentally determined best values for the SST network of 2S-Mid*fc*. []{data-label="tab:param_var2_stream"}

2S-LateAvg evaluation
---------------------

Because fusion takes place right after the confidence scores of each stream are created, the pretrained SST network and the SST networks from the flow stream experiments can be used. For $\alpha$, the values 1/3, 1/2, and 2/3 are investigated. Different sets of parameter values are explored for the hyperparameters of the SST network of the flow stream; the hyperparameters of the SST network of the video stream remain untouched. To produce first results, no further training is needed, as the whole two-stream model can be initialized with pretrained weights; an optional common fine-tuning of the two SST networks based upon the weighted average of the confidence scores is investigated as well. The parametrization that delivered the best results among all experiments for 2S-LateAvg is displayed in Table \[tab:param\_var3\_stream\]. The comparison with the single-stream networks concerning the two known metrics is displayed in Figure \[fig:var\_3\_and\_4\_res\_comp\]. For major parts – even for small tIoU – improvements are achieved.
  Parameter                     Flow Stream          Image Stream
  ----------------------------- -------------------- --------------
  C3D features                  early, *L2*, *fc7*   *fc6*
  Learning rate                 1e-2                 1e-3
  Dropout rate                  0.3                  0.3
  Neurons per GRU layer         256                  128
  Number of GRU layers          1                    2
  Flow stream weight $\alpha$   0.5                  0.5
  Common Finetuning

  : Parameters and their experimentally determined best values for the two separate SST networks utilized by 2S-LateAvg. Different parametrizations were only examined for the SST network of the flow stream.[]{data-label="tab:param_var3_stream"}

2S-Late***fc*** evaluation
--------------------------

Feature extraction is performed as in 2S-LateAvg, but the already trained SST networks cannot be fully reused, as the fusion is done by a shared fully connected layer of both separate SST networks. Therefore, the weights of those networks can only be used for initialization up to the point of fusion. Training is done as described above. Weights used for the initialization remain fixed while training the fully connected layer, but an optional fine-tuning of all weights of the fused SST networks is investigated as well. Similar to the experiments above, the hyperparameters of the part belonging to the video stream remain fixed, whereas those of the flow stream are explored. For each configuration, two training and evaluation procedures are performed for the fused SST networks. The parametrization producing the best results can be seen in Table \[tab:param\_var4\_stream\]. The values of the parameters common to 2S-LateAvg and 2S-Late*fc* are the same, thereby showing consistency. A comparison with the single-stream networks concerning both known metrics is shown in Figure \[fig:var\_3\_and\_4\_res\_comp\]. Again, improvements are achieved for major parts of both metrics in comparison with the single-stream networks.
  Parameter                Flow Stream          Image Stream
  ------------------------ -------------------- --------------
  C3D features             early, *L2*, *fc7*   *fc6*
  Separate learning rate   1e-2                 1e-3
  Common learning rate     1e-3                 1e-3
  Dropout rate             0.3                  0.3
  Neurons per GRU layer    256                  128
  Number of GRU layers     1                    2
  Common Finetuning

  : Parameters and values for the two separate SST networks in 2S-Late*fc*. The ‘separate learning rate’ denotes the learning rate used to pretrain the two separate SST networks, whose weights are used to initialize the separate sequence encoders. The ‘common learning rate’ denotes the learning rate used to train the common *fc* layer after the preinitialized sequence encoders.[]{data-label="tab:param_var4_stream"}

Optical flow experiments
------------------------

Until now, experiments were conducted using the method of Brox [@brox2004high] for optical flow. To investigate whether the observed improvements hold when the method of calculating optical flow is changed, FlowNet2 [@ilg2017flownet] is used for the variant 2S-LateAvgFN. This method uses a neural network for supervised learning of optical flow, in contrast to the traditional optimization approach of Brox. A C3D network and a single-stream SST network are trained the same way as before, using the best configuration from 2S-LateAvg. The determined weights are used for initialization of the flow stream of 2S-LateAvgFN. Experiments with this parametrization and these weights are conducted just as when using optical flow calculated with the method of Brox. The results are slightly worse compared to the case where the method of Brox is used, but remain on a comparable level, achieving improvements in comparison to the single-stream networks.

Summary
-------

All four two-stream models lead to improvements compared to the single-stream networks. This indicates that the utilization of 3D convolutions in a two-stream setup makes sense for the task of temporal action proposal generation. A tabular comparison is shown in Table \[tab:res\_summary\].
2S-Mid+ and 2S-LateAvg perform best, with negligible differences in performance. They have in common that the fusion of both streams takes place outside of the actual neural networks and thus does not get learned.

Conclusion
==========

In this work, four different two-stream model architectures with different fusions, utilizing sequences of images on one stream and images of optical flow on the other stream, were investigated for the purpose of temporal action proposal generation. By utilizing sequences of images of optical flow on the second stream in addition to sequences of the original images on the first, and processing them using 3D convolutions on both streams, improvements were achieved for all explored two-stream models in comparison to the single-stream models omitting a second stream. It was also shown, by investigating a second method of calculating optical flow and achieving improvements with it as well, that the improvement is not bound to a certain method of calculating optical flow. Apart from showing that the general approach of combining a two-stream architecture with 3D convolutions is beneficial for the task of temporal action proposal generation, a suitable basis for further work on the larger field of action localization has been created.

  Network                                    Score
  ------------------------------------------ --------
  Original SST network                       0.6025
  TensorFlow Implementation of SST network   0.6295
  SST network (images of optical flow)       0.6320
  2S-Mid+                                    0.6497
  2S-Mid*fc*                                 0.6438
  2S-LateAvg                                 0.6495
  2S-Late*fc*                                0.6466
  2S-LateAvgFN                               0.6436

  : Comparison of the single-stream networks with the different two-stream models. The displayed score refers to the metric ‘average recall at average 1000 proposals’. The scores for the two-stream networks and the single-stream network with optical flow come from the best experiments presented in this work.
It can be seen that all the single-stream variants of the SST networks are surpassed by every single two-stream model, even if the calculation method of optical flow is changed. Best results are achieved with 2S-Mid+ and 2S-LateAvg.[]{data-label="tab:res_summary"} [^1]: https://github.com/JaywongWang/SST-Tensorflow
---
abstract: |
    We extend the renormalization of the NN interaction with the Chiral Two Pion Exchange Potential to the calculation of non-central partial wave phase shifts with total angular momentum $j \le 5$. The short distance singularity structure of the potential, as well as the requirement of orthogonality conditions on the wave functions, determines exactly the number of undetermined parameters after renormalization.
author:
- 'M. Pavón Valderrama'
- 'E. Ruiz Arriola'
title: 'Renormalization of NN Interaction with Chiral Two Pion Exchange Potential. Non-Central Phases. '
---

Introduction
============

The original proposal by Weinberg [@Weinberg:1990rz; @Weinberg:1991um], carried out for the first time by Ray, Ordoñez and van Kolck [@Ordonez:1995rz], of making model independent predictions for NN scattering using Chiral Perturbation Theory (ChPT), has been followed by a wealth of works [@Rijken:1995pu; @Kaiser:1997mw; @Kaiser:1998wa; @Epelbaum:1998ka; @Epelbaum:1999dj; @Rentmeester:1999vw; @Friar:1999sj; @Richardson:1999hj; @Kaiser:1999ff; @Kaiser:1999jg; @Kaiser:2001at; @Kaiser:2001pc; @Kaiser:2001dm; @Entem:2001cg; @Entem:2002sf; @Rentmeester:2003mf; @Epelbaum:2003gr; @Epelbaum:2003xx; @Entem:2003cs; @Higa:2003jk; @Higa:2003sz; @Higa:2004cr; @Birse:2003nz; @Entem:2003ft; @Epelbaum:2004fk] (for a review see e.g. Ref. [@Bedaque:2002mn]). The renormalized potential as given in Refs. [@Ordonez:1995rz; @Kaiser:1997mw; @Rentmeester:1999vw] in configuration space is expanded taking $m^2 / 16 \pi^2 f^2$ and $m/M$ as small parameters ($m$ and $M$ are the pion and nucleon masses respectively and $f$ is the pion weak decay constant), with $mr$ fixed.
In this counting for the potential and in a given partial wave (coupled) channel with good total angular momentum, the reduced potential can schematically be written as

$$\begin{aligned} U (r) &= M m \Big\{ \frac{m^2}{f^2} W^{(0)} (mr ) + \frac{m^4}{f^4} W^{(2)} (mr ) \nonumber \\ &+ \frac{m^4}{f^4} \frac{m}{M} W^{(3)} (mr ) + \dots \Big\} \,, \label{eq:pot_chpt}\end{aligned}$$

where the $W^{(n)}$ are known dimensionless functions which are everywhere finite except at the origin and depend on the axial coupling constant. $W^{(3)}$ also depends on three additional low energy constants $\bar c_1 = c_1 M$, $\bar c_3 = c_3 M$ and $\bar c_4 = c_4 M$, which have been determined from $\pi N$ scattering ChPT studies in a number of works [@Fettes:1998ud; @Buettiker:1999ap; @GomezNicola:2000wk; @Nicola:2003zi]. At the level of approximation of Eq. (\[eq:pot\_chpt\]) these potentials are local and energy independent and become singular at the origin. Thus, non-perturbative renormalization methods must be applied to give a precise meaning to the scattering amplitude [@Case:1950] (for a comprehensive review in the one channel case see e.g. Ref. [@Frank:1971], and Ref. [@Beane:2000wh] for a modern perspective). Several methods have been proposed to study the LO term in Eq. (\[eq:pot\_chpt\]) for central [@Frederico:1999ps; @Beane:2001bc; @PavonValderrama:2003np; @PavonValderrama:2004nb; @PavonValderrama:2005gu] and non-central [@Nogga:2005hy] waves. Recently [@PavonValderrama:2005gu; @Valderrama:2005wv] we have shown how a renormalization program can be carried out for the NN interaction for the One Pion Exchange (OPE) and chiral Two Pion Exchange (TPE) potentials in the central $^1S_0$ and $^3S_1-^3D_1$ waves, and its implications for the deuteron and pion-deuteron scattering [@Valderrama:2006np]. In the present work we extend our analysis to all remaining partial waves with $j \le 5$, both for the OPE as well as for the chiral TPE potentials. As we showed in Refs.
[@PavonValderrama:2005gu; @Valderrama:2005wv], the short distance behaviour of the chiral NN potential, Eq. (\[eq:pot\_chpt\]), determines [*exactly*]{} how many counterterms are needed in order to generate renormalized and finite, i.e. cut-off independent, phase shifts. These counterterms can be determined by fixing some low energy parameters while the cut-off is removed. It has been [*assumed*]{} that dimensional power counting in the counterterms can be made [*independently*]{} of the short distance singularity of the potential. This yields conflicts between naive dimensional power counting and renormalization, which have been reported recently even for low partial waves [@Nogga:2005hy]. So one is led to an alternative: either one keeps the power counting and a finite cut-off, or one removes the cut-off at the expense of modifying the power counting of the short distance interaction. The finite cut-off route has been explored in great detail in the past [@Rentmeester:1999vw; @Epelbaum:1999dj; @Epelbaum:2003gr; @Epelbaum:2003xx; @Entem:2003ft; @Rentmeester:2003mf]. In this paper we explore further the possibility of taking the alternative suggested by renormalization and the tight constraints imposed by finiteness. The analysis becomes rather transparent in coordinate space, where the counterterms can be mapped into boundary conditions [@PavonValderrama:2003np; @PavonValderrama:2004nb; @PavonValderrama:2004td] at the origin. In practice, renormalization may be carried out in several ways. In coordinate space it seems natural to exploit the locality of the long distance (renormalized) potentials and then to renormalize the full scattering problem.
In the present work we adhere to this two-step renormalization, which has the additional advantage of making it possible to determine [*a priori*]{}, based on simple analytical arguments, the existence of the renormalized limit and how many independent renormalization conditions (counterterms) are compatible with this limit. In this regard, let us recall that the main advantage of renormalization is that [*identical*]{} finite and unique results should be obtained regardless of the method of calculation (coordinate or momentum space) and regularization, provided the same input physical data are used to eliminate the divergencies. In particular, we also expect independence of the way in which the limit is taken. The origin of the conflict can be traced back to the question whether, for a given energy independent local potential such as Eq. (\[eq:pot\_chpt\]), one can assume any short distance physics regardless of the form of the long range potential. Renormalization group invariance, however, requires that any physical parameter sits on a renormalization trajectory, and the corresponding evolution in the renormalization scale is dictated by the form of the long distance potential at [*all*]{} distances. The precise trajectory is uniquely fixed by a renormalization condition at very long distances. Thus, the separation between the short and long distance contributions is not only scale dependent but also potential dependent [@PavonValderrama:2003np; @PavonValderrama:2004nb]. Renormalization conditions are physical and do not exhibit this dependence. Finiteness of the scattering amplitude and orthogonality of scattering (and eventually bound state) wave functions impose very tight constraints on the allowed number of counterterms and their possible scale dependence [@PavonValderrama:2005gu; @Valderrama:2005wv]. The discussion becomes rather straightforward in coordinate space and in terms of boundary conditions for ordinary differential equations.
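In the single channel case the role of the boundary condition can be made explicit (a schematic illustration in our own notation, not taken verbatim from the equations of this paper): for reduced wave functions obeying the radial Schrödinger equation, the orthogonality integral regulated by a short distance cut-off $r_c$ picks up a Wronskian boundary term,

```latex
-u_k''(r) + U(r)\, u_k(r) = k^2\, u_k(r) \,, \qquad
(k'^2 - k^2) \int_{r_c}^{\infty} u_k(r)\, u_{k'}(r)\, {\rm d}r
= \Big[\, u_{k'}\, u_k' - u_k\, u_{k'}' \,\Big]_{r_c}^{\infty} \,,
```

so that, besides the standard contribution at infinity, the term at $r_c$ must vanish for all pairs of momenta; this happens only if the logarithmic derivative $u_k'(r_c)/u_k(r_c)$ is energy independent, which is the boundary condition form of the orthogonality constraint discussed above.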
In addition, unlike momentum space treatments, a very natural hierarchy of the renormalization problem takes place in configuration space [@PavonValderrama:2005gu; @Valderrama:2005wv]. More specifically, orthogonality of different energy solutions requires an energy independent boundary condition on the wave function for long distance local and energy independent potentials, as is the case for Eq. (\[eq:pot\_chpt\]) valid to NNLO, so that in [*all cases*]{} the effective range and higher order threshold parameters cannot be taken as independent input parameters [^1]. The results found in Refs. [@PavonValderrama:2005gu; @Valderrama:2005wv] can be concisely summarized as follows in the one channel case. For a regular potential, i.e. one diverging less strongly than the inverse square potential, $r^2 |U(r)| < \infty $, one may [*choose*]{} between the regular and irregular solution. In the first case the scattering length is predicted, while in the second case the scattering length becomes an input of the calculation. Singular potentials at the origin, i.e. those fulfilling $r^2 |U(r)| \to \infty $, do not allow this choice. If the potential is repulsive, the scattering length depends on the potential, while for an attractive potential the scattering length must be chosen as an independent parameter. In the coupled channel situation one must look at the strongest singularity of the potential eigenvalues at the origin, and apply the single channel results. In our formulation of the NN renormalization problem threshold parameters play an essential role. Unfortunately, scattering threshold parameters for partial waves other than the S-waves have never been considered in the context of chiral potentials [@Rentmeester:1999vw; @Epelbaum:1999dj; @Epelbaum:2003gr; @Epelbaum:2003xx; @Entem:2003ft; @Rentmeester:2003mf].
Instead, some calculations adjust their counterterms to fit the phase shifts in the region above threshold to the Nijmegen database [@Stoks:1993tb; @Stoks:1994wp]. In a recent work we have filled this gap by carrying out a complete determination of these threshold parameters for the Reid93 and NijmII potentials [@PavonValderrama:2004se]. In the light of this new information it is quite possible that the good fits in the intermediate energy region imply a somewhat less accurate description in the threshold region. This issue will become relevant in the description of some partial waves. The paper is organized as follows. In Sect. \[sec:form\] we review the formalism for coupled channel scattering in the presence of singular potentials at the origin. For completeness we list the potentials in Appendix \[sec:potentials\]. Based on the short distance behaviour of those potentials (see Appendix \[sec:short\]) and the requirement of orthogonality, we determine the number of independent parameters for any partial wave with $j \le 5$. In Sect. \[sec:phases\] we present our results for the phase shifts. Specifically, we make a thorough analysis of cut-off dependence in all partial waves, both for the OPE as well as for the chiral TPE potential. We also discuss the perturbative nature of peripheral waves within the present non-perturbative approach. Finally, in Sect. \[sec:concl\] we present our conclusions.
Formalism {#sec:form} ========= We solve the coupled channel Schrödinger equation for the relative motion, which in compact notation reads $$\begin{aligned} -\u '' (r) + \left[ \U (r) + \frac{{\bf l}^2}{r^2} \right] \u (r) = k^2 \u (r) \, , \label{eq:sch_cp} \end{aligned}$$ where $\U (r)= 2 \mu_{np} {\bf V}(r)$ is the coupled channel reduced potential matrix, with $\mu_{np}=M_p M_n /(M_p+M_n)$ the reduced proton-neutron mass, which for $j> 0$ can be written as $$\begin{aligned} \U^{0j} (r) &=& U_{jj}^{0j} \, , \nonumber \\ \U^{1j} (r) &=& \begin{pmatrix} U_{j-1,j-1}^{1j} (r) & 0 & U_{j-1,j+1}^{1j} (r) \\ 0 & U_{jj}^{1j} (r) & 0 \\ U_{j-1,j+1}^{1j} (r) & 0 & U_{j+1,j+1}^{1j} (r) \end{pmatrix} \, . \nonumber \end{aligned}$$ In Eq. (\[eq:sch\_cp\]) $ {\bf l}^2 = {\rm diag} ( l_1 (l_1+1), \dots, l_N (l_N +1) )$ is the orbital angular momentum, $\u(r)$ is the reduced matrix wave function, $k$ the C.M. momentum and $j$ the total angular momentum. In our case $N=1$ for the spin singlet channel with $l=j$ and $N=3$ for the spin triplet channel with $l_1=j-1$, $l_2=j$ and $l_3=j+1$. The potentials used in this paper were obtained in Refs. [@Ordonez:1995rz; @Kaiser:1997mw; @Rentmeester:1999vw] in coordinate space and are listed in Appendix \[sec:potentials\] for completeness. Long distance behaviour ----------------------- At long distances, we assume the usual asymptotic normalization condition $$\begin{aligned} \u (r) \to \hat \h^{(-)} (r) - \hat \h^{(+)} (r) \S \, , \label{eq:asym}\end{aligned}$$ with $\S$ the coupled channel unitary S-matrix. The corresponding out-going and in-going free spherical waves are given by $$\begin{aligned} \hat \h^{(\pm)} (r) &=& {\rm diag} ( \hat h^\pm_{l_1} ( k r) , \dots , \hat h^\pm_{l_N} (k r) ) \, ,\end{aligned}$$ with $ \hat h^{\pm}_l ( x) $ the reduced Hankel functions of order $l$, $ \hat h_l^{\pm} (x) = x H_{l+1/2}^{\pm} (x) $ ( $ \hat h_0^{\pm} (x) = e^{ \pm i x}$ ), which satisfy the Schrödinger equation for a free particle.
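The channel bookkeeping above can be made concrete with a small helper (an illustrative sketch, not code from this work; the function names are ours): the singlet is a single channel with $l=j$, while the triplet block couples $l=j-1$ and $l=j+1$ through the tensor force, leaving the $l=j$ wave uncoupled, as the zeros in $\U^{1j}$ show.

```python
import numpy as np

# Channel structure of Eq. (eq:sch_cp) for total angular momentum j:
# spin singlet (s=0): N=1 with l=j; spin triplet (s=1): N=3 with
# l = j-1, j, j+1, where only l = j-1 and l = j+1 are coupled.
def l2_matrix(j, s):
    """Centrifugal matrix l**2 = diag(l_i (l_i + 1))."""
    ls = [j] if s == 0 else [j - 1, j, j + 1]
    return np.diag([l * (l + 1) for l in ls])

def coupling_pattern(s):
    """Entries of the reduced potential matrix that may be nonzero."""
    if s == 0:
        return np.array([[1]])
    return np.array([[1, 0, 1],
                     [0, 1, 0],
                     [1, 0, 1]])

print(l2_matrix(2, 1).diagonal())   # l = 1, 2, 3  ->  2, 6, 12
```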
For the spin singlet state, $s=0$, one has $l=j$ and hence the state is uncoupled $$\begin{aligned} S_{jj}^{0j} = e^{ 2 i \delta_{j}^{0j} } \, ,\end{aligned}$$ whereas for the spin triplet state $s=1$, one has the uncoupled $ l=j$ state $$\begin{aligned} S_{jj}^{1j} &=& e^{ 2 i \delta_{j}^{1j} } \, ,\end{aligned}$$ and the two channel coupled $l,l'=j \pm 1$ states, for which we use the Stapp-Ypsilantis-Metropolis (SYM or nuclear bar) [@stapp] parameterization $$\begin{aligned} S^{1j} &=& \left( \begin{array}{cc} S_{j-1 \, j-1}^{1j} & S_{j-1 \, j+1}^{1j} \\ S_{j+1 \, j-1}^{1j} & S_{j+1 \, j+1}^{1j} \end{array} \right) \nonumber \\ &=& \left( \begin{array}{cc} \cos{(2 \bar \epsilon_j)} e^{2 i \bar \delta^{1j}_{j-1}} & i \sin{(2 \bar \epsilon_j)} e^{i (\bar \delta^{1j}_{j-1} +\bar \delta^{1j}_{j+1})} \\ i \sin{(2 \bar \epsilon_j)} e^{i (\bar \delta^{1j}_{j-1} + \bar \delta^{1j}_{j+1})} & \cos{(2 \bar \epsilon_j)} e^{2 i \bar \delta^{1j}_{j+1}} \end{array} \right) \, . \nonumber\end{aligned}$$ In the discussion of low energy properties we also use the Blatt-Biedenharn (BB or eigenphase) parameterization [@Bl52] defined by $$\begin{aligned} S^{1j} &=& \begin{pmatrix} \cos \epsilon_j & -\sin \epsilon_j \\ \sin \epsilon_j & \cos \epsilon_j \end{pmatrix} \begin{pmatrix} e^{2 {\rm i} \delta^{1j}_{j-1}} & 0 \\ 0 & e^{2 {\rm i} \delta_{j+1}^{1j}} \end{pmatrix} \nonumber \\ &\times& \begin{pmatrix} \cos \epsilon_j & \sin \epsilon_j \\ -\sin \epsilon_j & \cos \epsilon_j \end{pmatrix} \, . \label{eq:BB} \end{aligned}$$ The relation between the BB and SYM phase shifts is $$\begin{aligned} \bar \delta_{j+1}^{1j} + \bar \delta_{j-1}^{1j} &=& \delta_{j+1}^{1j} + \delta_{j-1}^{1j} \, , \\ \sin( \bar \delta_{j-1}^{1j} - \bar \delta_{j+1}^{1j}) &=& \frac{\tan( 2\bar \epsilon_j )}{\tan(2\epsilon_j )} \, .\end{aligned}$$ In the present paper zero energy scattering parameters play an essential role, since they are often used (see below) as input parameters in the calculation of phase shifts.
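The two parameterizations can be cross-checked numerically. The sketch below (with illustrative angle values, not fitted phases) builds $S$ in the BB form of Eq. (\[eq:BB\]), reads off the nuclear-bar parameters from the matrix elements, and verifies the two relations quoted above:

```python
import numpy as np

# Consistency check of the SYM (nuclear bar) vs BB (eigenphase)
# parameterizations of the coupled-channel S-matrix. Angles are
# illustrative, not physical phases.
d1, d2, eps = 0.5, 0.2, 0.1          # BB: delta_{j-1}, delta_{j+1}, epsilon_j
R = np.array([[np.cos(eps), -np.sin(eps)],
              [np.sin(eps),  np.cos(eps)]])
S = R @ np.diag([np.exp(2j * d1), np.exp(2j * d2)]) @ R.T   # Eq. (BB)

# Read off SYM parameters: S11 = cos(2 ebar) e^{2i dbar1}, etc.
db1 = np.angle(S[0, 0]) / 2
db2 = np.angle(S[1, 1]) / 2
ebar = np.arcsin(np.abs(S[0, 1])) / 2

print(db1 + db2, d1 + d2)                                     # sums agree
print(np.sin(db1 - db2), np.tan(2 * ebar) / np.tan(2 * eps))  # mixing relation
```

Both printed pairs coincide to machine precision, which is just the statement of the two SYM-BB relations.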
Due to unitarity of the S-matrix in the low energy limit, $ k\to 0$, we have $$\begin{aligned} \left(\S - \E \right)_{l',l}=- 2 {\rm i} \alpha_{l', l} k^{l'+l+1} + \dots \, , \end{aligned}$$ with $\alpha_{l' l} $ the (hermitian) scattering length matrix [^2]. The threshold behaviour acquires its simplest form in the SYM representation, $$\begin{aligned} \delta^{0j}_{j} &\to& - \alpha^{0j}_{j} k^{2j+1} \, , \\ \delta^{1j}_{j} &\to& - \alpha^{1j}_{j} k^{2j+1} \, , \\ \bar \delta^{1j}_{j-1} &\to& - \bar \alpha^{1j}_{j-1} k^{2j-1} \, , \\ \bar \delta^{1j}_{j+1} &\to& - \bar \alpha^{1j}_{j+1} k^{2j+3} \, , \\ \bar \epsilon_j &\to& - \bar \alpha^{1j}_{j} k^{2j+1} \, . \label{eq:phase-thres}\end{aligned}$$ In the BB form one has similar behaviours for the $\delta$’s but for $\epsilon_j$ which behaves as $k^{2j}$ instead of $k^{2j+1}$ $$\begin{aligned} \delta^{1j}_{j-1} &\to& - \bar \alpha^{1j}_{j-1}\,k^{2j-1} \, , \\ \delta^{1j}_{j+1} &\to& - (\bar \alpha^{1j}_{j+1} - \frac{{({\bar \alpha}^{1j}_{j})}^2}{\bar \alpha^{1j}_{j-1}})\,k^{2j+3} \, , \\ \epsilon_j &\to& \frac{\bar \alpha^{1j}_{j}}{\bar \alpha^{1j}_{j-1}} \, k^{2j} \, . \label{eq:phase-thres-BB}\end{aligned}$$ Short distance behaviour ------------------------ The form of the wave functions at the origin is uniquely determined by the form of the potential at short distances (see e.g. [@Case:1950; @Frank:1971] for the case of one channel and [@PavonValderrama:2005gu; @Valderrama:2005wv] for coupled channels). For the chiral NN potential, Eq. (\[eq:pot\_chpt\]), one has $$\begin{aligned} \U_{\rm LO} (r) &\to& \frac{M {\bf C}_{3,LO}}{r^3} \, , \nonumber \\ \U_{\rm NLO} (r) &\to& \frac{M {\bf C}_{5,NLO}}{r^5} \, ,\nonumber \\ \U_{\rm NNLO} (r) &\to& \frac{M {\bf C}_{6,NNLO}}{r^6} \, , \nonumber \\ \label{eq:singLONLONNNLO} \end{aligned}$$ where LO includes the first term in Eq. (\[eq:pot\_chpt\]), NLO the first two terms and so on. Note that higher order potentials become increasingly singular at the origin. 
For a potential diverging at the origin as an inverse power law, $$\begin{aligned} \U (r) \to \frac{M {\bf C}_n}{r^n} \, , \label{eq:singular} \end{aligned}$$ with ${\bf C}_n$ a matrix of generalized van der Waals coefficients and $n > 2$ an integer, one diagonalizes the matrix ${\bf C}_n $ by a constant unitary transformation, ${\bf G}$, yielding $$\begin{aligned} M {\bf C}_n = {\bf G} \, {\rm diag} ( \pm R_1^{n-2}, \dots , \pm R_N^{n-2} ) \, {\bf G}^{-1} \, , \end{aligned}$$ with $R_i$ constants of length dimension. The plus sign corresponds to the case with a positive eigenvalue (repulsive) and the minus sign to the case of a negative eigenvalue (attractive). Then, at short distances one has the solutions $$\begin{aligned} \u (r) \to {\bf G} \begin{pmatrix} u_{1,\pm} (r) \cr \cdots \\ u_{N,\pm} (r) \end{pmatrix} \, , \label{eq:eigen_wf}\end{aligned}$$ where for the attractive and repulsive cases one has $$\begin{aligned} u_{i,-} (r) &\to & C_{i,-} \left(\frac{r}{R_i}\right)^{n/4} \sin\left[ \frac{2}{n-2} \left(\frac{R_i}{r}\right)^{\frac{n}2-1} + \varphi_i \right] \, ,\nonumber \\ \label{eq:uA} \\ u_{i,+} (r) & \to & C_{i,+} \left(\frac{r}{R_i}\right)^{n/4} \exp \left[- \frac{2}{n-2} \left(\frac{R_i}{r}\right)^{\frac{n}2-1} \right] \, ,\label{eq:uR} \label{eq:short_wf}\end{aligned}$$ respectively. This behaviour of the wave functions near the origin is valid regardless of the energy, provided the distances are small enough [^3]. Here, $\varphi_i$ are arbitrary short distance phases which in general depend on the energy. There are as many short distance phases as short distance attractive eigenpotentials. Orthogonality of the wave functions at the origin yields the relation $$\begin{aligned} \sum_{i=1}^N \left[ {u_{k,i}}^* u_{p,i}'- {u_{k,i}'}^* u_{p,i} \right]\Big|_{r=0} = \sum_{i=1}^A \sin(\varphi_i (k) - \varphi_i(p) ) \, , \nonumber \\ \end{aligned}$$ where $A \le N$ is the number of short distance attractive eigenpotentials.
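As a sanity check, one can verify numerically that the attractive short distance solution, Eq. (\[eq:uA\]), solves the radial equation to leading order near the origin. The sketch below uses a toy single-channel case with $n=6$, $R=1$ and an arbitrary phase (all values illustrative); the relative deviation of $u''/u$ from $U(r)$ is of order $(r/R)^{n-2}$:

```python
import numpy as np

# Check of the attractive short-distance solution, Eq. (uA), for a toy
# single channel with U(r) = -R^(n-2)/r^n (here n = 6, R = 1; the phase
# phi is arbitrary). Near the origin u''/u should approach U(r).
n, R, phi = 6, 1.0, 0.3

def u(r):
    return (r / R) ** (n / 4) * np.sin(
        2.0 / (n - 2) * (R / r) ** (n / 2 - 1) + phi)

r, h = 0.05, 1e-7                                 # r << R; small step
upp = (u(r + h) - 2.0 * u(r) + u(r - h)) / h**2   # central second derivative
U = -R ** (n - 2) / r ** n
print(abs(upp / u(r) / U - 1.0))                  # small: u solves u'' = U u
```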
  Set       Source                        $c_1 ({\rm GeV}^{-1})$   $c_3 ({\rm GeV}^{-1})$   $c_4 ({\rm GeV}^{-1})$
  --------- ----------------------------- ------------------------ ------------------------ ------------------------
  Set I     $\pi N$ [@Buettiker:1999ap]   -0.81                    -4.69                    3.40
  Set II    $NN$ [@Rentmeester:1999vw]    -0.76                    -5.08                    4.70
  Set III   $NN$ [@Epelbaum:2003xx]       -0.81                    -3.40                    3.40
  Set IV    $NN$ [@Entem:2003ft]          -0.81                    -3.20                    5.40

  : \[tab:table1\] Sets of chiral coefficients considered in this work.

The simplest choice to fix relative phases for a positive energy scattering state is to take the zero energy state $p=0$ as the reference state, together with the zero energy short distance phase. In the particular case where only one eigenvalue is negative the short distance phase is energy independent. This may happen both in the singlet as well as in the triplet channels with $j=l$. The short distance phase is then fixed by reproducing the scattering length in the singlet channel and one of the three scattering lengths in the triplet channel. In the case where one has two negative, i.e. attractive, eigenvalues (this can only happen in triplet channels) there are two undetermined short distance phases, which can be fixed by using the corresponding three scattering lengths. The case of two positive, i.e. repulsive, eigenvalues does not allow one to fix any scattering length. The case with two different signs for the eigenvalues fixes one scattering length only. Note that in this construction and for two coupled channels there is no intermediate situation where the solution is specified by just two scattering lengths; one has either zero, one or three. Although our arguments are entirely based on analytical calculations, one should mention that our conclusions are in agreement with the findings of Ref.
[@Nogga:2005hy] for the OPE case. There, counterterms beyond the ones dictated by Weinberg’s power counting are included in the $^3P_0$, $^3P_2-{}^3F_2$ and $^3D_2$ waves to ensure renormalizability on numerical grounds. As we will see below, our renormalized phase shifts for the special OPE case essentially reproduce their results, although our TPE non-perturbatively renormalized amplitudes go beyond these results. Another issue is the establishment of a theoretically compelling and mathematically consistent power counting which also provides phenomenological success. This has been the goal of much of the EFT activity in recent years. Despite the fact that our OPE is mathematically identical to the one in Ref. [@Nogga:2005hy], where a strong emphasis on power counting has been made, our motivation is slightly different. Actually, these authors argue that a consistent scheme for TPE might be achieved within a perturbative framework, using the non-perturbative OPE distorted amplitudes as the leading order approximation. This is theoretically appealing, and the issue was thoroughly discussed within the coordinate space approach in our previous paper on the central waves [@Valderrama:2005wv]. There, it was pointed out that with enough counterterms such a program could be pursued, although orthogonality was violated and the results did not exhibit a clear improvement as compared to the fully iterated potentials. The reason was the appearance of non-analytical dependences on the would-be dimensional power counting parameter, a situation that had not been foreseen in the standard EFT setup. This suggests that the discussion on power counting and the systematics of EFT is not yet over. Therefore, and as we did in our previous work, we focus more on establishing long range model independent correlations, leaving the possible establishment of a satisfactory power counting for future studies.
Regularization methods ---------------------- In principle, it is possible to implement the short distance behaviour of the wave functions, Eq. (\[eq:short\_wf\]), if one goes to sufficiently small distances, or if the short distance behaviour of the wave function is improved [@PavonValderrama:2005gu]. Computationally, the implementation of short distance regulators is mostly straightforward. The attractive or repulsive nature of the potentials at short distances requires different choices of regulators [@PavonValderrama:2005gu; @Valderrama:2005wv]. For a one-channel repulsive singular potential we use the regulator $$\begin{aligned} \frac{u_k' (a)}{u_k(a)}= \frac{l+1}{a} \, . \end{aligned}$$ This condition ensures orthogonality of wave functions with different energy. For the attractive singular case, we integrate in from infinity at zero energy down to a given boundary radius, $a$, impose orthogonality at the boundary by matching logarithmic derivatives $$\begin{aligned} \frac{u_k' (a)}{u_k(a)}= \frac{u_0' (a)}{u_0(a)} \, ,\end{aligned}$$ and then integrate out at finite energy. In the coupled channel case we extend the method by applying the one channel regularization to the short distance eigenfunctions, Eq. (\[eq:eigen\_wf\]). Fixing of parameters and renormalization conditions --------------------------------------------------- Fixing the short distance phases requires some renormalization condition. As we have said, an appealing choice is to impose this condition at zero energy. The way to proceed in practice is quite straightforward, although tedious given the large number (27) of partial waves considered in this work. In the singlet channel case and for an attractive short distance singularity, one starts at zero energy and integrates in from large distances $\sim 15\,{\rm fm}$ with a given scattering length down to a short boundary radius $\sim 0.1\,{\rm fm}$.
At finite energy one integrates out, matching the wave function to the zero energy solution at the short distance boundary, thus generating a phase shift from a given prescribed scattering length. Of course, in this method one has to check for cut-off independence (taking $r=0.1-0.2\,{\rm fm}$ proves enough). For the coupled channel case one proceeds along similar lines; the procedure has been described in great detail in our previous works [@PavonValderrama:2005gu; @Valderrama:2005wv] for the $j=1$ channel. The method relies heavily on the superposition principle of boundary conditions, and we use here the extension of that method to higher partial waves. One of the advantages of our approach is that we rarely have to make a fit to the data; any phase shift has [*by construction*]{} the right threshold behaviour in the case where the potential at short distances is attractive. For the repulsive potential case the scattering length is predicted entirely from the potential. In any case, discrepancies with the data can be attributed to the potential.
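The integrate-in/integrate-out procedure can be sketched in a toy single-channel setting. The code below uses a hypothetical attractive singular potential $U(r)=-\beta^2/r^4$ ($l=0$), not the chiral potential, because its zero energy solution is known in closed form; the renormalization condition fixes the scattering length $\alpha_0$, the logarithmic derivative is matched at the boundary radius $a$, and the phase shift follows by integrating out (all parameter values illustrative):

```python
import numpy as np

# Boundary-condition renormalization for a hypothetical single-channel
# attractive singular potential U(r) = -beta**2 / r**4 (l = 0). Its zero
# energy solution is exact, u0 = r sin(beta/r + phi), with scattering
# length alpha0 = -beta/tan(phi); fixing alpha0 fixes the short distance phase.
beta, alpha0 = 1.0, 2.0
phi = np.arctan2(-beta, alpha0)            # tan(phi) = -beta/alpha0

u0  = lambda r: r * np.sin(beta / r + phi)
du0 = lambda r: np.sin(beta / r + phi) - (beta / r) * np.cos(beta / r + phi)

def phase_shift(k, a=0.1, rmax=40.0, h=2e-3):
    """Match u'/u to the zero energy solution at r = a, integrate out
    with RK4, and read off delta from u ~ sin(k r + delta)."""
    r, u, up = a, u0(a), du0(a)            # renormalization condition at r = a
    f = lambda r, u: (-beta**2 / r**4 - k**2) * u    # u'' = (U - k^2) u
    while r < rmax:
        k1u, k1p = up,                 f(r,           u)
        k2u, k2p = up + 0.5 * h * k1p, f(r + 0.5 * h, u + 0.5 * h * k1u)
        k3u, k3p = up + 0.5 * h * k2p, f(r + 0.5 * h, u + 0.5 * h * k2u)
        k4u, k4p = up + h * k3p,       f(r + h,       u + h * k3u)
        u, up = (u + h * (k1u + 2*k2u + 2*k3u + k4u) / 6,
                 up + h * (k1p + 2*k2p + 2*k3p + k4p) / 6)
        r += h
    return np.arctan2(k * u, up) - k * r   # delta (mod pi)

k = 0.05
delta = phase_shift(k)
print(-np.tan(delta) / k)   # approaches alpha0 = 2.0 as k -> 0
```

By construction $-\tan\delta/k \to \alpha_0$ at threshold, up to small finite-$k$ and finite-range corrections; this is the sense in which the threshold behaviour is built in rather than fitted.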
Wave $\alpha$ NijmII (Reid93) LO NLO NNLO ---------- -------------------------- ------- ------- ------- $^1S_0 $ -23.727(-23.735) Input Input Input $^3P_0 $ -2.468(-2.469) Input — Input $^1P_1 $ 2.797(2.736) — — — $^3P_1 $ 1.529(1.530) — Input Input $^3S_1 $ 5.418(5.422) Input — Input $^3D_1 $ 6.505(6.453) — — Input $E_1 $ 1.647(1.645) — — Input $^1D_2 $ -1.389(-1.377) — Input Input $^3D_2 $ -7.405(-7.411) Input Input Input $^3P_2 $ -0.2844(-0.2892) Input Input — $^3F_2 $ -0.9763(-0.9698) — — — $E_2 $ 1.609(1.600) — — — $^1F_3 $ 8.383(8.365) — — — $^3F_3 $ 2.703(2.686) — Input Input $^3D_3 $ -0.1449(-0.1770) Input — Input $^3G_3 $ 4.880(4.874) — — Input $E_3 $ -9.695(-9.683) — — Input $^1G_4 $ -3.229(-3.210) — Input Input $^3G_4 $ -19.17(-19.14) Input Input Input $^3F_4 $ -0.01045(-0.01053) Input Input — $^3H_4 $ -1.250(-1.240) — — — $E_4 $ 3.609(3.586) — — — $^1H_5 $ 28.61(28.57) — — — $^3H_5 $ 6.128(6.082) — Input Input $^3G_5 $ -0.0090(-0.010) Input — Input $^3I_5 $ 10.68(10.66) — — Input $E_5 $ -31.34(-31.29) — — Input : \[tab:table2\] The number of independent parameters for different orders of approximation of the potential. The scattering lengths are in ${\rm fm}^{l+l'+1}$ and are taken from NijmII and Reid93 potentials [@Stoks:1994wp] in Ref. [@PavonValderrama:2004se]. We use the (SYM-nuclear bar) convention, Eq. (\[eq:phase-thres\]). Inspection of Table \[tab:table2\] illustrates the situation for the LO, NLO, and NNLO approximations to the potential. We show the scattering lengths in all partial waves as determined in our previous work [@PavonValderrama:2004se] together with the corresponding eigenvalues for the leading short distance coefficients in the LO (OPE), NLO and NNLO approximations to the potential. In the NNLO one must also specify the values of the chiral constants $c_1$, $c_3$ and $c_4$. We use for definiteness the values of Ref. [@Entem:2003ft], since as we saw in Ref. 
[@Valderrama:2005wv] they provide a reasonable description of deuteron properties. Details on the numerical procedure ---------------------------------- The integration of the coupled differential equations requires some care, particularly in the vicinity of the short distance singularities. In the case of attractive singularities, due to the increasing oscillations, the wave function has to be sampled at a rate comparable to the size of the oscillations. For the repulsive case, one must stop at sufficiently large distances due to the exponential suppression of the wave function. Another important condition has to do with preservation of in and out reversibility of the integration. This last requirement guarantees that for attractive channels, where the scattering length is supplied as an input parameter, the threshold behaviour of the phase shift is consistent with that given scattering length. Another problem one has to face for high partial waves is related to the practical influence of the scattering length on the calculated phase shifts. In principle, and for an attractive singular potential, the scattering length needs to be specified. For the one channel case, this is done by integrating in the zero energy large distance solution, valid for $r \gg 2/m_\pi $, $$\begin{aligned} u (r) \to r^{-l} - \frac{r^{l+1}}{\alpha_l} \, .\end{aligned}$$ The long distance irregular solution dominates, unless $\alpha_l $ is anomalously large, i.e. $\alpha_l (m_\pi/2)^{2l+1} \gg 1$, so that when integrating in much of the regular solution will be lost and the result will be rather insensitive to the value of $\alpha_l$, provided it is of normal size. This fact becomes relevant in the numerical calculations if the long distance cut-off is taken to be exceedingly large. To avoid this situation we take typically $R_{\rm max} = 15\,{\rm fm}$ for large $l$.
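For illustration, the dimensionless measure $\alpha_l (m_\pi/2)^{2l+1}$ can be evaluated for a few high partial waves with the NijmII scattering lengths of Table \[tab:table2\] (a rough numerical aside, not a calculation from this work); all values come out far below one, i.e. the scattering lengths are of normal size and the phase shifts are correspondingly insensitive to their precise input values:

```python
# Size measure alpha_l * (m_pi/2)^(2l+1) for a few high partial waves,
# using NijmII scattering lengths from Table 2 (in fm^(2l+1)).
# Values much smaller than 1 signal "normal size".
m_pi = 138.03 / 197.327        # pion mass in fm^-1 (hbar c = 197.327 MeV fm)
waves = {"3D3": (2, -0.1449), "3G4": (4, -19.17), "3G5": (4, -0.0090)}
for name, (l, alpha) in waves.items():
    print(name, abs(alpha) * (m_pi / 2) ** (2 * l + 1))   # all << 1
```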
  Set      $\gamma\,({\rm fm}^{-1})$   $\eta$       $A_S\,({\rm fm}^{-1/2})$   $r_m\,({\rm fm})$   $Q_d ({\rm fm}^2)$   $P_D$ (%)   $\alpha_0\,({\rm fm})$   $\alpha_{02} ({\rm fm}^3)$   $\alpha_2 ({\rm fm}^5)$
  -------- --------------------------- ------------ -------------------------- ------------------- -------------------- ----------- ------------------------ ---------------------------- -------------------------
  OPE(0)   0.2274(4)                   0.02564(4)   0.8568(10)                 1.964(3)            0.2796(3)            7.208(12)   Input                    1.754(7)                     6.770(7)
  OPE(B)   Input                       0.02633      0.8681(1)                  1.9351(5)           0.2762(1)            7.31(1)     5.335(1)                 1.673(1)                     6.693(1)
  TPE(0)   0.2322(3)                   0.02531(9)   0.8891(4)                  1.968(3)            0.2723(3)            7.24(13)    Input                    NijmII                       NijmII
  TPE(B)   Input                       Input        0.884(4)                   1.967(6)            0.276(3)             8(1)        Input                    1.67(4)                      6.6(4)
  NijmII   0.231605                    0.02521      0.8845                     1.9675              0.2707               5.635       5.418                    1.647                        6.505
  Reid93   0.231605                    0.02514      0.8845                     1.9686              0.2703               5.699       5.422                    1.645                        6.453
  Exp.     0.231605                    0.0256(4)    0.8846(9)                  1.971(6)            0.2859(3)            -           5.419(7)                 -                            -

  : \[tab:deut\_prop\] Deuteron properties and low energy parameters for the OPE and TPE potentials renormalized with respect to the zero energy state (0) or the deuteron bound state (B), compared with the NijmII and Reid93 potentials and experiment.

**Results for the Phase Shifts** {#sec:phases} ================================ Numerical parameters -------------------- For our numerical calculations we take $f_\pi=92.4\,{\rm MeV}$, $m=138.03\,{\rm MeV}$ for the pion mass, $ 2 \mu_{np}= M = 2 M_p M_n /(M_p+M_n) = 938.918\,{\rm MeV}$, $ g_A =1.29 $ in the OPE piece, to account for the Goldberger-Treiman discrepancy, and $g_A=1.26 $ in the TPE piece of the potential. The corresponding pion nucleon coupling constant then takes the value $ g_{\pi NN}=13.083$ (i.e. $g_A=1.29$), according to the Nijmegen phase shift analysis of NN scattering [@deSwart:1997ep]. The values of the coefficients $c_1$, $c_3$ and $c_4$ used along this paper can be looked up in Table \[tab:table1\] for completeness. The potentials in configuration space used in this paper are exactly those provided in Refs. [@Ordonez:1995rz; @Kaiser:1997mw; @Rentmeester:1999vw], but disregarding relativistic corrections, $M/E \to 1$ [^4]. The potentials are listed in Appendix \[sec:potentials\] for completeness.
The short distance van der Waals coefficients for all channels studied in the present work are presented in Appendix \[sec:short\]. The output of such a channel by channel analysis is briefly summarized in Table \[tab:table2\], where we indicate which scattering lengths are used as input parameters according to the discussion given in Sect. \[sec:form\]. Low energy parameters for the high quality potentials [@Stoks:1993tb; @Stoks:1994wp] have been obtained in Ref. [@PavonValderrama:2004se]. We will use the NijmII values, but to give an idea of a lower bound on the uncertainties in those parameters we also list the Reid93 values. Probably the real uncertainties are much larger, since the actual values of these low energy parameters will depend upon which long range physics is included in the high quality potentials, where explicit TPE effects have not been included, as we do in the present work [^5]. The deuteron channel revisited ------------------------------ Before embarking on the full fledged discussion of all partial waves, it is interesting to reanalyze first the $^3S_1-{}^3D_1$ channel already studied in our previous work on the deuteron [@PavonValderrama:2005gu; @Valderrama:2005wv]. There, we used orthogonality to the deuteron bound state. The scattering lengths $\alpha_{02}= 1.67\,{\rm fm}^3$ and $\alpha_2= 6.6\,{\rm fm}^5$ were deduced from the experimental deuteron binding energy, the asymptotic $D/S$ ratio and the S-wave scattering length $\alpha_0$. These values turned out to be a bit off the values deduced from the NijmII and Reid93 potentials [@PavonValderrama:2004td] (see Table \[tab:table2\]). Nevertheless, the intermediate energy region turned out to be better described than the low energy behaviour suggested. In the present work we choose instead to build scattering states which are orthogonal to the zero energy states, so that deuteron properties can be deduced, as done in Table \[tab:deut\_prop\]. In Fig.
\[fig:3C1\] we show the results when either the zero energy or the deuteron bound state is used as the reference state. One obvious lesson from this comparison is that the phase shifts, particularly in the $E_1$ channel, may be better described in the intermediate energy region if the deuteron is used as the reference state, despite the fact that the threshold behaviour is a bit off. This may be explained by the observation that $\alpha_{02}$ and $\alpha_2$ encode higher energy information about the system than $\alpha_0$ or $\gamma$ [^6], so the latter parameters are more suited to obtain an effective description of the system. This feature will become evident in other partial waves. Cut-off dependence ------------------ In Figs. \[fig:fig-j=0\], \[fig:fig-j=1\], \[fig:fig-j=2\], \[fig:fig-j=3\], \[fig:fig-j=4\] and \[fig:fig-j=5\] we show the results of our calculation for all partial waves with $j \le 5 $ as a function of the nucleon LAB energy. For definiteness we use the chiral constants $c_1$, $c_3$ and $c_4$ of Ref. [@Entem:2003ft] (Set IV), which already provided a good description of deuteron properties after renormalization [@Valderrama:2005wv] at NNLO. This choice allows a more straightforward comparison to the N$^3$LO calculation of Ref. [@Entem:2003ft] with finite cut-offs. Unless otherwise stated, the needed low energy parameters for these figures are [*always*]{} taken to be those of Ref. [@PavonValderrama:2004se] for the NijmII potential (see Table \[tab:table2\]). In order to test the stability of the phase shifts against changes in the short distance cut-off parameter, $R_S$, we show in Figs. \[fig:fig-j=0\], \[fig:fig-j=1\], \[fig:fig-j=2\], \[fig:fig-j=3\], \[fig:fig-j=4\] and \[fig:fig-j=5\], similarly to the OPE study in momentum space of Ref. [@Nogga:2005hy], the cut-off dependence for fixed values of the lab energy, both for the OPE as well as for the TPE potentials. This is done in the range $0.15\,{\rm fm} \le R_S \le 1.5\,{\rm fm}$.
If we identify this short distance cut-off with the sharp momentum cut-off $\Lambda= \pi/(2 R_S)$ [@PavonValderrama:2004td], the smallest boundary radius, $\sim 0.15\,{\rm fm}$, corresponds to a maximum cut-off $\Lambda \sim 2\,{\rm GeV}$. This is much larger than the cut-offs used in Refs. [@Rentmeester:1999vw; @Epelbaum:1999dj; @Epelbaum:2003gr; @Epelbaum:2003xx; @Entem:2003ft; @Rentmeester:2003mf], but comparable to the exponential cut-off used in Ref. [@Nogga:2005hy] for the renormalization of the OPE potential [^7]. Note that the limit $R_S \to 0$ may be taken independently in each channel. The evolution of the increasingly oscillating wave function in the attractive case can be identified with the cycles (improperly called limit-cycles, see footnote 5 in Ref. [@PavonValderrama:2004nb]) described in Refs. [@Beane:2000wh; @Beane:2001bc; @PavonValderrama:2004nb; @PavonValderrama:2004td] by looking at suitable logarithmic combinations of the wave functions. The cycles documented in Ref. [@Nogga:2005hy] in momentum space can be mapped into the coordinate space cycles by relating the coordinate and momentum space cut-offs. Generally speaking, the inclusion of chiral TPE effects generates smoother limits as compared to the OPE results, as one would expect. We have checked that for short distance repulsive (eigen)channels the results are not very sensitive to the choice of the regulator for small values of $R_S$. As we also see from the figures, the convergence depends both on the partial wave as well as on the energy. As expected, the value of the short distance cut-off $R_S$ needed to achieve stability is rather high for peripheral waves, $R_S \sim 1/m_\pi$. Another feature of the calculation is the appearance of stability plateaus for a number of partial waves. This trend has also been noted in previous works with finite cut-offs [@Epelbaum:2004fk], where there appear sequential cut-off windows.
In coordinate space this originates from the almost self-similar pattern of the short distance oscillations of the wave function, which suggests a sequential and faster convergence modulo cycles [@PavonValderrama:2004nb]. Let us remark at this point that the existence of an $R_S \to 0$ limit does not necessarily mean a plateau-like approach to it. This is the case, for example, of the $^1S_0$ wave, which for OPE shows a linear dependence on the cut-off due to the mild $1/r$ singularity of the potential, generating a linear-like behaviour which corresponds to the ratio of regular ($\sim r$) and irregular ($\sim 1$) solutions at the origin [^8]. A similar behaviour can be found in other singlet waves in which the OPE potential also behaves as $1/r$, but highly attenuated by the influence of the centrifugal barrier. Finally, let us note that there are some channels where the phase shifts exhibit a very strong dependence on the regulator [^9]. Renormalized phase shifts ------------------------- ### LO (OPE) In Figs. \[fig:fig-j=0\], \[fig:fig-j=1\], \[fig:fig-j=2\], \[fig:fig-j=3\], \[fig:fig-j=4\] and \[fig:fig-j=5\] we also compare the OPE (LO), the NNLO TPE and the Nijmegen phase shift analysis [@Stoks:1993tb; @Stoks:1994wp]. As noted in Table \[tab:table2\], in some cases with attractive singular potentials some scattering lengths must be specified in order to determine the phase shifts, whereas for repulsive singular potentials the scattering lengths, and hence the phase shifts, are fully determined from the potential. In the coupled channel case where only one parameter should be fixed we have chosen, as indicated in Table \[tab:table2\], to take the scattering length of the corresponding partial wave with the lower orbital angular momentum. As we see from Figs.
\[fig:fig-j=0\], \[fig:fig-j=1\], \[fig:fig-j=2\], \[fig:fig-j=3\], \[fig:fig-j=4\] and \[fig:fig-j=5\], OPE does a relatively good job for the phases when compared to the NijmII results, up to a reasonable energy. This calculation extends our previous results [@PavonValderrama:2005gu] using the same regularization for the singlet $^1S_0$ and triplet $^3S_1-{}^3D_1$ channels. The LO results corresponding to the static OPE potential have also been obtained recently in momentum space by a solution of the Lippmann-Schwinger equation in Ref. [@Nogga:2005hy] for $j \le 3$. These authors find that in the limit $\Lambda \to \infty$ (in practice $\Lambda = 4\,{\rm GeV}$) it is always possible to adjust a counterterm in such a way that the phase shifts are cut-off independent. They also find that the needed counterterm does not correspond to the expectations based on Weinberg’s dimensional power counting argument, so that one is forced to promote counterterms which are of higher order in Weinberg’s counting to make the theory free of short distance ambiguities. This proposal not only fits quite naturally into our analysis of short distance boundary conditions, but can also be anticipated by just looking at the short distance behaviour of the potential. In general, we reproduce their results for the phase shifts using our boundary condition regularization (our shortest distance cut-off is typically $a=0.1\,{\rm fm}$ for OPE). This is precisely one of the points of renormalization; different regularization methods should yield identical results when the regulator is removed, provided the same renormalization conditions are imposed. Note that in our case, whenever a scattering length must be provided, we construct the phase shift so as to reproduce the threshold behaviour of the Nijmegen phases [@Stoks:1993tb; @Stoks:1994wp] by exactly fixing the scattering length (the renormalization condition).
This requires solving the zero energy problem by integrating in with the given scattering length, matching the finite energy problem at short distances, and finally determining the phase shift by integrating out. In this approach we never make a fit. In the approach of Ref. [@Nogga:2005hy] counterterms are adjusted to fit the phases in the region around threshold. Although this is in spirit the same renormalization condition to fix the counterterms, we expect some numerical discrepancies, due to the fact that the threshold parameters in Ref. [@Nogga:2005hy] may be slightly different from ours. ### NLO (TPE) Regarding NLO, we do not show the results as they fail completely to describe the data in the triplet $^3S_1-^3D_1$ channel. The problem we already found [@Valderrama:2005wv] in the triplet $^3S_1-^3D_1$ channel persists in other channels; the short distance behaviour of the NLO potential corresponds to $1/r^5$ repulsive eigenpotentials. This feature explains the relatively small maximal cut-offs allowed in NLO calculations in momentum space. As stressed in our previous work, there are at least two scenarios where the problem may be overcome. One possibility appeals to the role of the $\Delta$ resonance and the fact that its contribution to $c_3$ and $c_4$ scales as the inverse of the $N \Delta$ splitting $\Delta \sim 2 m_\pi$, as found in Refs. [@Ordonez:1995rz; @vanKolck:1994yi; @Kaiser:1998wa]. In the $\Delta$ counting the $c_3$ and $c_4$ contributions to the NNLO deltaless potential actually become NLO contributions, and the short distance behaviour becomes a $1/r^6$ attractive singularity. The second scenario has to do with the influence of relativity beyond a truncated heavy baryon expansion, since according to Refs. [@Higa:2003jk; @Higa:2003sz; @Higa:2004cr] one has a relativistic $1/r^7$ van der Waals short distance behaviour with attractive-repulsive eigenpotentials, meaning that, as in the OPE case, one has one free parameter.
Calculations taking into account these effects in all partial waves are currently underway [@HPR2005]. ### NNLO (chiral-TPE) {#sec:NNLO-TPE} We turn now to the NNLO calculations, which are the genuine predictions of ChPT because they contain the chiral constants $c_1$, $c_3$ and $c_4$ (see e.g. Table \[tab:table1\]); for definiteness we will use mainly Set IV [@Entem:2003ft] in our analysis [^10]. Results for the TPE renormalized phase shifts are presented in Figs. \[fig:fig-j=0\], \[fig:fig-j=1\], \[fig:fig-j=2\], \[fig:fig-j=3\], \[fig:fig-j=4\] and \[fig:fig-j=5\]. Some expected features do indeed occur. Peripheral waves are slightly modified by going from OPE to the chiral TPE potential. On the other hand, low partial waves are also improved in the low energy region. For instance, the $^1S_0$ phase has an attractive singular interaction, requiring the scattering length to be fixed. The difference in the curves is mainly related to the difference in the effective range, which improves when going from OPE to TPE [@Valderrama:2005wv]. This is a rather general feature: the error at low energies is controlled by the low energy threshold parameters, like the effective range and others. If one looks at the $^3P_0$ channel, we see that there is improvement, but not as dramatic as in the $^1S_0$ channel. As we have said, in singular repulsive channels, which at NNLO correspond to the $^1P_1$, $^1F_3$ and $^1H_5$ singlet states, and to the $^3P_2-{}^3F_2$ and $^3F_4-{}^3H_4$ triplet states, the phase shift and also the scattering length are entirely determined by the potential. So, these phases are a good place to study the influence of different values of the chiral constants, $c_1$, $c_3$ and $c_4$, presented in Table \[tab:table1\]. In Fig. \[fig:sing\_rep\] we show this dependence for these special partial waves. As we see, the $^1P_1$ phase exhibits a strong dependence on the parameter set, while $^1F_3$ and $^1H_5$ are less sensitive to this particular choice.
The strong dependence in the $^1P_1$ channel suggests that this may be an ideal place to fit the chiral constants, since the scattering lengths are fixed. We will not attempt such a determination of the chiral constants here, because that would require realistic error estimates of the phase shifts. If we restrict ourselves to the spin singlet channels, we see that there is very good agreement for the higher peripheral waves, $^1H_5$, $^1G_4$ and $^1F_3$. This is expected from perturbative calculations. Note, however, that unlike perturbation theory we fix by construction the scattering lengths for the case of singular and attractive potentials. Some intermediate waves, such as $^1D_2$, whose potential is singular and attractive, are badly reproduced despite the fact that the threshold behaviour is in theory reproduced, since we use the corresponding scattering length as input. Actually, for these waves the TPE result seems to worsen the OPE prediction. Presumably this is an indication either of the inadequacy of the (NijmII) scattering lengths used as input for NNLO or of the importance of N$^3$LO contributions. Let us note that the NijmII potential does not incorporate explicit TPE effects in its long range part. In fact, if we take a slightly different scattering length, $\alpha_2 = -1.666 {\rm fm}^5 $, instead of the value deduced in Ref. [@PavonValderrama:2004se] for the NijmII potential, $\alpha_2=-1.389 {\rm fm}^5$, a rather good agreement with the Nijmegen analysis is obtained for the $^1D_2$ phase shift (see similar results for $^3P_0$ and $^3P_1$ waves in Fig. \[fig:alphas\]). Although the small difference between the fitted and experimental values for the scattering length could also be explained by N$^3$LO corrections, suggesting that they are not large, a definite conclusion cannot be drawn in the absence of a large scale fit [^11].
These general trends are confirmed in the triplet channels, where in high partial waves there is an overall improvement when going from OPE to TPE. In some cases, like the $^3D_2$, $\epsilon_2$, $^3P_2$ and $^3F_2$ channels, the improvement is rather satisfactory over the whole energy range. However, the theory has notorious problems in the $^3P_1$ and $\epsilon_1$ channels and, to a lesser extent, in the $^3D_3$ and $E_1$ channels if one insists on keeping the scattering lengths of the NijmII potential. As before, small changes in the scattering lengths allow for an overall improved description, as can be deduced from Fig. \[fig:alphas\] in some particular cases (see also Fig. \[fig:3C1\]). This suggests that higher orders in the potential may be needed. This fact was pointed out in our previous work on the central phases, where the NNLO potential [*almost*]{} reproduced the effective range, although there was a statistically significant discrepancy with the experimental number, which called for the inclusion of N$^3$LO terms. This may possibly happen also in some higher partial waves, and it would be interesting to see whether improved long distance potentials might account for the observed discrepancies in the phase shifts provided the scattering lengths are kept at their physical values. As we have shown (see Fig. \[fig:alphas\]), small changes in the scattering lengths indeed allow for a better description of the phases in the intermediate energy region. On the other hand, we would expect our description to become increasingly better for lower energies. This situation is a bit disconcerting. Given the similarity between the scattering lengths computed in Ref. [@PavonValderrama:2004se] for the NijmII and Reid93 potentials, it seems unlikely that potential models yield a completely wrong value for the $\alpha$’s in non-central waves, but one must admit that the errors will in general be larger than the difference between these two potential values suggests, as already argued above.
If one takes into account the fact that both potentials include similar long range physics, this means that the true error could be larger due to systematic uncertainties in their short range (non OPE) part. Nevertheless, let us mention that current calculations involving chiral potentials not only ignore this possible disagreement at threshold but in fact modify the corresponding scattering lengths, since the counterterms are determined by a fit to the phase shifts in the region above threshold with no obvious control on the low energy parameters (see e.g. Ref. [@Nogga:2005hy]). The arguments above do not prove that taking slightly different scattering lengths than those suggested by the high quality potentials is a legitimate operation, but they at least show that no more assumptions are made. From this viewpoint it might be profitable to study the impact on those calculations of either imposing exact threshold behaviour or alternatively evaluating the threshold parameters themselves. Remarks on the perturbative nature of peripheral waves ------------------------------------------------------ The numerical coincidence of our non-perturbative calculations with perturbation theory expectations [@Kaiser:1997mw; @Entem:2002sf], although quite natural on physical grounds, deserves some explanation on the basis of the formalism and the relevance of short distance singularities. Indeed, the attractive character of the singular NNLO potentials at the origin implies a non-trivial boundary condition of the form of Eq. (\[eq:uA\]), which cannot be reproduced to any given order in perturbation theory, at least without the inclusion of extra counterterms in the perturbative expansion, a point which will be further discussed at the end of this section. This point was previously illustrated in Ref.
[@Beane:2000wh] for s-waves and also in our previous work on the renormalization of the OPE [@PavonValderrama:2005gu] by comparing the exact deuteron wave functions with the perturbative ones. There, one observes that the first order perturbative calculation provides finite results, but the expansion at second order produces divergent results due to the short distance non-normalizable $D-$wave component. Thus, observables cannot, strictly speaking, be analytical functions of the coupling (for the purpose of discussion we could visualize the problem by thinking of singularities of the sort $g^2 + g^4 \log g^2 $). This does not mean that for the physical range of couplings the non-analytical contribution is necessarily large numerically. For instance, in the deuteron channel the residual non-analytical higher order terms happen to be numerically sizeable even for a weakly bound deuteron. Based on the results of Ref. [@PavonValderrama:2005gu], there is no reason to expect that higher partial waves will not exhibit this failure of perturbation theory at some finite order. Nevertheless, the perturbative short distance behaviour of higher partial waves tames the singularity due to the kinematical $r^l$ suppression. This is a perturbative long distance feature where the centrifugal barrier dominates. The point is that this short distance behaviour is not invariant order by order in strict perturbation theory for a singular potential and, actually, one finds a short distance enhancement of the wave function even in perturbation theory. So, one expects that perturbation theory on a singular potential will diverge at some finite order also for high partial waves. In Appendix \[sec:pert\] we show that this is indeed the case; for a singular potential diverging like $1/r^n$ ($n > 2$) and a partial wave with angular momentum $l$, the perturbative expansion diverges at $k-$th order in perturbation theory provided $k > (2l+1)/(n-2) $.
This estimate provides the order at which, if desired, a long distance perturbation theory on boundary conditions might be applied, as discussed previously for the deuteron channel [@PavonValderrama:2005gu]. Using the techniques developed in Ref. [@Valderrama:2005wv] to make perturbation theory on distorted OPE central waves, it would be interesting to see, as claimed by renormalization arguments on the OPE [@Birse:2005um], whether such an expansion is indeed possible. Having established that perturbation theory will diverge at some finite order, we would now like to understand why it can still accurately represent the full non-perturbative solutions obtained numerically. The reason can be found in the very efficient way in which the short distance singularity of the potential makes short distances inessential in the wave function for the regular non-perturbative solution. For high angular momenta and attractive singular potentials the wave function senses the singularity [*after*]{} tunneling through the barrier, an exponentially suppressed effect. In perturbation theory this effect is simply replaced by the core provided by the centrifugal barrier. Conclusions {#sec:concl} =========== In the present paper we have analyzed the renormalization of non-central waves for NN scattering for the OPE and chiral TPE potentials. This calculation extends our previous studies on the central phases and the deuteron for the OPE and TPE potentials presented in Refs. [@PavonValderrama:2005gu; @Valderrama:2005wv] respectively. As already stressed in those works, the requirement of finiteness of the scattering amplitude as well as the orthogonality of wave functions imposes tight constraints on the allowed structure of counterterms for a given potential. Using the standard Weinberg counting for the potential, the counterterm structure is deduced and does not generally coincide with the naive expectations.
In some cases counterterms forbidden in the Weinberg counting must be allowed [@PavonValderrama:2005gu; @Nogga:2005hy], whereas in some other cases allowed counterterms must be excluded [@Valderrama:2005wv]. Finite cut-off calculations based on the Weinberg counting allow one to introduce counterterms which are usually readjusted to globally fit the data, but which are forbidden, in renormalized calculations, by finiteness and orthogonality. The success of the original counting relies heavily on keeping the cut-off finite, while at the same time it is usually emphasized that low energy physics does not depend crucially on short distance details. As we have argued, these two facts are mutually contradictory; the standard Weinberg counting is incompatible with exact renormalization, i.e. removing the cut-off, as was suggested in Ref. [@Kaplan:1998tg] within a perturbative setup and shown in Ref. [@Nogga:2005hy] non-perturbatively, at least in the heavy baryon expansion and when only nucleons and pions are taken into account. This feature changes when relativistic effects and $\Delta$ degrees of freedom are taken into account, showing that perhaps renormalization, i.e. independence of short distance details, may be a strong condition on admissible potentials. In this regard we find that, as one would expect, the cut-off dependence is milder for the chiral TPE potential than for the OPE potential. This suggests that higher order corrections become even more cut-off independent. Indeed, the finite cut-off N$^3$LO calculations of Ref. [@Epelbaum:2004fk] do exhibit this feature in spite of the strong cut-off dependence observed at lower orders. Using this modified Weinberg counting, the quality of the agreement and improvement depends on the particular partial wave. High peripheral partial waves, when treated non-perturbatively, reproduce the data fairly well, and deviations from OPE to TPE are small, as one would expect in a perturbative treatment.
Nevertheless, we have also shown that regardless of the orbital angular momentum, there is always a limit to the order in perturbation theory for which finite results are obtained. The divergence is related to an indiscriminate use of the perturbative expansion, and not to an intrinsic deficiency in the definition of the scattering amplitude. Thus, also for peripheral waves the phase shifts are perturbatively non-renormalizable while they are non-perturbatively renormalizable. This result extends a similar observation for the deuteron [@PavonValderrama:2004nb; @PavonValderrama:2005gu]. Nevertheless, we have also argued why convergent perturbative calculations to finite order are useful and may even provide accurate descriptions when compared to the non-perturbative result. Contrary to naive expectations, it is not always true that after renormalization the NNLO TPE phases improve over the OPE ones if one [*insists*]{} on keeping the scattering lengths required by finiteness at the same physical values as those extracted [@PavonValderrama:2004se] from the high quality Nijmegen potentials [@Stoks:1994wp]. This renormalization condition at zero energy has been adopted to highlight the difference between these potentials [@Stoks:1994wp] and the chiral NNLO singular potentials [@Kaiser:1997mw]. Remarkably, using zero energy to fix the parameters has never been considered before within the chiral potentials approach to NN scattering, so some of the problems we find and discuss have not even been identified so far. Actually, we find that some partial waves such as $^1D_2$ and $^3P_1$ are particularly sensitive to the value of the scattering length. In fact, it is found that small deviations of the scattering lengths at the few percent level in these partial waves dramatically improve the description in the intermediate energy region.
The improvement can also be achieved in other partial waves by suitably tuning the scattering lengths in all the channels characterized by singular attractive interactions. This means that the absolute error is small up to $E_{\rm LAB} \sim 100 {\rm MeV }$. Three pion exchange effects should become relevant at a CM momentum of about $k= 3m /2 $, which corresponds approximately to this LAB energy. The modification corresponds to changing the renormalization condition to some finite energy, or to maximizing the overlap between the chiral phase shifts and the fitted ones in a given energy window, very much along the lines pursued in previous works. However, changing the scattering lengths produces large relative errors near threshold. At this point the discussion of errors on the phase shifts becomes a crucial matter, particularly in the low energy region. In this regard, it seems likely that the difference in low energy threshold parameters determined in Ref. [@PavonValderrama:2004se] for the Reid93 and NijmII potentials in all partial waves with $j \le 5 $ provides a lower bound for the true error. Obviously, a meticulous error analysis of these threshold parameters would be very helpful. We have also found that some partial waves, with repulsive singular interactions and where no free scattering lengths are allowed, are particularly sensitive to the choice of chiral constants $c_1$, $c_3$ and $c_4$. This suggests that a fit of the chiral constants to these partial waves may be possible. To do so, again, a realistic estimate of the errors of the phase shifts would be mandatory. According to our findings on the deuteron for the chiral TPE potential [@Valderrama:2005wv], it is quite likely that, if such an error estimate were reliably done, theoretical determinations of deuteron observables with unprecedented precision based on chiral potentials might be achieved. This issue is currently under consideration and is left for future research [@Pavon2005].
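The correspondence quoted above between the CM momentum $k = 3m/2$ and $E_{\rm LAB} \sim 100\,{\rm MeV}$ follows from the nonrelativistic relation $k^2 = M E_{\rm LAB}/2$. A minimal back-of-the-envelope sketch (the numerical mass values below are our assumptions, not taken from the text):

```python
# Kinematics cross-check (ours, not the paper's code).
# Nonrelativistically, k_CM^2 = M * E_LAB / 2, so E_LAB = 2 k^2 / M.
M_PI = 138.0  # average pion mass in MeV (assumed value)
M_N = 938.9   # nucleon mass in MeV (assumed value)

k_3pi = 3.0 * M_PI / 2.0       # CM momentum where 3-pi exchange sets in
e_lab = 2.0 * k_3pi**2 / M_N   # corresponding LAB kinetic energy

print(f"k = {k_3pi:.0f} MeV  ->  E_LAB = {e_lab:.0f} MeV")
```

With these inputs $k \simeq 207\,{\rm MeV}$ gives $E_{\rm LAB} \simeq 91\,{\rm MeV}$, indeed close to the $\sim 100\,{\rm MeV}$ scale quoted above.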
From a practical viewpoint there is a potential disadvantage in requiring exact renormalization for the approximated long distance chiral potentials, due to the tight constraints imposed by finiteness on the short distance behaviour of the wave functions. To some extent, although the chiral potentials are motivated by the Effective Field Theory idea, these additional conditions are also reminiscent of the renormalization of fundamental theories. This is not entirely surprising, since we expect the chirally based potentials to resemble the [*true*]{} NN potential, at least at sufficiently long distances. For instance, OPE is a true long distance contribution. Full TPE would also be a true long distance part, which is known in an approximate manner within the current ChPT schemes based on dimensional power counting. Nevertheless, the essential difference is that non-perturbative dimensional transmutation, i.e. the generation of dimensionful parameters not encoded in the potential, occurs due to the singular and attractive nature of the long distance interactions already at the lowest order approximation, consisting of OPE. This non-perturbative renormalizability is the essential feature that makes this problem particularly tough and so distinct from the previous experience of perturbative renormalization of Effective Field Theories or finite cut-off representations of the problem. The present work not only shows that the theoretical requirement of renormalizability can be implemented as a matter of principle and as a practical way of controlling short distance ambiguities in the predictions of Chiral Perturbation Theory for the study of NN scattering, but also that interesting physical and phenomenological insights are gathered from such an investigation.
We have shown under what conditions such a program can successfully be carried out as a possible alternative and model independent way of describing the data by using very indirect, but essential, information on the implications of chiral symmetry for the NN problem below the pion production threshold. We have profited from lively and stimulating discussions with the participants at “Nuclear Forces and QCD: Never the Twain Shall Meet ? “ at ECT$^*$ in Trento. We would also like to thank Andreas Nogga for pointing out an error in the $^1S_0$ phase shift plot. This work is supported in part by funds provided by the Spanish DGI and FEDER funds with grant no. FIS2005-00810, Junta de Andalucía grant No. FQM-225, and EU RTN Contract CT2002-0311 (EURIDICE). Potentials {#sec:potentials} ========== For completeness we list here the potentials found in Ref. [@Kaiser:1997mw], and used in this paper. In coordinate space the general form of the potential is written as $$\begin{aligned} {\cal V}_{NN} &=& V_C (r) + \vec \tau_1 \cdot \vec \tau_2 W_C (r) \nonumber \\ &+& \big[V_{S} (r)+ \vec \tau_1 \cdot \vec \tau_2 W_{S} (r)\big]\,\vec\sigma_1 \cdot \vec \sigma_2 \nonumber \\ &+& \big[ V_T (r)+ \vec \tau_1 \cdot \vec \tau_2 W_T (r) \big]\, \left( 3 \vec \sigma_1 \cdot \hat r \vec \sigma_2 \cdot \hat r - \vec\sigma_1 \cdot \vec \sigma_2 \right) \nonumber \\ &+& \big[ V_{LS}(r) +\vec\tau_1 \cdot \vec \tau_2 W_{LS} (r)\big] \,\vec L \cdot \vec S \, ,\end{aligned}$$ For states with good total angular momentum one obtains $$\begin{aligned} U_{jj}^{0j} (r) &=& M \left[ (V_C - 3 V_S )+ \tau (W_C - 3 W_S ) \right] \, ,\\ U_{jj}^{1j} (r) &=& M \big[ (V_C + V_S - V_{LS}) \nonumber \\ &+& \tau (W_C + W_S- W_{LS}) + 2 (V_T + \tau W_T ) \big] \, , \nonumber \\ \\ U_{j-1,j-1}^{1j} &=& M \big[ (V_C + \tau W_C + V_S + \tau W_S ) \nonumber\\ &+& (j-1) \left( V_{LS}+ \tau W_{LS} \right) \nonumber \\ &+&\frac{2(j-1)}{2j+1} \left( V_T +\tau W_T \right) \big] \, , \\ U_{j-1,j+1}^{1j} &=& - 
\frac{6\sqrt{j(j+1)}}{2j+1} M \left( V_T +\tau W_T\right) \, , \\ U_{j+1,j+1}^{1j} &=& M \big[ (V_C + \tau W_C + V_S + \tau W_S ) \nonumber \\ &+& 2(j+2) \left( V_{LS}+ \tau W_{LS} \right) \nonumber \\ &+& \frac{2(j+2)}{2j+1} \left( V_T +\tau W_T \right) \big] \, ,\end{aligned}$$ with $\tau= 2 T(T+1) -3 $. Remember that Fermi-Dirac statistics requires $(-1)^{L+S+T}=-1$. The LO (OPE) potentials read ($x= m_\pi r $ ) $$\begin{aligned} W_S^{OPE} &=& \frac{g^2 m^3 }{ 48 \pi f^2 }\frac{e^{-x}}{x} \, ,\\ W_T^{OPE} &=& \frac{g^2 m^3 }{ 48 \pi f^2 } \frac{e^{-x}}{x}\left( 3 + \frac{3}{x} + \frac1{x^2} \right) \, ,\end{aligned}$$ all others being zero. The non-vanishing NNLO (TPE) potentials are given by $$\begin{aligned} V_C^{TPE} (r) &=& \frac{3 g^2 m^6 }{32 \pi^2 f^4 }\frac{e^{-2 x}}{x^6} \Big\{ \left( 2 c_1 + \frac{3 g^2}{16 M} \right) x^2 (1+x)^2 + \frac{g^5 x^5}{32 M} + \left(c_3 + \frac{3 g^2}{16 M} \right) \left( 6 + 12 x + 10 x^2 + 4 x^3 + x^4 \right) \Big\} \nonumber \, , \\ W_T^{TPE} (r) &=& \frac{g^2 m^6 }{48 \pi^2 f^4 }\frac{e^{-2 x}}{x^6} \Big\{ - \left( c_4 + \frac{1}{4 M} \right) (1+x) (3 + 3 x +x^2) + \frac{ g^2}{32 M} \left( 36 + 72 x + 52 x^2 + 17 x^3 + 2 x^4 \right) \Big\} \nonumber \, , \\ V_T^{TPE} (r) &=& \frac{g^4 m^5 }{128 \pi^3 f^4 x^4 } \Big\{ -12 K_0 ( 2x) - (15 + 4 x^2 ) K_1 (2 x) + \frac{3 \pi m e^{-2x}}{8 M x } \left( 12 x^{-1} + 24 + 20 x + 9 x^2 + 2 x^3 \right) \Big\} \nonumber \, , \\ W_C^{TPE} (r) &=& \frac{g^4 m^5 }{128 \pi^3 f^4 x^4 } \Big\{ \left[ 1 + 2 g^2 ( 5 + 2 x^2 ) - g^4 ( 23 + 12 x^2 ) \right] K_1 (2 x ) + x \left[ 1+ 10 g^2 - g^4 ( 23 + 4 x^2 ) \right] K_0 ( 2 x) \nonumber \, , \\ &+& \frac{g^2 m \pi e^{-2x}}{4 M x} \left[ 2 ( 3g^2 - 2 ) \left( 6 x^{-1} + 12 + 10 x + 4 x^2 + x^3 \right) \right] + g^2 x \left( 2 + 4 x + 2 x^2 + 3 x^2 \right) \Big\} \nonumber \, ,\\ V_S^{TPE} (r) &=& \frac{g^4 m^5 }{32 \pi^3 f^4} \Big\{ 3 x K_0 (2 x) + ( 3 + 2 x^2 ) K_1 (2 x) - \frac{3 \pi m e^{-2x}}{16 M x} \left( 6 x^{-1} + 12 + 11 x + 6 x^2 + 2 
x^3 \right) \Big\} \nonumber \, , \\ W_S^{TPE} (r) &=& \frac{g^2 m^6 }{48 \pi^2 f^4 }\frac{e^{-2x}}{x^6} \Big\{ \left(c_4 + \frac1{4M} \right) (1+x) ( 3 + 3 x + 2 x^2 ) - \frac{g^2 }{16 M }\left( 18 + 36 x + 31 x^2 + 14 x^3 + 2 x^4 \right) \Big\} \nonumber \, , \\ V_{LS}^{TPE} (r) &=& - \frac{3 g^4 m^6 }{64 \pi^2 M f^4 } \frac{e^{-2 x}}{x^6} (1+x) \left( 2 + 2 x + x^2 \right) \nonumber \, , \\ W_{LS}^{TPE} (r) &=& \frac{g^2 ( g^2 -1) m^6 }{32 \pi^2 M f^4 } \frac{e^{-2x}}{x^6}(1+x)^2 \, ,\end{aligned}$$ where $K_0$ and $K_1$ are modified Bessel functions. The NLO terms are obtained by dropping all terms in $1/M$ and $c_1$, $c_3$ and $c_4$. The divergence of perturbation theory for peripheral waves {#sec:pert} ========================================================== In this appendix we show that for a singular, attractive or repulsive, potential at the origin which diverges like $1/r^n $, there is always a finite order in perturbation theory where the phase shift diverges, regardless of the particular value of the angular momentum. Let us consider for simplicity the single channel case. The radial equation can be transformed into the integral equation $$\begin{aligned} u_l (r) = \hat j_l (k r ) + \int_0^\infty G_{k,l} (r,r') U(r') u_l (r') dr' \, , \label{eq:int_eq}\end{aligned}$$ where $G_{k,l}$ is the Green's function given by $$\begin{aligned} k G_{k,l}(r,r') &=& \hat j_l (k r) \hat y_l (kr') \theta (r'-r) \nonumber \\ &+& \hat j_l (k r') \hat y_l (kr) \theta (r-r') \, ,\end{aligned}$$ where $\theta (x)$ is the Heaviside step function, $\theta(x) = 1 $ for $x \ge 0 $ and $\theta(x)=0$ for $x < 0 $, and $\hat j_l (x) = x j_l (x) $ and $\hat y_l (x) = x y_l (x) $ are the regular and singular reduced spherical Bessel functions, respectively. To regularize the lower limit of integration in Eq. (\[eq:int\_eq\]) one may assume a short distance regulator which will eventually be removed.
The phase shift is given by $$\begin{aligned} \tan \delta_l &=& - \frac1{k} \int_0^\infty \hat j_l (k r) U(r) u_l (r) \, .\end{aligned}$$ In perturbation theory by successive iteration of Eq. (\[eq:int\_eq\]) the Born series $$\begin{aligned} \tan \delta_l &=& - \frac1{k} \int_0^\infty dr \left[\hat j_l (k r)\right]^2 U(r) \nonumber \\ &-& \frac1{k} \int_0^\infty dr dr' \hat j_l (k r) U(r) U(r') G_{k,l} (r,r') j_l (k r') + \dots \nonumber \, ,\\\end{aligned}$$ is obtained. For our purposes of proving the divergence of perturbation theory it is sufficient to analyze the low energy limit. Using $\delta_l \to - \alpha_l k^{2l +1} $ and using known properties of the Bessel functions $$\begin{aligned} \hat j_l (x) \to \frac{x^{l+1}}{(2l+1)!!} \qquad \hat y_l (x) \to -\frac{(2l-1)!!}{x^{l}} \, .\end{aligned}$$ The Green’s function becomes $$\begin{aligned} -(2l+1) G_{0,l}(r,r') = \frac{r^{l+1}}{{r'}^l} \theta (r'-r) + \frac{{r'}^{l+1}}{r^l} \theta (r-r') \, , \nonumber \\ \end{aligned}$$ we get $$\begin{aligned} (2l+1)!!^2 \alpha_l &=& \int_0^\infty dr r^{2l+2} U(r) \nonumber \\ &+& \frac{2}{2l+1} \int_0^\infty dr r \int_0^r dr' (r') ^{2l+2} U(r) U(r') + \dots \nonumber \, .\\\end{aligned}$$ Since we only want to analyze the short distance behaviour we can estimate the convergence of integrals by using the finite range and singular potential $U(r) = (R/r)^n /R^2 \theta (a-r)$. Thus, we see that in the first Born approximation the integral converges for $ 2 l + 1 > n-2 $, whereas the second Born approximation requires $2 l + 1 > 2 (n-2) $. This is obviously a more stringent condition. In general, at $k-$th order convergence at the origin is determined by the integral $$\begin{aligned} \int_0^\infty dr_1 r_1 U(r_1 ) \int_0^{r_1} dr_2 r_2 U(r_2) \dots \int_0^{r_{k-1}} dr_k r_k^{2l+2} U(r_k) \, ,\nonumber \\ \end{aligned}$$ which is finite only for $2 l +1 > k (n-2) $, a condition violated for sufficiently high $k$ when $n> 2$. 
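The counting above can be packaged into a one-line estimate of the first divergent Born order: finiteness requires $2l+1 > k(n-2)$, so the expansion breaks down at the smallest $k$ with $k(n-2) \ge 2l+1$. The helper below is our illustration, not part of the paper:

```python
import math

def first_divergent_order(l, n):
    """Smallest Born order k violating 2l + 1 > k (n - 2) for a 1/r^n
    potential (n > 2), i.e. the order where the expansion diverges."""
    return math.ceil((2 * l + 1) / (n - 2))

# NNLO TPE behaves as 1/r^6 at short distances (n = 6): an S wave already
# diverges in first Born approximation, while an l = 5 (H) wave survives
# only up to second order and diverges at the third.
print(first_divergent_order(0, 6), first_divergent_order(5, 6))  # -> 1 3
```

For the milder $1/r^3$ OPE singularity the same estimate gives, e.g., divergence at fifth order for an $l = 2$ wave, consistent with the statement that higher angular momenta only delay, but never remove, the breakdown.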
So, for $n > 2 $ there will always occur a divergent contribution at a given finite order, even if the Born approximation was finite due to a high value of the angular momentum, $l$. Leading singularities in the Short distance expansion {#sec:short} ===================================================== The determination of the short distance behaviour from the full potentials is straightforward, but it is necessary to determine the number of independent parameters in every channel and at any level of approximation. For a quick reference we list the leading singularity behaviour in Table \[tab:table3\] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Wave LO NLO NNLO ---------- ------------------------------------------------------------------ ---------------------------------------------------------------------- ---------------------------------------------------------------------------------------------- $^1S_0 $ $ - \frac{g^2 m^2 M}{16 \pi f^2} \frac{1}{r} $ $\frac{(1+10 g^2 -59 g^4)M}{256 \pi^3 f^4}\frac{1}{r^5} $ $ \frac{3 g^2 (-4+24 \bar c_3 - 8 \bar c_4+ 15 g^2)}{128 \pi^2 f^4 }\frac{1}{r^6} $ $^3P_0 $ $ - \frac{g^2 M }{4 \pi f^2}\frac{1}{r^3} $ $ \frac{(1+10 g^2 +49 g^4)M}{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{g^2 (12+72 \bar c_3 +40 \bar c_4+g^2)}{128 \pi^2 f^4}\frac{1}{r^6} $ $^1P_1 $ $ \frac{3 g^2 m^2 M }{16 \pi f^2}\frac{1}{r} $ $ \frac{3(-1-10 g^2 +11 g^4)M}{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{9 g^2 (4+8 \bar c_3 +8 \bar c_4-3 g^2)}{128 \pi^2 f^4 }\frac{1}{r^6} $ $^3P_1 $ $ \frac{g^2 M }{8 \pi f^2 }\frac{1}{r^3} $ $ \frac{(1+10 g^2 -41 g^4)M}{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{g^2 (-2+36 \bar c_3 -4 \bar c_4+ 19 g^2)}{64 \pi^2 f^4 }\frac{1}{r^6} $ $^3S_1 $ $ 0$ $ \frac{3(-1-10 g^2 + 27 g^4) M}{256 \pi^3 f^4}\frac{1}{r^5} $ $ -\frac{3 g^2 (-4-24 \bar c_3 
+ 8 \bar c_4 + 3 g^2 )}{128 \pi^2 f^4 }\frac{1}{r^6} $ $^3D_1 $ $ \frac{3 g^2}{8 f^2 \pi }\frac{1}{r^3} $ $ \frac{3 (-1 -10 g^2 + 37 g^4)M }{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{9 g^2 (-1 +2 \bar c_3 - 2 \bar c_4 + 2 g^2 )}{32 \pi^2 f^4 }\frac{1}{r^6} $ $E_1 $ $ -\frac{3g^2}{4 \sqrt{2} f^2 \pi} \frac{1}{r^3}$ $-\frac{15 g^4 M }{64 \sqrt{2} f^4 \pi^3 } \frac{1}{r^5} $ $ \frac{- 3g^2 (-4 -16 \bar c_4 + 3 g^2 ) }{64 \sqrt{2} \pi^2 f^4 } \frac{1}{r^6} $ $^1D_2 $ $ - \frac{g^2 m^2 M}{16 \pi f^2}\frac{1}{r} $ $\frac{(1+10 g^2 -59 g^4)M}{256 \pi^3 f^4}\frac{1}{r^5} $ $ \frac{3 g^2 (-4+24 \bar c_3 - 8 \bar c_4+ 15 g^2)}{128 \pi^2 f^4 }\frac{1}{r^6} $ $^3D_2 $ $ - \frac{3 g^2 M}{8 \pi f^2}\frac{1}{r^3} $ $\frac{(1+10 g^2 -89 g^4)M}{256 \pi^3 f^4}\frac{1}{r^5} $ $ \frac{ g^2 (-4+18 \bar c_3 - 10 \bar c_4+15 g^2)}{32 \pi^2 f^4 }\frac{1}{r^6} $ $^3P_2 $ $ - \frac{g^2 M }{40 f^2 \pi } \frac{1}{r^3}$ $ \frac{(1+10 g^2 - 5 g^4)M }{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{g^2 (-9 + 90 \bar c_3 + 14 \bar c_4 + 5 g^2 )}{160 \pi^2 f^4 }\frac{1}{r^6}$ $^3F_2 $ $-\frac{g^2 M}{10 \pi f^2 }\frac{1}{r^3}$ $ \frac{(1+10 g^2 + 13 g^4 ) M }{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{g^2(76 + 360 \bar c_3 + 104 \bar c_4 + 175 g^2 )}{640 \pi^2 f^4 }\frac{1}{r^6}$ $E_2 $ $\frac{3\sqrt{3} }{20 \sqrt{2} \pi f^2 }\frac{1}{r^3}$ $-\frac{9 \sqrt{3} g^4 M }{64 \sqrt{2} f^4 \pi^3 }\frac{1}{r^5}$ $\frac{3 \sqrt{3}g^2 (-4 -16 \bar c_4 + 15 g^2)}{320 \sqrt{2} \pi^2 f^4 }\frac{1}{r^6} $ $^1F_3 $ $ \frac{3 g^2 m^2 M }{16 \pi f^2 } \frac{1}{r}$ $ \frac{3(-1-10 g^2 + 11 g^4)M }{256 \pi^3 f^4 } \frac{1}{r^5}$ $-\frac{9 g^2 (-4 - 8 \bar c_3 - 8 \bar c_4 + 3 g^2 )}{128 \pi^2 f^4 }\frac{1}{r^6}$ $^3F_3 $ $ \frac{g^2 M }{8 \pi f^2 }\frac{1}{r^3}$ $ \frac{(1+10 g^2 - 41 g^4 )M}{256 \pi^3 f^4 }\frac{1}{r^5}$ $\frac{g^2 (-2+36 \bar c_3 - 4\bar c_4 + 19 g^2) }{64 \pi^2 f^4 }\frac{1}{r^6}$ $^3D_3 $ $ -\frac{g^2 M }{28 \pi f^2 }\frac{1}{r^3}$ $ \frac{(7+70 g^2 - 17 g^4 ) M}{1792 \pi^3 f^4 }\frac{1}{r^5}$ $-\frac{g^2 (76 - 
504 \bar c_3 - 88 \bar c_4 + 37 g^2 )}{896 \pi^2 f^4 }\frac{1}{r^6}$ $^3G_3 $ $ -\frac{5 g^2 M }{56 \pi f^2 } \frac{1}{r^3}$ $ \frac{(7+70 g^2 + 73 g^4 ) M}{1792 \pi^3 f^4 }\frac{1}{r^5}$ $ \frac{g^2 (66 +252\bar c_3 + 68 \bar c_4 + 155 g^2 )}{448 \pi^2 f^4 }\frac{1}{r^6} $ $E_3 $ $ \frac{3 \sqrt{3} g^2 M }{28 \pi f^2 }\frac{1}{r^3}$ $ -\frac{45 \sqrt{3} g^4 M}{448 \pi^3 f^4 }\frac{1}{r^5}$ $ \frac{3 \sqrt{3} g^2 (-4 -16\bar c_4 + 15 g^2 )}{448 \pi^2 f^4 }\frac{1}{r^6}$ $^1G_4 $ $ -\frac{ g^2 m^2 M }{16 \pi f^2}\frac{1}{r} $ $ \frac{(1+10 g^2 -59 g^4)M}{256 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{3 g^2 (-4+24 \bar c_3 -8 \bar c_4+15 g^2)}{128 \pi^2 f^4 } \frac{1}{r^6}$ $^3G_4 $ $ -\frac{3 g^2 M }{8 \pi f^2}\frac{1}{r^3} $ $ \frac{3(-1-10 g^2 + 17 g^4)M}{256 \pi^3 f^4 } \frac{1}{r^5} $ $ \frac{3 g^2 (2+12 \bar c_3 +4 \bar c_4+ g^2)}{64 \pi^2 f^4 } \frac{1}{r^6} $ $^3F_4 $ $ \frac{3 g^2 M }{28 \pi f^2} \frac{1}{r^3} $ $ \frac{3(-7-70 g^2 + 209 g^4)M}{1792 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{3 g^2 (76 +168 \bar c_3 -88 \bar c_4- 127 g^2)}{896 \pi^2 f^4 } \frac{1}{r^6}$ $^3H_4 $ $ \frac{15 g^2 M }{56 \pi f^2}\frac{1}{r^3} $ $ \frac{3(-7-70 g^2 + 239 g^4)M}{1792 \pi^3 f^4 }\frac{1}{r^5} $ $ \frac{3 g^2 (-66 + 84 \bar c_3 -68 \bar c_4+ 137 g^2)}{448 \pi^2 f^4 } \frac{1}{r^6}$ $E_4 $ $ -\frac{9 \sqrt{3} g^2 M }{28 \pi f^2}\frac{1}{r^3} $ $ -\frac{45 \sqrt{3} g^4 M}{448 \pi^3 f^4 }\frac{1}{r^5} $ $ -\frac{9 \sqrt{3} g^2 (-4 - 16 \bar c_4 + 3 g^2)}{448 \pi^2 f^4 } \frac{1}{r^6}$ $^1H_5 $ $\frac{3 g^2 m^2 M }{16 \pi f^2 }\frac{1}{r}$ $\frac{3(-1-10 g^2 + 11 g^4 )M}{256 \pi^3 f^4 }\frac{1}{r^5} $ $\frac{9 g^2 ( 4 + 8 \bar c_3 + 8 \bar c_4 - 3 g^2 )}{128 \pi^2 f^4 }\frac{1}{r^6}$ $^3H_5 $ $\frac{g^2 M }{8 \pi f^2}\frac{1}{r^3}$ $\frac{(1+10 g^2 - 41 g^4 )M}{256 \pi^3 f^4 }\frac{1}{r^5}$ $\frac{g^2(-2 + 36 \bar c_3 - 4 \bar c_4 + 19 g^2 )}{64 \pi^2 f^4 }\frac{1}{r^6}$ $^3G_5 $ $\frac{3 g^2 M }{22 \pi f^2 }\frac{1}{r^3}$ $\frac{3(-11-110 g^2 + 337 g^4 )M}{2816 \pi^3 f^4 }\frac{1}{r^5}$
$\frac{3 g^2 (204 + 264 \bar c_3 - 152 \bar c_4 - 373 g^2 )}{1408 \pi^2 f^4 } \frac{1}{r^6}$ $^3I_5 $ $\frac{21 g^2 M }{88 \pi f^2} \frac{1}{r^3}$ $\frac{3(-11-110 g^2 + 367 g^4 )M}{2816 \pi^3 f^4 } \frac{1}{r^5}$ $ \frac{3 g^2 ( - 73 + 66 \bar c_3 - 50 \bar c_4 + 151 g^2 )}{352 \pi^2 f^4 } \frac{1}{r^6}$ $E_5 $ $- \frac{9 \sqrt{15}g^2 M }{44 \sqrt{2} \pi f^2 } \frac{1}{r^3} $ $-\frac{45 \sqrt{15}g^4 M }{704 \sqrt{2} \pi^3 f^4 } \frac{1}{r^5}$ $-\frac{9 \sqrt{15}g^2 (- 4 - 16 \bar c_4 + 3 g^2 )}{704 \pi^2 f^4} \frac{1}{r^6}$ --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [99]{} S. Weinberg, Phys. Lett. B [**251**]{}, 288 (1990). S. Weinberg, Nucl. Phys. B [**363**]{}, 3 (1991). C. Ordonez, L. Ray and U. van Kolck, Phys. Rev. C [**53**]{}, 2086 (1996) N. Kaiser, R. Brockmann and W. Weise, Nucl. Phys. A [**625**]{}, 758 (1997) M. C. M. Rentmeester, R. G. E. Timmermans, J. L. Friar and J. J. de Swart, Phys. Rev. Lett.  [**82**]{}, 4992 (1999) E. Epelbaum, W. Gloeckle and U. G. Meissner, Eur. Phys. J. A [**19**]{}, 125 (2004) E. Epelbaum, W. Gloeckle and U. G. Meissner, Eur. Phys. J. A [**19**]{}, 401 (2004) D. R. Entem and R. Machleidt, Phys. Rev. C [**68**]{}, 041001 (2003) E. Epelbaum, W. Gloeckle and U. G. Meissner, Nucl. Phys. A [**671**]{}, 295 (2000) M. C. M. Rentmeester, R. G. E. Timmermans and J. J. de Swart, Phys. Rev. C [**67**]{}, 044001 (2003) E. Epelbaum, W. Glockle and U. G. Meissner, Nucl. Phys. A [**747**]{}, 362 (2005) T. A. Rijken and V. G. J. Stoks, Phys. Rev. C [**54**]{}, 2851 (1996) N. Kaiser, S. Gerstendorfer and W. Weise, Nucl. Phys. A [**637**]{}, 395 (1998) E. Epelbaum, W. Gloeckle and U. G. Meissner, Nucl. Phys. A [**637**]{}, 107 (1998) J. L. Friar, Phys. Rev. C [**60**]{}, 034002 (1999) K. G. Richardson, arXiv:hep-ph/0008118. N. Kaiser, Phys. Rev.
C [**61**]{}, 014003 (2000) N. Kaiser, Phys. Rev. C [**62**]{}, 024001 (2000) N. Kaiser, Phys. Rev. C [**65**]{}, 017001 (2002) N. Kaiser, Phys. Rev. C [**64**]{}, 057001 (2001) N. Kaiser, Phys. Rev. C [**63**]{}, 044010 (2001) D. R. Entem and R. Machleidt, arXiv:nucl-th/0303017. D. R. Entem and R. Machleidt, Phys. Lett. B [**524**]{}, 93 (2002) D. R. Entem and R. Machleidt, Phys. Rev. C [**66**]{}, 014002 (2002) R. Higa and M. R. Robilotta, Phys. Rev. C [**68**]{}, 024004 (2003) R. Higa, M. R. Robilotta and C. A. da Rocha, Phys. Rev. C [**69**]{}, 034009 (2004) R. Higa, arXiv:nucl-th/0411046. M. C. Birse and J. A. McGovern, Phys. Rev. C [**70**]{}, 054002 (2004) P. F. Bedaque and U. van Kolck, Ann. Rev. Nucl. Part. Sci.  [**52**]{}, 339 (2002) N. Fettes, U. G. Meissner and S. Steininger, Nucl. Phys. A [**640**]{} (1998) 199 P. Buettiker and U. G. Meissner, Nucl. Phys. A [**668**]{} (2000) 97 A. Gomez Nicola, J. Nieves, J. R. Pelaez and E. Ruiz Arriola, Phys. Lett. B [**486**]{} (2000) 77 A. Gomez Nicola, J. Nieves, J. R. Pelaez and E. Ruiz Arriola, Phys. Rev. D [**69**]{} (2004) 076007 K. M. Case, Phys. Rev. [**80**]{}, 797 (1950) W. M. Frank, D. J. Land, and R. M. Spector, Rev. Mod. Phys. [**43**]{}, 36 (1971). S. R. Beane, P. F. Bedaque, L. Childress, A. Kryjevski, J. McGuire and U. v. Kolck, Phys. Rev. A [**64**]{}, 042103 (2001) T. Frederico, V. S. Timoteo and L. Tomio, Nucl. Phys. A [**653**]{}, 209 (1999) S. R. Beane, P. F. Bedaque, M. J. Savage and U. van Kolck, Nucl. Phys. A [**700**]{}, 377 (2002) M. Pavon Valderrama and E. Ruiz Arriola, Phys. Lett. B [**580**]{}, 149 (2004) M. Pavon Valderrama and E. Ruiz Arriola, Phys. Rev. C [**70**]{}, 044006 (2004) M. Pavon Valderrama and E. Ruiz Arriola, Phys. Rev. C [**72**]{}, 054002 (2005) A. Nogga, R. G. E. Timmermans and U. van Kolck, Phys. Rev. C [**72**]{}, 054006 (2005) M. P. Valderrama and E. R. Arriola, arXiv:nucl-th/0506047. M. P. Valderrama and E. R. Arriola, arXiv:nucl-th/0605078. M.
Pavon Valderrama and E. Ruiz Arriola, arXiv:nucl-th/0410020. V. G. J. Stoks, R. A. M. Klomp, M. C. M. Rentmeester and J. J. de Swart, Phys. Rev. C [**48**]{}, 792 (1993). V. G. J. Stoks, R. A. M. Klomp, C. P. F. Terheggen and J. J. de Swart, Phys. Rev. C [**49**]{}, 2950 (1994). http://nn-online.org M. Pavon Valderrama and E. Ruiz Arriola, arXiv:nucl-th/0407113. H. P. Stapp, T. J. Ypsilantis and N. Metropolis, Phys. Rev. [**105**]{} (1957) 302. J. M. Blatt and L. C. Biedenharn, Phys. Rev. [**86**]{} (1952) 399; Rev. Mod. Phys. [**24**]{} (1952) 258. J. J. de Swart, M. C. M. Rentmeester and R. G. E. Timmermans, PiN Newslett.  [**13**]{}, 96 (1997) R. Higa, M. Pavon Valderrama and E. Ruiz Arriola, (in preparation) M. Pavon Valderrama and E. Ruiz Arriola, (in preparation) M. C. Birse, Phys. Rev. C [**74**]{} (2006) 014003 D. B. Kaplan, M. J. Savage and M. B. Wise, Phys. Lett. B [**424**]{} (1998) 390 \[arXiv:nucl-th/9801034\]. U. van Kolck, Phys. Rev. C [**49**]{}, 2932 (1994). J. J. de Swart, C. P. F. Terheggen and V. G. J. Stoks, arXiv:nucl-th/9509032. [^1]: Actually, the potential in Eq. (\[eq:pot\_chpt\]) contains distributional contributions, which strictly speaking are zero for any [*finite*]{} distance. See the discussion in our previous work [@Valderrama:2005wv]. [^2]: For non-S-wave scattering the dimension of $\alpha_{l,l'}$ is ${\rm fm}^{l+l'+1}$, which is not a length. For simplicity we will abuse language and call them scattering lengths. [^3]: In fact, the next correction to the near-the-origin wave functions, which is energy dependent, is suppressed by a relative $ (k R)^2 (r/R)^{n/2+1}$ power with respect to the main term, so it is negligible in the $r \to 0$ limit. [^4]: As mentioned in our previous work [@Valderrama:2005wv], these effects are tiny for the deuteron. For central waves they are about $0.2^\circ$ at the maximum CM momentum $p=400 {\rm MeV}$. This trend is general also for peripheral waves.
[^5]: This will generate slight inconsistencies in the TPE results of Sect. \[sec:NNLO-TPE\], which will be amended by a small modification of the threshold parameters, yet larger than the discrepancies between the threshold parameters for the NijmII and Reid93 potentials obtained in Ref. [@PavonValderrama:2004se]. [^6]: It should be noted that $\alpha_{02}$ and $\alpha_2$ are related to the behaviour of the scattering amplitude at order $k^2$ and $k^4$ respectively, relative to $\alpha_0$. [^7]: \[footnote:gauss\] There, a cut-off has been introduced in the potential according to the rule $ V(k',k) \to e^{-{k'}^4 / {\Lambda}^4} V(k',k) e^{-k^4 / \Lambda^4} $ and counterterms have been added. To get an order of magnitude of the equivalent sharp cut-off $\tilde \Lambda $ we estimate the linear divergence at zero energy in the contact theory, $$\tilde \Lambda= \int_0^\infty e^{-2 q^4 / \Lambda^4 } dq = \frac{\Gamma(\frac54)}{2^{\frac14}} \Lambda = 0.762 \Lambda \, ,$$ and also using $\tilde \Lambda = \pi /2 R_S $ [@PavonValderrama:2004td], we get $\Lambda = 1 / (0.48 R_S)$. [^8]: Anyway, the lack of a clear plateau in this wave becomes obvious in the coordinate space treatment. Assuming the relationship $\Lambda \sim 1/(0.48R_S)$ between the momentum and coordinate space cut-offs for a Gaussian cut-off (see footnote \[footnote:gauss\]), a linear dependence of the phase shifts on the $R_S$ coordinate cut-off would map into a $1/\Lambda$ dependence in momentum space, which might be regarded as a plateau in a sufficiently thin cut-off window. Note that going from $R_S=0.2{\rm fm} $ to $R_S = 0.1{\rm fm}$ is equivalent to doubling the momentum space cut-off from $\sim 2 {\rm GeV}$ to $\sim 4 {\rm GeV}$. [^9]: The jump in the evolution of the OPE potential in the $^3D_3$ channel around $R_S = 0.3 {\rm fm}$, Fig.
\[fig:fig-j=3\] resembles a coupled channel resonance, corresponding to tunneling across the centrifugal barrier into the short distance attractive singularity. [^10]: By genuine we mean that the NNLO potential contains parameters which relate $\pi N$ and $NN$ data in some intricate way. We use the parameter Set IV [@Entem:2003ft] because it nicely reproduces the deuteron properties. One could, of course, improve on this by a large-scale fit to the data. [^11]: This also applies to the non-static OPE corrections, which account for about $0.1^\circ$ at $E_{\rm LAB}=200 {\rm MeV}$. The effect can be mocked up by even tinier readjustments of both the scattering lengths and the chiral couplings $c_1$, $c_3$ and $c_4$ than those deduced from inaccuracies in the NijmII potentials.
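The order-of-magnitude estimate in footnote [^7] can be checked numerically. The sketch below (plain Python, midpoint rule; the regulator exponent is taken as $\Lambda^4$, consistent with the cut-off rule quoted in that footnote, and the function name is illustrative) reproduces the factor $\Gamma(5/4)/2^{1/4}\approx 0.762$:

```python
import math

def equivalent_sharp_cutoff(lam, n=100000, qmax_factor=10.0):
    """Midpoint-rule estimate of the zero-energy linear divergence
    integral: int_0^inf exp(-2 q^4 / lam^4) dq."""
    h = qmax_factor * lam / n
    return sum(math.exp(-2.0 * ((i + 0.5) * h) ** 4 / lam ** 4)
               for i in range(n)) * h

lam = 1.0
numeric = equivalent_sharp_cutoff(lam)
closed_form = math.gamma(1.25) / 2 ** 0.25 * lam   # Gamma(5/4)/2^(1/4)
print(round(numeric, 3), round(closed_form, 3))    # both ~ 0.762
```

Consistency check: with $\tilde\Lambda = 0.762\,\Lambda$ and $\tilde\Lambda = \pi/2R_S$, one indeed recovers $\Lambda = \pi/(2\cdot 0.762\,R_S) \approx 1/(0.48\,R_S)$.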
--- abstract: | On a Riemannian or a [[semiRiemannian]{}]{} manifold, the metric determines invariants like the Levi-Civita connection and the Riemann curvature. If the metric becomes degenerate (as in singular [[semiRiemannian]{}]{} geometry), these constructions no longer work, because they are based on the inverse of the metric, and on related operations like the contraction between covariant indices. In this article we develop the geometry of singular [[semiRiemannian]{}]{} manifolds. First, we introduce an invariant and canonical contraction between covariant indices, applicable even for degenerate metrics. This contraction applies to a special type of tensor fields, which are [[radicalannihilator]{}]{} in the contracted indices. Then, we use this contraction and the Koszul form to define the covariant derivative for [[radicalannihilator]{}]{} indices of covariant tensor fields, on a class of singular [[semiRiemannian]{}]{} manifolds named [[radicalstationary]{}]{}. We use this covariant derivative to construct the Riemann curvature, and show that on a class of singular [[semiRiemannian]{}]{} manifolds, named [[semiregular]{}]{}, the Riemann curvature is smooth. We apply these results to construct a version of Einstein’s tensor whose density of weight 2 remains smooth even in the presence of [[semiregular]{}]{} singularities. We can thus write a densitized version of Einstein’s equation, which is smooth, and which is equivalent to the standard Einstein equation if the metric is [[nondegenerate]{}]{}. author: - Cristi  Stoica title: 'On Singular Semi-Riemannian Manifolds' --- [^1] Introduction ============ Motivation and related advances ------------------------------- Let $M$ be a differentiable manifold with a symmetric inner product structure, named metric, on its tangent bundle. If the metric is [[nondegenerate]{}]{}, we can construct in a canonical way a Levi-Civita connection and the Riemann, Ricci and scalar curvatures. 
If the metric is allowed to be degenerate (hence $M$ is a singular [[semiRiemannian]{}]{} manifold), several obstructions have prevented the construction of such invariants. Degenerate metrics are useful because they can arise in various contexts in which [[semiRiemannian]{}]{} manifolds are used. They are encountered even in manifolds with [[nondegenerate]{}]{} (but indefinite) metric, because the metric induced on a submanifold can be degenerate. The properties of such submanifolds were studied [*e.g.* ]{}in [@Kup87a; @Kup87c], [@Bej95; @Bej96]. In General Relativity, there are models or situations in which the metric becomes degenerate or changes its signature. As the Penrose and Hawking *singularity theorems* [@Pen65; @Haw66i; @Haw66ii; @Haw67iii; @HP70; @HE95] show, Einstein’s equation leads to singularities under very general conditions, apparently satisfied by the matter distribution in our Universe. Therefore, many attempts have been made to deal with such singularities. For example, it was suggested that Ashtekar’s method of “new variables” [@ASH87; @ASH91; @Rom93a] can be used to pass beyond the singularities, because the variable $\widetilde E^a_i$ – a densitized frame of vector fields – defines the metric, which can be degenerate. Unfortunately, it turned out that in this case the connection variable $A_a^i$ may become singular [*cf.* ]{}[*e.g.* ]{}[@Yon97]. In some cosmological models the initial singularity of the Big Bang is eliminated by making the metric Riemannian for the early Universe. The metric changes its signature when traversing a hypersurface, becoming Lorentzian, so that time emerges from a space dimension. Some particular junction conditions were studied (see [@Sak84],[@Ellis92a; @Ellis92b],[@Hay92; @Hay93; @Hay95], [@Der93], [@Dray91; @Dray93; @Dray94; @Dray95; @Dray96; @Dray01], [@Koss85; @Koss87; @Koss93a; @Koss93b; @Koss94a; @Koss94b] [*etc.*]{}).
Another situation in which the metric can become degenerate was proposed by Einstein and Rosen, as a model of charged particles [@ER35]. All these applications in Geometry and General Relativity demand a generalization of the standard methods of [[semiRiemannian]{}]{} Geometry, to cover the degenerate case. A degenerate metric prevents the standard constructions like covariant derivative and curvature. Manifolds endowed with degenerate metrics were studied by Moisil [@Moi40], Strubecker [@Str41; @Str42a; @Str42b; @Str45], Vrănceanu [@Vra42]. Notable is the work of Kupeli [@Kup87a; @Kup87b; @Kup87c], which is limited to the constant signature case. Presentation of this article ---------------------------- The purpose of this article is twofold: 1. to provide a toolbox of geometric invariants, which extend the standard constructions from [[semiRiemannian]{}]{} geometry to the degenerate case, with constant or variable signature, 2. and to apply these constructions to extend Einstein’s equation to a class of singular spacetimes. The first goal of this article is to construct canonical invariants such as the covariant derivative and Riemann curvature tensor, in the case of singular [[semiRiemannian]{}]{} geometry. The main obstruction to this is the fact that when the metric is degenerate, it doesn’t admit an inverse. This prohibits operations like index raising and contractions between covariant indices. This prevents the definition of a Levi-Civita connection, and with it, the construction of the curvature invariants. This article presents a way to construct such invariants even if the metric is degenerate, for a class of singular [[semiRiemannian]{}]{} manifolds which are named *[[semiregular]{}]{}*. The second goal is to apply the tools developed here to write a densitized version of Einstein’s tensor which remains smooth in the presence of singularities, if the spacetime is [[semiregular]{}]{}.
Consequently, we can write a version of Einstein’s equation which is equivalent to the standard one if the metric is [[nondegenerate]{}]{}. This allows us to extend smoothly the equations of General Relativity beyond the apparent limits imposed by the singularity theorems of Penrose and Hawking [@Pen65; @Haw66i; @Haw66ii; @Haw67iii; @HP70; @HE95]. Section [§\[s\_singular\_semi\_riemannian\]]{} contains generalities on singular [[semiRiemannian]{}]{} manifolds, in particular the radical bundle associated to the metric, made of the degenerate tangent vectors. Section [§\[s\_dual\_inner\_prod\]]{} studies the properties of the [[radicalannihilator]{}]{} bundle, consisting of the covectors annihilating the degenerate vectors. Tensor fields which are [[radicalannihilator]{}]{} in some of their covariant indices are introduced. On this bundle we can define a metric which is the next best thing to the inverse of the metric, and which will be used to perform contractions between covariant indices. Section [§\[s\_tensors\_contraction\_sign\_const\]]{} shows how we can contract covariant indices of tensor fields, so long as these indices are [[radicalannihilator]{}]{}s. Normally, the Levi-Civita connection is obtained by raising an index of the right member of the Koszul formula (named here Koszul form), an operation which is not available when the metric is degenerate. Section [§\[s\_koszul\_form\]]{} studies the properties of the Koszul form, which are similar to those of the Levi-Civita connection. This allows us to construct in section [§\[s\_cov\_der\]]{} a sort of covariant derivative for vector fields, and in [§\[s\_cov\_der\_covect\]]{} a covariant derivative for differential forms.
The notion of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold is defined in section [§\[s\_riemann\_curvature\]]{} as a special type of singular [[semiRiemannian]{}]{} manifold with variable signature on which the lower covariant derivative of any vector field, which is a $1$-form, admits smooth covariant derivatives. The Riemann curvature tensor is constructed in [§\[s\_riemann\_curvature\]]{} with the help of the Koszul form and of the covariant derivative for differential forms introduced in section [§\[s\_cov\_der\]]{}. For [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds, the Riemann curvature tensor is shown to be smooth, and to have the same symmetry properties as in the [[nondegenerate]{}]{} case. In addition, it is [[radicalannihilator]{}]{} in all of its indices, allowing the construction of the Ricci and scalar curvatures. Then, in section [§\[s\_riemann\_curvature\_ii\]]{}, the Riemann curvature tensor is expressed directly in terms of the Koszul form, yielding a useful formula. Then the Riemann curvature is compared with a curvature tensor obtained by Kupeli by other means [@Kup87b]. Section [§\[s\_semi\_reg\_semi\_riem\_man\_example\]]{} presents two examples of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds. The first is based on diagonal metrics, and the second on degenerate metrics which are conformal to [[nondegenerate]{}]{} metrics. The final section, [§\[s\_einstein\_tensor\_densitized\]]{}, applies the results of this article to General Relativity. This section studies Einstein’s equation on [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds. It proposes a densitized version of this equation, which remains smooth on [[semiregular]{}]{} spacetimes, and reduces to the standard Einstein equation if the metric is [[nondegenerate]{}]{}.
Singular [[semiRiemannian]{}]{} manifolds {#s_singular_semi_riemannian} ========================================= Definition of singular [[semiRiemannian]{}]{} manifolds {#s_singular_semi_riemannian_def} ------------------------------------------------------- (see [*e.g.* ]{}[@Kup87b], [@Pam03][265]{} for comparison) \[def\_sing\_semiRiemm\_man\] A *singular [[semiRiemannian]{}]{} manifold* is a pair $(M,g)$, where $M$ is a differentiable manifold, and $g\in \Gamma(T^*M \odot_M T^*M)$ is a symmetric bilinear form on $M$, named *metric tensor* or *metric*. If the signature of $g$ is fixed, then $(M,g)$ is said to be with *constant signature*. If the signature of $g$ is allowed to vary from point to point, $(M,g)$ is said to be with *variable signature*. If $g$ is [[nondegenerate]{}]{}, then $(M,g)$ is named *[[semiRiemannian]{}]{} manifold*. If $g$ is positive definite, $(M,g)$ is named *Riemannian manifold*. \[rem\_sign\_var\_points\] Let $(M,g)$ be a singular [[semiRiemannian]{}]{} manifold and let ${M{}_{\wr}}\subseteq M$ be the set of the points where the metric changes its signature. The set $M-{M{}_{\wr}}$ is dense in $M$, and it is a union of singular [[semiRiemannian]{}]{} manifolds with constant signature. \[ex\_sing\_semi\_euclidean\] Let $r,s,t\in{\mathbb{N}}$, $n=r+s+t$. We define the singular [[semiEuclidean]{}]{} space ${{\mathbb{R}}^{r,s,t}}$ by: $${{\mathbb{R}}^{r,s,t}}:=({\mathbb{R}}^n,{\langle,\rangle}),$$ where the metric acts on two vector fields $X$, $Y$ on ${\mathbb{R}}^n$ at a point $p$ on the manifold, in the natural chart, by $${\langleX_p,Y_p\rangle} = -\sum_{i=r+1}^{r+s} X^i Y^i + \sum_{j=r+s+1}^n X^j Y^j.$$ If $r=0$ we recover the [[semiEuclidean]{}]{} space ${\mathbb{R}}^n_s:={{\mathbb{R}}^{0,s,t}}$ (see [*e.g.* ]{}[@ONe83][58]{}). If $s=0$ we find the degenerate Euclidean space. If $r=s=0$, then $t=n$ and we recover the Euclidean space ${\mathbb{R}}^n$ endowed with the natural scalar product.
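The signature bookkeeping of ${{\mathbb{R}}^{r,s,t}}$ can be made concrete numerically. The following sketch (Python with numpy; all names are illustrative, not part of the paper) builds the diagonal Gram matrix of the metric in the natural chart and recovers the triple $(r,s,t)$ from its eigenvalues:

```python
import numpy as np

def gram_rst(r, s, t):
    """Gram matrix of R^{r,s,t}: r zeros (degenerate part),
    s entries -1, t entries +1, as in the definition above."""
    return np.diag([0.0] * r + [-1.0] * s + [1.0] * t)

g = gram_rst(1, 2, 3)                              # n = 6, signature (1, 2, 3)
eigs = np.linalg.eigvalsh(g)
signature = (int(np.sum(np.isclose(eigs, 0.0))),   # r: zero eigenvalues
             int(np.sum(eigs < -1e-12)),           # s: negative eigenvalues
             int(np.sum(eigs > 1e-12)))            # t: positive eigenvalues
print(signature)                                   # (1, 2, 3)
```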
The radical of a singular [[semiRiemannian]{}]{} manifold {#s_radix} --------------------------------------------------------- ([*cf.* ]{}[*e.g.* ]{}[@Bej95][1]{}, [@Kup96][3]{} and [@ONe83][53]{}) Let $(V,g)$ be a finite dimensional inner product space, where the inner product $g$ may be degenerate. The totally degenerate space ${{V{}_{\circ}{}}}:=V^\perp$ is named the *radical* of $V$. An inner product $g$ on a vector space $V$ is [[nondegenerate]{}]{} if and only if ${{V{}_{\circ}{}}}=\{0\}$. (see [*e.g.* ]{}[@Kup87b][261]{}, [@Pam03][263]{}) We denote by ${{T{}_{\circ}{}}}M$ and we call *the radical of $TM$* the following subset of the tangent bundle: ${{T{}_{\circ}{}}}M=\cup_{p\in M}{{(T_pM){}_{\circ}{}}}$. We can define vector fields on $M$ valued in ${{T{}_{\circ}{}}}M$, by taking those vector fields $W\in{{{\mathfrak{X}}}(M)}$ for which $W_p\in{{(T_pM){}_{\circ}{}}}$. We denote by ${{{\mathfrak{X}}}_\circ(M)}\subseteq{{{\mathfrak{X}}}(M)}$ the set of these sections – they form a vector space over ${\mathbb{R}}$ and a module over ${{\mathscr{F}}(M)}$. ${{T{}_{\circ}{}}}M$ is a vector bundle if and only if the signature of $g$ is constant on all $M$, and in this case, ${{T{}_{\circ}{}}}M$ is a distribution. 
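At a point, the radical is exactly the null space of the Gram matrix of $g$ in a chart (the kernel of the index-lowering map $\flat$ introduced below). A minimal numerical sketch (Python with numpy; the SVD-based null-space computation is a convenience assumption, not the paper's construction):

```python
import numpy as np

def radical_basis(g, tol=1e-10):
    """Basis of the radical V_o = ker(flat): the null space of g."""
    _, sing, vt = np.linalg.svd(g)
    return vt[sing <= tol].T          # columns span the radical

# a degenerate inner product on R^3 with a one-dimensional radical
g = np.diag([0.0, -1.0, 2.0])
rad = radical_basis(g)
print(rad.shape[1])                   # dimension of the radical: 1
print(bool(np.allclose(g @ rad, 0)))  # radical vectors are degenerate: True
```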
\[ex\_sing\_semi\_euclidean\_radix\] The radical ${{T{}_{\circ}{}}}{{\mathbb{R}}^{r,s,t}}$ of the singular [[semiEuclidean]{}]{} manifold ${{\mathbb{R}}^{r,s,t}}$ in the Example \[ex\_sing\_semi\_euclidean\] is spanned at each point $p$ by the tangent vectors $\partial_{ap}$ with $a\leq r$: $${{T{}_{\circ}{}}}{{\mathbb{R}}^{r,s,t}} = \bigcup_{p\in{{\mathbb{R}}^{r,s,t}}}{\textnormal}{span}({\{(p,\partial_{ap})|\partial_{ap}\in T_p{{\mathbb{R}}^{r,s,t}},a\leq r\}}).$$ The sections of ${{T{}_{\circ}{}}}{{\mathbb{R}}^{r,s,t}}$ are therefore given by $${{{\mathfrak{X}}}_\circ({{\mathbb{R}}^{r,s,t}})} = \{X\in{{{\mathfrak{X}}}({{\mathbb{R}}^{r,s,t}})}|X=\sum_{a=1}^r X^a\partial_a\}.$$ The [[radicalannihilator]{}]{} inner product space {#s_dual_inner_prod} ================================================== Let $(V,g)$ be an inner product vector space. If the inner product $g$ is [[nondegenerate]{}]{}, it defines an isomorphism $\flat:V\to V^*$ (see [*e.g.* ]{}[@Gibb06][15]{}; [@GHLF04][72]{}). If $g$ is degenerate, $\flat$ remains a linear morphism, but not an isomorphism. This is why we can no longer define a dual for $g$ on $V^*$ in the usual sense. We will see that we can still define canonically an inner product ${{{g{}_{\bullet}{}}}}\in\flat(V)^*\odot\flat(V)^*$, and use it to define contraction and index raising in a weaker sense than in the [[nondegenerate]{}]{} case. This rather elementary construction can be immediately extended to singular [[semiRiemannian]{}]{} manifolds. It provides a tool to contract covariant indices and construct the invariants we need. The [[radicalannihilator]{}]{} vector space {#s_rad_annih_space} ------------------------------------------- This section applies well-known elementary properties of linear algebra, with the purpose of extending fundamental notions, induced on the dual space $V^*$ by a [[nondegenerate]{}]{} inner product $g$ on a vector space $V$ [([*cf.* ]{}[*e.g.* ]{}[[@Rom08], p.
59]{})]{}, to the case when $g$ is allowed to be degenerate. Let $(V,g)$ be an inner product space over ${\mathbb{R}}$. \[def\_inner\_morphism\] The inner product $g$ defines a vector space morphism, named the *index lowering morphism* $\flat:V\to V^*$, by associating to any $u\in V$ a linear form $\flat(u):V\to {\mathbb{R}}$ defined by $\flat(u)v:={\langleu,v\rangle}$. Alternatively, we use the notation $u^\flat$ for $\flat(u)$. For reasons which will become apparent, we will also use the notation ${{u{}^{\bullet}{}}}:=u^\flat$. \[thm\_radix\_ker\] It is easy to see that ${{V{}_{\circ}{}}}=\ker\flat$, so $\flat$ is an isomorphism if and only if $g$ is [[nondegenerate]{}]{}. \[def\_radical\_annihilator\] The *[[radicalannihilator]{}]{}* vector space ${{V{}^{\bullet}{}}}:={{\textnormal}{im }}\flat\subseteq V^*$ is the space of $1$-forms $\omega$ which can be expressed as $\omega={{u{}^{\bullet}{}}}$ for some $u$, and they act on $V$ by $\omega(v)={\langleu,v\rangle}$. Obviously, in the case when $g$ is [[nondegenerate]{}]{}, we have the identification ${{V{}^{\bullet}{}}}=V^*$. \[thm\_img\_ker\_radix\] In other words, ${{V{}^{\bullet}{}}}$ is the annihilator of ${{V{}_{\circ}{}}}$. It follows that $\dim{{V{}^{\bullet}{}}}+\dim{{V{}_{\circ}{}}}=n$. Any $u'\in V$ satisfying ${{u'{}^{\bullet}{}}}=\omega$ differs from $u$ by $u'-u\in{{V{}_{\circ}{}}}$. Such $1$-forms $\omega\in{{V{}^{\bullet}{}}}$ satisfy $\omega|_{{{V{}_{\circ}{}}}} = 0$. \[def\_co\_inner\_product\] On the vector space ${{V{}^{\bullet}{}}}$ we can define a unique [[nondegenerate]{}]{} inner product ${{{g{}_{\bullet}{}}}}$ by ${{{g{}_{\bullet}{}}}}(\omega,\tau):={\langleu,v\rangle}$, where ${{u{}^{\bullet}{}}}=\omega$ and ${{v{}^{\bullet}{}}}=\tau$. We alternatively use the notation ${{{\langle\!\langle\omega,\tau\rangle\!\rangle{}_{\bullet}{}}}}={{{g{}_{\bullet}{}}}}(\omega,\tau)$. 
The inner product ${{{g{}_{\bullet}{}}}}$ from above is well-defined, being independent of the vectors $u,v$ chosen to represent the $1$-forms $\omega$, $\tau$. If $u',v'\in V$ are other vectors satisfying ${{u'{}^{\bullet}{}}}=\omega$ and ${{v'{}^{\bullet}{}}}=\tau$, then $u'-u\in{{V{}_{\circ}{}}}$ and $v'-v\in{{V{}_{\circ}{}}}$. ${\langleu',v'\rangle}={\langleu,v\rangle}+{\langleu'-u,v\rangle}+{\langleu,v'-v\rangle}+{\langleu'-u,v'-v\rangle}={\langleu,v\rangle}$. \[thm\_cometric\_signature\] The inner product ${{{g{}_{\bullet}{}}}}$ from above is [[nondegenerate]{}]{}, and if $g$ has the signature $(r,s,t)$, then the signature of ${{{g{}_{\bullet}{}}}}$ is $(0,s,t)$. Let’s take an orthonormal basis $(e_a)_{a=1}^n$ in which the inner product is diagonal, with the first $r$ diagonal elements being $0$. We have ${{e_a{}^{\bullet}{}}}=0$ for $a\in\{1,\ldots,r\}$, and the $1$-forms $\omega_a:={{e_{r+a}{}^{\bullet}{}}}$ for $a\in\{1,\ldots,s+t\}$ are the generators of ${{V{}^{\bullet}{}}}$. They satisfy ${{{\langle\!\langle\omega_a,\omega_b\rangle\!\rangle{}_{\bullet}{}}}}={\langlee_{r+a},e_{r+b}\rangle}$. Therefore, $(\omega_a)_{a=1}^{s+t}$ are linearly independent and the signature of ${{{g{}_{\bullet}{}}}}$ is $(0,s,t)$. Figure \[degenerate-metric\] illustrates the various spaces associated with a degenerate inner product space $(V,g)$ and the inner products induced by $g$ on them. ![image](degenerate-metric){width="100.00000%"} The [[radicalannihilator]{}]{} vector bundle {#s_annih} -------------------------------------------- We denote by ${{T{}^{\bullet}{}}}M$ the subset of the cotangent bundle defined as $${{T{}^{\bullet}{}}}M=\bigcup_{p\in M}{{(T_pM){}^{\bullet}{}}}$$ where ${{(T_pM){}^{\bullet}{}}} \subseteq T^*_pM$ is the space of covectors at $p$ which can be expressed as $\omega_p(X_p)={\langleY_p,X_p\rangle}$ for some $Y_p\in T_p M$ and any $X_p\in T_p M$. ${{T{}^{\bullet}{}}}M$ is a vector bundle if and only if the signature of the metric is constant.
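The defining property of the radical-annihilator covectors — a $1$-form of the shape $\omega_p = \langle Y_p,\cdot\rangle$ annihilates every radical vector — is easy to verify in coordinates. A small sketch (Python with numpy; the diagonal example metric and all names are illustrative):

```python
import numpy as np

g = np.diag([0.0, -1.0, 2.0])     # radical spanned by the first basis vector

def flat(y):
    """Index-lowering morphism: Y -> g(Y, .), an element of im(flat)."""
    return g @ y

omega = flat(np.array([5.0, 1.0, -2.0]))   # a radical-annihilator 1-form
v_rad = np.array([7.0, 0.0, 0.0])          # a radical vector

print(float(omega @ v_rad))   # 0.0: omega annihilates the radical
print(float(omega[0]))        # 0.0: components along the radical vanish
```

This mirrors the computation in the example above, where the radical-annihilator $1$-forms on ${{\mathbb{R}}^{r,s,t}}$ are exactly those with vanishing first $r$ components.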
We can define sections of ${{T{}^{\bullet}{}}}M$ in the general case, by $${{{{\mathcal{A}}{}^{\bullet}{}}}(M)}:=\{\omega\in{{\mathcal{A}}^{1}(M)}|\omega_p\in{{(T_pM){}^{\bullet}{}}}{\textnormal}{ for any }p\in M\}.$$ ${{(T_pM){}^{\bullet}{}}}$ is the annihilator space [([*cf.* ]{}[*e.g.* ]{}[[@Rom08], p. 102]{})]{} of the radical space ${{T{}_{\circ}{}}}_pM$, that is, it contains the linear forms $\omega_p$ which satisfy $\omega_p|_{{{T{}_{\circ}{}}}_pM}=0$. \[ex\_sing\_semi\_euclidean\_annih\] The [[radicalannihilator]{}]{} ${{T{}^{\bullet}{}}}{{\mathbb{R}}^{r,s,t}}$ of the singular [[semiEuclidean]{}]{} manifold ${{\mathbb{R}}^{r,s,t}}$ in the Example \[ex\_sing\_semi\_euclidean\] is: $${{T{}^{\bullet}{}}}{{\mathbb{R}}^{r,s,t}} = \bigcup_{p\in{{\mathbb{R}}^{r,s,t}}}{\textnormal}{span}({\{{\textnormal{d}}x^a\in T^*_p{{\mathbb{R}}^{r,s,t}}|a> r\}}).$$ Consequently, the [[radicalannihilator]{}]{} $1$-forms have the general form $$\omega=\sum_{a=r+1}^n\omega_a{\textnormal{d}}x^a,$$ and $${{{{\mathcal{A}}{}^{\bullet}{}}}({{\mathbb{R}}^{r,s,t}})}=\{\omega\in{{\mathcal{A}}^{1}({{\mathbb{R}}^{r,s,t}})}|\omega^i=0,i\leq r\}.$$ The [[radicalannihilator]{}]{} inner product in a basis ------------------------------------------------------- Let us consider an inner product space $(V,g)$, and a basis $(e_a)_{a=1}^n$ of $V$ in which $g$ takes the diagonal form $g={\textnormal{diag}}(\alpha_1,\alpha_2,\ldots,\alpha_n)$, $\alpha_a\in{\mathbb{R}}$ for all $1\leq a\leq n$. The inner product satisfies: $$g_{ab}={\langlee_a,e_b\rangle}=\alpha_a\delta_{ab}.$$ We also have $${{e_a{}^{\bullet}{}}}(e_b):={\langlee_a,e_b\rangle}=\alpha_a\delta_{ab},$$ and, if $(e^{*a})_{a=1}^n$ is the dual basis of $(e_a)_{a=1}^n$, $${{e_a{}^{\bullet}{}}}=\alpha_a e^{*a}.$$ \[thm\_cometric\_in\_basis\] If in a basis the inner product has the form $g_{ab}=\alpha_a\delta_{ab}$, then $${{{g{}_{\bullet}{}}}}^{ab}=\frac 1{\alpha_a}\delta^{ab},$$for all $a$ so that $\alpha_a\neq 0$. 
Since $${{{\langle\!\langle{{e_a{}^{\bullet}{}}},{{e_b{}^{\bullet}{}}}\rangle\!\rangle{}_{\bullet}{}}}}={\langlee_a,e_b\rangle}=\alpha_a\delta_{ab},$$ and at the same time $${{{\langle\!\langle{{e_a{}^{\bullet}{}}},{{e_b{}^{\bullet}{}}}\rangle\!\rangle{}_{\bullet}{}}}}=\alpha_a \alpha_b {{{\langle\!\langlee^{*a},e^{*b}\rangle\!\rangle{}_{\bullet}{}}}}=\alpha_a \alpha_b{{{g{}_{\bullet}{}}}}^{ab},$$ we have that $$\alpha_a \alpha_b{{{g{}_{\bullet}{}}}}^{ab}=\alpha_a \delta_{ab}.$$ This leads, for $\alpha_a\neq 0$, to $${{{g{}_{\bullet}{}}}}^{ab}=\frac 1{\alpha_a}\delta^{ab}.$$ The case when $\alpha_a = 0$ doesn’t happen, since ${{{g{}_{\bullet}{}}}}$ is defined only on ${{\textnormal}{im }}\flat$. Radical and [[radicalannihilator]{}]{} tensors {#s_radix_annih_tensors} ---------------------------------------------- For inner product vector spaces we define tensors that are radical in a contravariant slot, and [[radicalannihilator]{}]{} in a covariant slot, and give their characterizations. \[def\_radix\_annih\_tensor\_field\] Let $T$ be a tensor of type $(r,s)$. We call it *radical* in the $k$-th contravariant slot if $T\in {{\mathcal{T}}{}^{k-1}_{0}M}\otimes_M{{T{}_{\circ}{}}}M\otimes_M {{\mathcal{T}}{}^{r-k}_{s}M}$. We call it *[[radicalannihilator]{}]{}* in the $l$-th covariant slot if $T\in {{\mathcal{T}}{}^{r}_{l-1}M}\otimes_M{{T{}^{\bullet}{}}}M\otimes_M {{\mathcal{T}}{}^{0}_{s-l}M}$. \[thm\_radical\_contravariant\_index\] A tensor $T\in{{\mathcal{T}}{}^{r}_{s}M}$ is radical in the $k$-th contravariant slot if and only if its contraction $C^k_{s+1}(T\otimes\omega)$ with any [[radicalannihilator]{}]{} linear $1$-form $\omega\in {{\mathcal{A}}^{1}(M)}$ is zero. For simplicity, we can work on an inner product space $(V,g)$ and consider $k=r$ (if $k<r$, we can make use of the permutation automorphisms of the tensor space ${{\mathcal{T}}{}^{r}_{s}V}$).
$T$ can be written as a sum of linearly independent terms of the form $\sum_{\alpha}S_{\alpha}\otimes v_{\alpha}$, with $S_{\alpha}\in{{\mathcal{T}}{}^{r-1}_{s}V}$ and $v_{\alpha}\in V$. We keep only the terms with $S_{\alpha}\neq 0$. The contraction of the $r$-th contravariant slot with any $\omega\in{{V{}^{\bullet}{}}}$ becomes $\sum_{\alpha}S_{\alpha}\omega(v_{\alpha})$. If $T$ is radical in the $r$-th contravariant slot, for all $\alpha$ and any $\omega\in{{V{}^{\bullet}{}}}$ we have $\omega(v_{\alpha})=0$, therefore $\sum_{\alpha}S_{\alpha}\omega(v_{\alpha})=0$. Conversely, if $\sum_{\alpha}S_{\alpha}\omega(v_{\alpha})=0$, then, since the tensors $S_{\alpha}$ are linearly independent, $S_{\alpha}\omega(v_{\alpha})=0$ for each $\alpha$. Then $\omega(v_{\alpha})=0$, because $S_{\alpha}\neq 0$. It follows that $v_{\alpha}\in{{V{}_{\circ}{}}}$. \[thm\_radical\_annihilator\_covariant\_index\] A tensor $T\in{{\mathcal{T}}{}^{r}_{s}M}$ is [[radicalannihilator]{}]{} in the $l$-th covariant slot if and only if its $l$-th contraction with any radical vector field is zero. The proof goes as in Proposition \[thm\_radical\_contravariant\_index\]. \[thm\_metric\_radical\_annihilator\] The inner product $g$ is [[radicalannihilator]{}]{} in both of its slots. This means that $g\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}\odot_M{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$. Follows directly from the definition of ${{TM{}_{\circ}{}}}$ and of [[radicalannihilator]{}]{} tensor fields. \[thm\_radical\_annihilator\_vs\_radical\_contraction\] The contraction between a radical slot and a [[radicalannihilator]{}]{} slot of a tensor is zero. Follows from Proposition \[thm\_radical\_contravariant\_index\], combined with the fact that the contraction commutes with tensor products and linear combinations; the proof goes similarly to that of Proposition \[thm\_radical\_contravariant\_index\].
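As a sanity check, the algebra of ${{{g{}_{\bullet}{}}}}$ and of radical contractions can be verified numerically. In an orthogonal basis, Proposition \[thm\_cometric\_in\_basis\] says that the co-metric inverts the nonzero diagonal entries of the Gram matrix, which is exactly its Moore–Penrose pseudoinverse. A minimal `numpy` sketch (the example metric, with one degenerate, one negative and two positive directions, and all variable names are ours):

```python
import numpy as np

# A singular inner product in an orthogonal basis: g = diag(0, -2, 3, 1).
g = np.diag([0.0, -2.0, 3.0, 1.0])

# Co-metric g_bullet (Prop. thm_cometric_in_basis): 1/alpha_a on the slots
# with alpha_a != 0, and 0 on the radical slot -- the pseudoinverse of g.
g_bullet = np.linalg.pinv(g)
assert np.allclose(g_bullet, np.diag([0.0, -0.5, 1.0 / 3.0, 1.0]))

# A radical vector w (in V_circ) and radical-annihilator 1-forms u^flat, v^flat.
w = np.array([5.0, 0.0, 0.0, 0.0])
u = np.array([0.0, 2.0, 3.0, 0.0])
v = np.array([1.0, 1.0, 0.0, 4.0])
omega = g @ u  # omega = u^flat, so it annihilates the radical

# A radical slot contracted with a radical-annihilator slot gives zero:
print(omega @ w)                     # 0.0

# <<u^flat, v^flat>>_bullet computed with g_bullet recovers <u, v>:
print((g @ u) @ g_bullet @ (g @ v))  # equals u @ g @ v
```

Since `g @ g_bullet @ g == g` for the pseudoinverse, the pairing of lowered vectors through `g_bullet` reproduces the original inner product, as in the proof of Proposition \[thm\_cometric\_in\_basis\].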
Covariant contraction of tensor fields {#s_tensors_contraction_sign_const} ====================================== We don’t need an inner product to define contractions between one covariant and one contravariant index. We can use the inner product $g$ to contract between two contravariant indices, obtaining the *contravariant contraction operator* $C^{kl}$ [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 83]{})]{}. On the other hand, the contraction is not always well defined for two covariant indices. We will see that we can use ${{{g{}_{\bullet}{}}}}$ for such contractions, but this works only for vectors or tensors which are [[radicalannihilator]{}]{} in covariant slots. Fortunately, tensors of this kind turn out to be the relevant ones in the applications to singular [[semiRiemannian]{}]{} geometry. Covariant contraction on inner product spaces {#s_tensors_covariant_contraction_inner_prod} --------------------------------------------- \[def\_contraction\_covariant\] We can define uniquely the *covariant contraction* or *covariant trace* operator by the following steps. 1. We define it first on tensors $T\in{{V{}^{\bullet}{}}}\otimes{{V{}^{\bullet}{}}}$, by $C_{12}T={{{g{}_{\bullet}{}}}}^{ab}T_{ab}$. This definition is independent of the basis, because ${{{g{}_{\bullet}{}}}}\in{{V{}^{\bullet}{}}}^*\otimes{{V{}^{\bullet}{}}}^*$. 2. Let $T\in{{\mathcal{T}}{}^{r}_{s}V}$ be a tensor with $r\geq 0$ and $s\geq 2$, which satisfies $T\in V^{\otimes r}\otimes {V^*}^{\otimes {s-2}}\otimes{{V{}^{\bullet}{}}}\otimes{{V{}^{\bullet}{}}}$, that is, $T(\omega_1,\ldots,\omega_r,v_1,\ldots,v_s)=0$ for any $\omega_i\in V^*, i=1,\ldots,r$, $v_j\in V,j=1,\ldots,s$ whenever $v_{s-1}\in{{V{}_{\circ}{}}}$ or $v_{s}\in{{V{}_{\circ}{}}}$.
Then, we define the covariant contraction between the last two covariant slots by the operator $$C_{s-1\,s}:=1_{{{\mathcal{T}}{}^{r}_{s-2}V}}\otimes{{{g{}_{\bullet}{}}}}:{{\mathcal{T}}{}^{r}_{s}V}\otimes {{V{}^{\bullet}{}}}\otimes{{V{}^{\bullet}{}}}\to{{\mathcal{T}}{}^{r}_{s-2}V},$$ where $1_{{{\mathcal{T}}{}^{r}_{s-2}V}}:{{\mathcal{T}}{}^{r}_{s-2}V}\to{{\mathcal{T}}{}^{r}_{s-2}V}$ is the identity. In a radical basis, the contraction can be expressed by $$\label{eq_contraction_covariant_end} (C_{s-1\,s} T)^{a_1\ldots a_r}{}_{b_1\ldots b_{s-2}} := {{{g{}_{\bullet}{}}}}^{b_{s-1} b_{s}}T^{a_1\ldots a_r}{}_{b_1\ldots b_{s-2}b_{s-1}b_{s}}.$$ 3. Let $T\in{{\mathcal{T}}{}^{r}_{s}V}$ be a tensor with $r\geq 0$ and $s\geq 2$, which satisfies $$T\in V^{\otimes r}\otimes {V^*}^{\otimes {k-1}}\otimes{{V{}^{\bullet}{}}}\otimes {V^*}^{\otimes l-k-1}\otimes{{V{}^{\bullet}{}}}\otimes {V^*}^{\otimes s-l},$$ $1\leq k<l\leq s$, that is, $T(\omega_1,\ldots,\omega_r,v_1,\ldots,v_k,\ldots,v_l,\ldots,v_s)=0$ for any $\omega_i\in V^*, i=1,\ldots,r$, $v_j\in V,j=1,\ldots,s$ whenever $v_k\in{{V{}_{\circ}{}}}$ or $v_l\in{{V{}_{\circ}{}}}$. We define the contraction $$C_{kl}:V^{\otimes r}\otimes {V^*}^{\otimes {k-1}}\otimes{{V{}^{\bullet}{}}}\otimes {V^*}^{\otimes l-k-1}\otimes{{V{}^{\bullet}{}}}\otimes {V^*}^{\otimes s-l} \to V^{\otimes r}\otimes {V^*}^{\otimes {s-2}},$$ by $C_{kl}:=C_{s-1\,s}\circ P_{k,s-1;l,s}$, where $C_{s-1\,s}$ is the contraction defined above, and $P_{k,s-1;l,s}:{{\mathcal{T}}{}^{r}_{s}V}\to{{\mathcal{T}}{}^{r}_{s}V}$ is the permutation isomorphism which moves the $k$-th and $l$-th slots into the last two positions.
In a basis, the components take the form $$\label{eq_contraction_covariant_inner_prod_space} (C_{kl} T)^{a_1\ldots a_r}{}_{b_1\ldots\widehat{b}_k\ldots\widehat{b}_l\ldots b_s} := {{{g{}_{\bullet}{}}}}^{b_k b_l}T^{a_1\ldots a_r}{}_{b_1\ldots b_k\ldots b_l\ldots b_s}.$$ We denote the contraction $C_{kl} T$ of $T$ also by $$C(T(\omega_1,\ldots,\omega_r,v_1,\ldots,{{{}_\bullet}},\ldots,{{{}_\bullet}},\ldots,v_s))$$ or simply $$T(\omega_1,\ldots,\omega_r,v_1,\ldots,{{{}_\bullet}},\ldots,{{{}_\bullet}},\ldots,v_s).$$ Covariant contraction on singular [[semiRiemannian]{}]{} manifolds {#ss_tensors_contraction_manifolds} ------------------------------------------------------------------ In [§\[s\_tensors\_covariant\_contraction\_inner\_prod\]]{} we have seen that we can contract in two covariant slots, so long as they are [[radicalannihilator]{}]{}s. The covariant contraction uses the inner product ${{{g{}_{\bullet}{}}}}\in{{V{}^{\bullet}{}}}^*\odot{{V{}^{\bullet}{}}}^*$. In Section [§\[s\_radix\_annih\_tensors\]]{} we have extended the notion of tensors which are [[radicalannihilator]{}]{} in some slots to a singular [[semiRiemannian]{}]{} manifold $(M,g)$ by imposing the condition that the corresponding factors in the tensor product, at $p\in M$, are from ${{T{}^{\bullet}{}}}_p M$, which is just a subset of $T^*_p M$. This allows us easily to extend the covariant contraction [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 40]{})]{} in [[radicalannihilator]{}]{} slots to singular [[semiRiemannian]{}]{} manifolds. \[def\_contraction\_covariant\_ct\_sign\] Let $T\in{{\mathcal{T}}{}^{r}_{s}M}$, $s\geq 2$, be a tensor field on $M$, which is [[radicalannihilator]{}]{} in the $k$-th and $l$-th covariant slots, where $1\leq k<l\leq s$. 
The *covariant contraction* or *covariant trace* operator is the linear operator $$C_{kl}:{{\mathcal{T}}{}^{r}_{k-1}M}\otimes_M{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}\otimes_M{{\mathcal{T}}{}^{0}_{l-k-1}M}\otimes_M{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}\otimes_M {{\mathcal{T}}{}^{0}_{s-l}M} \to {{\mathcal{T}}{}^{r}_{s-2}M}$$ by $$(C_{kl}T)(p)=C_{kl}(T(p))$$ in terms of the covariant contraction defined for inner product vector spaces, as in [§\[s\_tensors\_covariant\_contraction\_inner\_prod\]]{}. In local coordinates we have $$\label{eq_contraction_covariant_ct_sign} (C_{kl} T)^{a_1\ldots a_r}{}_{b_1\ldots\widehat{b}_k\ldots\widehat{b}_l\ldots b_s} := {{{g{}_{\bullet}{}}}}^{b_k b_l}T^{a_1\ldots a_r}{}_{b_1\ldots b_k\ldots b_l\ldots b_s}.$$ We denote the contraction $C_{kl} T$ of $T$ also by $$C(T(\omega_1,\ldots,\omega_r,X_1,\ldots,{{{}_\bullet}},\ldots,{{{}_\bullet}},\ldots,X_s))$$ or simply $$T(\omega_1,\ldots,\omega_r,X_1,\ldots,{{{}_\bullet}},\ldots,{{{}_\bullet}},\ldots,X_s).$$ \[thm\_contraction\_with\_metric\] If $T$ is a tensor field $T\in{{\mathcal{T}}{}^{r}_{s}M}$ with $r\geq 0$ and $s\geq 1$, which is [[radicalannihilator]{}]{} in the $k$-th covariant slot, $1\leq k\leq s$, then its contraction with the metric tensor gives again $T$: $$\label{eq_contraction_with_metric} \begin{array}{l} T(\omega_1,\ldots,\omega_r,X_1,\ldots,{{{}_\bullet}},\ldots,X_s){\langleX_k,{{{}_\bullet}}\rangle}\\ \,\,\,\,\,=T(\omega_1,\ldots,\omega_r,X_1,\ldots,X_k,\ldots,X_s) \end{array}$$ For simplicity, we can work on an inner product space $(V,g)$. Let’s first consider the case when $T\in{{\mathcal{T}}{}^{0}_{1}V}$, in fact, $T=\omega\in{{V{}^{\bullet}{}}}$. 
Then, equation \[eq\_contraction\_with\_metric\] reduces to $$\omega({{{}_\bullet}}){\langlev,{{{}_\bullet}}\rangle}=\omega(v).$$ But since $\omega\in{{V{}^{\bullet}{}}}$, it takes the form $\omega={{u{}^{\bullet}{}}}$ for some $u\in V$, and $\omega({{{}_\bullet}}){\langlev,{{{}_\bullet}}\rangle}={{{\langle\!\langle\omega,{{v{}^{\bullet}{}}}\rangle\!\rangle{}_{\bullet}{}}}}={\langleu,v\rangle}={{u{}^{\bullet}{}}} (v)=\omega(v)$. The general case is obtained from the linearity of the tensor product in the $k$-th covariant slot. \[thm\_contracted\_metric\_w\_metric\] ${\langleX,{{{}_\bullet}}\rangle}{\langleY,{{{}_\bullet}}\rangle}={\langleX,Y\rangle}.$ Follows from Lemma \[thm\_contraction\_with\_metric\] and from $g\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}\odot_M{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$. \[thm\_contracted\_metric\_w\_itself\] ${\langle{{{}_\bullet}},{{{}_\bullet}}\rangle}={\textnormal{rank }}g.$ For simplicity, we can work on an inner product space $(V,g)$. We recall that $g\in{{V{}^{\bullet}{}}}\odot{{V{}^{\bullet}{}}}$, ${{{g{}_{\bullet}{}}}}\in{{V{}^{\bullet}{}}}^*\odot{{V{}^{\bullet}{}}}^*$. When restricted to ${{V{}^{\bullet}{}}}$ and ${{V{}^{\bullet}{}}}^*$ they are [[nondegenerate]{}]{} and inverse to one another. Since $\dim{{V{}^{\bullet}{}}}=\dim{{\textnormal}{im }}\flat={\textnormal{rank }}g$, we obtain ${\langle{{{}_\bullet}},{{{}_\bullet}}\rangle}={\textnormal{rank }}g$. \[thm\_contraction\_orthogonal\] Let $(M,g)$ be a singular [[semiRiemannian]{}]{} manifold with constant signature. Let $T\in{{\mathcal{T}}{}^{r}_{s}M}$, $s\geq 2$, be a tensor field which is [[radicalannihilator]{}]{} in the $k$-th and $l$-th covariant slots ($1\leq k<l \leq s$). Let $(E_a)_{a=1}^n$ be an orthogonal basis on $M$, so that $E_1,\ldots,E_{n-{\textnormal{rank }}g}\in{{{\mathfrak{X}}}_\circ(M)}$.
Then $$\label{eq_cov_contraction_orthogonal} \begin{array}{l} T(\omega_1,\ldots,\omega_r,X_1,\ldots,{{{}_\bullet}},\ldots,{{{}_\bullet}},\ldots,X_s) \\ \,\,\,\,\, =\sum_{a=n-{\textnormal{rank }}g+1}^n {{\displaystyle}{\frac{1}{{\langleE_a,E_a\rangle}}}}T(\omega_1,\ldots,\omega_r,X_1,\ldots,E_a,\ldots,E_a,\ldots,X_s), \end{array}$$ for any $X_1,\ldots,X_s\in{{{\mathfrak{X}}}(M)},\omega_1,\ldots,\omega_r\in{{\mathcal{A}}^{1}(M)}$. For simplicity, we will work on an inner product space $(V,g)$. From the Proposition \[thm\_cometric\_in\_basis\] we recall that ${{{g{}_{\bullet}{}}}}$ is diagonal and ${{{g{}_{\bullet}{}}}}^{aa}={\displaystyle}{\frac 1{g_{aa}}}$, for $a>n-{\textnormal{rank }}g$. Therefore $$\begin{array}{l} {{{g{}_{\bullet}{}}}}^{ab}T(\omega_1,\ldots,\omega_r,v_1,\ldots,E_a,\ldots,E_b,\ldots,v_s) \\ \,\,\,\,\, =\sum_{a=n-{\textnormal{rank }}g+1}^n {{\displaystyle}{\frac{1}{{\langleE_a,E_a\rangle}}}}T(\omega_1,\ldots,\omega_r,v_1,\ldots,E_a,\ldots,E_a,\ldots,v_s). \end{array}$$ \[rem\_contraction\_orthonormal\_invariant\] Since in fact $$\label{eq_annihprod_orthogonal} {{{\langle\!\langle\omega_1,\omega_2\rangle\!\rangle{}_{\bullet}{}}}}=\sum_{a=n-{\textnormal{rank }}g+1}^n {{\displaystyle}{\frac{\omega_1(E_a)\omega_2(E_a)}{{\langleE_a,E_a\rangle}}}},$$ for any [[radicalannihilator]{}]{} $1$-forms $\omega_1,\omega_2\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$, it follows that if we define the contraction alternatively by the equation \[eq\_cov\_contraction\_orthogonal\], the definition is independent of the frame $(E_a)_{a=1}^n$. \[rem\_contraction\_sign\_change\] On regions of constant signature, the covariant contraction of a smooth tensor is smooth. But at the points where the signature changes, the contraction is not necessarily smooth, because the inverse of the metric becomes divergent at the points where the signature changes, as it follows from Proposition \[thm\_cometric\_in\_basis\].
The fact that ${{{g{}_{\bullet}{}}}}_p\in({{T{}^{\bullet}{}}}_pM)^*\odot({{T{}^{\bullet}{}}}_pM)^*$ raises some problems, because the union of $({{T{}^{\bullet}{}}}_pM)^*$ does not form a bundle, and for ${{{g{}_{\bullet}{}}}}$ the notions of continuity and smoothness don’t even make sense. The covariant contraction of the two indices of the metric tensor at a point $p\in M$ is $g_p({{{}_\bullet}},{{{}_\bullet}})={\textnormal{rank }}g(p)$ (see Example \[thm\_contracted\_metric\_w\_itself\]). When ${\textnormal{rank }}g(p)$ is not constant, $g_p({{{}_\bullet}},{{{}_\bullet}})$ is discontinuous. On the other hand, the following example shows that it is possible to have smooth contractions even when the signature changes: \[ex\_contraction\_sign\_change\_smooth\] If $X\in {{{\mathfrak{X}}}(M)}$ and $\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$, $C_{12}(\omega\otimes_M X^\flat)={{{\langle\!\langle\omega,X^\flat\rangle\!\rangle{}_{\bullet}{}}}}=\omega(X)$ and it is smooth, even if the signature is variable. \[rem\_contraction\_sign\_change\_smooth\] Since the points where the signature doesn’t change form a dense subset of $M$ (Remark \[rem\_sign\_var\_points\]), it makes sense to impose the condition of smoothness of the covariant contraction of a smooth tensor. To check smoothness, we simply check whether the extension by continuity of the contraction is smooth. The Koszul form {#s_koszul_form} =============== For convenience, we name *Koszul form* the right-hand side of the Koszul formula (see [*e.g.* ]{}[@ONe83][61]{}): \[def\_Koszul\_form\] *The Koszul form* is defined as $${{\mathcal{K}}}:{{{\mathfrak{X}}}(M)}^3\to{{\mathscr{F}}(M)},$$ $$\label{eq_Koszul_form} \begin{array}{llll} {{\mathcal{K}}}(X,Y,Z) &:=&{\displaystyle}{\frac 1 2} \{ X {\langleY,Z\rangle} + Y {\langleZ,X\rangle} - Z {\langleX,Y\rangle} \\ &&\ - {\langleX,[Y,Z]\rangle} + {\langleY, [Z,X]\rangle} + {\langleZ, [X,Y]\rangle}\}.
\end{array}$$ The Koszul formula becomes $$\label{eq_koszul_formula} {\langle{{{\nabla}_{X}}{Y}},Z\rangle} = {{\mathcal{K}}}(X,Y,Z),$$ and for [[nondegenerate]{}]{} metric, the unique Levi-Civita connection is obtained by raising the $1$-form ${{\mathcal{K}}}(X,Y,\_)$: $$\label{eq_koszul_formula_inv} {{{\nabla}_{X}}{Y}} = {{\mathcal{K}}}(X,Y,\_)^\sharp.$$ If the metric is degenerate, then this is not in general possible. We can raise ${{\mathcal{K}}}(X,Y,\_)$ on regions of constant signature, and what we obtain is what Kupeli ([@Kup87b][261–262]{}) called *Koszul derivative* – which is in general not a connection and is not unique. Kupeli’s construction is done only for singular [[semiRiemannian]{}]{} manifolds with metrics with constant signature, which satisfy the condition of *radical-stationarity* (Definition \[def\_radical\_stationary\_manifold\]). But if the metric changes its signature, the Koszul derivative is discontinuous at the points where the signature changes. In this article we will not need the Koszul derivative, because for our purpose it is enough to work with the Koszul form. Basic properties of the Koszul form {#s_koszul_form_props} ----------------------------------- Let’s recall the Lie derivative of a tensor field $T\in{{\mathcal{T}}{}^{0}_{2}M}$: [(see [*e.g.* ]{}[[@HE95], p. 30]{})]{} \[def\_lie\_derivative\_metric\] Let $M$ be a differentiable manifold. Recall that the *Lie derivative* of a tensor field $T\in{{\mathcal{T}}{}^{0}_{2}M}$ with respect to a vector field $Z\in{{{\mathfrak{X}}}(M)}$ is given by $$({{\mathcal{L}}}_Z T)(X,Y):=Z T(X,Y) - T([Z,X],Y) - T(X,[Z,Y])$$ for any $X,Y\in{{{\mathfrak{X}}}(M)}$. The following properties of the Koszul form correspond directly to standard properties of the Levi-Civita connection of a [[nondegenerate]{}]{} metric [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p.
61]{})]{}. We prove them explicitly here, because in the case of degenerate metric the proofs need to avoid using the Levi-Civita connection and the index raising. These properties will turn out to be important for what follows. \[thm\_Koszul\_form\_props\] The Koszul form of a singular [[semiRiemannian]{}]{} manifold $(M,g)$ has, for any $X,Y,Z\in{{{\mathfrak{X}}}(M)}$ and $f\in{{\mathscr{F}}(M)}$, the following properties: 1. \[thm\_Koszul\_form\_props\_linear\] It is additive and ${\mathbb{R}}$-linear in each of its arguments. 2. \[thm\_Koszul\_form\_props\_flinearX\] It is ${{\mathscr{F}}(M)}$-linear in the first argument: ${{\mathcal{K}}}(fX,Y,Z) = f{{\mathcal{K}}}(X,Y,Z).$ 3. \[thm\_Koszul\_form\_props\_flinearY\] Satisfies the *Leibniz rule*: ${{\mathcal{K}}}(X,fY,Z) = f{{\mathcal{K}}}(X,Y,Z) + X(f) {\langleY,Z\rangle}.$ 4. \[thm\_Koszul\_form\_props\_flinearZ\] It is ${{\mathscr{F}}(M)}$-linear in the third argument: ${{\mathcal{K}}}(X,Y,fZ) = f{{\mathcal{K}}}(X,Y,Z).$ 5. \[thm\_Koszul\_form\_props\_commutYZ\] It is *metric*: ${{\mathcal{K}}}(X,Y,Z) + {{\mathcal{K}}}(X,Z,Y) = X {\langleY,Z\rangle}$. 6. \[thm\_Koszul\_form\_props\_commutXY\] It is *symmetric* or *torsionless*: ${{\mathcal{K}}}(X,Y,Z) - {{\mathcal{K}}}(Y,X,Z) = {\langle[X,Y],Z\rangle}$. 7. \[thm\_Koszul\_form\_props\_commutZX\] Relation with the Lie derivative of $g$: ${{\mathcal{K}}}(X,Y,Z) + {{\mathcal{K}}}(Z,Y,X) = ({{\mathcal{L}}}_Y g)(Z,X)$. 8. \[thm\_Koszul\_form\_props\_commutX2Y\] ${{\mathcal{K}}}(X,Y,Z) + {{\mathcal{K}}}(Y,Z,X) = Y{\langleZ,X\rangle} + {\langle[X,Y],Z\rangle}$.   Follows from Definition \[def\_Koszul\_form\], and from the linearity of $g$, of the action of vector fields on scalars, and of the Lie brackets.
$$\begin{array}{llll} \eqref{thm_Koszul_form_props_flinearX}\ &2{{\mathcal{K}}}(fX,Y,Z) &=& fX {\langleY,Z\rangle} + Y {\langleZ,fX\rangle} - Z {\langlefX,Y\rangle} \\ &&&- {\langlefX,[Y,Z]\rangle} + {\langleY, [Z,fX]\rangle} + {\langleZ, [fX,Y]\rangle} \\ &&=& fX {\langleY,Z\rangle} + Y (f{\langleZ,X\rangle}) - Z (f{\langleX,Y\rangle}) \\ &&&- f{\langleX,[Y,Z]\rangle}+ {\langleY, f[Z,X] + Z(f)X\rangle} \\ &&&+ {\langleZ, f[X,Y] - Y(f)X\rangle} \\ &&=& fX {\langleY,Z\rangle} + fY {\langleZ,X\rangle} \\ &&&+ Y(f) {\langleZ,X\rangle} - fZ {\langleX,Y\rangle} \\ &&&- Z(f){\langleX,Y\rangle} - f{\langleX,[Y,Z]\rangle} + f{\langleY, [Z,X]\rangle} \\ &&&+ Z(f){\langleY, X\rangle} + f{\langleZ, [X,Y]\rangle} - Y(f){\langleZ, X\rangle} \\ &&=& fX {\langleY,Z\rangle} + fY {\langleZ,X\rangle} - fZ {\langleX,Y\rangle} \\ &&& - f{\langleX,[Y,Z]\rangle} + f{\langleY,[Z,X]\rangle} + f{\langleZ,[X,Y]\rangle} \\ &&=& 2f{{\mathcal{K}}}(X,Y,Z) \\ \end{array}$$ $$\begin{array}{llll} \eqref{thm_Koszul_form_props_flinearY}\ &2{{\mathcal{K}}}(X,fY,Z) &=& X {\langlefY,Z\rangle} + fY {\langleZ,X\rangle} - Z {\langleX,fY\rangle} \\ &&&- {\langleX,[fY,Z]\rangle} + {\langlefY, [Z,X]\rangle} + {\langleZ, [X,fY]\rangle} \\ &&=& X(f) {\langleY,Z\rangle} + fX {\langleY,Z\rangle} \\ &&& + fY {\langleZ,X\rangle} - Z(f) {\langleX,Y\rangle} \\ &&& -fZ {\langleX,Y\rangle}- f{\langleX,[Y,Z]\rangle} + Z(f){\langleX,Y\rangle} \\ &&& +f{\langleY,[Z,X]\rangle} + f{\langleZ,[X,Y]\rangle} + X(f){\langleZ,Y\rangle} \\ &&=& f(X {\langleY,Z\rangle} + Y {\langleZ,X\rangle}- Z {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,Z]\rangle} + {\langleZ,[X,Y]\rangle} + {\langleY,[Z,X]\rangle}) \\ &&&+ X(f) \left({\langleY,Z\rangle} + {\langleZ,Y\rangle}\right) \\ &&=& 2\left(f{{\mathcal{K}}}(X,Y,Z) + X(f) {\langleY,Z\rangle}\right) \\ \end{array}$$ $$\begin{array}{llll} \eqref{thm_Koszul_form_props_flinearZ}\ &2{{\mathcal{K}}}(X,Y,fZ) &=& X {\langleY,fZ\rangle} + Y {\langlefZ,X\rangle} - fZ {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,fZ]\rangle}
+ {\langleY, [fZ,X]\rangle} + {\langlefZ, [X,Y]\rangle} \\ &&=& fX {\langleY,Z\rangle} + X(f) {\langleY,Z\rangle} \\ &&& + fY {\langleZ,X\rangle}+ Y (f){\langleZ,X\rangle} \\ &&&- fZ ({\langleX,Y\rangle})- f{\langleX,[Y,Z]\rangle} - Y(f){\langleX,Z\rangle} \\ &&&+ f{\langleY,[Z,X]\rangle} - X(f){\langleY,Z\rangle} + f{\langleZ,[X,Y]\rangle} \\ &&=& fX {\langleY,Z\rangle} + fY {\langleZ,X\rangle} - fZ ({\langleX,Y\rangle}) \\ &&&- f{\langleX,[Y,Z]\rangle}+ f{\langleY,[Z,X]\rangle} + f{\langleZ,[X,Y]\rangle} \\ &&=& 2f{{\mathcal{K}}}(X,Y,Z) \\ \end{array}$$ $$\begin{array}{llll} \eqref{thm_Koszul_form_props_commutYZ}\ & 2[{{\mathcal{K}}}(X,Y,Z) &+& {{\mathcal{K}}}(X,Z,Y)] \\ &&=& X {\langleY,Z\rangle} + Y {\langleZ,X\rangle} - Z {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,Z]\rangle} + {\langleY,[Z,X]\rangle} + {\langleZ,[X,Y]\rangle} \\ &&&+ X {\langleZ,Y\rangle} + Z {\langleY,X\rangle} - Y {\langleX,Z\rangle} \\ &&&- {\langleX,[Z,Y]\rangle} + {\langleZ,[Y,X]\rangle} + {\langleY,[X,Z]\rangle} \\ &&=& X {\langleY,Z\rangle} - {\langleX,[Y,Z]\rangle} \\ &&&+ {\langleY,[Z,X]\rangle} + {\langleZ,[X,Y]\rangle} + X {\langleY,Z\rangle} \\ &&&+ {\langleX,[Y,Z]\rangle} - {\langleZ,[X,Y]\rangle} - {\langleY,[Z,X]\rangle} \\ &&=& 2X {\langleY,Z\rangle} \\ \end{array}$$ $$\begin{array}{llll} \eqref{thm_Koszul_form_props_commutZX}\ & 2[{{\mathcal{K}}}(X,Y,Z) &+& {{\mathcal{K}}}(Z,Y,X)] \\ &&=& X {\langleY,Z\rangle} + Y {\langleZ,X\rangle} - Z {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,Z]\rangle} + {\langleY,[Z,X]\rangle} + {\langleZ,[X,Y]\rangle} \\ &&& +Z {\langleY,X\rangle} + Y {\langleX,Z\rangle} - X {\langleZ,Y\rangle} \\ &&&- {\langleZ,[Y,X]\rangle} + {\langleY,[X,Z]\rangle} + {\langleX,[Z,Y]\rangle} \\ &&=& X {\langleY,Z\rangle} + Y {\langleZ,X\rangle} - Z {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,Z]\rangle} + {\langleY,[Z,X]\rangle} + {\langleZ,[X,Y]\rangle} \\ &&& +Z {\langleX,Y\rangle} + Y {\langleZ,X\rangle} - X {\langleY,Z\rangle} \\ &&&+ {\langleZ,[X,Y]\rangle} - 
{\langleY,[Z,X]\rangle} - {\langleX,[Y,Z]\rangle} \\ &&=& 2Y {\langleZ,X\rangle} - 2{\langleX,[Y,Z]\rangle} + 2{\langleZ,[X,Y]\rangle} \\ &&=& 2(Y {\langleZ,X\rangle} - {\langleX,{{\mathcal{L}}}_YZ\rangle} - {\langleZ,{{\mathcal{L}}}_YX\rangle}) \\ &&=& 2({{\mathcal{L}}}_Y g)(Z,X) \\ \end{array}$$ $$\begin{array}{llll} \eqref{thm_Koszul_form_props_commutXY}\ & 2[{{\mathcal{K}}}(X,Y,Z) &-& {{\mathcal{K}}}(Y,X,Z)] \\ &&=& X {\langleY,Z\rangle} + Y {\langleZ,X\rangle} - Z {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,Z]\rangle} + {\langleY,[Z,X]\rangle} + {\langleZ,[X,Y]\rangle} \\ &&&- Y {\langleX,Z\rangle} - X {\langleZ,Y\rangle} + Z {\langleY,X\rangle} \\ &&&+ {\langleY,[X,Z]\rangle} - {\langleX,[Z,Y]\rangle} - {\langleZ,[Y,X]\rangle} \\ &&=& X {\langleY,Z\rangle} + Y {\langleZ,X\rangle} - Z {\langleX,Y\rangle} \\ &&&- {\langleX,[Y,Z]\rangle} + {\langleY,[Z,X]\rangle} + {\langleZ,[X,Y]\rangle} \\ &&&- Y {\langleZ,X\rangle} - X {\langleY,Z\rangle} + Z {\langleX,Y\rangle} \\ &&&- {\langleY,[Z,X]\rangle} + {\langleX,[Y,Z]\rangle} + {\langleZ,[X,Y]\rangle} \\ &&=&2 {\langleZ,[X,Y]\rangle} = 2 {\langle[X,Y],Z\rangle} \\ \end{array}$$ By subtracting \[thm\_Koszul\_form\_props\_commutXY\] from \[thm\_Koszul\_form\_props\_commutYZ\], we obtain $${{\mathcal{K}}}(Y,X,Z) + {{\mathcal{K}}}(X,Z,Y) = X {\langleY,Z\rangle} - {\langle[X,Y],Z\rangle}.$$ By applying the permutation $(X,Y,Z)\mapsto(Y,X,Z)$ we get $${{\mathcal{K}}}(X,Y,Z) + {{\mathcal{K}}}(Y,Z,X) = Y{\langleZ,X\rangle} + {\langle[X,Y],Z\rangle}.$$ \[thm\_Koszul\_form\_index\] If $U\subseteq M$ is an open set in $M$ and $(E_a)_{a=1}^n\subset{{{\mathfrak{X}}}(U)}$ are vector fields on $U$ forming a frame of $T_pU$ at each $p\in U$, then $$\label{eq_Koszul_form_index} \begin{array}{lll} {{\mathcal{K}}}_{abc}&:=&{{\mathcal{K}}}(E_a,E_b,E_c) \\ &=&{\displaystyle}{\frac 1 2} \{E_a(g_{bc}) + E_b(g_{ca}) - E_c(g_{ab}) - g_{as} {\mathscr{C}}^s_{bc} + g_{bs} {\mathscr{C}}^s_{ca} + g_{cs} {\mathscr{C}}^s_{ab}\}, \end{array}$$ where $g_{ab} = {\langleE_a,E_b\rangle}$ and ${\mathscr{C}}^c_{ab}$ are the
coefficients of the Lie bracket of vector fields [(see [*e.g.* ]{}[[@Das07], p. 107]{})]{}, $[E_a,E_b] = {\mathscr{C}}_{ab}^c E_c$. The equations (\[thm\_Koszul\_form\_props\_commutYZ\] – \[thm\_Koszul\_form\_props\_commutX2Y\]) in Theorem \[thm\_Koszul\_form\_props\] become in the basis $(E_a)_{a=1}^n$: $$\begin{array}{ll} (\ref{thm_Koszul_form_props_commutYZ}') & {{\mathcal{K}}}_{abc} + {{\mathcal{K}}}_{acb} = E_a (g_{bc}). \\ (\ref{thm_Koszul_form_props_commutZX}') & {{\mathcal{K}}}_{abc} + {{\mathcal{K}}}_{cba} = ({{\mathcal{L}}}_{E_b} g)_{ca}. \\ (\ref{thm_Koszul_form_props_commutXY}') & {{\mathcal{K}}}_{abc} - {{\mathcal{K}}}_{bac} = g_{sc}{\mathscr{C}}^s_{ab}. \\ (\ref{thm_Koszul_form_props_commutX2Y}') & {{\mathcal{K}}}_{abc} + {{\mathcal{K}}}_{bca} = E_b(g_{ca}) + g_{sc}{\mathscr{C}}^s_{ab}. \\ \end{array}$$ If $E_a=\partial_a:={\displaystyle}{\frac {\partial}{\partial x^a}}$ for all $a\in\{1,\ldots,n\}$ are the partial derivatives in a coordinate system, $[\partial_a,\partial_b]=0$ and equation \[eq\_Koszul\_form\_index\] reduces to $$\label{eq_Koszul_form_coord} {{\mathcal{K}}}_{abc}={{\mathcal{K}}}(\partial_a,\partial_b,\partial_c)={\displaystyle}{\frac 1 2} ( \partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab}),$$ which are Christoffel’s symbols of the first kind [([*cf.* ]{}[*e.g.* ]{}[[@HE95], p. 40]{})]{}. \[thm\_Koszul\_form\] Let $X,Y\in{{{\mathfrak{X}}}(M)}$ be two vector fields. The map ${{\mathcal{K}}}(X,Y,\_):{{{\mathfrak{X}}}(M)}\to{{\mathscr{F}}(M)}$ defined as $${{\mathcal{K}}}(X,Y,\_)(Z) := {{\mathcal{K}}}(X,Y,Z)$$ is a differential $1$-form. It is a direct consequence of Theorem \[thm\_Koszul\_form\_props\], properties \[thm\_Koszul\_form\_props\_linear\] and \[thm\_Koszul\_form\_props\_flinearZ\]. \[thm\_Koszul\_null\_props\] If $X,Y\in {{{\mathfrak{X}}}(M)}$ and $W\in{{{\mathfrak{X}}}_\circ(M)}$, then $${{\mathcal{K}}}(X,Y,W) = {{\mathcal{K}}}(Y,X,W) = -{{\mathcal{K}}}(X,W,Y) = -{{\mathcal{K}}}(Y,W,X).
$$ From Theorem \[thm\_Koszul\_form\_props\], property \[thm\_Koszul\_form\_props\_commutXY\], $${{\mathcal{K}}}(X,Y,W) = {{\mathcal{K}}}(Y,X,W) + {\langle[X,Y],W\rangle} = {{\mathcal{K}}}(Y,X,W).$$ From Theorem \[thm\_Koszul\_form\_props\], property \[thm\_Koszul\_form\_props\_commutYZ\], $${{\mathcal{K}}}(X,Y,W) = -{{\mathcal{K}}}(X,W,Y) + X{\langleY,W\rangle}= -{{\mathcal{K}}}(X,W,Y)$$ and $${{\mathcal{K}}}(Y,X,W) = -{{\mathcal{K}}}(Y,W,X).$$ The covariant derivative {#s_cov_der} ======================== The lower covariant derivative of vector fields {#s_l_cov_dev} ----------------------------------------------- \[def\_l\_cov\_der\] The *lower covariant derivative* of a vector field $Y$ in the direction of a vector field $X$ is the differential $1$-form ${{{{\nabla}^{\flat}}_{X}}{Y}} \in {{\mathcal{A}}^{1}(M)}$ defined as $$\label{eq_l_cov_der_vect} {({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)} := {{\mathcal{K}}}(X,Y,Z)$$ for any $Z\in{{{\mathfrak{X}}}(M)}$. The *lower covariant derivative operator* is the operator $${{\nabla}^{\flat}}:{{{\mathfrak{X}}}(M)} \times {{{\mathfrak{X}}}(M)} \to {{\mathcal{A}}^{1}(M)}$$ which associates to each $X,Y\in{{{\mathfrak{X}}}(M)}$ the differential $1$-form ${{{\nabla}^{\flat}}_{X}}Y$. Unlike the case of the covariant derivative defined when the metric is [[nondegenerate]{}]{}, the result of applying the lower covariant derivative to a vector field is not another vector field, but a differential $1$-form. When the metric is [[nondegenerate]{}]{} the two are equivalent by changing the type of the $1$-form ${{{{\nabla}^{\flat}}_{X}}{Y}}$ into a vector field ${{{\nabla}_{X}}{Y}}=({{{{\nabla}^{\flat}}_{X}}{Y}})^\sharp$. Similar objects mapping vector fields to $1$-forms were used in [*e.g.* ]{}[@Koss85][464–465]{}. The lower covariant derivative doesn’t require a [[nondegenerate]{}]{} metric, and it will be very useful in what follows. The following properties correspond to standard properties of the Levi-Civita connection of a [[nondegenerate]{}]{} metric [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p.
61]{})]{}, and are extended here to the case when the metric can be degenerate. \[thm\_l\_cov\_der\_props\] The lower covariant derivative operator ${{\nabla}^{\flat}}$ of vector fields defined on a singular [[semiRiemannian]{}]{} manifold $(M,g)$ has the following properties: 1. \[thm\_l\_cov\_der\_props\_linear\] It is additive and ${\mathbb{R}}$-linear in both of its arguments. 2. \[thm\_l\_cov\_der\_props\_flinearX\] It is ${{\mathscr{F}}(M)}$-linear in the first argument: ${{{{\nabla}^{\flat}}_{fX}}{Y}} = f{{{{\nabla}^{\flat}}_{X}}{Y}}.$ 3. \[thm\_l\_cov\_der\_props\_flinearY\] Satisfies the *Leibniz rule*: ${{{{\nabla}^{\flat}}_{X}}{fY}} = f{{{{\nabla}^{\flat}}_{X}}{Y}} + X(f) Y^\flat.$ or, explicitly, ${({{{{\nabla}^{\flat}}_{X}}{fY}})(Z)} = f{({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)} + X(f) {\langleY,Z\rangle}.$ 4. \[thm\_l\_cov\_der\_props\_flinearZ\] It is *metric*: ${({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)} + {({{{{\nabla}^{\flat}}_{X}}{Z}})(Y)} = X {\langleY,Z\rangle}$. 5. \[thm\_l\_cov\_der\_props\_commutXY\] It is *symmetric* or *torsionless*: ${{{{\nabla}^{\flat}}_{X}}{Y}} - {{{{\nabla}^{\flat}}_{Y}}{X}} = [X,Y]^\flat$ or, explicitly, ${({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)} - {({{{{\nabla}^{\flat}}_{Y}}{X}})(Z)} = {\langle[X,Y],Z\rangle}$. 6. \[thm\_l\_cov\_der\_props\_commutZX\] Relation with the Lie derivative of $g$: ${({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)} + {({{{{\nabla}^{\flat}}_{Z}}{Y}})(X)} = ({{\mathcal{L}}}_Y g)(Z,X)$. 7. \[thm\_l\_cov\_der\_props\_commutX2Y\] ${({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)} + {({{{{\nabla}^{\flat}}_{Y}}{Z}})(X)} = Y{\langleZ,X\rangle} + {\langle[X,Y],Z\rangle}$. for any $X,Y,Z\in{{{\mathfrak{X}}}(M)}$ and $f\in{{\mathscr{F}}(M)}$. Follows from the direct application of Theorem \[thm\_Koszul\_form\_props\]. 
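These properties can also be checked symbolically: in a coordinate frame the Koszul form reduces to Christoffel’s symbols of the first kind, equation \[eq\_Koszul\_form\_coord\], even when the metric is degenerate. A small `sympy` sketch, with a degenerate example metric of our own choosing:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]
# A degenerate metric chosen for illustration: g = diag(0, 1 + x1**2).
g = sp.diag(0, 1 + x1**2)

def K(a, b, c):
    # Koszul form in a coordinate frame ([E_a, E_b] = 0):
    # K_abc = (1/2)(d_a g_bc + d_b g_ca - d_c g_ab)
    return sp.Rational(1, 2) * (sp.diff(g[b, c], coords[a])
                                + sp.diff(g[c, a], coords[b])
                                - sp.diff(g[a, b], coords[c]))

idx = range(2)
# 'metric' property: K_abc + K_acb = E_a(g_bc)
metric_ok = all(sp.simplify(K(a, b, c) + K(a, c, b)
                            - sp.diff(g[b, c], coords[a])) == 0
                for a in idx for b in idx for c in idx)
# 'torsionless' property: K_abc - K_bac = g_sc C^s_ab, which vanishes here
torsion_ok = all(sp.simplify(K(a, b, c) - K(b, a, c)) == 0
                 for a in idx for b in idx for c in idx)
print(metric_ok, torsion_ok)   # True True
```

Note that no index raising is used anywhere in the check, in the spirit of the proofs above.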
[[Radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifolds {#s_radical_stationary_manifolds} ------------------------------------------------------------------- The [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifolds of constant signature were introduced by Kupeli in [@Kup87b][259–260]{}, where he called them singular [[semiRiemannian]{}]{} manifolds. Later, in [@Kup96] Definition 3.1.3, he named them “stationary singular [[semiRiemannian]{}]{} manifolds”. Here we use the term “[[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifolds” to avoid possible confusion, since the word “stationary” is used in general for manifolds admitting a Killing vector field, and in particular for spacetimes invariant under time translations. Kupeli introduced them to ensure the existence of the Koszul derivative. Our need is different, since we don’t rely on Kupeli’s Koszul derivative. \[def\_radical\_stationary\_manifold\] A singular [[semiRiemannian]{}]{} manifold $(M,g)$ is *[[radicalstationary]{}]{}* if it satisfies the condition $$\label{eq_radical_stationary_manifold} {{\mathcal{K}}}(X,Y,\_)\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)},$$ for any $X,Y\in{{{\mathfrak{X}}}(M)}$. The condition from Definition \[def\_radical\_stationary\_manifold\] means that ${{\mathcal{K}}}(X,Y,W_p)=0$ for any $X,Y\in {{{\mathfrak{X}}}(M)}$ and $W_p\in {{{\mathfrak{X}}}_\circ(M_p)}$, $p\in M$. \[thm\_Koszul\_null\_props\_rad\_stat\] If $(M,g)$ is [[radicalstationary]{}]{} and $X,Y\in {{{\mathfrak{X}}}(M)}$ and $W\in{{{\mathfrak{X}}}_\circ(M)}$, then $${{\mathcal{K}}}(X,Y,W) = {{\mathcal{K}}}(Y,X,W) = -{{\mathcal{K}}}(X,W,Y) = -{{\mathcal{K}}}(Y,W,X) = 0.$$ Follows directly from the Corollary \[thm\_Koszul\_null\_props\]. \[rem\_rad\_stat\_lower\_der\] The condition \[eq\_radical\_stationary\_manifold\] can be expressed in terms of the lower derivative as $${{{{\nabla}^{\flat}}_{X}}{Y}}\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)},$$ for any $X,Y\in{{{\mathfrak{X}}}(M)}$.
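Radical-stationarity is easy to test in examples. For a metric of the coordinate form $g=\mathrm{diag}(0,g_{22})$, the radical fields are the multiples of $\partial_1$, and ${{\mathcal{K}}}(\partial_a,\partial_b,\partial_1)=-\frac 1 2 \partial_1 g_{ab}$, so the condition holds precisely when the components do not depend on the degenerate direction. A `sympy` sketch contrasting the two cases (both example metrics are ours):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]

def koszul(g, a, b, c):
    # Coordinate-frame Koszul form K_abc = (1/2)(d_a g_bc + d_b g_ca - d_c g_ab).
    return sp.Rational(1, 2) * (sp.diff(g[b, c], coords[a])
                                + sp.diff(g[c, a], coords[b])
                                - sp.diff(g[a, b], coords[c]))

# Both metrics have the radical vector field W = d/dx1 (first row/column zero).
g_stat = sp.diag(0, 1 + x2**2)   # components independent of the radical direction
g_not  = sp.diag(0, 1 + x1**2)   # g_22 depends on x1

# Radical-stationarity: K(X, Y, W) = 0 for every radical W, i.e. K_ab1 = 0.
# By F(M)-linearity in the third slot it suffices to test W = d/dx1.
stat_vals = [sp.simplify(koszul(g_stat, a, b, 0)) for a in range(2) for b in range(2)]
not_vals  = [sp.simplify(koszul(g_not, a, b, 0)) for a in range(2) for b in range(2)]
print(stat_vals)   # all zero: radical-stationary
print(not_vals)    # contains -x1: not radical-stationary
```

So $({\mathbb{R}}^2, g_{\textnormal{stat}})$ satisfies Definition \[def\_radical\_stationary\_manifold\], while the second example does not, although both are singular [[semiRiemannian]{}]{} manifolds of constant signature.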
The covariant derivative of differential 1-forms {#s_cov_der_covect} ------------------------------------------------ For [[nondegenerate]{}]{} metrics the covariant derivative of a differential $1$-form is defined in terms of ${\nabla}_XY$ [([*cf.* ]{}[*e.g.* ]{}[[@GHLF04], p. 70]{})]{} by $$\left({\nabla}_X\omega\right)(Y) = X\left(\omega(Y)\right) - \omega\left({\nabla}_X Y\right).$$ In order to generalize this formula to the case of degenerate metrics, we need to express $\omega\left({\nabla}_XY\right)$ in terms of ${{{{\nabla}^{\flat}}_{X}}{Y}}$. We can use the identity $$\label{eq_cov_der_form_raise} \omega\left({\nabla}_XY\right) = {\langle{\nabla}_XY,\omega^\sharp\rangle}$$ and rewrite it in a way compatible with the degenerate case as $$\tag{\ref{eq_cov_der_form_raise}'} \omega\left({\nabla}_XY\right) = {\langle{\nabla}_XY,{{{}_\bullet}}\rangle}{\langle\omega^\sharp,{{{}_\bullet}}\rangle}$$ If the metric is degenerate, we need to be able to define the contraction ${{\mathcal{K}}}(X,Y,{{{}_\bullet}})\omega({{{}_\bullet}})$. This is possible on [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifolds – since ${{{{\nabla}^{\flat}}_{X}}{Y}}$ is [[radicalannihilator]{}]{} – if the differential form $\omega$ is [[radicalannihilator]{}]{} too. We can therefore give the following definition: \[def\_cov\_der\_covect\] Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold. We define the covariant derivative of a [[radicalannihilator]{}]{} $1$-form $\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$ in the direction of a vector field $X\in{{{\mathfrak{X}}}(M)}$ by $${\nabla}:{{{\mathfrak{X}}}(M)} \times {{{{\mathcal{A}}{}^{\bullet}{}}}(M)} \to {A_d{}^{1}(M)}$$ $$\left({\nabla}_X\omega\right)(Y) := X\left(\omega(Y)\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}},\omega\rangle\!\rangle{}_{\bullet}{}}}},$$ where ${A_d{}^{1}(M)}$ is the set of sections of $T^*M$ smooth at the points of $M$ where the signature is constant.
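In coordinates, Definition \[def\_cov\_der\_covect\] reads $({\nabla}_{\partial_a}\omega)_b=\partial_a\omega_b-{{{g{}_{\bullet}{}}}}^{cd}{{\mathcal{K}}}_{abc}\omega_d$. A `sympy` sketch (our example metric and $1$-form), which also illustrates that the result annihilates the radical direction:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]
# A radical-stationary example: radical direction d/dx1, g = diag(0, 1 + x2**2).
g = sp.diag(0, 1 + x2**2)
g_bullet = sp.diag(0, 1 / (1 + x2**2))   # inverts g on im(flat)

def K(a, b, c):
    # Coordinate-frame Koszul form K_abc = (nabla^flat_{d_a} d_b)(d_c).
    return sp.Rational(1, 2) * (sp.diff(g[b, c], coords[a])
                                + sp.diff(g[c, a], coords[b])
                                - sp.diff(g[a, b], coords[c]))

# A radical-annihilator 1-form omega = x2 dx2 (no dx1 component).
omega = [sp.Integer(0), x2]

# (nabla_{d_a} omega)_b = d_a(omega_b) - g_bullet^{cd} K_abc omega_d
nabla_omega = [[sp.simplify(sp.diff(omega[b], coords[a])
                - sum(g_bullet[c, d] * K(a, b, c) * omega[d]
                      for c in range(2) for d in range(2)))
                for b in range(2)] for a in range(2)]
print(nabla_omega)
# The dx1-components vanish: the result is again radical-annihilator.
print([nabla_omega[a][0] for a in range(2)])   # [0, 0]
```

The vanishing of the components along the radical direction is exactly the content of Proposition \[thm\_cov\_deriv\_annih\] below, here seen in a concrete example.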
\[thm\_cov\_deriv\_annih\] If $(M,g)$ is [[radicalstationary]{}]{} and $\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$ is a [[radicalannihilator]{}]{} $1$-form, then for any $X\in{{{\mathfrak{X}}}(M)}$ and $p\in M - {M{}_{\wr}}$, ${\nabla}_{X_p}\omega_p\in{{T_p{}^{\bullet}{}}}M$. It follows from the Definition \[def\_cov\_der\_covect\]. Let $U$ be a neighborhood of $p$ where $g$ has constant signature, and let $W\in{{{\mathfrak{X}}}_\circ(U)}$ so that $W_p\in{{T_p{}_{\circ}{}}}M$. Then, on $U$, $\left({\nabla}_X\omega\right)(W) = X\left(\omega(W)\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{W}},\omega\rangle\!\rangle{}_{\bullet}{}}}} = 0$. \[thm\_cov\_deriv\_annih\_smooth\] If ${\nabla}_X\omega$ is smooth, then it is a [[radicalannihilator]{}]{} differential $1$-form, ${\nabla}_X\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$. Follows from Proposition \[thm\_cov\_deriv\_annih\] because of continuity. \[def\_cov\_der\_smooth\] Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold. We define the following vector spaces of differential forms having smooth covariant derivatives: $${{{{\mathscr{A}}{}^{\bullet}{}}}{}^{1}(M)} = \{\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}|(\forall X\in{{{\mathfrak{X}}}(M)})\ {\nabla}_X\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}\},$$ $${{{{\mathscr{A}}{}^{\bullet}{}}}{}^{k}(M)} := \bigwedge^k_M{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{1}(M)}.$$ The following theorem extends some properties of the covariant derivative known from the [[nondegenerate]{}]{} case [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 59]{})]{}. \[thm\_cov\_der\_covect\_props\] The covariant derivative operator ${\nabla}$ of differential $1$-forms defined on a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold $(M,g)$ has the following properties: 1. \[thm\_cov\_der\_covect\_props\_linear\] It is additive and ${\mathbb{R}}$-linear in both of its arguments. 2. 
\[thm\_cov\_der\_covect\_props\_flinearX\] It is ${{\mathscr{F}}(M)}$-linear in the first argument: ${{{\nabla}_{fX}}{\omega}} = f{{{\nabla}_{X}}{\omega}}.$ 3. \[thm\_cov\_der\_covect\_props\_flinearY\] It satisfies the *Leibniz rule*: ${{{\nabla}_{X}}{f\omega}} = f{{{\nabla}_{X}}{\omega}} + X(f) \omega.$ 4. \[thm\_cov\_der\_covect\_props\_flat\_commut\] It commutes with the lowering operator: ${{{\nabla}_{X}}{Y^\flat}} = {{{{\nabla}^{\flat}}_{X}}{Y}}$. for any $X,Y\in{{{\mathfrak{X}}}(M)}$, $\omega\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$ and $f\in{{\mathscr{F}}(M)}$. The property follows from the direct application of Theorem \[thm\_l\_cov\_der\_props\] to the Definition \[def\_cov\_der\_covect\]. For property , $${({{{{\nabla}_{fX}}{\omega}}})(Y)} = fX\left(\omega(Y)\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{fX}}{Y}},\omega\rangle\!\rangle{}_{\bullet}{}}}} = f {({{{{\nabla}_{X}}{\omega}}})(Y)}.$$ Property follows from $$\begin{array}{lll} {({{{{\nabla}_{X}}{f\omega}}})(Y)} &=& X\left(f\omega(Y)\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}},f\omega\rangle\!\rangle{}_{\bullet}{}}}} \\ &=& X(f)\omega(Y) +fX\left(\omega(Y)\right) - f{{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}},\omega\rangle\!\rangle{}_{\bullet}{}}}}\\ &=& f{({{{{\nabla}_{X}}{\omega}}})(Y)} + X(f) \omega(Y). \end{array}$$ For property , we apply Definition \[def\_cov\_der\_covect\] to $\omega=Y^\flat$. Let $Z\in{{{\mathfrak{X}}}(M)}$. Then, $$\begin{array}{lll} {({{{{\nabla}_{X}}{Y^\flat}}})(Z)} &=& X\left(Y^\flat(Z)\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Z}},Y^\flat\rangle\!\rangle{}_{\bullet}{}}}} \\ &=& X{\langleY,Z\rangle} - {({{{{\nabla}^{\flat}}_{X}}{Z}})(Y)} \\ &=& {({{{{\nabla}^{\flat}}_{X}}{Y}})(Z)}, \\ \end{array}$$ where the last identity follows from Theorem \[thm\_l\_cov\_der\_props\_flinearZ\] property .
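The commutation with the lowering operator, ${{{\nabla}_{X}}{Y^\flat}} = {{{{\nabla}^{\flat}}_{X}}{Y}}$, can also be checked symbolically in coordinates. For a diagonal [[radicalstationary]{}]{} metric the bullet contraction of two [[radicalannihilator]{}]{} $1$-forms reduces to a sum over the directions with $g_{ss}\neq 0$. In the sketch below (an illustration only; the metric $g=\mathrm{diag}(x^4,1)$, degenerate at $x=0$, and the sympy implementation are our own assumptions) the property is verified on coordinate vector fields:

```python
import sympy as sp

x, y = sp.symbols('x y')
co = [x, y]
g = sp.diag(x**4, 1)   # diagonal radical-stationary metric, degenerate at x = 0
n = 2

K = lambda a, b, c: sp.Rational(1, 2) * (
    sp.diff(g[b, c], co[a]) + sp.diff(g[c, a], co[b]) - sp.diff(g[a, b], co[c]))

# bullet contraction of two radical-annihilator 1-forms, for a diagonal metric:
# <<alpha, beta>>_bullet = sum over the directions with g_ss != 0 of
# alpha_s * beta_s / g_ss
def bullet(alpha, beta):
    return sp.cancel(sum(alpha[s] * beta[s] / g[s, s]
                         for s in range(n) if g[s, s] != 0))

# (nabla_a omega)_b = d_a omega_b - << nabla^flat_{d_a} d_b , omega >>_bullet,
# where nabla^flat_{d_a} d_b has components K_{abs}
def cov_der_form(omega, a, b):
    lower = [K(a, b, s) for s in range(n)]
    return sp.simplify(sp.diff(omega[b], co[a]) - bullet(lower, omega))

# property: nabla_X (Y^flat) = nabla^flat_X Y ; take X = d_a, Y = d_c
for a in range(n):
    for c in range(n):
        Yflat = [g[c, s] for s in range(n)]   # components of (d_c)^flat
        for b in range(n):
            assert sp.simplify(cov_der_form(Yflat, a, b) - K(a, c, b)) == 0
```

The assertion is just the coordinate version of the proof above: $\partial_a g_{cb} - {{\mathcal{K}}}_{abc} = {{\mathcal{K}}}_{acb}$.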
Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold, and $${{{{\mathscr{F}}{}^{\bullet}{}}}(M)}=\{f\in{{\mathscr{F}}(M)}|{\textnormal{d}}f\in{{{{\mathcal{A}}{}^{\bullet}{}}}{}^{1}(M)}\}.$$ Then, ${{{{\mathscr{A}}{}^{\bullet}{}}}{}^{k}(M)}$ from Definition \[def\_cov\_der\_smooth\] are ${{{{\mathscr{F}}{}^{\bullet}{}}}(M)}$-modules of differential forms. From Theorem \[thm\_cov\_der\_covect\_props\], property , it follows that for any $f\in{{{{\mathscr{F}}{}^{\bullet}{}}}(M)}$ and $\omega\in{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{k}(M)}$, $f\omega\in{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{k}(M)}$. The covariant derivative of differential forms {#s_cov_der_forms} ---------------------------------------------- We now define the covariant derivative for tensors which are covariant and [[radicalannihilator]{}]{} in all their slots, in particular on differential forms (generalizing the corresponding formulas from the [[nondegenerate]{}]{} case, see [*e.g.* ]{}[@GHLF04][70]{}). \[def\_cov\_der\_cov\_tensors\] Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold.
We define the covariant derivative of tensors of type $(0,s)$ as the operator $${\nabla}:{{{\mathfrak{X}}}(M)} \times \otimes^s_M{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{1}(M)} \to \otimes^s_M{{{{\mathcal{A}}{}^{\bullet}{}}}{}^{1}(M)}$$ acting by $${\nabla}_X(\omega_1\otimes\ldots\otimes\omega_s) := {\nabla}_X(\omega_1)\otimes\ldots\otimes\omega_s +\ldots + \omega_1\otimes\ldots\otimes{\nabla}_X(\omega_s)$$ In particular, \[def\_cov\_der\_forms\] On a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold $(M,g)$ we define the covariant derivative of $k$-differential forms by $${\nabla}:{{{\mathfrak{X}}}(M)} \times {{{{\mathscr{A}}{}^{\bullet}{}}}{}^{k}(M)} \to {{{{\mathcal{A}}{}^{\bullet}{}}}{}^{k}(M)},$$ acting by $${\nabla}_X(\omega_1\wedge\ldots\wedge\omega_k) := {\nabla}_X(\omega_1)\wedge\ldots\wedge\omega_k +\ldots + \omega_1\wedge\ldots\wedge{\nabla}_X(\omega_k)$$ \[thm\_cov\_der\_cov\_tensors\] The covariant derivative of a tensor $T\in\otimes^k_M{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{1}(M)}$ on a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold $(M,g)$ satisfies the formula $$\begin{array}{lll} \left(\nabla_X T\right)(Y_1,\ldots,Y_k) &=& X\left(T(Y_1,\ldots,Y_k)\right) \\ && - \sum_{i=1}^k{{\mathcal{K}}}(X,Y_i,{{{}_\bullet}})T(Y_1,\ldots,{{{}_\bullet}},\ldots,Y_k) \end{array}$$ Because of linearity, it is enough to prove it for the case $$T = \omega_1\otimes_M\ldots\otimes_M\omega_k.$$ From the Definitions \[def\_cov\_der\_cov\_tensors\] and \[def\_cov\_der\_covect\], $$\begin{array}{lll} ({\nabla}_XT)(Y_1,\ldots,Y_k) &=& {\nabla}_X(\omega_1\otimes_M\ldots\otimes_M\omega_k)(Y_1,\ldots,Y_k) \\ &=& {({{{{\nabla}_{X}}{\omega_1}}})(Y_1)}\cdot\ldots\cdot\omega_k(Y_k) +\ldots \\ && + \omega_1(Y_1)\cdot\ldots\cdot{({{{{\nabla}_{X}}{\omega_k}}})(Y_k)} \\ &=& (X(\omega_1(Y_1)) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}}_1,\omega_1\rangle\!\rangle{}_{\bullet}{}}}})\cdot\ldots\cdot\omega_k(Y_k) +\ldots \\ && +
\omega_1(Y_1)\cdot\ldots\cdot(X(\omega_k(Y_k)) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}}_k,\omega_k\rangle\!\rangle{}_{\bullet}{}}}}) \\ &=& X(\omega_1(Y_1))\cdot\ldots\cdot\omega_k(Y_k) + \ldots \\ && + \omega_1(Y_1)\cdot\ldots\cdot X(\omega_k(Y_k)) \\ && - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}}_1,\omega_1\rangle\!\rangle{}_{\bullet}{}}}}\cdot\ldots\cdot\omega_k(Y_k) \\ && - \omega_1(Y_1)\cdot\ldots\cdot{{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Y}}_k,\omega_k\rangle\!\rangle{}_{\bullet}{}}}} \\ &=& X\left(T(Y_1,\ldots,Y_k)\right) \\ && - {\displaystyle}{\sum_{i=1}^k}{{\mathcal{K}}}(X,Y_i,{{{}_\bullet}})T(Y_1,\ldots,{{{}_\bullet}},\ldots,Y_k) \end{array}$$ and the desired formula follows. \[thm\_cov\_der\_forms\] Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold. The covariant derivative of a $k$-differential form $\omega\in{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{k}(M)}$ takes the form $$\begin{array}{lll} \left(\nabla_X\omega\right)(Y_1,\ldots,Y_k) &=& X\left(\omega(Y_1,\ldots,Y_k)\right) \\ && - \sum_{i=1}^k{{\mathcal{K}}}(X,Y_i,{{{}_\bullet}})\omega(Y_1,\ldots,{{{}_\bullet}},\ldots,Y_k) \end{array}$$ Follows from Theorem \[thm\_cov\_der\_cov\_tensors\], by verifying that the antisymmetry property of $\omega$ is maintained. On a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold $(M,g)$, the metric $g$ is parallel: $$\nabla_Xg = 0.$$ Follows from Theorems \[thm\_cov\_der\_cov\_tensors\] and \[thm\_Koszul\_form\_props\], property : $$(\nabla_Xg)(Y,Z) = X{\langleY,Z\rangle} - {{\mathcal{K}}}(X,Y,{{{}_\bullet}})g({{{}_\bullet}},Z) - {{\mathcal{K}}}(X,Z,{{{}_\bullet}})g(Y,{{{}_\bullet}}) = 0.$$ [[Semiregular]{}]{} [[semiRiemannian]{}]{} manifolds {#s_semi_regular} ---------------------------------------------------- An important particular type of [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold is provided by the [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds, introduced below.
\[def\_semi\_regular\_semi\_riemannian\] A *[[semiregular]{}]{} [[semiRiemannian]{}]{} manifold* is a singular [[semiRiemannian]{}]{} manifold $(M,g)$ which satisfies $${{{\nabla}^{\flat}}_{X}} Y \in{{{{\mathscr{A}}{}^{\bullet}{}}}{}^{1}(M)}$$ for any vector fields $X,Y\in{{{\mathfrak{X}}}(M)}$. \[rem\_semi\_regular\_semi\_riemannian\] By Definition \[def\_cov\_der\_smooth\], this is equivalent to saying that for any $X,Y,Z\in{{{\mathfrak{X}}}(M)}$ $${{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}}Z \in {{{{\mathcal{A}}{}^{\bullet}{}}}(M)}.$$ Recall that ${{{{\mathscr{A}}{}^{\bullet}{}}}{}^{1}(M)} \subseteq {{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$. This means that any [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold is also [[radicalstationary]{}]{} ([*cf.* ]{}Definition \[def\_radical\_stationary\_manifold\]). \[thm\_sr\_cocontr\_kosz\] Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold. Then, the manifold $(M,g)$ is [[semiregular]{}]{} if and only if for any $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$ $${{\mathcal{K}}}(X,Y,{{{}_\bullet}}){{\mathcal{K}}}(Z,T,{{{}_\bullet}}) \in {{\mathscr{F}}(M)}.$$ From the Definition \[def\_cov\_der\_covect\] of the covariant derivative of $1$-forms we obtain that $$\begin{array}{lll} {({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}{Z}}}}})(T)} &=& X\left({({{{{\nabla}^{\flat}}_{Y}}{Z}})(T)}\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{T}},{{{{\nabla}^{\flat}}_{Y}}{Z}}\rangle\!\rangle{}_{\bullet}{}}}} \\ &=& X\left({({{{{\nabla}^{\flat}}_{Y}}{Z}})(T)}\right) - {{\mathcal{K}}}(X,T,{{{}_\bullet}}){{\mathcal{K}}}(Y,Z,{{{}_\bullet}}). \\ \end{array}$$ It follows that ${({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}{Z}}}}})(T)}$ is smooth if and only if ${{\mathcal{K}}}(X,T,{{{}_\bullet}}){{\mathcal{K}}}(Y,Z,{{{}_\bullet}})$ is. 
Curvature of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds {#s_riemann_curvature} ================================================================= The standard way to define the curvature invariants is to construct the Levi-Civita connection of the metric [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 59]{})]{}, and from this the curvature operator [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 74]{})]{}. The Ricci tensor and the scalar curvature [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 87–88]{})]{} follow by contraction [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 83]{})]{}. Unfortunately, in the case of singular [[semiRiemannian]{}]{} manifolds the usual road is not available, because there is no intrinsic Levi-Civita connection. But, as we shall see in this section, the Riemann curvature tensor can be obtained from the lower covariant derivative and the covariant derivative of [[radicalannihilator]{}]{} differential forms. For [[radicalstationary]{}]{} manifolds the Riemann curvature tensor thus introduced is guaranteed to be smooth only on the regions of constant signature, but for [[semiregular]{}]{} manifolds it is smooth everywhere. In order to obtain the Ricci curvature tensor, and further the scalar curvature, we need to contract the Riemann curvature tensor in two covariant indices. Because the metric may be degenerate, this covariant contraction can be defined only if the Riemann curvature tensor is [[radicalannihilator]{}]{} in its slots. We will see that this is the case, and in [§\[s\_ricci\_tensor\_scalar\]]{} we define the Ricci tensor and the scalar curvature. Riemann curvature of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds {#ss_riemann_curvature} ------------------------------------------------------------------------- \[def\_riemann\_curvature\_operator\] Let $(M,g)$ be a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold. 
We define the *lower Riemann curvature operator* as $${{\mathcal{R}}^\flat_{}}: {{{\mathfrak{X}}}(M)} ^3 \to {A_d{}^{1}(M)}$$ $$\label{eq_riemann_curvature_operator} {{\mathcal{R}}^\flat_{XY}} Z := {{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}}Z - {{\nabla}_{Y}} {{{{\nabla}^{\flat}}_{X}}}Z - {{{\nabla}^{\flat}}_{[X,Y]}}Z$$ for any vector fields $X,Y,Z\in{{{\mathfrak{X}}}(M)}$. \[def\_riemann\_curvature\] We define the *Riemann curvature tensor* as $$R: {{{\mathfrak{X}}}(M)}\times {{{\mathfrak{X}}}(M)}\times {{{\mathfrak{X}}}(M)}\times {{{\mathfrak{X}}}(M)} \to {\mathbb{R}},$$ $$\label{eq_riemann_curvature} R(X,Y,Z,T) := ({{\mathcal{R}}^\flat_{XY}} Z)(T)$$ for any vector fields $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$. The Riemann curvature tensor from Definition \[def\_riemann\_curvature\] generalizes the Riemann curvature tensor $R(X,Y,Z,T) := {\langleR_{XY}Z,T\rangle}$ known from [[semiRiemannian]{}]{} geometry [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 75]{})]{}. It follows from the Definition \[def\_riemann\_curvature\] that $$\label{eq_riemann_curvature_explicit} R(X,Y,Z,T) = {({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}Z}}})(T)} - {({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}Z}}})(T)} - {({{{{\nabla}^{\flat}}_{[X,Y]}}{Z}})(T)}$$ for any vector fields $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$. \[thm\_riemann\_curvature\_semi\_regular\] Let $(M,g)$ be a [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold. The Riemann curvature is a smooth tensor field $R\in{{\mathcal{T}}{}^{0}_{4}M}$. Remember from Theorem \[thm\_l\_cov\_der\_props\], property that the lower covariant derivative for vector fields is additive and ${\mathbb{R}}$-linear in both of its arguments. From Theorem \[thm\_cov\_der\_covect\_props\], property , we recall that the covariant derivative for differential $1$-forms is additive and ${\mathbb{R}}$-linear in both of its arguments. By combining the two, *the additivity and ${\mathbb{R}}$-linearity* of the Riemann curvature $R$ in all of its four arguments follows.
We will now show that $R$ is ${{\mathscr{F}}(M)}$-linear in its four arguments. The proof goes almost as in the [[nondegenerate]{}]{} case, but we will give it explicitly, because in our proof we need to avoid any use of the Levi-Civita connection or of the inverse of the metric tensor (for example, index raising). We apply the properties of the lower covariant derivative for vector fields, as exposed in Theorem \[thm\_l\_cov\_der\_props\], and those of the covariant derivative for differential $1$-forms, as known from Theorem \[thm\_cov\_der\_covect\_props\], to verify that for any function $f\in{{\mathscr{F}}(M)}$, $R(fX,Y,Z,T)=R(X,fY,Z,T)=R(X,Y,fZ,T)=R(X,Y,Z,fT)=fR(X,Y,Z,T)$. Since $[fX,Y]=f[X,Y]-Y(f)X$, $$\begin{array}{lll} R(fX,Y,Z,T) &=& {({{{{\nabla}_{fX}}{{{{{\nabla}^{\flat}}_{Y}}}Z}}})(T)} - {({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{fX}}}Z}}})(T)} - {({{{{\nabla}^{\flat}}_{[fX,Y]}}{Z}})(T)} \\ &=& f{({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}Z}}})(T)} - {({{{{\nabla}_{Y}}{{(f{{{\nabla}^{\flat}}_{X}}}Z)}}})(T)} \\ && - {({{{{\nabla}^{\flat}}_{f[X,Y]-Y(f)X}}{Z}})(T)} \\ &=& f{({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}Z}}})(T)} - f{({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}Z}}})(T)} \\ && - Y(f){({{{{\nabla}^{\flat}}_{X}}{Z}})(T)} - f{({{{{\nabla}^{\flat}}_{[X,Y]}}{Z}})(T)} \\ && + Y(f){({{{{\nabla}^{\flat}}_{X}}{Z}})(T)} \\ &=& fR(X,Y,Z,T).
\\ \end{array}$$ The Definition \[def\_riemann\_curvature\] implies that $R(X,Y,Z,T)=-R(Y,X,Z,T)$, which leads immediately to $$R(X,fY,Z,T)=fR(X,Y,Z,T).$$ $$\begin{array}{lll} R(X,Y,fZ,T) &=& {({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}{fZ}}}})(T)} - {({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}{fZ}}}})(T)} - {({{{{\nabla}^{\flat}}_{[X,Y]}}{fZ}})(T)} \\ &=& {({{{{\nabla}_{X}}{(f{{{{\nabla}^{\flat}}_{Y}}}{Z}+Y(f)Z)}}})(T)} \\ && - {({{{{\nabla}_{Y}}{(f{{{{\nabla}^{\flat}}_{X}}}{Z}+X(f)Z)}}})(T)}\\ && - (f{{{{\nabla}^{\flat}}_{[X,Y]}}}{Z}+[X,Y](f)Z^\flat)(T)\\ &=& {({{{{\nabla}_{X}}{(f{{{{\nabla}^{\flat}}_{Y}}}{Z})}}})(T)} + {({{{{\nabla}_{X}}{(Y(f)Z^\flat)}}})(T)} \\ && - {({{{{\nabla}_{Y}}{(f{{{{\nabla}^{\flat}}_{X}}}{Z})}}})(T)} - {({{{{\nabla}_{Y}}{(X(f)Z^\flat)}}})(T)} \\ && - f({{{{\nabla}^{\flat}}_{[X,Y]}}}{Z})(T)-[X,Y](f)Z^\flat(T)\\ &=& f{({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}{Z}}}})(T)} + X(f) {({{{{\nabla}^{\flat}}_{Y}}}{Z})}(T) \\ && + X(Y(f)) {(Z^\flat)}(T) + Y(f){({{{{\nabla}_{X}}{Z^\flat}}})(T)} \\ && - f{({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}{Z}}}})(T)} - Y(f) {({{{{\nabla}^{\flat}}_{X}}}{Z})}(T) \\ && - Y(X(f)) {(Z^\flat)}(T) - X(f){({{{{\nabla}_{Y}}{Z^\flat}}})(T)} \\ && - f({{{{\nabla}^{\flat}}_{[X,Y]}}}{Z})(T)-[X,Y](f)Z^\flat(T)\\ &=& fR(X,Y,Z,T). \\ \end{array}$$ The ${{\mathscr{F}}(M)}$-linearity in $T$ follows from the definition of $R$, observing that ${{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}{Z}}}$, ${{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}{Z}}}$ and ${{{{\nabla}^{\flat}}_{[X,Y]}}{Z}}$ are in fact differential $1$-forms. The lower covariant derivative of a smooth vector field is a smooth differential $1$-form on $M$, therefore ${{{{\nabla}^{\flat}}_{X}}}{Z}$, ${{{{\nabla}^{\flat}}_{Y}}}{Z}$ and ${{{{\nabla}^{\flat}}_{[X,Y]}}}{Z}$ are smooth on $M$. It follows that $R$ is also smooth on $M$. 
One can write $${{\mathcal{R}}^\flat_{}}: {{{\mathfrak{X}}}(M)} ^2 \to {{\mathcal{T}}{}^{0}_{2}M}$$ $${{\mathcal{R}}^\flat_{XY}} := {{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}} - {{\nabla}_{Y}} {{{{\nabla}^{\flat}}_{X}}} - {{{\nabla}^{\flat}}_{[X,Y]}},$$ with the amendment that $${{\mathcal{R}}^\flat_{XY}}(Z,T) := ({{\mathcal{R}}^\flat_{XY}}Z)(T)$$ for any $Z,T\in{{{\mathfrak{X}}}(M)}$. The symmetries of the Riemann curvature tensor {#s_riemann_curvature_symmetries} ---------------------------------------------- The following proposition generalizes well-known symmetry properties of the Riemann curvature tensor of a [[nondegenerate]{}]{} metric [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 75]{})]{} to [[semiregular]{}]{} metrics. The proofs are similar to the [[nondegenerate]{}]{} case, except that they avoid using the covariant derivative and the index raising, so we prefer to give them explicitly. \[thm\_curv\_symm\] Let $(M,g)$ be a [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold. Then, for any $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$, the Riemann curvature has the following symmetry properties 1. \[thm\_curv\_symm\_xy\] ${{\mathcal{R}}^\flat_{XY}} = -{{\mathcal{R}}^\flat_{YX}}$ 2. \[thm\_curv\_symm\_zt\] ${{\mathcal{R}}^\flat_{XY}}(Z,T) = -{{\mathcal{R}}^\flat_{XY}}(T,Z)$ 3. \[thm\_curv\_symm\_xyz\] ${{\mathcal{R}}^\flat_{YZ}} X + {{\mathcal{R}}^\flat_{ZX}} Y + {{\mathcal{R}}^\flat_{XY}} Z = 0$ 4. \[thm\_curv\_symm\_xy\_zt\] ${{\mathcal{R}}^\flat_{XY}}(Z,T) = {{\mathcal{R}}^\flat_{ZT}}(X,Y)$ Follows from the Definition \[def\_riemann\_curvature\_operator\]: $$\begin{array}{lll} {{\mathcal{R}}^\flat_{XY}} Z&=& {{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}}Z - {{\nabla}_{Y}} {{{{\nabla}^{\flat}}_{X}}}Z - {{{\nabla}^{\flat}}_{[X,Y]}}Z \\ &=&-{{\mathcal{R}}^\flat_{YX}} Z \end{array}$$ This is equivalent to $${{\mathcal{R}}^\flat_{XY}}(V,V)=0$$ for any $V\in{{{\mathfrak{X}}}(M)}$. 
From the property of the lower covariant derivative of being metric (Theorem \[thm\_l\_cov\_der\_props\], property ) it follows that $${({{{{\nabla}^{\flat}}_{[X,Y]}}{V}})(V)}=\frac 1 2[X,Y]{\langleV,V\rangle}$$ and $$X({({{{{\nabla}^{\flat}}_{Y}}{V}})(V)}) = {{\displaystyle}{\frac{1}{2}}} XY{\langleV,V\rangle}.$$ From the Definition \[def\_cov\_der\_covect\] of the covariant derivative of $1$-forms we obtain that $${({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}{V}}}}})(V)} = X\left({({{{{\nabla}^{\flat}}_{Y}}{V}})(V)}\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{V}},{{{{\nabla}^{\flat}}_{Y}}{V}}\rangle\!\rangle{}_{\bullet}{}}}}.$$ By combining them we get $${({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}{V}}}}})(V)} = {{\displaystyle}{\frac{1}{2}}} XY{\langleV,V\rangle} - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{V}},{{{{\nabla}^{\flat}}_{Y}}{V}}\rangle\!\rangle{}_{\bullet}{}}}}.$$ Therefore, $$\begin{array}{lll} {{\mathcal{R}}^\flat_{XY}}(V,V) &=& {({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}V}}})(V)} - {({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}V}}})(V)} - {({{{{\nabla}^{\flat}}_{[X,Y]}}{V}})(V)} \\ &=& {{\displaystyle}{\frac{1}{2}}}X\left({({{{{\nabla}^{\flat}}_{Y}}{V}})(V)}\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{V}},{{{{\nabla}^{\flat}}_{Y}}{V}}\rangle\!\rangle{}_{\bullet}{}}}} \\ && - {{\displaystyle}{\frac{1}{2}}}Y\left({({{{{\nabla}^{\flat}}_{X}}{V}})(V)}\right) + {{{\langle\!\langle{{{{\nabla}^{\flat}}_{Y}}{V}},{{{{\nabla}^{\flat}}_{X}}{V}}\rangle\!\rangle{}_{\bullet}{}}}} \\ && - \frac 1 2[X,Y]{\langleV,V\rangle} = 0\\ \end{array}$$ As the proof of this identity usually goes, we define the cyclic sum for any $F:{{{\mathfrak{X}}}(M)}^3\to{{\mathcal{A}}^{1}(M)}$ by $$\begin{array}{l} \sum_{{\circlearrowleft}}F(X,Y,Z):=F(X,Y,Z)+F(Y,Z,X)+F(Z,X,Y) \end{array}$$ and observe that it doesn’t change at cyclic permutations of $X,Y,Z$. 
Then, from the properties of the lower covariant derivative and from Jacobi’s identity, $$\begin{array}{lll} \sum_{{\circlearrowleft}}{{\mathcal{R}}^\flat_{XY}} Z &=& \sum_{{\circlearrowleft}}{{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}}Z - \sum_{{\circlearrowleft}}{{\nabla}_{Y}} {{{{\nabla}^{\flat}}_{X}}}Z - \sum_{{\circlearrowleft}}{{{\nabla}^{\flat}}_{[X,Y]}}Z\\ &=& \sum_{{\circlearrowleft}}{{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}}Z - \sum_{{\circlearrowleft}}{{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Z}}}Y - \sum_{{\circlearrowleft}}{{{\nabla}^{\flat}}_{[X,Y]}}Z\\ &=& \sum_{{\circlearrowleft}}{{\nabla}_{X}} \left({{{{\nabla}^{\flat}}_{Y}}}Z - {{{{\nabla}^{\flat}}_{Z}}}Y\right) - \sum_{{\circlearrowleft}}{{{\nabla}^{\flat}}_{[X,Y]}}Z\\ &=& \sum_{{\circlearrowleft}}{{\nabla}_{X}} [Y,Z]^\flat - \sum_{{\circlearrowleft}}{{{\nabla}^{\flat}}_{[X,Y]}}Z\\ &=& \sum_{{\circlearrowleft}}{{\nabla}_{X}}^\flat [Y,Z] - \sum_{{\circlearrowleft}}{{{\nabla}^{\flat}}_{[Y,Z]}}X\\ &=& \sum_{{\circlearrowleft}}[X,[Y,Z]]^\flat = 0.\\ \end{array}$$ To show we apply four times (as in the usual proof of the properties of the curvature): $$\begin{array}{lllllll} {{\mathcal{R}}^\flat_{XY}}(Z,T) &+& {{\mathcal{R}}^\flat_{YZ}}(X,T) &+& {{\mathcal{R}}^\flat_{ZX}}(Y,T) &=& 0 \\ {{\mathcal{R}}^\flat_{YZ}}(T,X) &+& {{\mathcal{R}}^\flat_{ZT}}(Y,X) &+& {{\mathcal{R}}^\flat_{TY}}(Z,X) &=& 0 \\ {{\mathcal{R}}^\flat_{ZT}}(X,Y) &+& {{\mathcal{R}}^\flat_{TX}}(Z,Y) &+& {{\mathcal{R}}^\flat_{XZ}}(T,Y) &=& 0 \\ {{\mathcal{R}}^\flat_{TX}}(Y,Z) &+& {{\mathcal{R}}^\flat_{XY}}(T,Z) &+& {{\mathcal{R}}^\flat_{YT}}(X,Z) &=& 0 \\ \end{array}$$ then sum up, divide by $2$ and get: $${{\mathcal{R}}^\flat_{XY}}(Z,T) = {{\mathcal{R}}^\flat_{ZT}}(X,Y).$$ \[thm\_curvature\_tensor\_radical\] For any $X,Y,Z\in {{{\mathfrak{X}}}(M)}$ and $W\in{{{\mathfrak{X}}}_\circ(M)}$, the Riemann curvature tensor $R$ satisfies $$R(W,X,Y,Z) = R(X,W,Y,Z) = R(X,Y,W,Z) = R(X,Y,Z,W) = 0.$$ From the Remark \[rem\_semi\_regular\_semi\_riemannian\], 
${{\nabla}_{X}} {{{{\nabla}^{\flat}}_{Y}}}Z \in {{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$, and from the Remark \[rem\_rad\_stat\_lower\_der\], ${{{{\nabla}^{\flat}}_{X}}{Y}}\in{{{{\mathcal{A}}{}^{\bullet}{}}}(M)}$, for any $X,Y,Z\in{{{\mathfrak{X}}}(M)}$. Therefore, $R(X,Y,Z,W)=0$. From the symmetry properties and from Theorem \[thm\_curv\_symm\], this property extends to all other slots of the Riemann curvature tensor. \[thm\_curv\_annih\] Let $(M,g)$ be a [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold. Then, for any $X,Y\in{{{\mathfrak{X}}}(M)}$, $ {{\mathcal{R}}^\flat_{XY}}\in{{{{\mathcal{A}}{}^{\bullet}{}}}{}^{2}(M)}$ (${{\mathcal{R}}^\flat_{XY}}$ is a [[radicalannihilator]{}]{}). Follows from Corollary \[thm\_curvature\_tensor\_radical\]. Ricci curvature tensor and scalar curvature {#s_ricci_tensor_scalar} ------------------------------------------- In [[nondegenerate]{}]{} [[semiRiemannian]{}]{} geometry, the Ricci tensor is obtained by tracing the Riemann curvature, and the scalar curvature by tracing the Ricci tensor [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 87–88]{})]{}. In the degenerate case, an invariant contraction can be performed only on [[radicalannihilator]{}]{} slots. Fortunately, this is the case for the Riemann tensor even when the metric is degenerate (Corollary \[thm\_curvature\_tensor\_radical\]), so it is possible to define the Ricci tensor as: \[def\_ricci\_curvature\_tensor\] Let $(M,g)$ be a [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifold with constant signature. The *Ricci curvature tensor* is defined as the covariant contraction of the Riemann curvature tensor $${{\textnormal}{Ric}}(X,Y):=R(X,{{{}_\bullet}},Y,{{{}_\bullet}})$$ for any $X,Y\in{{{\mathfrak{X}}}(M)}$. The symmetry of the Ricci tensor works just like in the [[nondegenerate]{}]{} case [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p.
87]{})]{}: The Ricci curvature tensor on a [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifold with constant signature is symmetric: $${{\textnormal}{Ric}}(X,Y)={{\textnormal}{Ric}}(Y,X)$$ for any $X,Y\in{{{\mathfrak{X}}}(M)}$. Proposition \[thm\_curv\_symm\] states that $R(X,Y,Z,T)=R(Z,T,X,Y)$ for any $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$. Therefore, ${{\textnormal}{Ric}}(X,Y)={{\textnormal}{Ric}}(Y,X)$. The scalar curvature is obtained from the Ricci tensor as in the [[nondegenerate]{}]{} case [([*cf.* ]{}[*e.g.* ]{}[[@ONe83], p. 88]{})]{}: \[def\_scalar\_curvature\] Let $(M,g)$ be a [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifold with constant signature. The *scalar curvature* is defined as the covariant contraction of the Ricci curvature tensor $$s:={{\textnormal}{Ric}}({{{}_\bullet}},{{{}_\bullet}}).$$ The Ricci and the scalar curvatures are smooth on [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifolds whose metric has constant signature. For [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds, the Ricci and scalar curvatures are smooth in the regions of constant signature, and become in general divergent as we approach the points where the signature changes. Curvature of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds II {#s_riemann_curvature_ii} ==================================================================== This section contains some complements on the Riemann curvature tensor of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds. A useful formula of this curvature in terms of the Koszul form is provided in [§\[s\_riemann\_curvature\_koszul\_formula\]]{}.
In the subsection [§\[s\_koszul\_deriv\_curv\_funct\]]{} we recall some results from [@Kup87b] concerning the (non-unique) Koszul derivative ${\nabla}$ and the associated curvature function $R_{\nabla}$, and show that ${\langleR_{\nabla}(\_,\_)\_,\_\rangle}$ coincides with the Riemann curvature tensor given in this article in [§\[s\_riemann\_curvature\]]{}. Riemann curvature in terms of the Koszul form {#s_riemann_curvature_koszul_formula} --------------------------------------------- \[thm\_riemann\_curvature\_tensor\_koszul\_formula\] For any vector fields $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$ on a [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold $(M,g)$: $$\begin{array}{lll} R(X,Y,Z,T) &=& X\left({({{{{\nabla}^{\flat}}_{Y}}{Z}})(T)}\right) - Y\left({({{{{\nabla}^{\flat}}_{X}}{Z}})(T)}\right) - {({{{{\nabla}^{\flat}}_{[X,Y]}}{Z}})(T)} \\ && + {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Z}},{{{{\nabla}^{\flat}}_{Y}}{T}}\rangle\!\rangle{}_{\bullet}{}}}} - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{Y}}{Z}},{{{{\nabla}^{\flat}}_{X}}{T}}\rangle\!\rangle{}_{\bullet}{}}}} \\ \end{array}$$ and, alternatively, $$\label{eq_riemann_curvature_tensor_koszul_formula} \begin{array}{lll} R(X,Y,Z,T)&=& X {{\mathcal{K}}}(Y,Z,T) - Y {{\mathcal{K}}}(X,Z,T) - {{\mathcal{K}}}([X,Y],Z,T)\\ && + {{\mathcal{K}}}(X,Z,{{{}_\bullet}}){{\mathcal{K}}}(Y,T,{{{}_\bullet}}) - {{\mathcal{K}}}(Y,Z,{{{}_\bullet}}){{\mathcal{K}}}(X,T,{{{}_\bullet}}) \end{array}$$ From the Definition \[def\_cov\_der\_covect\] of the covariant derivative of $1$-forms we obtain that $${({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}{Z}}}}})(T)} = X\left({({{{{\nabla}^{\flat}}_{Y}}{Z}})(T)}\right) - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{T}},{{{{\nabla}^{\flat}}_{Y}}{Z}}\rangle\!\rangle{}_{\bullet}{}}}},$$ therefore $$\begin{array}{lll} R(X,Y,Z,T) &=& {({{{{\nabla}_{X}}{{{{{\nabla}^{\flat}}_{Y}}}Z}}})(T)} - {({{{{\nabla}_{Y}}{{{{{\nabla}^{\flat}}_{X}}}Z}}})(T)} - {({{{{\nabla}^{\flat}}_{[X,Y]}}{Z}})(T)} \\ &=&
X\left({({{{{\nabla}^{\flat}}_{Y}}{Z}})(T)}\right) - Y\left({({{{{\nabla}^{\flat}}_{X}}{Z}})(T)}\right) - {({{{{\nabla}^{\flat}}_{[X,Y]}}{Z}})(T)} \\ && + {{{\langle\!\langle{{{{\nabla}^{\flat}}_{X}}{Z}},{{{{\nabla}^{\flat}}_{Y}}{T}}\rangle\!\rangle{}_{\bullet}{}}}} - {{{\langle\!\langle{{{{\nabla}^{\flat}}_{Y}}{Z}},{{{{\nabla}^{\flat}}_{X}}{T}}\rangle\!\rangle{}_{\bullet}{}}}} \\ \end{array}$$ for any vector fields $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$. The second formula follows from the definition of the lower derivative of vector fields. In a coordinate basis, the components of the Riemann curvature tensor are given by $$\label{eq_riemann_curvature_tensor_coord} R_{abcd}= \partial_a {{\mathcal{K}}}_{bcd} - \partial_b {{\mathcal{K}}}_{acd} + {{{g{}_{\bullet}{}}}}^{st}({{\mathcal{K}}}_{acs}{{\mathcal{K}}}_{bdt} - {{\mathcal{K}}}_{bcs}{{\mathcal{K}}}_{adt}).$$ $$\begin{array}{lll} R_{abcd}&:=& R(\partial_a,\partial_b,\partial_c,\partial_d)\\ &=& \partial_a {{\mathcal{K}}}(\partial_b,\partial_c,\partial_d) - \partial_b {{\mathcal{K}}}(\partial_a,\partial_c,\partial_d) - {{\mathcal{K}}}([\partial_a,\partial_b],\partial_c,\partial_d)\\ && + {{\mathcal{K}}}(\partial_a,\partial_c,{{{}_\bullet}}){{\mathcal{K}}}(\partial_b,\partial_d,{{{}_\bullet}}) - {{\mathcal{K}}}(\partial_b,\partial_c,{{{}_\bullet}}){{\mathcal{K}}}(\partial_a,\partial_d,{{{}_\bullet}})\\ &=& \partial_a {{\mathcal{K}}}_{bcd} - \partial_b {{\mathcal{K}}}_{acd} + {{{g{}_{\bullet}{}}}}^{st}({{\mathcal{K}}}_{acs}{{\mathcal{K}}}_{bdt} - {{\mathcal{K}}}_{bcs}{{\mathcal{K}}}_{adt}) \end{array}$$ where the term containing $[\partial_a,\partial_b]$ vanishes because coordinate vector fields commute. Relation with Kupeli’s curvature function {#s_koszul_deriv_curv_funct} ----------------------------------------- Through the work of Demir Kupeli [@Kup87b] we have seen that for a [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifold (with constant signature) $(M,g)$ there is always a Koszul derivative ${\nabla}$, from whose curvature function $R_{\nabla}$ we can construct a tensor field
${\langleR_{\nabla}(\_,\_)\_,\_\rangle}$. We may wonder how ${\langleR_{\nabla}(\_,\_)\_,\_\rangle}$ is related to the Riemann curvature tensor from the Definition \[def\_riemann\_curvature\]. We will see that they coincide for a [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifold. \[def\_Koszul\_derivative\] A *Koszul derivative* on a [[radicalstationary]{}]{} [[semiRiemannian]{}]{} manifold with constant signature is an operator ${\nabla}:{{{\mathfrak{X}}}(M)}\times{{{\mathfrak{X}}}(M)}\to {{{\mathfrak{X}}}(M)}$ which satisfies the *Koszul formula* $$\label{eq_Koszul_formula} \begin{array}{llll} {\langle{\nabla}_X Y,Z\rangle} &=& {{\mathcal{K}}}(X,Y,Z). \end{array}$$ The Koszul derivative corresponds, for the [[nondegenerate]{}]{} case, to the Levi-Civita connection. \[def\_curvature\_function\] The *curvature function* $R_{\nabla}: {{{\mathfrak{X}}}(M)}\times{{{\mathfrak{X}}}(M)}\times{{{\mathfrak{X}}}(M)}\to {{{\mathfrak{X}}}(M)}$ of a Koszul derivative ${\nabla}$ on a singular [[semiRiemannian]{}]{} manifold with constant signature $(M,g)$ is defined by $$\label{eq_curvature_function} R_{\nabla}(X,Y)Z:={\nabla}_X{\nabla}_Y Z - {\nabla}_Y {\nabla}_X Z - {\nabla}_{[X,Y]}Z.$$ In [@Kup87b][266-268]{} it is shown that ${\langleR_{\nabla}(\_,\_)\_,\_\rangle}\in{{\mathcal{T}}{}^{0}_{4}M}$ and it has the same symmetry properties as the Riemann curvature tensor of a Levi-Civita connection. Let $(M,g)$ be a [[radicalstationary]{}]{} singular [[semiRiemannian]{}]{} manifold with constant signature, and ${\nabla}$ a Koszul derivative on $M$. The Riemann curvature tensor is related to the curvature function by $${\langleR_{\nabla}(X,Y)Z,T\rangle} = R(X,Y,Z,T)$$ for any $X,Y,Z,T\in{{{\mathfrak{X}}}(M)}$.
From Theorem \[thm\_Koszul\_form\_props\] and Definition \[def\_curvature\_function\], applying the property of contraction with the metric from Lemma \[thm\_contraction\_with\_metric\] and the Koszul formula for the Riemann curvature tensor , we obtain $$\begin{array}{lll} {\langleR_{\nabla}(X,Y)Z,T\rangle} &=&{\langle{\nabla}_X {\nabla}_Y Z,T\rangle} - {\langle{\nabla}_Y {\nabla}_X Z,T\rangle} - {\langle{\nabla}_{[X,Y]}Z,T\rangle}\\ &=&X {\langle{\nabla}_YZ,T\rangle} - {\langle{\nabla}_Y Z, {\nabla}_X T\rangle} \\ && - Y {\langle{\nabla}_X Z,T\rangle} + {\langle{\nabla}_X Z, {\nabla}_Y T\rangle} - {\langle{\nabla}_{[X,Y]} Z,T\rangle} \\ &=& X {{\mathcal{K}}}(Y,Z,T) - {{\mathcal{K}}}(Y,Z,{{{}_\bullet}}){{\mathcal{K}}}(X,T,{{{}_\bullet}}) \\ && - Y {{\mathcal{K}}}(X,Z,T) + {{\mathcal{K}}}(X,Z,{{{}_\bullet}}){{\mathcal{K}}}(Y,T,{{{}_\bullet}})\\ && - {{\mathcal{K}}}([X,Y],Z,T) \\ &=& R(X,Y,Z,T) \end{array}$$ Examples of [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds {#s_semi_reg_semi_riem_man_example} ================================================================ Diagonal metric {#s_semi_reg_semi_riem_man_example_diagonal} --------------- Let $(M,g)$ be a singular [[semiRiemannian]{}]{} manifold with variable signature having the property that for each point $p\in M$ there is a local coordinate system around $p$ in which the metric takes a diagonal form $g={\textnormal{diag}}{(g_{11},\ldots,g_{nn})}$. According to equation , $2{{\mathcal{K}}}_{abc}=\partial_a g_{bc} + \partial_b g_{ca} - \partial_c g_{ab}$, but since $g$ is diagonal, we have only the following possibilities: ${{\mathcal{K}}}_{baa} = {{\mathcal{K}}}_{aba} = -{{\mathcal{K}}}_{aab} = \frac 1 2\partial_b g_{aa}$, for $a\neq b$, and ${{\mathcal{K}}}_{aaa} = \frac 1 2\partial_a g_{aa}$. The manifold $(M,g)$ is [[radicalstationary]{}]{} if and only if whenever $g_{aa}=0$, $\partial_b g_{aa} = \partial_a g_{bb} = 0$. 
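The coordinate formula $R_{abcd}= \partial_a {{\mathcal{K}}}_{bcd} - \partial_b {{\mathcal{K}}}_{acd} + g^{st}({{\mathcal{K}}}_{acs}{{\mathcal{K}}}_{bdt} - {{\mathcal{K}}}_{bcs}{{\mathcal{K}}}_{adt})$ lends itself to a quick symbolic spot-check. The sketch below is an illustration only (not part of the text): it builds the Koszul form of a nondegenerate diagonal metric and evaluates the curvature for two classical surfaces. Note that in the convention used here ($Z$ in the third slot of $R$), the unit sphere has $R_{\theta\varphi\theta\varphi}=-\sin^2\theta$.

```python
import sympy as sp

def riemann_from_koszul(g, coords):
    """R_{abcd} built directly from the Koszul form
    K_abc = (1/2)(d_a g_bc + d_b g_ca - d_c g_ab)."""
    n = len(coords)
    ginv = g.inv()
    K = [[[sp.Rational(1, 2)*(sp.diff(g[b, c], coords[a]) + sp.diff(g[c, a], coords[b])
           - sp.diff(g[a, b], coords[c])) for c in range(n)] for b in range(n)] for a in range(n)]
    def R(a, b, c, d):
        val = sp.diff(K[b][c][d], coords[a]) - sp.diff(K[a][c][d], coords[b])
        val += sum(ginv[s, t]*(K[a][c][s]*K[b][d][t] - K[b][c][s]*K[a][d][t])
                   for s in range(n) for t in range(n))
        return sp.simplify(val)
    return R

r, th, ph = sp.symbols('r theta phi', positive=True)

# flat plane in polar coordinates: the curvature must vanish
Rpolar = riemann_from_koszul(sp.diag(1, r**2), [r, th])
assert Rpolar(0, 1, 0, 1) == 0

# unit sphere: R_{theta phi theta phi} = -sin^2(theta) in this convention
Rsph = riemann_from_koszul(sp.diag(1, sp.sin(th)**2), [th, ph])
assert sp.simplify(Rsph(0, 1, 0, 1) + sp.sin(th)**2) == 0
```

For diagonal metrics the only nonvanishing Koszul components are exactly the ones listed above, so the two checks also exercise that observation.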
According to Proposition $\ref{thm_sr_cocontr_kosz}$, the manifold $(M,g)$ is [[semiregular]{}]{} if and only if $$\label{eq_diag_g_contr_kosz} \sum_{\genfrac{}{}{0pt}{}{s\in\{1,\ldots,n\}}{g_{ss}\neq 0}} {{\displaystyle}{\frac{\partial_a g_{ss}\partial_b g_{ss}}{g_{ss}}}}, \sum_{\genfrac{}{}{0pt}{}{s\in\{1,\ldots,n\}}{g_{ss}\neq 0}} {{\displaystyle}{\frac{\partial_s g_{aa}\partial_s g_{bb}}{g_{ss}}}}, \sum_{\genfrac{}{}{0pt}{}{s\in\{1,\ldots,n\}}{g_{ss}\neq 0}} {{\displaystyle}{\frac{\partial_a g_{ss}\partial_s g_{bb}}{g_{ss}}}}$$ are all smooth. One way to ensure this, for instance, is if the functions $u,v:M\to{\mathbb{R}}$ defined as $$u(p):=\Bigg\{ \begin{array}{ll} {{\displaystyle}{\frac{\partial_b g_{aa}}{\sqrt{{\left|g_{aa}\right|}}}}} & g_{aa}\neq 0 \\ 0 & g_{aa}= 0 \\ \end{array} {\textnormal}{ and } v(p):=\Bigg\{ \begin{array}{ll} {{\displaystyle}{\frac{\partial_a g_{bb}}{\sqrt{{\left|g_{aa}\right|}}}}} & g_{aa}\neq 0 \\ 0 & g_{aa}= 0 \\ \end{array}$$ and $\sqrt{{\left|g_{aa}\right|}}$ are smooth for all $a,b\in\{1,\ldots,n\}$. In this case it is easy to see that all the terms of the sums in equation are smooth. Conformally-[[nondegenerate]{}]{} metrics {#s_semi_reg_semi_riem_man_example_conformal} ----------------------------------------- Another class of [[semiregular]{}]{} metrics is given by those that can be obtained by a conformal transformation [([*cf.* ]{}[*e.g.* ]{}[[@HE95], p. 42]{})]{} from [[nondegenerate]{}]{} metrics. A singular [[semiRiemannian]{}]{} manifold $(M,g)$ is said to be *conformally [[nondegenerate]{}]{}* if there is a [[nondegenerate]{}]{} [[semiRiemannian]{}]{} metric $\tilde g$ on $M$ and a smooth function $\Omega\in{{\mathscr{F}}(M)}$, $\Omega\geq 0$, so that $g(X,Y)=\Omega^2\tilde g(X,Y)$ for any $X,Y\in{{{\mathfrak{X}}}(M)}$. The manifold $(M,g)$ is alternatively denoted by $(M,\tilde g, \Omega)$.
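Returning to the diagonal criterion above, it can be exercised symbolically on a toy signature-degenerating metric. The sketch below is an illustration under the assumption $g=\textnormal{diag}(1,t^2)$ (degenerate on the hypersurface $t=0$): it forms the three contraction sums and checks that each quotient cancels to a polynomial, i.e. extends smoothly through the degeneracy.

```python
import sympy as sp

t, x = sp.symbols('t x')
coords = [t, x]
g = sp.diag(1, t**2)   # toy metric, degenerate at t = 0
n = 2

# the three contraction sums from the semiregularity criterion; on the
# dense set where g_ss != 0 the index s ranges over all of {0, 1}
def sums(a, b):
    s1 = sum(sp.diff(g[s, s], coords[a])*sp.diff(g[s, s], coords[b])/g[s, s] for s in range(n))
    s2 = sum(sp.diff(g[a, a], coords[s])*sp.diff(g[b, b], coords[s])/g[s, s] for s in range(n))
    s3 = sum(sp.diff(g[s, s], coords[a])*sp.diff(g[b, b], coords[s])/g[s, s] for s in range(n))
    return [sp.cancel(e) for e in (s1, s2, s3)]

# every quotient cancels to a polynomial => extends smoothly through t = 0
for a in range(n):
    for b in range(n):
        for e in sums(a, b):
            assert e.is_polynomial(t, x), e
```

For instance the first sum with $a=b=1$ (the $t$ direction, indexed 0 here being $t$) gives $(2t)^2/t^2 = 4$, so the potential pole at $t=0$ cancels exactly as the criterion requires.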
The following proposition shows what happens to the Koszul form at a conformal transformation of the metric, similar to the [[nondegenerate]{}]{} case [([*cf.* ]{}[*e.g.* ]{}[[@HE95], p. 42]{})]{}. \[thm\_conformal\_koszul\_form\] Let $(M,\tilde g, \Omega)$ be a conformally [[nondegenerate]{}]{} singular [[semiRiemannian]{}]{} manifold. Then, the Koszul form ${{\mathcal{K}}}$ of $g$ is related to the Koszul form $\tilde {{\mathcal{K}}}$ of $\tilde g$ by: $$\label{eq_conformal_koszul_form} {{\mathcal{K}}}(X,Y,Z) = \Omega^2\tilde {{\mathcal{K}}}(X,Y,Z) + \Omega\left[\tilde g(Y,Z)X + \tilde g(X,Z)Y - \tilde g(X,Y)Z\right](\Omega)$$ From the Koszul formula we obtain $$\begin{array}{llll} {{\mathcal{K}}}(X,Y,Z) &=&{\displaystyle}{\frac 1 2} \{ X (\Omega^2\tilde g(Y,Z)) + Y (\Omega^2\tilde g(Z,X)) - Z (\Omega^2\tilde g(X,Y)) \\ &&\ - \Omega^2\tilde g(X,[Y,Z]) + \Omega^2\tilde g(Y, [Z,X]) + \Omega^2\tilde g(Z, [X,Y])\} \\ &=&{\displaystyle}{\frac 1 2} \{ \Omega^2X (\tilde g(Y,Z)) + \tilde g(Y,Z)X(\Omega^2) + \Omega^2Y (\tilde g(X,Z)) \\ && + \tilde g(X,Z)Y(\Omega^2) - \Omega^2Z (\tilde g(X,Y)) - \tilde g(X,Y)Z(\Omega^2) \\ &&\ - \Omega^2\tilde g(X,[Y,Z]) + \Omega^2\tilde g(Y, [Z,X]) + \Omega^2\tilde g(Z, [X,Y])\} \\ &=& \Omega^2 \tilde {{\mathcal{K}}}(X,Y,Z) + {\displaystyle}{\frac 1 2} \{ \tilde g(Y,Z)X(\Omega^2) \\ && + \tilde g(X,Z)Y(\Omega^2) - \tilde g(X,Y)Z(\Omega^2)\} \\ &=& \Omega^2\tilde {{\mathcal{K}}}(X,Y,Z) + \Omega\big[\tilde g(Y,Z)X \\ && + \tilde g(X,Z)Y - \tilde g(X,Y)Z\big](\Omega) \end{array}$$ \[thm\_conformal\_semi\_regular\] Let $(M,\tilde g, \Omega)$ be a singular [[semiRiemannian]{}]{} manifold which is conformally [[nondegenerate]{}]{}. Then, $(M,g=\Omega^2\tilde g)$ is a [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold. The metric $g$ is either [[nondegenerate]{}]{}, or it is $0$. Therefore, the manifold $(M,g)$ is [[radicalstationary]{}]{}. 
Let $(E_a)_{a=1}^n$ be a local frame of vector fields on an open set $U\subseteq M$, which is orthonormal with respect to the [[nondegenerate]{}]{} metric $\tilde g$. Then, the metric $g$ is diagonal in $(E_a)_{a=1}^n$. Proposition \[thm\_conformal\_koszul\_form\] implies that the Koszul form has the form ${{\mathcal{K}}}(X,Y,Z) = \Omega h(X,Y,Z)$, where $$h(X,Y,Z) = \Omega \tilde {{\mathcal{K}}}(X,Y,Z) + \left[\tilde g(Y,Z)X + \tilde g(X,Z)Y - \tilde g(X,Y)Z\right](\Omega)$$ is a smooth function depending on $X,Y,Z$. Moreover, if $\Omega=0$, then $h(X,Y,Z)=0$ as well, because the first term is a multiple of $\Omega$, and the second is a partial derivative of $\Omega$, which vanishes at the points where $\Omega\geq 0$ reaches its minimum $0$. Theorem \[thm\_contraction\_orthogonal\] says that, on the regions of constant signature, if $r=n-{\textnormal{rank }}g+1$, for any vector fields $X,Y,Z,T$ on $U$ and for any $a\in\{1,\ldots,n\}$, $$\begin{array}{lll} {{\mathcal{K}}}(X,Y,{{{}_\bullet}}){{\mathcal{K}}}(Z,T,{{{}_\bullet}}) &=& \sum_{a=r}^n {{\displaystyle}{\frac{{{\mathcal{K}}}(X,Y,E_a){{\mathcal{K}}}(Z,T,E_a)}{g(E_a,E_a)}}} \\ &=& \sum_{a=r}^n {{\displaystyle}{\frac{\Omega^2 h(X,Y,E_a) h(Z,T,E_a)}{\Omega^2\tilde g(E_a,E_a)}}} \\ &=& \sum_{a=1}^n {{\displaystyle}{\frac{h(X,Y,E_a) h(Z,T,E_a)}{\tilde g(E_a,E_a)}}}. \\ \end{array}$$ If $\Omega=0$, then $h(X,Y,Z)=0$, therefore the last member does not depend on $r$. It follows that ${{\mathcal{K}}}(X,Y,{{{}_\bullet}}){{\mathcal{K}}}(Z,T,{{{}_\bullet}})\in{{\mathscr{F}}(M)}$, and according to Proposition \[thm\_sr\_cocontr\_kosz\], $(M,g)$ is [[semiregular]{}]{}. Einstein’s equation on [[semiregular]{}]{} spacetimes {#s_einstein_tensor_densitized} ===================================================== The problem of singularities {#s_intro_singularities} ---------------------------- In 1965 Roger Penrose [@Pen65], and later he and S. Hawking [@Haw66i; @Haw66ii; @Haw67iii; @HP70; @HE95], proved a set of *singularity theorems*.
These theorems state that under reasonable conditions the spacetime turns out to be *geodesically incomplete* – [*i.e.* ]{}it has *singularities*. Consequently, some researchers proclaimed that General Relativity predicts its own breakdown by predicting the singularities [@HP70; @Haw76; @ASH91; @HP96; @Ash08; @Ash09]. Hawking’s discovery of black hole evaporation, leading to his *information loss paradox* [@Haw73; @Haw76], made things even worse. The singularities seem to destroy information, in particular violating the unitary evolution of quantum systems. The reason is that the field equations cannot be continued through singularities. By applying the results presented in this article we shall see that, at least for [[semiregular]{}]{} [[semiRiemannian]{}]{} manifolds, we can extend Einstein’s equation through the singularities. Einstein’s equation is replaced by a densitized version which is equivalent to the standard version if the metric is [[nondegenerate]{}]{}. This equation remains smooth at singularities, which now become harmless. Einstein’s equation on [[semiregular]{}]{} spacetimes {#ss_einstein_tensor_densitized} ----------------------------------------------------- To define the Einstein tensor on a [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold, we normally make use of the Ricci tensor and the scalar curvature: $$\label{eq_einstein_tensor} G:={{\textnormal}{Ric}}-\frac 1 2 s g$$ These two quantities can be defined even for a degenerate metric, so long as the metric does not change its signature (see [§\[s\_ricci\_tensor\_scalar\]]{}), but at the points where the signature changes, they can become infinite. \[def\_semi\_reg\_spacetime\] A *[[semiregular]{}]{} spacetime* is a four-dimensional [[semiregular]{}]{} [[semiRiemannian]{}]{} manifold having the signature $(0,3,1)$ at the points where it is [[nondegenerate]{}]{}. \[thm\_densitized\_einstein\] Let $(M,g)$ be a [[semiregular]{}]{} spacetime.
Then its Einstein density tensor of weight $2$, $G\det g$, is smooth. At the points $p$ where the metric is [[nondegenerate]{}]{}, the Einstein tensor can be expressed using the Hodge $\ast$ operator by: $$\label{eq_einstein_tensor_hodge} G_{ab} = g^{st}(\ast R\ast)_{asbt},$$ where $(\ast R\ast)_{abcd}$ is obtained by taking the Hodge dual of $R_{abcd}$ with respect to the first and the second pairs of indices [([*cf.* ]{}[*e.g.* ]{}[[@PeR87], p. 234]{})]{}. Explicitly, if we write the components of the volume form associated to the metric as $\varepsilon_{abcd}$, we have $$(\ast R\ast)_{abcd} = \varepsilon_{ab}{}^{st}\varepsilon_{cd}{}^{pq}R_{stpq}.$$ If we employ coordinates, the volume form can be expressed in terms of the Levi-Civita symbol by $$\varepsilon_{abcd} = \epsilon_{abcd}\sqrt{-\det g}.$$ We can rewrite the Einstein tensor as $$\label{eq_einstein_tensor_hodge_lc} G^{ab} = {{\displaystyle}{\frac{g_{kl}\epsilon^{akst}\epsilon^{blpq} R_{stpq}}{\det g}}}.$$ If we allow the metric to become degenerate, the Einstein tensor so defined becomes divergent, as expected. But the tensor density $G^{ab}\det g$, of weight $2$, associated to it remains smooth, and we get $$\label{eq_einstein_tensor_density} G^{ab}\det g = g_{kl}\epsilon^{akst}\epsilon^{blpq} R_{stpq}.$$ Since the spacetime is [[semiregular]{}]{}, this quantity is indeed smooth, because it is constructed only from the Riemann curvature tensor, which is smooth (see Theorem \[thm\_riemann\_curvature\_semi\_regular\]), and from the Levi-Civita symbol, which is constant in the particular coordinate system. The determinant of the metric converges to $0$ so that it cancels the divergence which would normally appear in $G^{ab}$. The tensor density $G_{ab}\det g$, obtained from it by lowering the indices, is also smooth.
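The cancellation claimed here can be watched happening on a toy degenerate metric. The sketch below is an illustration only, under the assumption $g=\textnormal{diag}(-1,t^2,t^2,t^2)$ (degenerate at $t=0$): it forms the double-dual expression $g_{kl}\epsilon^{akst}\epsilon^{blpq}R_{stpq}$ with the Riemann tensor built from the Koszul form. The overall normalization of the double dual is not checked, only the smoothness statement: the densitized component is polynomial in $t$, while dividing by $\det g$ reintroduces a pole.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
n = 4
g = sp.diag(-1, t**2, t**2, t**2)            # degenerate at t = 0
ginv = sp.diag(-1, 1/t**2, 1/t**2, 1/t**2)   # inverse on the region t != 0
detg = g.det()                                # -t**6

# Koszul form and the Riemann tensor built from it (Z in the third slot)
K = [[[sp.Rational(1, 2)*(sp.diff(g[b, c], coords[a]) + sp.diff(g[c, a], coords[b])
       - sp.diff(g[a, b], coords[c])) for c in range(n)] for b in range(n)] for a in range(n)]

def R(a, b, c, d):
    val = sp.diff(K[b][c][d], coords[a]) - sp.diff(K[a][c][d], coords[b])
    val += sum(ginv[s, s]*(K[a][c][s]*K[b][d][s] - K[b][c][s]*K[a][d][s]) for s in range(n))
    return sp.cancel(val)

def D(a, b):  # candidate densitized Einstein tensor, G^{ab} det g, up to normalization
    return sp.cancel(sum(g[k, k]*sp.LeviCivita(a, k, s, u)*sp.LeviCivita(b, k, p, q)*R(s, u, p, q)
                         for k in range(n) for s in range(n) for u in range(n)
                         for p in range(n) for q in range(n)))

assert sp.simplify(D(0, 0) + 12*t**4) == 0            # smooth through t = 0
assert not sp.cancel(D(0, 0)/detg).is_polynomial(t)   # undensitized G^{00} diverges
```

The $tt$-component comes out as $-12t^4$, while dividing by $\det g=-t^6$ leaves a $1/t^2$ pole: exactly the divergence that the densitization removes.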
Because the densitized Einstein tensor $G_{ab}\det g$ is smooth, it follows that the densitized curvature scalar is smooth $$\label{eq_curvature_scalar_density} s\det g = -g_{ab}G^{ab}\det g,$$ and so is the densitized Ricci tensor $$\label{eq_ricci_density} R_{ab}\det g = g_{as}g_{bt}G^{st}\det g + {{\displaystyle}{\frac{1}{2}}} s g_{ab}\det g.$$ In the context of General Relativity, on a [[semiregular]{}]{} spacetime, if $T$ is the stress-energy tensor, we can write the *densitized Einstein equation*: $$\label{eq_einstein:densitized} G\det g + \Lambda g\det g = \kappa T\det g,$$ or, in coordinates or local frames, $$\label{eq_einstein_idx:densitized} G_{ab}\det g + \Lambda g_{ab}\det g = \kappa T_{ab}\det g,$$ where $\kappa:={{\displaystyle}{\frac{8\pi {\mathcal{G}}}{c^4}}}$, with ${\mathcal{G}}$ and $c$ being Newton’s constant and the speed of light. [10]{} A. Ashtekar, *[New Hamiltonian formulation of general relativity]{}*, Phys. Rev. D **36** (1987), no. 6, 1587–1602. [to3em]{}, *[Non-perturbative canonical gravity, Lecture notes in collaboration with R. S. Tate]{}*, World Scientific, Singapore, 1991. [to3em]{}, *[[Singularity Resolution in Loop Quantum Cosmology: A Brief Overview]{}]{}*, J. Phys. Conf. Ser. **189** (2009), 012003, [arXiv:gr-qc/0812.4703](http://arxiv.org/abs/0812.4703). A. Ashtekar and E. Wilson-Ewing, *[[Loop quantum cosmology of Bianchi I models]{}]{}*, Phys. Rev. **D79** (2009), 083535, [arXiv:gr-qc/0903.3397](http://arxiv.org/abs/0903.3397). A. Bejancu and K.L. Duggal, *[Lightlike Submanifolds of Semi-Riemannian Manifolds]{}*, Acta Appl. Math. **38** (1995), no. 2, 197–215. A. Das, *[Tensors: the Mathematics of Relativity Theory and Continuum Mechanics]{}*, Springer Verlag, 2007. T. Dereli and R.W. Tucker, *[Signature Dynamics in General Relativity]{}*, Classical Quantum Gravity **10** (1993), 365. T. 
Dray, *[Einstein’s Equations in the Presence of Signature Change]{}*, Journal of Mathematical Physics **37** (1996), 5627–5636, [arXiv:gr-qc/9610064](http://arxiv.org/abs/gr-qc/9610064). T. Dray, G. Ellis, and C. Hellaby, *[Note on Signature Change and Colombeau Theory]{}*, Gen. Relativity Gravitation **33** (2001), no. 6, 1041–1046. T. Dray, C.A. Manogue, and R.W. Tucker, *[Particle production from signature change]{}*, Gen. Relativity Gravitation **23** (1991), no. 8, 967–971. [to3em]{}, *[Scalar Field Equation in the Presence of Signature Change]{}*, Phys. Rev. D **48** (1993), no. 6, 2587–2590. [to3em]{}, *[Boundary Conditions for the Scalar Field in the Presence of Signature Change]{}*, Classical Quantum Gravity **12** (1995), 2767. K.L. Duggal and A. Bejancu, *[Lightlike Submanifolds of Semi-Riemannian Manifolds and Applications]{}*, vol. 364, Kluwer Academic, 1996. A. Einstein and N. Rosen, *[The Particle Problem in the General Theory of Relativity]{}*, Phys. Rev. **48** (1935), no. 1, 73. G. Ellis, A. Sumeruk, D. Coule, and C. Hellaby, *[Change of Signature in Classical Relativity]{}*, Classical Quantum Gravity **9** (1992), 1535. G.F.R. Ellis, *[Covariant Change of Signature in Classical Relativity]{}*, Gen. Relativity Gravitation **24** (1992), no. 10, 1047–1068. S. Gallot, D. Hullin, and J. Lafontaine, *[Riemannian Geometry]{}*, 3rd ed., Springer-Verlag, Berlin, New York, 2004. G.W. Gibbons, *[Part III: Applications of Differential Geometry to Physics]{}*, Cambridge CB3 0WA, UK (2006). S. Hawking, *The occurrence of singularities in cosmology*, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences **294** (1966), no. 1439, 511–521. [to3em]{}, *The occurrence of singularities in cosmology. ii*, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences (1966), 490–493. [to3em]{}, *The occurrence of singularities in cosmology. iii. 
causality and singularities*, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences **300** (1967), no. 1461, 187–201. [to3em]{}, *[Particle Creation by Black Holes]{}*, Comm. Math. Phys. (1973), no. 33, 323. [to3em]{}, *[Breakdown of Predictability in Gravitational Collapse]{}*, Phys. Rev. D (1976), no. 14, 2460. S. Hawking and G. Ellis, *[The Large Scale Structure of Space Time]{}*, Cambridge University Press, 1995. S. Hawking and R. Penrose, *[The Singularities of Gravitational Collapse and Cosmology]{}*, Proc. Roy. Soc. London Ser. A (1970), no. 314, 529–548. [to3em]{}, *[The Nature of Space and Time]{}*, Princeton University Press, 1996. S.A. Hayward, *[Signature Change in General Relativity]{}*, Classical Quantum Gravity **9** (1992), 1851. [to3em]{}, *[Junction Conditions for Signature Change]{}*, [arXiv:gr-qc/9303034](http://arxiv.org/abs/gr-qc/9303034) (1993). [to3em]{}, *[Comment on “Failure of Standard Conservation Laws at a Classical Change of Signature”]{}*, Phys. Rev. D **52** (1995), no. 12, 7331–7332. C. Hellaby and T. Dray, *[Failure of Standard Conservation Laws at a Classical Change of Signature]{}*, Phys. Rev. D **49** (1994), no. 10, 5096–5104. M. Kossowski, *[Fold Singularities in Pseudo Riemannian Geodesic Tubes]{}*, Proc. Amer. Math. Soc. (1985), 463–469. [to3em]{}, *[Pseudo-Riemannian Metric Singularities and the Extendability of Parallel Transport]{}*, Proc. Amer. Math. Soc. **99** (1987), no. 1, 147–154. M. Kossowski and M. Kriele, *[Signature Type Change and Absolute Time in General Relativity]{}*, Classical Quantum Gravity **10** (1993), 1157. [to3em]{}, *[Smooth and Discontinuous Signature Type Change in General Relativity]{}*, Classical Quantum Gravity **10** (1993), 2363. [to3em]{}, *[The Einstein Equation for Signature Type Changing Spacetimes]{}*, Proceedings: Mathematical and Physical Sciences **446** (1994), no. 1926, 115–126. 
[to3em]{}, *[Transverse, Type Changing, Pseudo Riemannian Metrics and the Extendability of Geodesics]{}*, Proceedings: Mathematical and Physical Sciences (1994), 297–306. D. Kupeli, *[Degenerate Manifolds]{}*, Geom. Dedicata **23** (1987), no. 3, 259–290. [to3em]{}, *[Degenerate Submanifolds in Semi-Riemannian Geometry]{}*, Geom. Dedicata **24** (1987), no. 3, 337–361. [to3em]{}, *[On Null Submanifolds in Spacetimes]{}*, Geom. Dedicata **23** (1987), no. 1, 33–51. [to3em]{}, *[Singular Semi-Riemannian Geometry]{}*, Kluwer Academic Publishers Group, 1996. G. C. Moisil, *[Sur les géodésiques des espaces de Riemann singuliers]{}*, Bull. Math. Soc. Roumaine Sci. (1940), no. 42, 33–52. B. O’Neill, *[[Semi-Riemannian]{} Geometry with Applications to Relativity]{}*, Pure Appl. Math. (1983), no. 103, 468. A. Pambira, *[Harmonic Morphisms Between Degenerate Semi-Riemannian Manifolds]{}*, Contributions to Algebra and Geometry **46** (2005), no. 1, 261–281, [arXiv:math/0303275](http://arxiv.org/abs/math/0303275). R. Penrose, *[Gravitational Collapse and Space-Time Singularities]{}*, Phys. Rev. Lett. (1965), no. 14, 57–59. R. Penrose and W. Rindler, *[Spinors and Space-Time: Volume 1, Two-Spinor Calculus and Relativistic Fields (Cambridge Monographs on Mathematical Physics)]{}*, [Cambridge University Press]{}, 1987. S. Roman, *[Advanced Linear Algebra]{}*, Springer, 2008. Joseph D. Romano, *[Geometrodynamics vs. Connection Dynamics]{}*, Gen. Rel. Grav. (1993), no. 25, 759–854, [arXiv:gr-qc/9303032](http://arxiv.org/abs/gr-qc/9303032). A. D. Sakharov, *[Cosmological Transitions with a Change in Metric Signature]{}*, Sov. Phys. JETP **60** (1984), 214. K. Strubecker, *[Differentialgeometrie des isotropen Raumes. I. Theorie der Raumkurven]{}*, Sitzungsber. Akad. Wiss. Wien, Math.-Naturw. Kl., Abt. IIa **150** (1941), 1–53. [to3em]{}, *[Differentialgeometrie des isotropen Raumes. II. Die Fl[ä]{}chen konstanter Relativkr[ü]{}mmung $K = rt - s^2$]{}*, Math. Z. **47** (1942), no. 1, 743–777.
[to3em]{}, *[Differentialgeometrie des isotropen Raumes. III. Fl[ä]{}chentheorie]{}*, Math. Z. **48** (1942), no. 1, 369–427. [to3em]{}, *[Differentialgeometrie des isotropen Raumes. IV. Theorie der fl[ä]{}chentreuen Abbildungen der Ebene]{}*, Math. Z. **50** (1944), no. 1, 1–92. G. Vrănceanu, *[Sur les invariants des espaces de Riemann singuliers]{}*, Disqu. math. physic. (1942), no. 2, 253–281. G. Yoneda, H. Shinkai, and A. Nakamichi, *[Trick for Passing Degenerate Points in the Ashtekar Formulation]{}*, Phys. Rev. D **56** (1997), no. 4, 2086–2093. [^1]: Partially supported by Romanian Government grant PN II Idei 1187.
--- abstract: 'Using recent data from photometric monitoring and data from the photographic plate archives, we aim to study the long-term photometric behavior of FUors. The construction of the historical light curves of FUors could be very important for determining the beginning of the outburst, the time to reach maximum light, the rates of increase and decrease in brightness, and the pre-outburst variability of the star. Our CCD photometric observations were performed with the telescopes of the Rozhen (Bulgaria) and Skinakas (Crete, Greece) observatories. Most suitable for long-term photometric study are the plate archives of the big Schmidt telescopes, such as the telescopes at the Kiso Observatory, the Asiago Observatory, the Palomar Observatory, and others. Comparing our results with the light curves of the well-studied FUors, we conclude that every new FUor object shows different photometric behavior. Each known FUor has a different rate of increase and decrease in brightness and a different light curve shape.' --- Introduction ============ The young eruptive objects of the FU Orionis type (hereafter FUors) are very rare, but they play a significant role in stellar evolution. All known FUors share the same defining characteristics: an outburst amplitude of $\Delta V\approx$4-6 magnitudes, association with reflection nebulae, location in regions of active star formation, and an F-G supergiant spectrum during the outburst ([@Audar_etal14 Audard et al. 2014]; [@ReiAs10 Reipurth & Aspin 2010]). FUor stars seem to be related to the low-mass pre-main sequence objects (T Tauri stars), which have massive circumstellar disks. The widely accepted explanation of the FUor phenomenon is a sizable increase in the accretion rate from the circumstellar disc onto the stellar surface. The cause of the increased accretion is still being discussed.
Possible triggering mechanisms of the FUor outburst include: thermal or gravitational instability in the circumstellar disk ([@Hart1966 Hartmann & Kenyon 1996]) and the interaction of the circumstellar disk with a giant planet or a nearby stellar companion on an eccentric orbit ([@Lod2004 Lodato & Clarke 2004]; [@Pfa2008 Pfalzner 2008]). The construction of the historical light curves of FUors could be very important for studying the photometric evolution of the objects: for determining the exact moment of the beginning of the outburst, the time needed to reach maximum light, and the time spent at maximum light. Another important possibility is to study the pre-outburst variability of the FUor objects. Observations ============ Our CCD photometric observations of FUor objects were performed with the 2 m RCC, the 50/70 cm Schmidt, and the 60 cm Cassegrain telescopes of the National Astronomical Observatory Rozhen (Bulgaria) and with the 1.3 m RC telescope of the Skinakas Observatory[^1] of the Institute of Astronomy, University of Crete (Greece). The technical parameters of the CCD cameras used, the observational procedure, and the data reduction process are described in [@Ibryamov2015 Ibryamov et al. (2015)]. The only possibility for long-term photometric study is a search in the photographic plate archives of the astronomical observatories around the world. Most suitable for this purpose are the plate archives of the big Schmidt telescopes that have a large field of view. In this paper we present photometric data obtained from the photographic plate archives of the 105/150 cm Schmidt telescope at the Kiso Observatory (Japan) and the 67/92 cm Schmidt telescope at the Asiago Observatory (Italy). We also used the digitized plates from the Palomar Schmidt telescope, available via the website of the Space Telescope Science Institute. Results and discussion ====================== In this section we present results from the long-term photometric study of three FUor objects.
All three objects were discovered recently and the available data for their photometric behavior are still incomplete. V2493 Cyg --------- The outburst of V2493 Cyg was discovered during the summer of 2010 ([@Semkov2010 Semkov et al. 2010]; [@Miller2011 Miller et al. 2011]) in the dark clouds between NGC 7000 and IC 5070 (the so-called “Gulf of Mexico”). Subsequent photometric and spectral observations ([@Kospal2011 K[ó]{}sp[á]{}l et al. 2011]; [@semkov2012 Semkov et al. 2012]; [@Baek2015 Baek et al. 2015]) indicate that the object can be definitely assigned to the class of FUors. The $BVRI$ light curves of V2493 Cyg from the collected photometric data are plotted in Fig. 1. The filled diamonds represent our CCD observations from the Rozhen and Skinakas observatories, the filled circles CCD observations from the 48 inch Samuel Oschin telescope at Palomar Observatory ([@Miller2011 Miller et al. 2011]), the open diamonds photographic data from the Asiago Schmidt telescopes, the open squares photographic data from the Palomar Schmidt telescope, the filled squares photographic data from the Byurakan Schmidt telescope and the open circles photographic data from the Rozhen 2-m RCC telescope. ![Historical $BVRI$ light curves of V2493 Cyg for the period September 1973 - November 2016[]{data-label="fig1"}](Fig1.eps){width="13cm"} The photometric observations obtained before the outburst displayed only small-amplitude variations in all passbands, typical of T Tauri stars. According to our data the outburst started sometime before May 2010 and reached the first maximum in the period September - October 2010. Since October 2010 a slow fading was observed, and up to May 2011 the star's brightness decreased by 1.4 mag ($V$). Since the autumn of 2011 another light increase occurred, and the star became brighter by 1.8 mag ($V$) until April 2013. From the spring of 2013 up to now the star has kept its maximum brightness, showing only small fluctuations around it.
Therefore, we have observed a classical FUor-type outburst, which should continue over the next few decades. V582 Aur -------- The discovery of V582 Aur was reported by the amateur astronomer Anton Khruslov. The star is located in a region of active star formation near the Auriga OB2 association. According to [@Samus2009 Samus (2009)] the increase in brightness of the star started between 1982 and 1986. [@Munari2009 Munari et al. (2009)] obtained the first spectrum of V582 Aur, which confirms the FUor nature of the star (presence of absorption lines of the Balmer series, Na I D and Ba II ($\lambda$ 6496)). On the basis of spectral and photometric data ([@Semkov2013 Semkov et al. 2013]) we showed that the star is a FUor object. The historical $BVRI$ light curves of V582 Aur from all available photometric observations are plotted in Fig. 2. On the figure, the filled diamonds represent our CCD observations from the Rozhen and Skinakas observatories, the filled circles photographic data from the Asiago Schmidt telescope, the filled triangles photographic data from the Kiso Schmidt telescope, and the filled squares photographic data from the Palomar Schmidt telescope. ![Historical $BVRI$ light curves of V582 Aur for the period December 1954 $-$ October 2016[]{data-label="fig2"}](Fig2.eps){width="13cm"} The results of six years of photometric monitoring of V582 Aur show extremely strong variability that is not seen in other FUor objects. We suggest that the strong photometric variability can be explained by 1) time-variable extinction or 2) changes in the accretion rate from the circumstellar disk onto the stellar surface. During the large drops in brightness, the appearance of dust particles in the immediate circumstellar environment of the star and a change of the shape of the basic spectral lines from absorption to emission were registered ([@Semkov2013 Semkov et al. 2013]). V900 Mon -------- The variability of V900 Mon was discovered by the amateur astronomer Jim Thommes.
Based on a detailed multi-wavelength study of the star, [@Reipurth2012 Reipurth et al. (2012)] reached the conclusion that V900 Mon belongs to the group of FUor objects. According to the authors the outburst of the star occurred between 1953 and 2009. Recently [@Varricatt2015 Varricatt et al. (2015)] registered a rise in the brightness of V900 Mon in the infrared. ![$BVRI$ light curves of V900 Mon for the period August 2011 $-$ March 2016[]{data-label="fig3"}](Fig3.eps){width="13cm"} Our photometric monitoring of V900 Mon during the period from 2011 to 2016 shows a gradual increase in the brightness (Fig. 3). Our search in the Digitized Sky Surveys shows that the star was registered at minimum light on the photographic plates obtained on 8 Jan. 1989 ($R$) and 10 Feb. 1985 ($I$). Hence the rise in the brightness of V900 Mon began after 1985 and has continued in recent years. Audard et al. 2014, in: H. Beuther et al. (eds.), *Protostars and Planets VI* (Tucson, AZ: Univ. Arizona Press), p.387 Baek et al. 2015, *AJ*, 14, 73 Hartmann & Kenyon 1996, *ARA&A*, 34, 207 Ibryamov et al. 2015, *PASA*, 32, e021 K[ó]{}sp[á]{}l et al. 2011, *A&A*, 527, A133 Lodato & Clarke 2004, *MNRAS*, 353, 841 Miller et al. 2011, *ApJ*, 730, 80 Munari et al. 2009, *CBET*, 1898, 1 Pfalzner 2008, *A&A*, 492, 735 Reipurth et al. 2012, *ApJ*, 748, 5 Reipurth & Aspin 2010, in: H. A. Harutyunian, A. M. Mickaelian & Y. Terzian (eds.), *Evolution of Cosmic Objects through their Physical Activity* (Yerevan: Gitutyun), p.19 Samus 2009, *CBET*, 1896, 1 Semkov et al. 2010, *A&A*, 523, L3 Semkov et al. 2012, *A&A*, 542, A43 Semkov et al. 2013, *A&A*, 556, A60 2017, *Bulg. Astr. J.*, in press Varricatt et al. 2015, *ATel*, 8174, 1 [^1]: Skinakas Observatory is a collaborative project of the University of Crete, the Foundation for Research and Technology - Hellas, and the Max-Planck-Institut für Extraterrestrische Physik.
--- abstract: 'In this paper we show that an intuitionistic theory for fixed points is conservative over Heyting arithmetic with respect to a certain class of formulas. This partly extends an earlier result of mine. The proof is inspired by the quick cut-elimination due to G. Mints.' author: - | Toshiyasu Arai\ Toshiyasu ARAI\ Graduate School of Science\ Chiba University\ 1-33, Yayoi-cho, Inage-ku, Chiba, 263-8522, JAPAN title: 'Intuitionistic fixed point theories over Heyting arithmetic [^1] ' --- Introduction ============ Fixed points occur frequently in mathematical reasoning. Let us consider in this paper the fixed point predicate $I^{\Phi}(x)$ for a positive formula $\Phi(X,x)$: $$\label{eq:fix} (FP)^{\Phi} \; \; \forall x[I^{\Phi}(x) \leftrightarrow \Phi(I^{\Phi},x)]$$ Over classical logic, the existence of fixed points strengthens theories. In [@Motohashi] we have shown that a first-order logic calculus with the axioms (\[eq:fix\]) has non-elementary speed-ups over classical first-order predicate logic. As an extension of the first-order arithmetic PA, the theory $\widehat{ID}$ for fixed points is stronger than PA. In $\widehat{ID}$ one can readily define a truth definition for arithmetic formulas. Moreover $\widehat{ID}$ proves transfinite induction up to each ordinal less than $\varphi\varepsilon_{0}0$ for arithmetic formulas; see [@Feferman] and [@attic]. However [*intuitionistic*]{} theories for fixed points may be proof-theoretically equivalent to the intuitionistic arithmetic HA. W. Buchholz [@Buchholz] showed that an intuitionistic fixed point theory $\widehat{ID}^{i}({\cal M})$ is conservative over the Heyting arithmetic HA with respect to almost negative formulas (in which $\lor$ does not occur and $\exists$ occurs only in front of atomic formulas).
The theory $\widehat{ID}^{i}({\cal M})$ has the axioms (\[eq:fix\]) $(FP)^{\Phi}$ for fixed points of [*monotone formulas*]{} $\Phi(X,x)$, which are generated from arithmetic atomic formulas and $X(t)$ by means of the (first order) monotonic connectives $\lor,\land,\exists,\forall$. Namely, neither $\to$ nor $\lnot$ occurs in a monotone formula. The proof is based on a recursive realizability interpretation. After seeing the result of Buchholz, we [@attic] showed that an intuitionistic fixed point (second order) theory is conservative over HA for any arithmetic formulas. In the theory the operator $\Phi$ for fixed points is generated from $X(t)$ and any second order formulas by means of first order monotonic connectives and second order existential quantifiers $\exists f(\in\omega\to\omega)$. The proof in [@attic] is to interpret the fixed points by $\Sigma^{1}_{1}$-formulas as in [@Feferman]. In interpreting the fixed points by $\Sigma^{1}_{1}$-formulas, we need an axiom of choice $\mbox{AC}_{01}$. Therefore the proof does not work for strictly positive operators, e.g., $\Phi(X,x): \Leftrightarrow \lnot\exists y\forall z A(x) \to X(x)$, since $\lnot\exists y\forall z A(x) \to \exists f R(f,x)\leftrightarrow \exists f[\lnot\exists y\forall z A(x)\to R(f,x)]$ is nothing but the independence of premiss, IP, which is not valid intuitionistically. The crux is the fact that the axiom of choice adds nothing to HA, i.e., N. Goodman’s theorem [@Goodman], though the theorem itself is proved by a combination of a realizability interpretation and a forcing. Also cf. [@Mintsfinite] for a proof-theoretic proof of Goodman’s theorem. I met Grisha for the first time in Hiroshima, Japan, in September 1995. I explained to him the result in [@attic]. He soon realized that it would follow from Goodman’s theorem before I told him the proof. Then he asked me “Can you prove it by means of proof-transformations, e.g., cut-elimination?”. This paper is a partial answer to his question.
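As a concrete, purely finitary illustration of the fixed point axiom $(FP)^{\Phi}$: a monotone operator on subsets of a finite universe always has a least fixed point, reached by iterating from the empty set (Knaster-Tarski). The sketch below uses a made-up monotone operator, not one from the paper, and checks the defining equivalence $\forall x[I^{\Phi}(x) \leftrightarrow \Phi(I^{\Phi},x)]$ at the constructed fixed point.

```python
def lfp(phi, universe):
    """Least fixed point of a monotone operator phi on subsets of a
    finite universe, by iteration from the empty set (Knaster-Tarski)."""
    I = frozenset()
    while True:
        J = frozenset(x for x in universe if phi(I, x))
        if J == I:
            return I
        I = J

# a hypothetical monotone Phi(X, x): "x = 0, or some y in X has x = y + 2";
# X occurs only positively, so the operator is monotone
universe = range(10)
phi = lambda X, x: x == 0 or any(y in X and x == y + 2 for y in universe)

evens = lfp(phi, universe)
assert evens == frozenset({0, 2, 4, 6, 8})
# the defining equivalence (FP) holds at the fixed point:
assert all((x in evens) == phi(evens, x) for x in universe)
```

Of course nothing like this settles the proof-theoretic questions above; it only makes the shape of the axiom $(FP)^{\Phi}$ tangible.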
Now let $\widehat{ID}^{i}({\cal HM})$ denote an intuitionistic fixed point theory in which the operator $\Phi(X,x)$ is in a class ${\cal HM}$ of formulas, cf. Definition \[df:leftright\] below. The class ${\cal HM}$ properly contains the monotone formulas and typically is of the form $H(x) \to M(X,x)$ for a (Rasiowa-)Harrop formula $H$ (in which there is no strictly positive occurrence of a disjunctive or existential subformula) and a monotonic formula $M$. We show that the theory $\widehat{ID}^{i}({\cal HM})$ is conservative over HA with respect to the class ${\cal HM}$. Thus the result of this paper partially extends the one in [@attic]. On the other hand, C. Rüede and T. Strahm [@Strahm] significantly extended the results in [@Buchholz] and [@attic]. They showed that the intuitionistic fixed point theory for [*strictly positive*]{} operators is conservative over HA with respect to negative and $\Pi^{0}_{2}$-formulas. Moreover they determined the proof-theoretic strengths of intuitionistic theories for transfinite iterations of fixed points by strictly positive operators. The class of strictly positive formulas is wider than our class ${\cal HM}$. In this respect the result in [@Strahm] supersedes ours. A merit here is that the class ${\cal HM}$ of end formulas, for which conservativity holds, is wider than the class concerned in [@Strahm]. For example any formula in prenex normal form is (equivalent to a formula) in ${\cal HM}$, but a $\Pi^{0}_{3}$-formula is neither negative nor $\Pi^{0}_{2}$. Rather, I think that the novelty lies in our [*proof technique*]{}, which shows, cf. Theorem \[lem:quickelim\], that eliminating cut inferences with ${\cal HM}$-cut formulas from derivations of ${\cal HM}$-end formulas blows up the depths of derivations [*only by one exponential*]{}, i.e., towers of exponentials are dispensable.
This is seen from the fact that there exists an embedding from the tree of the resulting cut-free derivation into the derivation with cut inferences, such that the embedding maps deeper nodes in the tree ordering to larger nodes with respect to the [*Kleene-Brouwer ordering*]{}. In other words, eliminating monotone cut formulas is essentially a linearization of the well founded tree via the Kleene-Brouwer ordering. This is the essence of the quick cut-elimination in [@Mintsmono]. Let us explain the idea of our proof more closely. First the finitary derivation in $\widehat{ID}^{i}({\cal HM})$ is embedded into an infinitary derivation, and cuts are partially eliminated. This results in an infinitary derivation of depth less than $\varepsilon_{0}$, in which only cut inferences with cut formulas $I^{\Phi}(t)$ for fixed points occur. Now the constraints on the operator $\Phi$ and on the end formula allow us to invert cut-free derivations of sequents with a Harrop antecedent and a monotonic succedent formula. Therefore the quick cut-elimination (and pruning) technique in Grisha’s [@Mintsmono] works to eliminate cut inferences with cut formulas $I^{\Phi}(t)$. In this way we get an infinitary derivation of depth less than $\varepsilon_{0}$ in which no fixed point formula occurs. By formalizing the arguments we see that the end formula is provable in HA. An intuitionistic theory $\widehat{ID}^{i}({\cal HM})$ ====================================================== $L_{HA}$ denotes the language of Heyting arithmetic. $L_{HA}$ consists of the equality sign $=$, individual constants $0,1$ for zero and one, function symbols $+,\cdot$ for addition and multiplication, and the logical connectives $\lor,\land,\to, \exists,\forall$. Let $X$ be a fresh predicate symbol, which is assumed to be unary for simplicity. $L_{HA}(X)$ denotes the language $L_{HA}\cup\{X\}$.
\[df:leftright\] [Define inductively two classes of formulas]{} ${\cal H}$ [in]{} $L_{HA}$, [and]{} ${\cal HM}$ [in]{} $L_{HA}(X)$ [as follows.]{} 1. [Any atomic formula]{} $s=t$ [belongs to both]{} ${\cal H}$ [and]{} ${\cal HM}$. 2. [Any atomic formula]{} $X(t)$ [belongs to the class]{} ${\cal HM}$. 3. [If]{} $H,G\in{\cal H}$[, then]{} $H\land G, \forall x H\in{\cal H}$. 4. [If]{} $H\in{\cal H}$[, then]{} $A\to H\in{\cal H}$ [for any formula]{} $A\in L_{HA}$. 5. [If]{} $R,S\in{\cal HM}$[, then]{} $R\lor S, R\land S, \exists x R,\forall x R\in{\cal HM}$. 6. \[df:leftright9\] [If]{} $L\in{\cal H}$ [and]{} $R\in{\cal HM}$[, then]{} $L\to R\in{\cal HM}$. ${\cal H}$ denotes the class of (Rasiowa-)Harrop formulas, in which no strictly positive existential or disjunctive subformula occurs. ${\cal HM}$ properly contains the monotone formulas, i.e., the class POS in [@Buchholz]; e.g., $\lnot A\to X(a)\in{\cal HM}$ is not intuitionistically equivalent to any monotone formula. On the other hand there exists a strictly positive formula with respect to $X$ not in ${\cal HM}$, e.g., $(\forall x\exists y A\to\exists z B)\land X(a)$. Any formula in ${\cal HM}$ is strictly positive with respect to $X$. Let $\widehat{ID}^{i}({\cal HM})$ denote the following extension of HA. Its language is obtained from $L_{HA}$ by adding a unary set constant $I^{\Phi}$ for each $\Phi\equiv\Phi(X,x)\in {\cal HM}$, in which only a fixed variable $x$ occurs freely. Its axioms are those of HA in the expanded language (i.e., the induction axioms are available for any formulas in the expanded language) plus the axiom $(FP)^{\Phi}$, (\[eq:fix\]), for fixed points. Now our theorem runs as follows. \[th:main\] $\widehat{ID}^{i}({\cal HM})$ is conservative over [HA]{} with respect to formulas in ${\cal HM}$ (in which the extra predicate constant $X$ does not occur).
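As an illustration only (not part of the formal development), the inductive clauses of Definition \[df:leftright\] can be read as a recursive membership test on formula syntax trees. The encoding below is a hypothetical sketch: formulas are nested tuples, terms are suppressed, and the functions `in_H` and `in_HM` are names chosen here, not notation from the paper.

```python
# Hypothetical encoding: ('eq',) for an atom s=t, ('X',) for an atom X(t),
# and ('and', A, B), ('or', A, B), ('imp', A, B), ('ex', A), ('all', A)
# for the connectives; terms play no role in the class definitions.

def arithmetic(f):
    """True iff the extra predicate X does not occur, i.e. f is in L_HA."""
    return f[0] != 'X' and all(arithmetic(g) for g in f[1:])

def in_H(f):
    """Harrop class H: atoms s=t, closed under conjunction, universal
    quantification, and A -> H for arbitrary arithmetic A (clauses 1,3,4)."""
    op = f[0]
    if op == 'eq':
        return True
    if op == 'and':
        return in_H(f[1]) and in_H(f[2])
    if op == 'all':
        return in_H(f[1])
    if op == 'imp':
        return arithmetic(f[1]) and in_H(f[2])
    return False

def in_HM(f):
    """Class HM: atoms s=t and X(t), closed under the monotonic connectives,
    and L -> R with L in H and R in HM (clauses 1,2,5,6)."""
    op = f[0]
    if op in ('eq', 'X'):
        return True
    if op in ('or', 'and'):
        return in_HM(f[1]) and in_HM(f[2])
    if op in ('ex', 'all'):
        return in_HM(f[1])
    if op == 'imp':
        return in_H(f[1]) and in_HM(f[2])
    return False

# not-A -> X(a), with not-A written as A -> bot (bot an equation), is in HM:
neg_A = ('imp', ('ex', ('all', ('eq',))), ('eq',))
print(in_HM(('imp', neg_A, ('X',))))   # True
# but (forall x exists y A -> exists z B) /\ X(a) is not:
left = ('imp', ('all', ('ex', ('eq',))), ('ex', ('eq',)))
print(in_HM(('and', left, ('X',))))    # False
```

The two examples at the end mirror the ones in the text: the first formula lies in ${\cal HM}$ because its antecedent is Harrop, while the second fails clause \[df:leftright9\] since its strictly positive conjunct has a non-Harrop implication.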
Infinitary derivations ====================== Given an $\widehat{ID}^{i}({\cal HM})$-derivation $D_{0}$ of an ${\cal HM}$-sentence $R_{0}$, let us first embed it into an infinitary derivation in an infinitary calculus $\widehat{ID}^{i\infty}({\cal HM})$. We write $$\lnot A:\Leftrightarrow A\to \bot.$$ Let $N$ denote a number which is big enough so that any formula occurring in $D_{0}$ has logical complexity (defined as the number of occurrences of logical connectives) smaller than $N$. In what follows, any formula occurring in the infinitary derivations with which we are concerned has logical complexity less than $N$. The derived objects in the calculus $\widehat{ID}^{i\infty}({\cal HM})$ are [*sequents*]{} $\Gamma\Rightarrow A$, where $A$ is a [*sentence*]{} (in the language of $\widehat{ID}^{i}({\cal HM})$) and $\Gamma$ denotes a finite set of [*sentences*]{}, where each closed term $t$ is identified with its value $\bar{n}$, the $n$th numeral. $\bot$ stands ambiguously for false equations $t=s$ with closed terms $t,s$ having different values. $\top$ stands ambiguously for true equations $t=s$ with closed terms $t,s$ having the same value. The [*initial sequents*]{} are $$\Gamma,I(t)\Rightarrow I(t)\, ; \mbox{\hspace{5mm}} \Gamma,\bot\Rightarrow A\, ; \Gamma\Rightarrow \top$$ These are regarded as inference rules with no premiss (upper sequent). The [*inference rules*]{} are $(L\lor)$, $(R\lor)$, $(L\land)$, $(R\land)$, $(L\to)$, $(R\to)$, $(L\exists)$, $(R\exists)$, $(L\forall)$, $(R\forall)$, $(LI)$, $(RI)$, $(cut)$, and the repetition rule $(Rep)$. These are the standard ones. 1. $$\infer[(LI)] {\Gamma,I(t) \Rightarrow C} {\Gamma,\Phi(I,t) \Rightarrow C} \: ;\: \infer[(RI)] {\Gamma\Rightarrow I(t)} {\Gamma\Rightarrow \Phi(I,t)}$$ 2. $$\infer[(L\lor)] {\Gamma,A_{0}\lor A_{1}\Rightarrow C} { \Gamma,A_{0}\Rightarrow C & \Gamma,A_{1}\Rightarrow C } \: ;\: \infer[(R\lor)] {\Gamma\Rightarrow A_{0}\lor A_{1}} {\Gamma\Rightarrow A_{i}} \,(i=0,1)$$ 3.
$$\infer[(L\land)] {\Gamma,A_{0}\land A_{1}\Rightarrow C} { \Gamma,A_{0}\land A_{1},A_{i}\Rightarrow C } \, (i=0,1) \: ;\: \infer[(R\land)] {\Gamma\Rightarrow A_{0}\land A_{1}} { \Gamma\Rightarrow A_{0} & \Gamma\Rightarrow A_{1} }$$ 4. $$\infer[(L\to)] {\Gamma,A\to B\Rightarrow C} { \Gamma,A\to B\Rightarrow A & \Gamma,B\Rightarrow C } \: ;\: \infer[(R\to)] {\Gamma\Rightarrow A\to B} {\Gamma,A\Rightarrow B}$$ 5. $$\infer[(L\exists)] {\Gamma,\exists x B(x)\Rightarrow C} { \cdots & \Gamma,B(\bar{n})\Rightarrow C & \cdots (n\in\omega) } \: ;\: \infer[(R\exists)] {\Gamma\Rightarrow \exists x B(x)} {\Gamma\Rightarrow B(\bar{n})}$$ 6. $$\infer[(L\forall)] {\Gamma,\forall x B(x)\Rightarrow C} {\Gamma,\forall x B(x),B(\bar{n})\Rightarrow C} \: ;\: \infer[(R\forall)] {\Gamma\Rightarrow \forall x B(x)} { \cdots & \Gamma\Rightarrow B(\bar{n}) & \cdots (n\in\omega) }$$ 7. $$\infer[(cut)] {\Gamma,\Delta\Rightarrow C} { \Gamma\Rightarrow A & \Delta,A\Rightarrow C }$$ 8. $$\infer[(Rep)] {\Gamma\Rightarrow C}{\Gamma\Rightarrow C}$$ The [*depth*]{} of an infinitary derivation is defined to be the depth of the well founded tree. As usual we obtain the following proposition. Recall that $N$ is an upper bound on the logical complexities of formulas occurring in the given finite derivation $D_{0}$ of the ${\cal HM}$-sentence $R_{0}$. \[prp:embed\] 1. \[prp:embed1\] There exists an infinitary derivation $D_{1}$ of $R_{0}$ such that its depth is less than $\omega^{2}$ and the logical complexity of any sentence, in particular of any cut formula, occurring in $D_{1}$ is less than $N$. 2. \[prp:embed2\] By a partial cut-elimination, there exist an infinitary derivation $D_{2}$ of $R_{0}$ and an ordinal $\alpha_{0}<\varepsilon_{0}$ such that the depth of the derivation $D_{2}$ is less than $\alpha_{0}$ and any cut formula occurring in $D_{2}$ is an atomic formula $I(t)$ (and the logical complexity of any formula occurring in it is less than $N$).
Let ${\cal HM}(I)$ denote the class of sentences obtained from sentences in ${\cal HM}$ by substituting the predicate $I^{\Phi}$ for the predicate $X$. \[df:rank\] [The]{} rank $rk(A)$ [of a sentence]{} $A$ [is defined by]{} $$rk(A):= \left\{ \begin{array}{ll} 0 & \mbox{{\rm if} } A\in{\cal H} \\ 1 & \mbox{{\rm if }} A\in {\cal HM}(I)\setminus{\cal H} \\ 2 & \mbox{{\rm otherwise}} \end{array} \right.$$ [For inference rules]{} $J$[, the]{} rank $rk(J)$ [of]{} $J$ [is defined to be the rank of the cut formula if]{} $J$ [is a cut inference. Otherwise]{} $rk(J):=0$. [For derivations]{} $D$[, the]{} rank $rk(D)$ [of]{} $D$ [is defined to be the maximum rank of the cut inferences in it.]{} Let $\vdash^{\alpha}_{r}\Gamma\Rightarrow C$ mean that there exists an infinitary derivation of $\Gamma\Rightarrow C$ such that its depth is at most $\alpha$, and its rank is less than $r$ (and the logical complexity of any formula occurring in it is less than $N$). \[lem:quickelim\] Let $C_{0}$ denote an ${\cal HM}$-sentence, and $\Gamma_{0}$ a finite set of ${\cal H}$-sentences. Suppose that $\vdash^{\alpha}_{2}\Gamma_{0}\Rightarrow C_{0}$. Then $\vdash^{\omega^{\alpha}+1}_{1}\Gamma_{0}\Rightarrow C_{0}$. Assuming Theorem \[lem:quickelim\], we can show Theorem \[th:main\] as follows. Suppose an ${\cal HM}$-sentence $C_{0}$ is provable in $\widehat{ID}^{i}({\cal HM})$. By Proposition \[prp:embed\] we have $\vdash^{\alpha_{0}}_{2}\Rightarrow C_{0}$ for a big enough number $N$ and an $\alpha_{0}<\varepsilon_{0}$. Then Theorem \[lem:quickelim\] yields $\vdash^{\beta_{0}}_{1}\Rightarrow C_{0}$ for $\beta_{0}=\omega^{\alpha_{0}}+1<\varepsilon_{0}$. Let ${\rm Tr}_{N}(x)$ denote a partial truth definition for formulas of logical complexity less than $N$, cf. [@Troelstra], 1.5.4. By transfinite induction up to $\beta_{0}$, cf. Lemma \[lem:KB\], we see ${\rm Tr}_{N}(C_{0})$.
Note that any sentence occurring in the witnessed derivation for $\vdash^{\beta_{0}}_{1}\Rightarrow C_{0}$ has logical complexity less than $N$, and it is either an ${\cal H}$-sentence or an ${\cal HM}$-sentence. In particular no fixed point formula $I(t)$ occurs in it. Now since everything up to this point is formalizable in HA, we have ${\rm Tr}_{N}(C_{0})$, and hence $C_{0}$, in HA. This shows Theorem \[th:main\]. A proof of Theorem \[lem:quickelim\] is given in the next section. Quick cut-elimination for monotone cuts with Harrop side formulas ================================================================= Our plan for the proof of Theorem \[lem:quickelim\] is as follows. Pick the leftmost cut $J$ of rank 1: $$\infer[(cut)J] {\Gamma,\Delta\Rightarrow C} { \infer*[D_{\ell}]{\Gamma\Rightarrow A}{} & \infer*[D_{r}]{\Delta,A\Rightarrow C}{} }$$ with $rk(A)=1$. Then $\vdash^{\alpha}_{1}\Gamma\Rightarrow A$ and $\vdash^{\beta}_{2}\Delta,A\Rightarrow C$ for some $\alpha$ and $\beta$. Moreover, since $\Gamma_{0}\subseteq{\cal H}$ and $C_{0}\in{\cal HM}$ for the end sequent $\Gamma_{0}\Rightarrow C_{0}$, we see from $\vdash^{\alpha_{0}}_{2}\Gamma_{0}\Rightarrow C_{0}$ that any sentence in $\Gamma\cup\Delta$ is in ${\cal H}$, and $C\in{\cal HM}$. Since $\Gamma$ consists solely of Harrop formulas, we can invert the derivation $D_{\ell}$, and climb up the derivation $D_{r}$ with the inverted $D_{\ell}$. This results in a derivation of $\Gamma,\Delta\Rightarrow C$ of depth $dp(D_{\ell})+dp(D_{r})$. Iterating this elimination, we get a derivation of rank 0, and of depth at most exponential in the depth of the given derivation. Though intuitively this would suffice to believe in Theorem \[lem:quickelim\], we have to settle two points: first, why does the iteration eventually terminate? Second, we have to give a succinct argument for the estimated increase of depth. These are not entirely trivial tasks.
It turns out that we need to proceed along the Kleene-Brouwer ordering on well founded trees instead of along the depths. Let us explore this. Let us fix a witnessed derivation $D_{2}$ of $\vdash^{\alpha_{0}}_{2}\Gamma_{0}\Rightarrow C_{0}$. Let $(T_{2},<_{T_{2}})$ denote the wellordering, where $T_{2}\subseteq{}^{<\omega}\omega$ is the naked tree of $D_{2}$ and $<_{T_{2}}$ the Kleene-Brouwer ordering on $T_{2}$. Let us consider infinitary derivations equipped with additional information as in [@Mintsfinite]. \[df:derivation\] [An]{} infinitary derivation [is a sextuple]{} $D=(T,Seq,Rule, rk,ord,kb)$ [which enjoys the following conditions. The naked tree of]{} $D$ [is denoted]{} $T=T(D)$. 1. $T\subseteq{}^{<\omega}\omega$ [is a tree in the sense that there exists a root]{} $r\in T$ [with]{} $\forall a\in T(r\subseteq a)$ [and]{} $\forall a, b(r\subseteq a\subseteq b\in T \Rightarrow a\in T)$. [It is]{} not assumed [that the empty node]{} $\emptyset$ [is the root, nor that]{} $a*\langle n\rangle\in T \,\&\, m<n \Rightarrow a*\langle m\rangle\in T$. 2. $Seq(a)$ [for]{} $a\in T$ [denotes the sequent situated at the node]{} $a$. [If]{} $Seq(a)$ [is a sequent]{} $\Gamma\Rightarrow C$[, then it is denoted]{} $$a:\Gamma\Rightarrow C .$$ 3. $Rule(a)$ [for]{} $a\in T$ [denotes the name of the inference rule with its lower sequent]{} $Seq(a)$. 4. $rk(a)$ [for]{} $a\in T$ [denotes the rank of the inference rule]{} $Rule(a)$. 5. $ord(a)$ [for]{} $a\in T$ [denotes the ordinal]{}$<\varepsilon_{0}$ [attached to]{} $a$. 6. [The quintuple]{} $(T,Seq,Rule, rk,ord)$ [has to be locally correct with respect to]{} $\widehat{ID}^{i\infty}({\cal HM})$[, and the tree]{} $T$ [has to be well founded.]{} [Besides these conditions an extra information is provided by a]{} labeling function $kb$. $kb:T\to T_{2}$ [is a function such that for]{} $a,b\in T$ 1. $$\label{eq:kb1} kb(a)\neq kb(b) \Rightarrow [a<_{T}b \Leftrightarrow kb(a)<_{T_{2}}kb(b)]$$ [for the]{} Kleene-Brouwer ordering $<_{T}$ [on]{} $T$. 2.
$$\label{eq:kb2} a\subset b \Rightarrow kb(b)<_{T_{2}}kb(a)$$ [where]{} $a\subset b$ [means that]{} $a$ [is a proper initial segment of]{} $b$ [for]{} $a,b\in{}^{<\omega}\omega$. 3. [Let]{} $c\in T$ [be a node with]{} $rk(c)=1$[, and]{} $a,b\in T$ [nodes such that]{} $c*\langle \ell\rangle\subseteq a$ [and]{} $c*\langle r\rangle\subseteq b$ [for]{} $\ell<r$. [(This means that]{} $Seq(a)$ [\[]{}$Seq(b)$[\] is in the left \[right\] upper part of the cut inference]{} $Rule(c)$[.) Suppose that the]{} right cut formula $A$ [in the antecedent of]{} $Seq(c*\langle r\rangle)$ [has an]{} ancestor [in]{} $Seq(b)$. [Then]{} $$\label{eq:kb3} kb(a)\neq kb(b)$$ The condition (\[eq:kb2\]) on $kb$ ensures that the depth of $T$ is at most the order type of $<_{T_{2}}$. It is easy to see that the Kleene-Brouwer ordering $<_{T_{2}}$ is a well ordering, and its order type is bounded by $\omega^{\alpha}+1$ for the depth $\alpha$ of the primitive recursive and wellfounded tree $T_{2}$. \[lem:KB\] The transfinite induction schema (for arithmetical formulas) along the Kleene-Brouwer ordering $<_{T_{2}}$ is provable in HA. [**Proof**]{}. Since the transfinite induction schema along a standard $\varepsilon_{0}$-ordering is provable in HA up to each $\alpha<\varepsilon_{0}$, the same holds for the tree ordering $\{(b,a): a\subset b, a,b\in T_{2}\}$ (bar induction). Now let $X$ be a formula, and assume that $X$ is progressive with respect to $<_{T_{2}}$: $$\forall a\in T_{2}[\forall b<_{T_{2}}a\, X(b) \to X(a)] .$$ Let $${\sf j}[X](a):\Leftrightarrow \forall y\in T_{2}[y\supseteq a \to \forall x<_{T_{2}}y\, X(x) \to \forall x<_{T_{2}}a\, X(x)] .$$ Then we see that ${\sf j}[X]$ is progressive with respect to the tree ordering. Therefore ${\sf j}[X](r)$ holds for the root $r\in T_{2}$, and by letting $y$ be the leftmost leaf in $T_{2}$ we have $\forall x<_{T_{2}}r\, X(x)$. The progressiveness of $X$ with respect to $<_{T_{2}}$ yields $X(r)$. $\Box$ The following lemmas are proved as usual.
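For concreteness, here is a minimal sketch (again not taken from the paper) of the Kleene-Brouwer ordering on finite sequences of naturals, as used for the nodes of the naked tree $T_{2}$: $a<_{KB}b$ iff $a$ properly extends $b$, or at the first position where they differ $a$ has the smaller entry. Deeper nodes are thus KB-smaller than their ancestors, which is why condition (\[eq:kb2\]) bounds the depth of $T$ by the order type of $<_{T_{2}}$.

```python
from functools import cmp_to_key

def kb_less(a, b):
    """Return True iff a <_KB b for tuples a, b of natural numbers."""
    # compare entrywise up to the first position where the sequences differ
    for x, y in zip(a, b):
        if x != y:
            return x < y
    # one is an initial segment of the other: the longer (deeper) node is smaller
    return len(a) > len(b)

# children are KB-smaller than their parent ...
assert kb_less((0, 1), (0,))
# ... and siblings are ordered left to right
assert kb_less((0, 1), (0, 2))

# linearizing a finite tree by KB yields a left-to-right, post-order traversal:
tree = [(), (0,), (0, 0), (0, 1), (1,)]
print(sorted(tree, key=cmp_to_key(lambda a, b: -1 if kb_less(a, b) else 1)))
# -> [(0, 0), (0, 1), (0,), (1,), ()]
```

The printed linearization visits each subtree before its root, from left to right, which is exactly the sense in which eliminating cuts "along $<_{T_{2}}$" proceeds through the tree.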
\[lem:weakfalse\] Let $D=(T,Seq,Rule, rk,ord,kb)$ be a derivation of rank 1, and of a sequent $\Gamma\Rightarrow A$ such that $\Gamma\subseteq{\cal H}$ and $A\in{\cal HM}\,\&\, rk(A)=1$. For any $\Delta$, there exists a derivation $D*\Delta=(T,Seq*\Delta,Rule, rk,ord,kb)$ of the sequent $\Gamma,\Delta\Rightarrow A$. \[lem:inversion\](Inversion Lemma)\ Let $D=(T,Seq,Rule, rk,ord,kb)$ be a derivation of rank 1, and of a sequent $\Gamma\Rightarrow A$ such that $\Gamma\subseteq{\cal H}$ and $A\in{\cal HM}\,\&\, rk(A)=1$. 1. \[lem:inversion1\] If $A\equiv B_{0}\lor B_{1}$, then there exists a derivation $D_{i}=(T,Seq_{i},Rule_{i}, rk,ord,kb)$ of rank 1 and of a sequent $\Gamma\Rightarrow B_{i}$ for an $i=0,1$. 2. \[lem:inversion2\] If $A\equiv B_{0}\land B_{1}$, then there exist derivations $D_{i}=(T_{i},Seq_{i},Rule_{i}, rk,ord,kb)$ of rank 1 and of sequents $\Gamma\Rightarrow B_{i}$ for any $i=0,1$, where $T_{i}\subseteq T$ by pruning. 3. \[lem:inversion3\] If $A\equiv \exists x B(x)$, then there exists a derivation $D_{n}=(T,Seq_{n},Rule_{n}, rk,ord,kb)$ of rank 1 and of a sequent $\Gamma\Rightarrow B(\bar{n})$ for an $n\in\omega$. 4. \[lem:inversion4\] If $A\equiv \forall x B(x)$, then there exist derivations $D_{n}=(T_{n},Seq_{n},Rule_{n}, rk,ord,kb)$ of rank 1 and of a sequent $\Gamma\Rightarrow B(\bar{n})$ for any $n\in\omega$, where $T_{n}\subseteq T$ by pruning. 5. \[lem:inversion5\] If $A\equiv B_{0}\to B_{1}$, then there exists a derivation $D^{\prime}=(T,Seq^{\prime},Rule^{\prime}, rk,ord,kb)$ of rank 1 and of the sequent $\Gamma,B_{0}\Rightarrow B_{1}$. 6. \[lem:inversion6\] If $A\equiv I(t)$, then there exists a derivation $D^{\prime}=(T,Seq^{\prime},Rule^{\prime}, rk,ord,kb)$ of rank 1 and of the sequent $\Gamma\Rightarrow \Phi(I,t)$.
\[df:KBJ\] [For each cut inference]{} $J$ [in a derivation]{} $D$, $KB(J)$ [denotes]{} $kb(a)\in T_{2}$ [with the]{} left upper node $a$ [of]{} $J$[:]{} $$\infer[J]{\Gamma,\Delta\Rightarrow C} { a :\Gamma\Rightarrow A & \Delta,A\Rightarrow C }$$ Let us define a cut-eliminating operator $ce_{1}(D)$ for derivations $D=(T,Seq,Rule, rk,ord,kb)$ of rank 1 and of an end sequent $\Gamma_{0}\Rightarrow C_{0}$ with $\Gamma_{0}\subseteq{\cal H}$ and $C_{0}\in {\cal HM}$. If $D$ is of rank 0, then $ce_{1}(D):=D$. Assume that $D$ contains a cut inference of rank 1. Pick the leftmost cut of rank 1: $$D= \infer*{\Gamma_{0}\Rightarrow C_{0}} { \infer[(cut)] {\Gamma,\Delta\Rightarrow C} { \infer*[D_{\ell}]{\Gamma\Rightarrow A}{} & \infer*[D_{r}]{\Delta,A\Rightarrow C}{} } }$$ The leftmostness means that $KB(J)$ is least in the Kleene-Brouwer ordering $<_{T_{2}}$. By recursion on the depth [^2] of the derivation $D_{r}$ we define a derivation $ce_{2}(D_{\ell},D_{r})$ of $\Gamma,\Delta\Rightarrow C$. Then $ce_{1}(D)$ is obtained from $D$ by pruning $D_{\ell}$ and replacing $D_{r}$ by $ce_{2}(D_{\ell},D_{r})$, i.e., by grafting $ce_{2}(D_{\ell},D_{r})$ onto the trunk of $D$ up to $\Gamma,\Delta\Rightarrow C$. As in Lemma 3.2 of [@Mintsmono], the construction of $ce_{2}(D_{\ell},D_{r})$ is fairly standard, leaving the resulting cut inferences of rank 0, but it has to be performed in parallel. Let $\Gamma\cup\Delta\subseteq{\cal H}$ and $\mbox{\boldmath$A$} =A_{1},\ldots,A_{k}$ be a finite sequence of ${\cal HM}$-sentences. Let $\mbox{\boldmath$D$} _{\ell}=D_{\ell,1},\ldots,D_{\ell,k}$ be rank 0 derivations of $\Gamma\Rightarrow A_{i}$, and $D_{r}$ a rank 1 derivation of $\Delta,\mbox{\boldmath$A$} \Rightarrow C$. We will eliminate the cuts with the cut formulas $A_{i}$ in parallel. $ce_{2}(\mbox{\boldmath$D$} _{\ell},D_{r})$ is defined from the resulting derivation, denoted $E$, by recursion.
$$D_{a}= \infer[(cut)] {a: \Gamma,\Delta\Rightarrow C} { \infer*[\mbox{\boldmath$D$} _{\ell}]{\mbox{\boldmath$b$} : \Gamma\Rightarrow \mbox{\boldmath$A$} }{} & \infer*[D_{r}]{c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C}{} }$$ denotes the series of cut inferences: $$\infer {a: \Gamma,\Delta\Rightarrow C} { \infer*[D_{\ell,k}]{b_{k}: \Gamma\Rightarrow A_{k}}{} & \infer*{c_{k}: \Gamma,\Delta,A_{k}\Rightarrow C} { \infer{c_{2}: \Gamma,\Delta,A_{2},\ldots,A_{k}\Rightarrow C} { \infer*[D_{\ell,1}]{b_{1}: \Gamma\Rightarrow A_{1}}{} & \infer*[D_{r}]{c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C}{} } } }$$ 1. \[1\] If $\Delta,\mbox{\boldmath$A$} \Rightarrow C$ is an initial sequent such that one of the cases $C\equiv\top$, $\bot\in\Delta$ or $C\in\Delta$ occurs, then $\Delta\Rightarrow C$, and hence $\Gamma,\Delta\Rightarrow C$ is still the same kind of initial sequent. For example $$\infer[(Rep)] {a:\Gamma,\Delta\Rightarrow \top} { \infer*{c_{k}: \Gamma,\Delta\Rightarrow \top} { \infer[(Rep)]{c_{2}: \Gamma,\Delta\Rightarrow \top} {c_{1}: \Gamma,\Delta\Rightarrow \top} } }$$ The tree $T(E)$ is defined by $$d\in T(E) \Leftrightarrow d\in T(D) \,\&\, \forall i(b_{i}\not\subseteq d) .$$ 2. \[2\] If $\Delta,\mbox{\boldmath$A$} \Rightarrow C$ is an initial sequent with the principal formula $\mbox{\boldmath$A$} \ni A_{i}\equiv C\equiv I(t)$, then $E$ is defined to be $$\infer[(Rep)] {a:\Gamma,\Delta\Rightarrow C} { \infer*{c_{k}: \Gamma,\Delta\Rightarrow C} { \infer[(Rep)]{c_{i+1}: \Gamma,\Delta\Rightarrow C} { \infer*[D_{\ell,i}*\Delta]{b_{i}: \Gamma,\Delta\Rightarrow C}{} } } }$$ where $D_{\ell,i}*\Delta$ is obtained from $D_{\ell,i}$ by weakening, cf. Lemma \[lem:weakfalse\]. $$d\in T(E) \Leftrightarrow d\in T(D) \,\&\, \forall j\neq i(b_{j}\not\subseteq d) \,\&\, c_{i}\not\subseteq d .$$ 3. \[3\] If $A_{i}\in\mbox{\boldmath$A$} $ is of rank 0, then do nothing for the cut inference of $A_{i}$. In each of the above cases $T(E)\subseteq T(D_{a})$.
The labeling function $kb_{E}$ for $E$ is defined to be the restriction of $kb_{D_{a}}$ to $T(E)$. In what follows assume that $\Delta,\mbox{\boldmath$A$} \Rightarrow C$ is a lower sequent of an inference rule $J$. 4. \[4\] If the principal formula of $J$ is not in $\mbox{\boldmath$A$} $, then lift up $\mbox{\boldmath$D$} _{\ell}$: $$\infer{a:\Gamma,\Delta\Rightarrow C} { \mbox{\boldmath$b$} :\Gamma\Rightarrow \mbox{\boldmath$A$} & \infer[(J)] {c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C}{\cdots & c_{1,i}: \Delta_{i},\mbox{\boldmath$A$} \Rightarrow C_{i} & \cdots} }$$ where $\mbox{\boldmath$b$} =b_{k},\ldots,b_{1}$ with $b_{j}=c_{j+1}*\langle \ell_{j}\rangle\, (c_{k+1}:=a)$, and $c_{j}=c_{j+1}*\langle r_{j}\rangle$ for some $\ell_{j}<r_{j}$, and $c_{1,i}=c_{1}*\langle n_{i}\rangle$ for some $n_{i}$ with $i<j\Rightarrow n_{i}<n_{j}$. $E$ is defined as follows. $$\infer[(J)] {a: \Gamma,\Delta\Rightarrow C} { \cdots & \infer {a_{i}:\Gamma,\Delta_{i}\Rightarrow C_{i}} {\mbox{\boldmath$b$} _{i}: \Gamma\Rightarrow \mbox{\boldmath$A$} & c^{\prime}_{1,i}: \Delta_{i},\mbox{\boldmath$A$} \Rightarrow C_{i}} & \cdots }$$ where $a_{i}=a*\langle n_{i}\rangle$, $\mbox{\boldmath$b$} _{i}=b_{k,i},\ldots,b_{1,i}$ with $b_{j,i}=c^{\prime}_{j+1,i}*\langle \ell_{j}\rangle\, (c^{\prime}_{k+1,i}:=a_{i})$, and $c^{\prime}_{j,i}=c^{\prime}_{j+1,i}*\langle r_{j}\rangle$. The labeling function $kb_{E}$ is defined by $$\begin{aligned} kb_{E}(a*\langle n_{i}\rangle) & = & kb_{D_{a}}(a*\langle r_{k}\rangle)=kb_{D_{a}}(c_{k}), \\ kb_{E}(a*\langle n_{i}\rangle*\langle r_{k},\ldots, r_{j+1},\ell_{j}\rangle*d)& = & kb_{D_{a}}(a*\langle r_{k},\ldots, r_{j+1},\ell_{j}\rangle*d) \, (1\leq j\leq k), \\ kb_{E}(a*\langle n_{i}\rangle*\langle r_{k},\ldots, r_{1}\rangle*d)& = & kb_{D_{a}}(a*\langle r_{k},\ldots, r_{1}\rangle*\langle n_{i}\rangle*d)\end{aligned}$$ 5. \[5\] Finally suppose that the principal formula of $J$ is a cut formula $A_{i}\in\mbox{\boldmath$A$} $ of $rk(A_{i})=1$. 
Use the Inversion Lemma \[lem:inversion\]. 1. \[5a\] The case when $A_{i}\equiv \exists x B(x)\in\mbox{\boldmath$A$} $. For simplicity suppose $i=1$. $$\infer[(L\exists)] {c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C} { \cdots & \infer*[D_{r,n}]{c_{1,n}: \Delta, \mbox{\boldmath$A$} _{1}, B(\bar{n})\Rightarrow C}{} & \cdots }$$ where $A_{1}\not\in\mbox{\boldmath$A$} _{1}$. By Inversion Lemma \[lem:inversion\].\[lem:inversion3\] pick an $n$ such that $\Gamma\Rightarrow B(\bar{n})$ is provable without changing the naked tree. $E$ is defined as follows. $$\infer {a: \Gamma,\Delta\Rightarrow C} { \mbox{\boldmath$b$} _{1}: \Gamma\Rightarrow \mbox{\boldmath$A$} _{1} & \infer{c_{2}: \Gamma,\Delta,\mbox{\boldmath$A$} _{1}\Rightarrow C} { b_{1}: \Gamma\Rightarrow B(\bar{n}) & \infer[(Rep)]{c_{1}: \Delta, \mbox{\boldmath$A$} _{1}, B(\bar{n})\Rightarrow C} { \infer*[D_{r,n}]{c_{1,n}: \Delta, \mbox{\boldmath$A$} _{1}, B(\bar{n})\Rightarrow C}{} } } }$$ 2. \[5b\] The case when $A_{i}\equiv H\to A_{0}\in\mbox{\boldmath$A$} $ with an $H\in{\cal H}$ and an $A_{0}\in{\cal HM}$. For simplicity suppose $i=1$. $$\infer[(L\to)] {c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C} { c_{1,\ell}: \Delta,\mbox{\boldmath$A$} \Rightarrow H & c_{1,r}: \Delta,\mbox{\boldmath$A$} _{1},A_{0}\Rightarrow C }$$ where $A_{1}\not\in\mbox{\boldmath$A$} _{1}$, and for $m=\ell, r$, $c_{1,m}=c_{1}*\langle j_{m}\rangle$ with $j_{\ell}<j_{r}$. $E$ is defined as follows. 
[$$\infer {a: \Gamma,\Delta \Rightarrow C} { \infer{c_{k,0}: \Gamma,\Delta\Rightarrow H} { \mbox{\boldmath$b$} _{0}: \Gamma\Rightarrow \mbox{\boldmath$A$} & \infer[(Rep)]{c_{1,0}: \Delta,\mbox{\boldmath$A$} \Rightarrow H}{c_{1,0,\ell}: \Delta,\mbox{\boldmath$A$} \Rightarrow H} } & \infer{c_{k,1}: \Gamma,\Delta,H\Rightarrow C} { \mbox{\boldmath$b$} _{1}: \Gamma\Rightarrow \mbox{\boldmath$A$} _{1} & \hspace{-10mm} \infer{c_{2,1}: \Gamma,\Delta,H,\mbox{\boldmath$A$} _{1}\Rightarrow C} { b_{1,1}: \Gamma,H\Rightarrow A_{0} & \infer[(Rep)]{c_{1,1}: \Delta,\mbox{\boldmath$A$} _{1}\cup\{A_{0}\}\Rightarrow C}{c_{1,1,r}: \Delta,\mbox{\boldmath$A$} _{1}\cup\{A_{0}\}\Rightarrow C} } } }$$ ]{} where $\Gamma,H\Rightarrow A_{0}$ by inversion. For $m=0,1$, $c_{j,m}=c_{j+1,m}*\langle 2r_{j}+m\rangle$ for $1\leq j\leq k$ with $c_{k+1,m}=a$, and $\mbox{\boldmath$b$} _{m}=b_{k,m},\ldots,b_{1,m}$ with $b_{j,m}=a*\langle 2r_{k}+m,\ldots, 2r_{j}+m,2\ell_{j}+m\rangle$ and $c_{1,0,\ell}=c_{1,0}*\langle j_{\ell}\rangle$, $c_{1,1,r}=c_{1,1}*\langle j_{r}\rangle$. The labeling function is defined by $$\begin{aligned} kb_{E}(c_{j,m}) & = & kb_{D_{a}}(c_{j}) \label{eq:labelex} \\ kb_{E}(b_{j,m}*d) & = & kb_{D_{a}}(b_{j}*d), \, (m=0,1) \nonumber\end{aligned}$$ 3. \[5c\] The case when $A_{i}\equiv \forall x B(x) \in\mbox{\boldmath$A$} $. For simplicity suppose $i=1$. $$\infer[(L\forall)] {c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C} { c_{1,n}: \infer*[D_{r,n}]{\Delta, \mbox{\boldmath$A$} , B(\bar{n})\Rightarrow C}{} }$$ with $c_{1,n}=c_{1}*\langle j_{n}\rangle$. $E$ is defined as follows. $$\infer {a: \Gamma,\Delta\Rightarrow C} { \mbox{\boldmath$b$} : \Gamma\Rightarrow \mbox{\boldmath$A$} & \infer{c_{1}: \Gamma,\Delta,\mbox{\boldmath$A$} \Rightarrow C} { b_{1,1}: \Gamma\Rightarrow B(\bar{n}) & \infer*[D_{r,n}]{c^{\prime}_{1,n}: \Delta, \mbox{\boldmath$A$} , B(\bar{n})\Rightarrow C}{} } }$$ where $b_{1,1}=c_{1}*\langle 2 j_{n}\rangle$ and $c^{\prime}_{1,n}=c_{1}*\langle 2 j_{n}+1\rangle$. 
The labeling function is defined by $$\label{eq:labelex5b} kb_{E}(b_{1,1}*d) = kb_{D_{a}}(b_{1}*d)$$ and $$kb_{E}(c^{\prime}_{1,n})=kb_{D_{a}}(c_{1,n}) .$$ 4. \[5d\] The case when $A_{i}\equiv B_{0}\lor B_{1}\in\mbox{\boldmath$A$} $. For simplicity suppose $i=1$. $$\infer[(L\lor)] {c_{1}: \Delta,\mbox{\boldmath$A$} \Rightarrow C} { \infer*[D_{r,0}]{c_{1,0}: \Delta, \mbox{\boldmath$A$} _{1}, B_{0}\Rightarrow C}{} & \infer*[D_{r,1}]{c_{1,1}: \Delta, \mbox{\boldmath$A$} _{1}, B_{1}\Rightarrow C}{} }$$ where $A_{1}\not\in\mbox{\boldmath$A$} _{1}$. By Inversion Lemma \[lem:inversion\].\[lem:inversion1\] pick an $n=0,1$ such that $\Gamma\Rightarrow B_{n}$ is provable without changing the naked tree. Suppose that $n=0$. $E$ is defined as follows. $$\infer {a: \Gamma,\Delta\Rightarrow C} { \mbox{\boldmath$b$} _{1}: \Gamma\Rightarrow \mbox{\boldmath$A$} _{1} & \infer{c_{2}: \Gamma,\Delta,\mbox{\boldmath$A$} _{1}\Rightarrow C} { b_{1}: \Gamma\Rightarrow B_{0} & \infer[(Rep)]{c_{1}: \Delta, \mbox{\boldmath$A$} _{1}, B_{0}\Rightarrow C} { \infer*[D_{r,0}]{c_{1,0}: \Delta, \mbox{\boldmath$A$} _{1}, B_{0}\Rightarrow C}{} } } }$$ Note that the new cut inference for $B_{0}$ may be of rank 0. 5. \[5e\] The case when $A_{i}\equiv B_{0}\land B_{1} \in\mbox{\boldmath$A$} $ is treated as in the case \[5c\] for universal quantifier. \[clm:kb\] The resulting derivation $ce_{1}(D)$ can be labeled enjoying the conditions (\[eq:kb1\]), (\[eq:kb2\]) and (\[eq:kb3\]). [**Proof**]{}. Let $D_{a}$ be the trunk ending with the leftmost cut of rank 1 in $D$. First observe that the labels $\{kb_{E}(b): b\in T(E)\}\subseteq\{kb_{D_{a}}(b):b\in T(D_{a})\}$. Therefore it suffices to see that $E$, and hence $ce_{2}(\mbox{\boldmath$D$} _{\ell},D_{r})$ enjoys the three conditions if $D_{a}$ does. 
Note that (the naked tree of) $E$ is constructed from $D_{r}$ by appending the trees $\mbox{\boldmath$D$} _{\ell}$ only where a right cut formula $A_{i}$ has an ancestor which is either a formula of rank 0 or a principal formula of an initial sequent $\Gamma,I(t)\Rightarrow I(t)$. In the latter case the ancestor has to be the formula $I(t)$ in the antecedent. $E$ enjoys the first condition (\[eq:kb1\]) since $D_{a}$ does the first (\[eq:kb1\]). $E$ enjoys the second (\[eq:kb2\]) since $D_{a}$ does the third (\[eq:kb3\]). $E$ enjoys the third (\[eq:kb3\]) since $D_{a}$ does the third (\[eq:kb3\]) and the first (\[eq:kb1\]). Let us examine the cases. Consider the case \[4\] when $\mbox{\boldmath$D$} _{\ell}$ is lifted up. We have to show $$kb_{E}(e)<_{T_{2}}kb_{E}(c^{\prime}_{j,i})$$ for $e$ such that $b_{j,i}\subseteq e$. Then $kb_{E}(e)=kb_{E}(a*\langle n_{i}\rangle*\langle r_{k},\ldots, r_{j+1},\ell_{j}\rangle*d)= kb_{D_{a}}(a*\langle r_{k},\ldots, r_{j+1},\ell_{j}\rangle*d)=kb_{D_{a}}(b_{j}*d)$ and $kb_{E}(c^{\prime}_{j,i})=kb_{D_{a}}(c_{j})$. Since the right cut formula $A_{i}$ has an ancestor in $c_{1,i}:\Delta,\mbox{\boldmath$A$} \Rightarrow C_{i}$, $kb_{E}(e)<_{T_{2}}kb_{E}(c^{\prime}_{j,i})$ follows from (\[eq:kb3\]) for $D_{a}$. Next consider the case \[5b\]. Although $b_{j,m}, c_{j,m}$ are duplicated for $m=0,1$, with $kb_{E}(c_{j,0}) = kb_{D_{a}}(c_{j})=kb_{E}(c_{j,1})$ and $kb_{E}(b_{j,1}*d) = kb_{D_{a}}(b_{j}*d)=kb_{E}(b_{j,0}*d)$ by (\[eq:labelex\]) and (\[eq:labelex5b\]), these duplications are harmless for (\[eq:kb3\]) since the juncture is a cut of rank 0, $rk(H)=0$. Finally consider the case \[5c\]. By (\[eq:labelex5b\]) we have $$kb_{E}(b_{1,1}*d) = kb_{D_{a}}(b_{1}*d)=kb_{E}(b_{1}*d)$$ but the right cut formula $A_{i}$ has no ancestor in $b_{1,1}:\Gamma\Rightarrow B(\bar{n})$. Thus (\[eq:kb3\]) is enjoyed. Next, for (\[eq:kb2\]) we have $$kb_{D_{a}}(c_{j})<_{T_{2}}kb_{D_{a}}(b_{1}*d)=kb_{E}(b_{1,1}*d)$$ by (\[eq:kb2\]) and (\[eq:kb3\]) in $D_{a}$.
Finally for (\[eq:kb1\]) assume $j\neq 1$ and $$kb_{D_{a}}(b_{1}*d)=kb_{E}(b_{1,1}*d)\neq kb_{E}(b_{j}*e)= kb_{D_{a}}(b_{j}*e) .$$ Then by (\[eq:kb1\]) in $D_{a}$ we have $b_{j}*e<_{T(D_{a})}b_{1}*d$, and hence $kb_{D_{a}}(b_{j}*e) <_{T_{2}}kb_{D_{a}}(b_{1}*d)$. $\Box$ This ends the construction of the cut-eliminating operator $ce_{1}(D)$. Finally we show Theorem \[lem:quickelim\]. Let $D_{2}=(T_{2},Seq,Rule, rk,ord,kb)$ be a derivation of $\Gamma_{0}\Rightarrow C_{0}$ of rank 1, and assume $\Gamma_{0}\subseteq{\cal H}$ and $C_{0}\in {\cal HM}$. $(T_{2},<_{T_{2}})$ denotes the Kleene-Brouwer ordering on the naked tree $T_{2}$. Let $KB(D):=KB(J)$ for the leftmost cut inference $J$ of rank 1 if such a $J$ exists. Otherwise let $KB(D)$ denote the largest element in $T_{2}$ with respect to $<_{T_{2}}$, i.e., the root of $T_{2}$. Then we see that $KB(D)<_{T_{2}}KB(ce_{1}(D))$ if $D$ contains a cut inference of rank 1. Suppose as the induction hypothesis that every cut inference $J$ of rank 1 with $KB(J)<a$ has been eliminated, and let $D$ denote such a derivation. Also assume that $a$ is the node of a cut inference of rank 1. Then in $ce_{1}(D)$ this cut inference is eliminated. This proves Theorem \[lem:quickelim\] by induction along the Kleene-Brouwer ordering $<_{T_{2}}$, cf. Lemma \[lem:KB\]. [99]{} T. Arai, Some results on cut-elimination, provable well-orderings, induction and reflection, Annals of Pure and Applied Logic vol. 95 (1998), pp. 93-184. T. Arai, Non-elementary speed-ups in logic calculi, Mathematical Logic Quarterly vol. 6 (2008), pp. 629-640. W. Buchholz, An intuitionistic fixed point theory, Arch. Math. Logic 37 (1997), pp. 21-27. S. Feferman, Iterated inductive fixed-point theories: Applications to Hancock’s conjecture, in: G. Metakides, ed., Patras Logic Symposion (North-Holland, Amsterdam, 1982), pp. 171-196. N. Goodman, Relativized realizability in intuitionistic arithmetic of all finite types, J. Symb. Logic 43 (1978), pp. 23-44. G.E. 
Mints, Finite investigations of transfinite derivations, in: Selected Papers in Proof Theory (Bibliopolis, Napoli, 1992), pp. 17-72. G. E. Mints, Quick cut-elimination for monotone cuts, in: Games, Logic, and Constructive Sets (Stanford, CA, 2000), CSLI Lecture Notes 161, CSLI Publ., Stanford, CA, 2003, pp. 75-83. C. Rüede and T. Strahm, Intuitionistic fixed point theories for strictly positive operators, Math. Log. Quart. 48 (2002), pp. 195-202. A.S. Troelstra, Metamathematical Investigation of Intuitionistic Arithmetic and Analysis, Lecture Notes in Mathematics 344 (Springer, Berlin Heidelberg New York, 1973). [^1]: Dedicated to the occasion of Grisha Mints’ 70th birthday [^2]: As in [@Mintsfinite] we see that the operators $ce_{1}, ce_{2}$ are primitive recursive. We don’t need this fact.
--- abstract: 'We report the first demonstration of thermally controlled soliton modelocked frequency comb generation in microresonators. By controlling the electric current through heaters integrated with silicon nitride microresonators, we demonstrate a systematic and repeatable pathway to single- and multi-soliton modelocked states without adjusting the pump laser wavelength. Such an approach could greatly simplify the generation of modelocked frequency combs and facilitate applications such as chip-based dual-comb spectroscopy.' author: - Chaitanya Joshi - 'Jae K. Jang' - Kevin Luke - Xingchen Ji - 'Steven A. Miller' - Alexander Klenner - Yoshitomo Okawachi - Michal Lipson - 'Alexander L. Gaeta' title: Thermally Controlled Comb Generation and Soliton Modelocking in Microresonators --- Optical frequency comb generation is a revolutionary technology that enables new capabilities in spectroscopy [@Diddams2007], time and frequency metrology [@Udem2002], optical arbitrary waveform generation [@Jiang2007], low-noise radio frequency (RF) signal generation [@Fortier2011], and optical clockwork [@DiddamsClockwork; @Newbury2011]. Recently, there has been a significant development in frequency comb technology based on microresonators with demonstrations in calcium fluoride [@Savchenkov2008], magnesium fluoride (MgF$_2$) [@Liang11; @Herr2014], silica [@DelHaye2007; @Jiang2012; @vahala], aluminum nitride [@Jung14], diamond [@Haussmann2014], silicon [@Griffith2015], and silicon nitride (Si$_3$N$_4$) [@Foster11; @Wang13; @Brasch2016; @kasturi; @huang2015; @Xue2015]. Si$_3$N$_4$ has emerged as a particularly attractive platform for chip-scale frequency comb generation, since it uses a CMOS-compatible fabrication process [@Moss2013] and allows for integration of electronics and optical elements in a compact, robust, and portable device. 
This will open up applications of frequency combs to a wider range of environments than current frequency comb sources, which are mostly limited to controlled laboratory environments. In order to utilize microresonator-based combs for precision time and frequency applications, the comb output must be in the low-noise modelocked state [@kasturi; @Herr2014]. In microresonators, the generation of a single- or multi-soliton state in the ring corresponds to passive modelocking and ultrashort pulse formation. To date, single-soliton states have been generated in microresonators using pump frequency tuning in MgF$_2$ [@Herr2014], silica [@vahala], and Si$_3$N$_4$ [@kasturi], and with pump power control in Si$_3$N$_4$ [@Brasch2016]. A definitive route to soliton modelocking in microresonators by varying the frequency or the power of the pump laser has been described theoretically [@Lamont13; @Matsko11; @Coen13; @Chembo2013; @Villegas15]. By tuning the frequency of the pump laser, the generated comb transitions through a sequence of distinct states to ultimately reach the low-noise soliton state [@Luo15]. However, the use of laser frequency tuning in comb generation has drawbacks. The performance of the comb is limited by the linewidth and the amplitude noise of the pump, since the dynamics of frequency comb generation is governed by parametric four-wave mixing (FWM) [@Herr2012; @fwmphase]. Tunable lasers are relatively noisy and have broader linewidths, typically of the order of a hundred kHz. In contrast, fixed-frequency lasers can be operated with significantly lower noise and narrower linewidths than tunable lasers, as the laser cavity is monolithic and the lack of moving components eliminates sources of noise. Additionally, by locking the output to a frequency reference, the linewidth can be further reduced. 
Recent demonstrations of locked fixed-frequency lasers have shown linewidths of $\leqslant$40 mHz [@Kessler2012]. Using a locked low-noise and narrow linewidth fixed-frequency laser as the pump in place of tunable lasers will significantly reduce the noise on the generated comb lines. In addition, with the pump laser frequency fixed, the only uncertain parameter to fully determine the frequencies of the comb lines becomes the free spectral range (FSR). Locking the FSR will allow for a fully-stabilized comb where the frequency of each comb line can be determined. Furthermore, control of the cavity resonance frequency rather than the pump frequency allows for the simultaneous generation of modelocked frequency combs in multiple resonators on a single chip using a single fixed-frequency pump laser. This is essential for applications such as dual-comb spectroscopy [@Keilmann04], which requires two frequency comb sources with slightly different FSRs. Thermal tuning of the resonance has previously been demonstrated using pump power control [@Brasch2016], electro-optic tuning [@Jung14], and integrated heaters [@Xue2015] for comb generation. Here, we report the first demonstration of soliton modelocking in Si$_3$N$_4$ microresonators using integrated heaters for thermal control of the cavity resonance. Current control of integrated heaters results in a change in the waveguide refractive index due to the thermo-optic effect, which changes the resonant frequency of the cavity [@Cunningham10]. We present a repeatable and systematic method for achieving low-noise single- and multi-soliton states using a narrow linewidth fixed-frequency laser as the pump. ![(a) The pump power transmission as the tunable laser frequency is scanned across the resonance. The step-like structure characteristic of soliton formation is indicated by the arrow. (b) The measured optical spectrum for a single-soliton modelocked state with the fitted sech$^2$-pulse spectrum (blue dashed line). 
The 3 dB bandwidth of the soliton is 24 nm.[]{data-label="fig:piezo"}](Fig1.eps){width="\linewidth"} In our experiment, we use an oxide-clad Si$_3$N$_4$ microring resonator with a FSR of 200 GHz and a cross section of 950 $\times$ 1500 nm. The waveguide cross section is chosen such that a region of anomalous group-velocity dispersion (GVD) exists near the pump wavelength [@Okawachi14]. Initially, we characterize the resonator using a tunable laser at 1540 nm. We amplify the laser using an erbium-doped fiber amplifier (EDFA) and couple 56 mW of power into the bus waveguide for comb generation. We change the detuning of the laser with respect to the resonance frequency of the microresonator by scanning the laser frequency using piezoelectric tuning. We monitor the transmitted power at the pump mode using a fast photodiode ($\geqslant$12.5 GHz) and observe the optical spectrum on an optical spectrum analyzer (OSA). We modulate the laser frequency using a triangular waveform and record the pump transmission over one resonance scan as seen in Fig. \[fig:piezo\](a). We observe the characteristic step-like structure indicative of the transition into modelocked soliton states as demonstrated in previous work [@Herr2014]. Furthermore, the measured optical spectrum is in agreement with the fitted sech$^2$ spectrum denoted by the dashed blue curve in Fig. \[fig:piezo\](b). ![Experimental setup for generation and characterization of soliton modelocked states in Si$_3$N$_4$ microresonators. We characterize the optical spectrum, RF amplitude noise spectrum, and transmitted pump power simultaneously. Integrated resistive heaters are used to tune the resonance frequency to generate frequency combs.[]{data-label="fig:setup"}](Fig2.eps){width="\linewidth"} To demonstrate the feasibility of generating soliton modelocked combs using thermal tuning, we use a continuous-wave fixed-frequency laser with a narrow linewidth of 1 kHz at a wavelength of 1559.79 nm as the pump laser. 
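As a quick consistency check, the fitted sech$^2$ spectral width can be converted into a transform-limited pulse duration via the standard sech$^2$ time-bandwidth product $\Delta\nu\,\Delta t \approx 0.315$. The short sketch below applies this to the 24 nm bandwidth quoted above; the function name and the use of the 3 dB width as the spectral FWHM are our assumptions, not part of the measurement.

```python
import math

# Transform-limited duration of a sech^2 pulse from its 3 dB spectral width,
# assuming the textbook sech^2 time-bandwidth product dnu * dt = 0.315.
C = 299792458.0  # speed of light, m/s

def sech2_pulse_duration(center_wavelength_m, bandwidth_3db_m):
    """FWHM duration (seconds) of a transform-limited sech^2 pulse."""
    dnu = C * bandwidth_3db_m / center_wavelength_m**2  # spectral FWHM in Hz
    return 0.315 / dnu

tau = sech2_pulse_duration(1559.79e-9, 24e-9)  # ~100 fs scale
print(f"transform-limited duration: {tau * 1e15:.1f} fs")
```

For the 24 nm bandwidth near 1560 nm this gives a duration on the order of 100 fs, consistent with expectations for a dissipative Kerr soliton in a 200 GHz resonator.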
We amplify the output using a high power EDFA and couple 71 mW into the bus waveguide using a lensed fiber. For comb generation, the nearest resonance frequency is tuned by varying electric current through the integrated platinum resistive heaters, which have an electrical resistance of 240 $\Omega$. We require about 150 mW of electrical power to tune the nearest resonance frequency close to the pump laser frequency. We monitor the optical spectrum, RF spectrum, and transmitted pump power of the generated comb. Figure \[fig:setup\] shows the setup used for generation and characterization of frequency combs using thermal tuning. The free-space output is collected using a combination of an aspheric lens and a collimator and coupled into a fiber. The light is then split 80:20 using a fiber power splitter, and the smaller fraction of the power is sent to the OSA to record the generated comb spectrum as it transitions through the comb formation dynamics. The remaining power is sent to a wavelength division multiplexing (WDM) filter with a 100-GHz transmission window centered at the pump wavelength. The transmitted light through the filter is sent to a fast photodiode ($\geqslant$12.5 GHz) to monitor the pump transmission as the resonance is tuned. The reflected light from the WDM filter is sent to a second fast photodiode that is used to monitor the RF amplitude noise on the generated comb close to DC (0-900 MHz) using an RF spectrum analyzer at a resolution bandwidth of 100 kHz. ![Oscilloscope trace of the pump transmission as the current on the integrated heater is modulated with a triangular waveform. The steps indicated by the arrows are characteristic of transitions between different multi-soliton states.[]{data-label="fig:trace"}](Fig3.eps){width="\linewidth"} We apply a triangular modulation to the heater current to scan the cavity resonance near the laser wavelength. This corresponds to a 5 mW change in the electrical power applied to the heater. 
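For orientation, the heater numbers quoted above fix the drive conditions through Ohm's law. The minimal sketch below (variable names are ours) estimates the current and voltage behind the 150 mW set point and the current swing corresponding to the 5 mW triangular scan; the real heater resistance will drift slightly with temperature.

```python
import math

# Ohm's-law estimate for the integrated platinum heater quoted in the text:
# R = 240 ohm, ~150 mW to pull the resonance to the pump, 5 mW scan depth.
R_HEATER = 240.0   # ohm
P_TUNE = 0.150     # W, set point to reach the pump wavelength
P_SCAN = 0.005     # W, depth of the triangular modulation

i_tune = math.sqrt(P_TUNE / R_HEATER)   # current at the set point (25 mA)
v_tune = i_tune * R_HEATER              # corresponding voltage (6 V)

# current swing corresponding to the 5 mW scan about the 150 mW set point
i_hi = math.sqrt((P_TUNE + P_SCAN) / R_HEATER)
di = i_hi - i_tune                      # ~0.4 mA
```

The scan therefore amounts to a sub-milliamp modulation on a 25 mA bias, i.e., a few-percent change in heater power.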
Similar to the case with laser frequency tuning, we observe the characteristic steps in the pump power transmission (Fig. \[fig:trace\]), in which each step is indicative of a transition from a higher to a lower number of solitons. The final step consists of a transition from the single-soliton state to the laser frequency dropping out of the cavity resonance. We study the evolution of the comb generation process and observe transitions into various comb states as we change the resonance frequency with respect to the laser frequency (Fig. \[fig:evol\]). As the power in the resonator builds up, we see the primary sidebands form at the parametric gain peak due to degenerate FWM \[Fig. \[fig:evol\](i)\]. The RF amplitude noise at this stage is low since it corresponds to parametric oscillation for a single signal and idler pair. Tuning the resonance further, we see mini-comb formation \[Fig. \[fig:evol\](ii)\] with natively spaced lines near the primary sidebands. The interaction of the separate mini-combs within the cavity manifests on the RF spectrum as a sharp spike. Subsequently, we observe the transition into the broadband high-noise regime \[Fig. \[fig:evol\](iii)\], and the RF noise peak also broadens. Finally, the system undergoes a transition to the single-soliton state \[Fig. \[fig:evol\](iv)\] with the reduction of the RF noise and the optical spectrum showing the characteristic shape of a sech$^2$-pulse spectrum. It is important to note that the single-soliton state can be achieved by scanning through the cavity resonance using thermal tuning at a sufficiently high speed. The temperature of the ring depends on the coupled pump power and the thermal time constant of the ring. The speed of the thermal scan affects the rate at which the coupled pump power in the ring changes. Thus the speed of the scan affects the temperature variation in the ring. 
The soliton state occurs at a certain equilibrium temperature inside the ring, and when we scan the resonance frequency at a slow rate (200 Hz) using a triangular modulation, we observe that the steps corresponding to the soliton formation are narrow and are not consistent from one scan to the next. Here, the scan is significantly slower than the thermal time constant, and the temperature of the ring rises above the equilibrium soliton temperature, which prevents the system from reaching the soliton state consistently. At higher scan speeds (e.g., 10 kHz), we see the steps on the pump transmission are wider and consistent from one scan to the next. Here, the thermal scan speed is closer to the thermal time constant of the ring, and the corresponding variation in temperature of the ring is smaller. The system is consistently able to reach the equilibrium temperature and the soliton state. Similar behavior has been previously reported in [@Herr2014] where the speed of the pump frequency scan affects the reproducibility of the soliton states. ![Evolution of the generated frequency comb spectrum as the cavity resonance is tuned by varying the heater current. The optical and RF spectra as the comb evolves correspond to (i) the initial cascaded FWM, (ii) the mini-comb formation, (iii) the broadband high-noise regime, with the plateau-like optical spectrum and broad noise peak and (iv) the low-noise single-soliton state with a fitted sech$^2$-spectral profile (blue dashed curve). The 3 dB bandwidth of the soliton is 20 nm.[]{data-label="fig:evol"}](Fig4.eps){width="\linewidth"} We start the scan with the laser frequency blue detuned with respect to the resonance frequency of the ring and apply a downward ramp that is at a speed that enables the formation of the soliton state as explained above. This ramp blue shifts the resonance and the comb evolves as shown in Fig. \[fig:evol\]. 
At the end of the ramp before terminating the scan, we apply a small rise in the current that corresponds to a red shift of the resonance. We repeat this current modulation every 200 ms and record a persistence trace of the transmitted pump power lasting 3 seconds. The transmission trace indicates clearly that the system reaches the same final soliton state over all 15 scans as seen in Fig. \[fig:control\](a). The tuning curve of the current modulation can be seen in Fig. \[fig:control\](b). We observe that the red detuning prior to terminating the scan makes the formation of the soliton state more repeatable as compared to a ramp signal without the red shift. The repeatability of the soliton state is affected by drift of the input fiber coupling, which leads to fluctuations in the coupled power. In a packaged device, the issue of input coupling fluctuations will be eliminated since the pump laser will not physically drift with respect to the bus waveguide. A similar result was recently demonstrated using pump frequency tuning with the ‘backward tuning’ method that allows for repeatable soliton formation [@Karpov2016]. Our tuning curve for the resonance frequency \[Fig. \[fig:control\](b)\] is analogous to the ‘backward tuning’ method presented in that work: the downward slope blue-shifts the resonance and the subsequent upward ramp red-shifts it, allowing repeatable soliton formation. Furthermore, we can also switch from a higher number of solitons to a lower number of solitons by slowly increasing the heater current and red shifting the resonance once it is in a stable multi-soliton state. ![(a) A persistence trace of the pump transmission recorded over 3 seconds. We see that the comb returns to the same soliton state over 15 consecutive traces. The modulation signal sent to the current source is shown in (b). The downward slope corresponds to a blue-shift of the resonance. 
The abrupt increase in current red-shifts the resonance and leads to the repeatable generation of the soliton state.[]{data-label="fig:control"}](Fig5.eps){width="\linewidth"} We can choose the final state of the frequency comb by adjusting the red shift before we terminate the scan. By modifying this termination point, we observe different multi-soliton states. The relative positions of the multiple solitons in the ring result in modulations on the sech$^2$-spectral profile. The measured multi-soliton spectra are depicted in Fig. \[fig:multisol\]. Of particular interest is the spectrum depicted in Fig. \[fig:multisol\](a) where every other comb line in the spectrum is extinguished. This is indicative of a two-soliton state with the two solitons exactly half a roundtrip apart, corresponding to harmonic modelocking [@Hirano1969]. ![Measured spectra for three different multi-soliton states (a), (b), and (c). The modulations on the spectra are due to the spectral interference among multiple modelocked solitons within one roundtrip of the cavity (blue dashed line indicates a fitted sech$^2$ envelope for a single-soliton). The spectrum in (a) is indicative of a two soliton state with the pulses half a round trip apart.[]{data-label="fig:multisol"}](Fig6.eps){width="\linewidth"} In conclusion, we report the first demonstration of low-noise single-soliton states in Si$_3$N$_4$ microring resonators using thermal control of integrated heaters. We demonstrate a systematic and repeatable pathway to tune into single- and multi-soliton states by changing the electrical power on the heaters by 5 mW from 150 mW. The system also allows for progressive switching from a higher number of solitons in the cavity toward a single-soliton state. Thermal control enables the use of low-noise fixed-frequency lasers which will lead to monolithic design of a fully integrated chip-scale comb source. 
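The heater-current waveform of Fig. \[fig:control\](b), a slow downward ramp (blue shift) terminated by a small abrupt rise (red shift) and repeated every 200 ms, can be sketched as follows. Only the overall shape and the repetition period come from the text; the specific current values and the 90/10 split of the period are illustrative assumptions.

```python
import numpy as np

# One period of a "backward tuning" style heater-current waveform: a slow
# downward ramp followed by a small abrupt rise before the scan terminates.
def backward_tuning_waveform(i_start, i_stop, i_kick, period=0.2, n=2000):
    """Return arrays (t, i): heater current vs. time over one period."""
    t = np.linspace(0.0, period, n)
    ramp_frac = 0.9  # assumed: last 10% of the period holds the red-shift kick
    i = np.where(t < ramp_frac * period,
                 i_start + (i_stop - i_start) * t / (ramp_frac * period),
                 i_stop + i_kick)
    return t, i

# Illustrative set points (mA scale chosen arbitrarily for the sketch).
t, i = backward_tuning_waveform(i_start=25.4e-3, i_stop=25.0e-3, i_kick=0.1e-3)
```

Adjusting `i_kick`, the red shift applied before the scan terminates, is the knob that selects which multi-soliton state the comb settles into.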
Furthermore, the technique will enable the simultaneous generation of multiple modelocked combs from a single pump source, which is critical for the realization of coherent spectroscopic applications such as dual-comb spectroscopy. **Funding.** Air Force Office of Scientific Research (FA9550-15-1-0303); Defense Advanced Research Projects Agency (W31P4Q-15-1-0015); A.K. acknowledges a postdoc fellowship from the Swiss National Science Foundation (P2EZP2\_162288) **Acknowledgements.** This work was performed in part at the Cornell Nano-Scale Facility, a member of the National Nanotechnology Infrastructure Network, which is supported by the NSF. We thank J. Ye and B. Bjork for useful discussion. [10]{} S. A. Diddams, L. Hollberg, and V. Mbele, Nature **445**, 627 (2007). T. Udem, R. Holzwarth, and T. W. Hansch, Nature **416**, 233 (2002). Z. Jiang, C.-B. Huang, D. E. Leaird, and A. M. Weiner, Nature Photonics **1**, 463 (2007). T. M. Fortier, M. S. Kirchner, F. Quinlan, J. Taylor, C. J. Bergquist, T. Rosenband, N. Lemke, A. Ludlow, Y. Jiang, C. Oates, and S. A. Diddams, Nature Photonics **5**, 425 (2011). S. A. Diddams, T. Udem, J. C. Bergquist, E. A. Curtis, R. E. Drullinger, L. Hollberg, W. M. Itano, W. D. Lee, C. W. Oates, K. R. Vogel, and D. J. Wineland, Science **293**, 825 (2001). N. R. Newbury, Nature Photonics **5**, 186 (2011). A. A. Savchenkov, A. B. Matsko, V. S. Ilchenko, I. Solomatine, D. Seidel, and L. Maleki, Physical Review Letters **101**, 093902 (2008). W. Liang, A. A. Savchenkov, A. B. Matsko, V. S. Ilchenko, D. Seidel, and L. Maleki, Optics Letters **36**, 2290 (2011). T. Herr, V. Brasch, J. D. Jost, C. Y. Wang, N. M. Kondratiev, M. L. Gorodetsky, and T. J. Kippenberg, Nature Photonics **8**, 145 (2014). P. Del’Haye, A. Schliesser, O. Arcizet, T. Wilken, R. Holzwarth, and T. J. Kippenberg, Nature **450**, 1214 (2007). J. Li, H. Lee, T. Chen, and K. J. Vahala, Physical Review Letters **109**, 233901 (2012). X. Yi, Q.-F. Yang, K. Y. Yang, M.-G. 
Suh, and K. Vahala, Optica **2**, 1078 (2015). H. Jung, K. Y. Fong, C. Xiong, and H. X. Tang, Optics Letters **39**, 84 (2014). B. Hausmann, I. Bulu, V. Venkataraman, P. Deotare, and M. Loncar, Nature Photonics **8**, 369 (2014). A. G. Griffith, R. K. Lau, J. Cardenas, Y. Okawachi, A. Mohanty, R. Fain, Y. H. D. Lee, M. Yu, C. T. Phare, C. B. Poitras, A. L. Gaeta, and M. Lipson, Nature Communications **6**, 6299 (2015). M. A. Foster, J. S. Levy, O. Kuzucu, K. Saha, M. Lipson, and A. L. Gaeta, Optics Express **19**, 14233 (2011). P.-H. Wang, Y. Xuan, L. Fan, L. T. Varghese, J. Wang, Y. Liu, X. Xue, D. E. Leaird, M. Qi, and A. M. Weiner, Optics Express **21**, 22441 (2013). V. Brasch, M. Geiselmann, T. Herr, G. Lihachev, M. H. P. Pfeiffer, M. L. Gorodetsky, and T. J. Kippenberg, Science **351**, 357 (2016). K. Saha, Y. Okawachi, B. Shim, J. S. Levy, R. Salem, A. R. Johnson, M. A. Foster, M. R. E. Lamont, M. Lipson, and A. L. Gaeta, Optics Express **21**, 1335 (2013). S.-W. Huang, J. Yang, J. Lim, H. Zhou, M. Yu, D.-L. Kwong, and C. W. Wong, Scientific Reports **5**, 13355 (2015). X. Xue, Y. Xuan, Y. Liu, P.-H. Wang, S. Chen, J. Wang, D. E. Leaird, M. Qi, and A. M. Weiner, Nature Photonics **9**, 594 (2015). D. J. Moss, R. Morandotti, A. L. Gaeta, and M. Lipson, Nature Photonics **7**, 597 (2013). M. R. E. Lamont, Y. Okawachi, and A. L. Gaeta, Optics Letters **38**, 3478 (2013). A. B. Matsko, A. A. Savchenkov, W. Liang, V. S. Ilchenko, D. Seidel, and L. Maleki, Optics Letters **36**, 2845 (2011). S. Coen, H. G. Randle, T. Sylvestre, and M. Erkintalo, Optics Letters **38**, 37 (2013). Y. K. Chembo and C. R. Menyuk, Physical Review A **87**, 053852 (2013). J. A. Jaramillo-Villegas, X. Xue, P.-H. Wang, D. E. Leaird, and A. M. Weiner, Optics Express **23**, 9618 (2015). K. Luo, J. K. Jang, S. Coen, S. G. Murdoch, and M. Erkintalo, Optics Letters **40**, 3735 (2015). T. Herr, K. Hartinger, J. Riemensberger, C. Y. Wang, E. Gavartin, R. Holzwarth, M. L. Gorodetsky, and T. J. 
Kippenberg, Nature Photonics **6**, 480 (2012). R. Hui and A. Mecozzi, Applied Physics Letters **60**, 2454 (1992). T. Kessler, C. Hagemann, C. Grebing, T. Legero, U. Sterr, F. Riehle, M. J. Martin, L. Chen, and J. Ye, Nature Photonics **6**, 687 (2012). F. Keilmann, C. Gohle, and R. Holzwarth, Optics Letters **29**, 1542 (2004). J. E. Cunningham, I. Shubin, X. Zheng, T. Pinguet, A. Mekis, Y. Luo, H. Thacker, G. Li, J. Yao, K. Raj, and A. V. Krishnamoorthy, Optics Express **18**, 19055 (2010). Y. Okawachi, M. R. E. Lamont, K. Luke, D. O. Carvalho, M. Yu, M. Lipson, and A. L. Gaeta, Optics Letters **39**, 3535 (2014). M. Karpov, H. Guo, E. Lucas, A. Kordts, M. H. P. Pfeiffer, G. Lichachev, V. E. Lobanov, M. L. Gorodetsky, and T. J. Kippenberg, arXiv:1601.05036 (2016). J. Hirano and T. Kimura, IEEE Journal of Quantum Electronics **5**, 219 (1969).
--- abstract: 'The results of an Amplitude Analysis of the world data on integrated and differential cross-sections on $\gamma\gamma\to\pi\pi$ are presented, following the publication of the Belle charged pion results.' address: | IPPP, Physics Department, Durham University,\ Durham DH1 3LE, U.K. author: - 'M. R. Pennington' title: Illuminating hadron structure by scattering light on light --- INTRODUCTION ============ Two photon production of hadronic resonances is one of the clearest ways of revealing their composition, as illustrated in Fig. 1. The nature of the isoscalar scalars seen in $\pi\pi$ scattering below 2 GeV, the $f_0(600)$ or $\sigma$, $f_0(980)$, $f_0(1370)$, $f_0(1510)$, $f_0(1720),\, \cdots$, remains an enigma [@klempt; @mp-menu]. While models abound in which some are ${\overline q}q$, some ${\overline {qq}}qq$, sometimes one is a ${\overline K}K$-molecule, and one a glueball, definitive statements are few and far between. Since photons couple to neutral hadrons through their charged constituents, as in Fig. 1, their two photon width is a measure of the charges of these constituents [@barnes; @barnesKK; @achasov; @hanhart; @menn; @mpreviews]. For instance, if the $f_0(980)$ is an ${\overline s}s$ state, its radiative width is 0.2 keV [@barnes], while if it is a ${\overline K}K$-molecule, this is 0.6 keV [@barnesKK] depending on the specifics of the model [@hanhart]. Can experiment distinguish these possibilities? ![Two photon decay rate of a meson in a quark picture is the modulus squared of the amplitude for $\,\gamma\gamma\,$ to produce a $\,{\overline q}q\,$ pair and for these to bind by strong dynamics to form the hadron.](pennington_fig1.eps){width="10pc"} ![Comparison of the cross-section results for $\,\gamma\gamma\to\pi^+\pi^-\,$ from Mark II [@boyer], Cello [@harjes] and Belle [@abe]. In each case the cross-section is integrated over $|\cos \theta^*|\,\le\,0.6$. $E$ is the $\gamma\gamma$ c.m. 
energy.](pennington_fig2.eps){width="16pc"} ![The favoured Amplitude compared with the Crystal Ball results on $\gamma\gamma\to\pi^0\pi^0$. Their 1988 data [@cb88] are integrated over $|\cos \theta^*| \le 0.8$, while the 1992 data [@cb92] (not shown) with increased statistics cover $|\cos \theta^*| \le 0.7$.](pennington_fig3.eps){width="16pc"} The key features of data on $\pi^+\pi^-$ production [@boyer; @harjes; @abe], Fig. 2, are a large enhancement just above threshold, controlled by the one-pion exchange Born term, then a small structure (rather confused in Fig. 2) near 1 GeV associated with the $f_0(980)$, followed by a clear $f_2(1270)$ peak. The $\pi^0\pi^0$ cross-section, measured (and normalised alone) by Crystal Ball [@cb88; @cb92], Fig. 3, is in contrast small from threshold up to 900 MeV. At 1 GeV, there is a small shoulder, and then it too is dominated by the $f_2(1270)$ signal. Such dominance reflects the ease with which tensor mesons are formed by two spin-1 photons. But how much of this peak is really pure spin-2? Data on $\gamma\gamma$ production cover only 60-80% of the angular range, making a complete partial wave separation tricky. However, as we will recall below, by making the most of the general properties of $S$-matrix theory and knowledge of final state hadronic interactions, a determination of the individual spin components becomes feasible. Such a separation of the $\pi^+\pi^-$ and $\pi^0\pi^0$ results published in the 20th century revealed two classes of solutions [@boglione]: in one the $f_0(980)$ appeared as a [*peak*]{} with a radiative width of $0.13-0.36$ keV, while in the other the same state appeared as a [*dip*]{} with a width of $\sim 0.32$ keV. With data in c.m. energy bins of 20 MeV, both are possible. The advent of high luminosity $e^+e^-$ colliders with an intense programme of study of heavy flavour decays has now produced two photon data of unprecedented statistics. 
The Belle collaboration [@mori; @abe] have published results on $\gamma\gamma\to\pi^+\pi^-$ in 5 MeV bins above 800 MeV. These show a very clear peak for the $f_0(980)$, Fig. 4. Analysis of just their integrated cross-section by Belle [@abe] finds its radiative width to be $205 ^{+95+147}_{-83-117}$ eV. AMPLITUDE ANALYSIS ================== Here we present the results of an Amplitude Analysis [@belle-mp] of all these data including the angular information [@boyer; @harjes; @behrend; @cb88; @cb92; @abe]. The $\pi\pi$ system can be formed in both $I=0$ and $I=2$ states. The near threshold cross-section is dominated by the Born amplitude, which means that though we expect the $I=2$ $s$-channel to have no resonances, it is comparable to the $I=0$ component in all low energy partial waves. Consequently we have to treat the $\pi^+\pi^-$ and $\pi^0\pi^0$ channels simultaneously. Though there are now more than 2000 datapoints in the charged channel below 1.5 GeV, we only have 126 in the neutral channel, and we have to weight them more equally to ensure that the isospin components are reliably separable. A key role in such an analysis is played by analyticity, unitarity and crossing symmetry. When combined with the low energy theorem for Compton scattering [@low] and constraints from chiral dynamics, ![The favoured Solution compared with the Belle results [@abe] on $\gamma\gamma\to \pi^+\pi^-$ integrated over $|\cos \theta^*| \le 0.6$.](pennington_fig4.eps){width="16pc"} these anchor the partial wave amplitudes close to $\pi\pi$ threshold [@morgam], and help to make up for the limited angular coverage in experiments [@mpdaphne; @belle-mp]. Moreover, unitarity imposes a connection between the $\gamma\gamma\to\pi\pi$ partial wave amplitudes and the behaviour of the hadronic processes with $\pi\pi$ final states. As shown in Fig. 5, the relation involves a sum over all kinematically allowed intermediate states $n$. 
1 GeV marks a divide: below it the sum is saturated by the $\pi\pi$ intermediate state, while above it the ${\overline K}K$ channel is critically important. Beyond 1.4-1.5 GeV multipion processes start to contribute as $\rho\rho$ threshold is passed. Little is known about the $\pi\pi\to\rho\rho$ channel in each partial wave. Consequently, we restrict attention to the region below 1.44 GeV, where $\pi\pi$ and ${\overline K}K$ intermediate states dominate. The hadronic scattering amplitudes, ${{\cal T}^I_{J}}$, for $\pi\pi\to\pi\pi$ and ${\overline KK}\to\pi\pi$ are known [@bpamps] and so enable the unitarity constraint of Fig. 5 to be realised in practice, which in turn allows an Amplitude Analysis to be performed. ![Unitarity relation for each partial wave of $\gamma\gamma\to\pi\pi$.](pennington_fig5.eps){width="17pc"} The $\gamma\gamma$ partial waves with definite isospin $I$, spin $J$ and helicity $\lambda$, ${\cal F}^I_{J,\lambda}(s)$, are parametrised in terms of the real functions ${\alpha_i}^{IJ}_\lambda(s)$, where $s$ is the square of the c.m. energy $E$: $$\begin{aligned} \nonumber &&{{\cal F}^I_{J,\lambda}}(s;\gamma\gamma\to\pi\pi)\;=\;\\[3pt] \nonumber &&{\hspace{1.2cm}}{\alpha_1}^I_{J\lambda}(s)\,{{\cal T}^I_{J}}(s;{\pi\pi\to\pi\pi})\\[3pt] &&{\hspace{1.8cm}}+{\alpha_2}^I_{J\lambda}(s)\,{{\cal T}^I_{J}}(s;{{\overline K}K\to\pi\pi})\, .\end{aligned}$$ The functions $\alpha(s)$ represent the coupling to each hadronic intermediate state with the appropriate quantum numbers. These couplings, having no right hand cut, are parametrised as smooth functions of $s$, and they form the basis for this energy-dependent Amplitude Analysis. With data in 5 MeV intervals from Belle, such continuity is sensible. For simplicity, we will denote the partial waves ${\cal F}^{I=0}_{J,\lambda}$ by $J_\lambda$. 
  State                               $f_0(600)/\sigma$   $f_0(980)$         $f_2(1270)$
  ----------------------------------- ------------------- ------------------ ------------------
  Pole position (GeV)                 $0.441 -i0.272$     $1.001 - i0.016$   $1.276 - i0.094$
  $\Gamma (R\to\gamma\gamma)$ (keV)   $3.1\pm 0.5$        $0.42$             $3.14\pm 0.20$

\ These widths are determined from the pole residues using Eq. 2. See Ref. [@belle-mp] for the details. The world data can be fitted adequately by a range of solutions [@belle-mp]: a range in which there is a significant ambiguity in the relative amount of helicity zero amplitudes between $S$ and $D_0$ waves, particularly above 900 MeV. This is a consequence of the data covering only 60-80% of the angular range, and hence such waves are not orthogonal to each other in the integrated cross-sections. Nevertheless, there is a favoured solution, which is the one we illustrate here. Others are described in the analysis paper [@belle-mp]. In Figs. 3 and 4, we show how this favoured solution describes the integrated cross-section on $\gamma\gamma\to\pi^0\pi^0$ from Crystal Ball, and on $\gamma\gamma\to\pi^+\pi^-$ from Belle. The resulting amplitude has $I=0$ partial wave cross-sections shown in Fig. 6 for the dominant waves, $S$, $D_0$ and $D_2$ ([*i.e. $J_\lambda$*]{}). Near threshold their contributions are those of the Born term, modified by largely calculable final state interactions [@morgam]. The $S$-wave shows a clear $f_0(980)$ signal and then a broad enhancement ![Contributions of the dominant $I=0$ partial wave components, $J_\lambda$, to the full integrated cross-sections for the favoured Solution.](pennington_fig6.eps){width="16pc"} that might be identified with the $f_0(1370)$. Above 900 MeV, the $D$ wave is dominated by the $f_2(1270)$, which is produced mainly in the helicity two state, expected for a tensor ${\overline q}q$ meson [@schrempp]. TWO PHOTON WIDTHS ================= In Fig. 7, we show the Argand plot of the favoured $I=J=0$ amplitude, with its clear $f_0(980)$ loop. 
By continuing these amplitudes to the resonance pole position on the appropriate unphysical sheet, we determine the two photon coupling of the $f_0(980)$ and for the $D$-wave the $f_2(1270)$. These pole residues are the only process-independent definition of their couplings, $g_\gamma$. These give a measure of their radiative widths through the commonly used formula: $$\Gamma(R\to\gamma\gamma)\;=\;\frac {\alpha^2}{4 (2J+1) m_R}\;| g_{\gamma}|^2 \quad ,$$ where $\alpha \sim 1/137$ is the fine structure constant. This gives the values listed in Table 1. As described in Ref. [@belle-mp], the full range of acceptable solutions gives a radiative width for the $f_0(980)$ from 100 to 540 eV, with 415 eV favoured. This range allows ${\overline s}s$, ${\overline K}K$ and ${\overline{qq}}qq$ compositions to be possible [@barnes; @barnesKK; @achasov; @hanhart; @menn; @mpreviews]. However, the favoured solution accords with none of these specific alternatives. ![Argand plot for the favoured $\gamma\gamma\to\pi\pi$ $I=0$ $S$-wave amplitude. The labels mark the energy every 0.1 GeV, with smaller dots every 25 MeV. The amplitude moves quickly between 950 and 1000 MeV because of the $f_0(980)$, with “kinks” at the two ${\overline K}K$ thresholds. ](pennington_fig7.eps){width="16.pc"} Achasov and Shestakov [@achasov-belle] have analysed the Belle integrated cross-sections in terms of amplitudes in which key resonances have both direct and meson loop contributions, and they find $\Gamma(f_0\to\gamma\gamma) \simeq 0.2$ keV, which supports their model of the $f_0(980)$ as a ${\overline {qq}}qq$ state. Here we have presented the amplitude favoured by world data on both integrated and differential cross-sections for both charged and neutral pion production. As discussed in more detail in Ref. [@belle-mp], this is one of a range of amplitudes that provide an acceptable description of current experiments. 
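Eq. 2 is straightforward to evaluate once a pole residue is known. A minimal sketch (units assumed: masses in GeV, $|g_\gamma|^2$ in GeV$^2$, widths in GeV), which also inverts the formula to recover the residue implied by a quoted width:

```python
FINE_STRUCTURE = 1.0 / 137.036  # fine structure constant alpha

def two_photon_width(g2, m_R, J, alpha=FINE_STRUCTURE):
    """Eq. 2: Gamma(R -> gamma gamma) = alpha^2 |g_gamma|^2 / (4 (2J+1) m_R),
    with m_R in GeV and |g_gamma|^2 in GeV^2, so the result is in GeV."""
    return alpha ** 2 * g2 / (4.0 * (2 * J + 1) * m_R)

def residue_from_width(width, m_R, J, alpha=FINE_STRUCTURE):
    # invert the formula to extract |g_gamma|^2 from a quoted width
    return width * 4.0 * (2 * J + 1) * m_R / alpha ** 2

# round trip for the f0(980) entry of Table 1:
# J = 0, pole mass ~ 1.001 GeV, Gamma ~ 0.42 keV = 0.42e-6 GeV
g2 = residue_from_width(0.42e-6, 1.001, 0)
assert abs(two_photon_width(g2, 1.001, 0) - 0.42e-6) < 1e-15
```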
The forthcoming $\gamma\gamma\to\pi^0\pi^0$ data from Belle [@nak] should have the power to reduce this range considerably. Once these are finalised, the inclusion of these data in an Amplitude Analysis will hopefully lead to a consistent set of two photon widths for the low mass isoscalar states. For the $f_0(600)$ or $\sigma$ in Table 1, we have little new to add [@mp-prl; @oller] since the Belle data only start at 800 MeV, Figs. 2,4, and so the analysis relies on older data [@boyer; @cb88] — see Ref. [@belle-mp] for the discussion. To reduce still further the uncertainty in its $\gamma\gamma$ width requires precision charged and neutral pion data between threshold and 700 MeV [@mpreviews]. The introduction of appropriate taggers in an upgraded DA$\Phi$NE machine at Frascati [@daphne2] may well make this feasible. Two photon couplings are a key window on the detailed structure of low mass scalar states. Such couplings are likely to be just as critical in determining the nature of the scalar field(s) that break electroweak symmetry. Perhaps this/these scalar(s) that await discovery in the $0.1-1$ TeV region will be just as complex in their structure as those at $0.1-1$ GeV, making the present study of the states of QCD an important guide. The author acknowledges partial support of the EU-RTN Programme, Contract No. MRTN–CT-2006-035482, “Flavianet”, for this work. [9]{} E. Klempt and A. Zaitsev, Phys. Rep. [**454**]{} (2007) 1, arXiv:0708.4016 \[hep-ph\]. M. R. Pennington, “Structure of the Scalars”, arXiv:0711.1435 \[hep-ph\]. T. Barnes, Phys. Lett. [**165B**]{} (1985) 434. T. Barnes, Proc. [*IXth Int. Workshop on Photon-Photon Collisions*]{} (San Diego, 1992), ed. D. Caldwell and H. P. Paar (World Scientific, 1992), p. 263. N. N. Achasov, S. A. Devyanin and G. N. Shestakov, Z. Phys. [**C16**]{} (1982) 55; N. N. Achasov and A. V. Kiselev, Phys. Rev. [**D76**]{} (2007) 077501 \[hep-ph/0606268\]; Yu. S. Kalashnikova [*et al.*]{}, arXiv:0711.2902. C. Hanhart, Yu. S. 
Kalashnikova, A. E. Kudryavtsev and A. V. Nefediev, Phys. Rev. [**D75**]{} (2007) 074015 \[hep-ph/0701214\]. G. Mennessier, S. Narison and W. Ochs, arXiv:0804.4452 \[hep-ph\]. M. R. Pennington, Mod. Phys. Lett. [**A22**]{} (2007) 1439, arXiv:0705.3314 \[hep-ph\]. J. Boyer [*et al.*]{} \[Mark II\], Phys. Rev. [**D42**]{} (1990) 1350. J. Harjes, Ph.D. thesis, submitted to the University of Hamburg. T. Mori [*et al.*]{} \[Belle\], Phys. Rev. [**D75**]{} (2007) 051101 \[hep-ex/0610038\]. H. Marsiske [*et al.*]{}, Phys. Rev. [**D41**]{} (1990) 3324. J. K. Bienlein, Proc. [*IXth Int. Workshop on Photon-Photon Collisions*]{} (San Diego 1992), ed. D. Caldwell and H. P. Paar (World Scientific, 1992), p. 241. M. Boglione and M. R. Pennington, Eur. Phys. J. [**C9**]{} (1999) 11. T. Mori [*et al.*]{} \[Belle\], J. Phys. Soc. Jap. [**76**]{} (2007) 074102, arXiv:0704.3538 \[hep-ex\]. M. R. Pennington, T. Mori, S. Uehara and Y. Watanabe, “Amplitude Analysis of high statistics results on $\gamma\gamma\to\pi^+\pi^-$ and the two photon width of isoscalar states”, arXiv:0803.3389 (EPJC to be published). H. J. Behrend [*et al.*]{} \[CELLO\], Z. Phys. [**C56**]{} (1992) 381. F. E. Low, Phys. Rev. [**96**]{} (1954) 1428. D. Morgan and M.R. Pennington, Phys. Lett. [**B272**]{} (1991) 134; D. H. Lyth, Nucl. Phys. [**B30**]{} (1971) 195; G. Mennessier, Z. Phys. [**C16**]{} (1983) 241; G. Mennessier and T.N. Truong, Phys. Lett. [**177B**]{} (1986) 195. M. R. Pennington, [*DA$\Phi$NE Physics Handbook*]{}, ed. L. Maiani, G. Pancheri and N. Paver (INFN, Frascati, 1992) pp. 379-418; [*Second DA$\Phi$NE Physics Handbook*]{}, ed. L. Maiani [*et al.*]{} (pub. INFN, Frascati, 1995) pp. 169-190. M. E. Boglione, AIP Conf. Proc. [**756**]{} (2005) 318 \[hep-ph/0412034\]; M. E. Boglione and M. R. Pennington (in preparation). B. Schrempp-Otto, F. Schrempp and T. F. Walsh, Phys. Lett. [**36**]{} (1971) 463. N. N. Achasov and G. N. Shestakov, Phys. Rev. [**D77**]{} (2008) 074020, arXiv:0712.0885. H. 
Nakazawa \[Belle\], these proceedings. M. R. Pennington, Phys. Rev. Lett. [**97**]{} (2006) 011601 \[hep-ph/0604212\]. J. A. Oller, L. Roca and C. Schat, Phys. Lett. [**B659**]{} (2008) 201 \[arXiv:0708.1659, hep-ph\]. F. Anulli [*et al*]{}, [*DA$\Phi$NE Physics Handbook*]{}, ed. L. Maiani, G. Pancheri and N. Paver (INFN, Frascati, 1992) pp. 435-444; A. Courau, [*Second DA$\Phi$NE Physics Handbook*]{}, ed. L. Maiani [*et al.*]{} (pub. INFN, Frascati, 1995) pp. 597-606; F. Anulli [*et al.*]{}, [*ibid*]{}. pp. 607-622.
--- abstract: 'In this paper, we consider a 3d cubic focusing nonlinear Schrödinger equation (NLS) with slowly decaying potentials. Adopting the variational method of Ibrahim-Masmoudi-Nakanishi [@IMN], we obtain a condition for scattering. It is actually sharp in some sense since the solution will blow up if it fails. The proof of the blow-up part relies on the method of Du-Wu-Zhang [@DWZ].' address: - 'Qing Guo, College of Science, Minzu University of China, Beijing, 100081, P.R. China' - ' Hua Wang, School of Mathematics and Statistics and Hubei Province Key Laboratory of Mathematical Physics, Central China Normal University, Wuhan, 430079, P.R. China' - 'Xiaohua Yao, School of Mathematics and Statistics and Hubei Province Key Laboratory of Mathematical Physics, Central China Normal University, Wuhan, 430079, P.R. China' author: - 'Qing Guo,  Hua Wang  and Xiaohua Yao' title: Dynamics of the focusing 3D cubic NLS with slowly decaying potential --- Introduction ============ In this paper, we consider a 3d cubic focusing NLS with slowly decaying potentials ($\rm{NLS_{k}}$) $$\label{1.1} \left\{ \begin{aligned} i&\partial_{t}u-H_{\alpha}u+|u|^{2}u=0,\;\;(t,x) \in {{\bf{R}}\times{\bf{R}}^{3}}, \\ u&(0, x)=u_{0}(x)\in H^{1}({\bf{R}}^{3}), \end{aligned}\right.$$ where $u: {\bf R}\times {\bf R}^{3}\rightarrow {\bf C}$ is a complex-valued function, $H_{\alpha}=-\Delta+V(x)$ and $V(x)=\frac{k}{|x|^{\alpha}}$ with $k>0$ and $1<\alpha\leq 2$. Throughout this paper, we use the symbol $V(x)$ instead of $\frac{k}{|x|^{\alpha}}$ since we frequently use the general properties of $V$: $V\geq 0$, $x\cdot\nabla V\leq 0$, $2V+x\cdot\nabla V\geq 0$ and $3x\cdot\nabla V+x\nabla^{2}V x^{T}\leq 0$. As $\frac{k}{|x|^{\alpha}}>0$ and $\frac{k}{|x|^{\alpha}} \in L_{loc}^{1}$, $H_{\alpha}$ is defined as the unique self-adjoint operator associated with the non-negative quadratic form $\langle(-\Delta+\frac{k}{|x|^{\alpha}})f, f\rangle$ on $C_{0}^{\infty}({\bf R}^{3})$. 
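For $V(x)=k|x|^{-\alpha}$ the four sign conditions follow from the radial identities $x\cdot\nabla V=rV'(r)=-\alpha V$ and $x\nabla^{2}Vx^{T}=r^{2}V''(r)=\alpha(\alpha+1)V$. A minimal numerical verification (finite differences; the sample radii and parameter values are arbitrary choices):

```python
import math

def check_potential_signs(k=1.0, alpha=1.5, radii=(0.1, 1.0, 10.0), h=1e-5):
    """Numerically verify, for V(x) = k|x|^{-alpha} with k > 0 and 1 < alpha <= 2,
    the sign conditions: V >= 0, x.grad V <= 0, 2V + x.grad V >= 0,
    and 3 x.grad V + x (Hess V) x^T <= 0.
    For radial V, x.grad V = r V'(r) and x (Hess V) x^T = r^2 V''(r)."""
    V = lambda r: k * r ** (-alpha)
    for r in radii:
        dV = (V(r + h) - V(r - h)) / (2 * h)               # central difference V'
        d2V = (V(r + h) - 2 * V(r) + V(r - h)) / h ** 2    # central difference V''
        x_gradV = r * dV
        quad = r ** 2 * d2V
        assert V(r) >= 0
        assert x_gradV <= 1e-9                  # = -alpha V < 0
        assert 2 * V(r) + x_gradV >= -1e-6      # = (2 - alpha) V >= 0
        assert 3 * x_gradV + quad <= 1e-4       # = alpha (alpha - 2) V <= 0
    return True

assert check_potential_signs()
```

The small tolerances absorb finite-difference error; analytically the second and fourth quantities equal $(2-\alpha)V$ and $\alpha(\alpha-2)V$, whose signs are exactly the stated conditions for $1<\alpha\leq 2$.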
Moreover, $H_{\alpha}$ is purely absolutely continuous and has no eigenvalue. Since $k>0$, the kernel $e^{-tH_{\alpha}}(x, y)$ of $e^{-tH_{\alpha}}$ satisfies the upper Gaussian estimate [@S], i.e., for all $t>0$ and $x, y\in {\bf R}^{3}$, $$\begin{aligned} \label{heatkernel} 0\leq e^{-tH_{\alpha}}(x, y)\leq e^{t\Delta}(x, y)=(4\pi t)^{-\frac{3}{2}}e^{-\frac{|x-y|^{2}}{4t}},\end{aligned}$$ which implies that the Hardy inequality, the Mikhlin multiplier theorem and the Littlewood-Paley theory (Bernstein inequalities Lemma \[Bernstein\], Littlewood-Paley decomposition Lemma \[LPdecompostion\] and square function estimates Lemma \[square\]) hold for $H_{\alpha}$. Hence it follows from the Hardy inequality and Stein's complex interpolation that the standard Sobolev norms and the Sobolev norms associated with $H_{\alpha}$ are equivalent (see Lemma \[Sobolev\]). Recently, Mizutani [@M] showed that $e^{-itH_{\alpha}}$ satisfies global-in-time Strichartz estimates for any admissible pairs. Combining the Sobolev norm equivalence and the Strichartz estimates and following the same line of the proof of Theorem 2.15 and Remark 2.16 of [@KMVZ] yields that ($\rm{NLS_{k}}$) is locally well-posed and scatters in $H^{1}({\bf R}^{3})$. [@KMVZ] \[localwellposedness\] Let $u_{0}\in H^{1}({\bf R}^{3})$. Then the following are true. \(i) There exist $T=T(\|u_{0}\|_{H^{1}})>0$ and a unique solution $u\in C((-T, T), H^{1}({\bf R}^{3}))$ of ($\rm{NLS_{k}}$). \(ii) There exists $\epsilon_{0}>0$ such that if, for $0<\epsilon<\epsilon_{0}$, $$\|e^{-itH_{\alpha}}u_{0}\|_{L_{t,x}^{5}({\bf R}^{+}\times{\bf R}^{3})}<\epsilon,$$ then the solution $u$ of ($\rm{NLS_{k}}$) is global in the positive time direction and satisfies $$\begin{aligned} \|u\|_{L_{t,x}^{5}({\bf R}^{+}\times{\bf R}^{3})}\lesssim\epsilon.\end{aligned}$$ A similar result holds in the negative time direction. 
\(iii) For any $\phi\in H^{1}({\bf R}^{3})$, there exist $T>0$ and a solution $u\in C((T, +\infty), H^{1}({\bf R}^{3}))$ of ($\rm{NLS_{k}}$) such that $$\lim_{t\rightarrow+\infty}\|u(t)-e^{-itH_{\alpha}}\phi\|_{H^{1}({\bf R}^{3})}=0.$$ A similar result holds in the negative time direction. \(iv) If $u:{\bf R}\times{\bf R}^{3}\rightarrow {\bf C}$ is a global solution of ($\rm{NLS_{k}}$) with $$\begin{aligned} \label{scatteringbound} \|u\|_{L_{t,x}^{5}({\bf R}\times{\bf R}^{3})}<+\infty,\end{aligned}$$ then the solution $u(t)$ scatters in $H^{1}$. That is, there exist $\phi_{\pm}\in H^{1}({\bf R}^{3})$ such that $$\lim_{t\rightarrow\pm\infty}\|u(t)-e^{-itH_{\alpha}}\phi_{\pm}\|_{H^{1}({\bf R}^{3})}=0.$$ Moreover, the $H^{1}$ solution $u$ obeys the mass and energy conservation laws: $$\begin{aligned} \label{mass} M(u)=\displaystyle\int_{{\bf R}^{3}}|u(t, x)|^{2}dx=M(u_{0}),\end{aligned}$$ and $$\begin{aligned} \label{energy} E(u)=E_{k}(u)=\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla u(x)|^{2}dx +\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}V(x)|u(x)|^{2}dx -\frac{1}{4}\displaystyle\int_{{\bf R}^{3}}|u(x)|^{4}dx=E(u_{0}).\end{aligned}$$ In the case $k=0$, Holmer-Roudenko [@HR] and Duyckaerts-Holmer-Roudenko [@DHR] employed the concentration-compactness approach of Kenig-Merle [@KM] to obtain sharp criteria between scattering and blow up for ($\rm{NLS_{0}}$) in terms of the conservation laws (\eqref{mass} and \eqref{energy}) and the ground state $Q$, which is the unique positive radial exponentially decaying solution of the elliptic equation $$\begin{aligned} \label{ellipticequation} \Delta Q-Q+Q^{3}=0.\end{aligned}$$ Fang-Xie-Cazenave [@FXC] and Akahori-Nawa [@AN] extended the result in [@HR; @DHR] to general powers and dimensions. Subsequently, Killip-Murphy-Visan-Zhang [@KMVZ] established a corresponding sharp threshold between scattering and blow up for ($\rm{NLS_{k}}$) with $k>-\frac{1}{4}$ and $\alpha=2$. 
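In 3d the ground state $Q$ of the elliptic equation above is radial with no closed form, but a quick sanity check is available in the 1d analogue $Q''-Q+Q^{3}=0$, whose explicit soliton is $Q(x)=\sqrt{2}\,\operatorname{sech}(x)$ (a sketch; the 1d substitute is our choice, not the paper's):

```python
import math

def Q(x):
    # explicit 1d soliton: Q(x) = sqrt(2) sech(x)
    return math.sqrt(2.0) / math.cosh(x)

def residual(x, h=1e-4):
    # central-difference second derivative plugged into Q'' - Q + Q^3
    Qpp = (Q(x + h) - 2.0 * Q(x) + Q(x - h)) / h ** 2
    return Qpp - Q(x) + Q(x) ** 3

# the residual vanishes (up to finite-difference error) at arbitrary sample points
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(residual(x)) < 1e-5
```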
Recently, Miao-Zhang-Zheng [@MZZ] used the interaction Morawetz-type estimates and the equivalence of Sobolev norms to prove that all solutions scatter for ($\rm{NLS_{k}}$) with $k>0$, $\alpha=1$ and $-|u|^{p-1}u$ $(\frac{7}{3}<p<5)$ in place of $|u|^{2}u$ (i.e., the nonlinear Schrödinger equation with repulsive Coulomb potential in the defocusing case). The goal of this paper is to extend the sharp scattering criterion in [@KMVZ] from $\alpha=2$ to $1<\alpha\leq 2$ when $k>0$ in some sense. Obviously, for $1<\alpha<2$, the equation ($\rm{NLS_{k}}$) does not enjoy scaling invariance. Therefore, we cannot apply scaling as indicated in [@KMVZ] to get a critical element (a minimal blow-up solution). Hence, we shall adopt the variational argument based on the work of Ibrahim-Masmoudi-Nakanishi [@IMN] to overcome the difficulty. Recently, the same argument has been applied to the focusing mass-supercritical nonlinear Schrödinger equation with repulsive Dirac delta potential on the real line (see [@II]). To state our main result, we introduce some notation now. We define the functional $S_{k}$ as $$\begin{aligned} \label{action} S_{k}(\varphi):=E(\varphi)+\frac{1}{2}M(\varphi)=\frac{1}{2}\|\varphi\|_{{\mathcal H}_{k}^{1}}-\frac{1}{4}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx,\end{aligned}$$ where $$\begin{aligned} \label{Sobolev1} \|\varphi\|_{{\mathcal H}_{k}^{1}}^{2}=\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx +\displaystyle\int_{{\bf R}^{3}}V(x)|\varphi(x)|^{2}dx +\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{2}dx,\end{aligned}$$ which is equivalent to $\|\varphi\|_{H^{1}}$ for $k>0$ by Hardy’s inequality. 
Denote the scaling quantity $\varphi_{\lambda}^{a, b}$ by $$\begin{aligned} \label{scalingquantity} \varphi_{\lambda}^{a, b}:=e^{a\lambda}\varphi(e^{-b\lambda}x),\end{aligned}$$ where $(a, b)$ satisfies the condition $$\begin{aligned} \label{parameter} a>0, \;\; b\leq 0,\;\; 2a+b> 0,\;\; 2a+3b\geq 0,\;\; (a,b)\neq (0,0).\end{aligned}$$ We define the scaling derivative of $S_{k}(\varphi_{\lambda}^{a, b})$ at $\lambda=0$ by $K_{k}^{a, b}(\varphi)$: $$\begin{aligned} \label{functionalK} K_{k}^{a, b}(\varphi)&:={\mathcal{L}}^{a,b}S_{k}(\varphi)=\frac{d}{d\lambda}\Big{|}_{\lambda=0}S_{k}(\varphi_{\lambda}^{a, b})\nonumber\\ &=\frac{2a+b}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx +\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}V|\varphi(x)|^{2}dx +\frac{b}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\varphi(x)|^{2}dx\\ &\;\;+\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{2}dx -\frac{4a+3b}{4}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx.\nonumber\end{aligned}$$ In particular, when $(a, b)=(3, -2)$, $$\begin{aligned} \label{functionalP} P_{k}(\varphi):=K_{k}^{3,-2}(\varphi)=2\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx -\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\varphi(x)|^{2}dx -\frac{3}{2}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx,\end{aligned}$$ which is related to the virial identity of ($\rm{NLS_{k}}$) with $\phi(x)=|x|^{2}$, and when $(a, b)=(3, 0)$, $$\begin{aligned} \label{functionalI} I_{k}(\varphi):=\frac{1}{3}K_{k}^{3,0}(\varphi)=\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx +\displaystyle\int_{{\bf R}^{3}}V|\varphi(x)|^{2}dx +\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{2}dx -\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx.\end{aligned}$$ We note that, to get existence of minimal blow-up solutions, we need to use the functional $I_{k}$ instead of $P_{k}$, so that we can apply the linear profile decomposition Lemma \[linearprofile\]. 
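The coefficients in the formula for $K_{k}^{a,b}$ can be checked against a numerical derivative of $\lambda\mapsto S_{k}(\varphi_{\lambda}^{a,b})$. The sketch below uses a radial Gaussian test function and the homogeneity $x\cdot\nabla V=-\alpha V$; the test function, quadrature, and parameter values are arbitrary choices made for illustration.

```python
import math

def radial_integral(f, R=8.0, n=4000):
    # midpoint rule for 4*pi * int_0^R f(r) r^2 dr (radial functions on R^3)
    dr = R / n
    return 4.0 * math.pi * dr * sum(
        f((i + 0.5) * dr) * ((i + 0.5) * dr) ** 2 for i in range(n))

k, alpha = 1.0, 1.5  # arbitrary choices with k > 0 and 1 < alpha <= 2
# base integrals for the test function phi(x) = exp(-|x|^2)
G = radial_integral(lambda r: 4.0 * r ** 2 * math.exp(-2.0 * r ** 2))       # |grad phi|^2
Vt = radial_integral(lambda r: k * r ** (-alpha) * math.exp(-2.0 * r ** 2))  # V |phi|^2
M = radial_integral(lambda r: math.exp(-2.0 * r ** 2))                       # |phi|^2
Q = radial_integral(lambda r: math.exp(-4.0 * r ** 2))                       # |phi|^4

def S(lam, a, b):
    # each term of S_k scales by an explicit exponential under
    # phi -> e^{a lam} phi(e^{-b lam} x); the V-term picks up the extra
    # factor e^{-alpha b lam} from the homogeneity of V
    return (0.5 * math.exp((2*a + b) * lam) * G
            + 0.5 * math.exp((2*a + 3*b - alpha*b) * lam) * Vt
            + 0.5 * math.exp((2*a + 3*b) * lam) * M
            - 0.25 * math.exp((4*a + 3*b) * lam) * Q)

def K(a, b):
    # the displayed formula for K_k^{a,b}, with x.grad V = -alpha V inserted
    return ((2*a + b) / 2 * G + (2*a + 3*b) / 2 * Vt + b / 2 * (-alpha) * Vt
            + (2*a + 3*b) / 2 * M - (4*a + 3*b) / 4 * Q)

h = 1e-5
for a, b in [(3, -2), (3, 0), (2, -1)]:  # admissible (a, b) pairs
    dS = (S(h, a, b) - S(-h, a, b)) / (2 * h)
    assert abs(dS - K(a, b)) < 1e-6 * max(1.0, abs(K(a, b)))
```

The case $(a,b)=(3,-2)$ reproduces $P_{k}$ and $(3,0)$ gives $3I_{k}$, matching the two displayed special cases.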
The sharp threshold quantity $n_{k}$ is determined by the following minimizing problem $$\begin{aligned} \label{threshold} n_{k}=\inf\{S_{k}(\varphi):\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}, P_{k}(\varphi)=0\}.\end{aligned}$$ When $k=0$, $n_{0}$ is positive and is achieved by $Q$, which is the unique radial solution of \eqref{ellipticequation} (see [@AN]). The sharp criteria between scattering and blow up as mentioned above can be described by $n_{0}$. We state the results of [@HR; @DHR; @FXC; @AN] in terms of $n_{0}$ as follows. [@HR; @DHR; @FXC; @AN] \[scattering0\] Let $u_{0}\in H^{1}({\bf R}^{3})$ satisfy $S_{0}(u_{0})<n_{0}$. \(i) If $P_{0}(u_{0})\geq 0$, then the solution $u$ of ($\rm{NLS_{0}}$) is global and scatters. \(ii) If $P_{0}(u_{0})< 0$ and $u_{0}$ is radial or $xu_{0}\in L^{2}({\bf R}^{3})$, then the solution $u$ of ($\rm{NLS_{0}}$) blows up in finite time in both time directions. Furthermore, if $\psi\in H^{1}({\bf R}^{3})$ satisfies $\frac{1}{2}\|\psi\|_{H^{1}}^{2}<n_{0}$, then there exists a global solution of ($\rm{NLS_{0}}$) that scatters to $\psi$ in the positive time direction. The analogous statement holds in the negative time direction. When $k>0$, we prove that $n_{k}=n_{0}$ and $n_{k}$ is never attained (see Lemma \[attain\]). For succinctness, we next define two subsets of $H^{1}({\bf R}^{3})$ as follows: $$\begin{aligned} \label{n+} {\mathcal{N}}^{+}:=\{\varphi\in H^{1}({\bf R}^{3}): S_{k}(\varphi)<n_{0}, P_{k}(\varphi)\geq 0\}\end{aligned}$$ and $$\begin{aligned} \label{n-} {\mathcal{N}}^{-}:=\{\varphi\in H^{1}({\bf R}^{3}): S_{k}(\varphi)<n_{0}, P_{k}(\varphi)< 0\}.\end{aligned}$$ Now we state our main result. \[scattering1\] Let $u$ be the solution of ($\rm{NLS_{k}}$) on $(-T_{min}, T_{max})$, where $(-T_{min}, T_{max})$ is the maximal life-span. \(i) If $u_{0}\in {\mathcal{N}}^{+}$, then $u$ is globally well-posed, $u(t)\in {\mathcal{N}}^{+}$ for any $t\in {\bf R}$, and $u$ scatters. 
\(ii) If $u_{0}\in {\mathcal{N}}^{-}$, then $u(t)\in {\mathcal{N}}^{-}$ for any $t\in (-T_{min}, T_{max})$ and one of the following four statements holds true: \(a) $T_{max}<+\infty$ and $\lim_{t\uparrow T_{max}}\|\nabla u(t)\|_{L^{2}}=+\infty$. \(b) $T_{min}<+\infty$ and $\lim_{t\downarrow -T_{min}}\|\nabla u(t)\|_{L^{2}}=+\infty$. \(c) $T_{max}=+\infty$ and there exists a sequence $\{t_{n}\}_{n=1}^{+\infty}$ such that $t_{n}\rightarrow+\infty$ and $\lim_{n\rightarrow +\infty}\|\nabla u(t_{n})\|_{L^{2}}=+\infty$. \(d) $T_{min}=+\infty$ and there exists a sequence $\{t_{n}\}_{n=1}^{+\infty}$ such that $t_{n}\rightarrow-\infty$ and $\lim_{n\rightarrow +\infty}\|\nabla u(t_{n})\|_{L^{2}}=+\infty$. Here the blow-up result is proved by the method of Du-Wu-Zhang [@DWZ]. The present paper is organized as follows. We fix notations at the end of Section 1. In Section 2, as preliminaries, we state some required lemmas, including Sobolev norm equivalence, Strichartz estimates, stability theory, Littlewood-Paley theory, some limit lemmas between $H_{\alpha}^{n}$ and $H_{\alpha}^{\infty}$, linear profile decomposition and nonlinear profiles for $|x_{n}|\rightarrow+\infty$. In Section 3, using the variational idea of Ibrahim-Masmoudi-Nakanishi [@IMN], we show that for $\psi\in \mathcal{N}^{+}$, $P_{k}(\psi)$ and $I_{k}(\psi)$ have the same sign and $S_{k}(\psi)$ is equivalent to $\|\psi\|_{H^{1}}$, and that for $\psi\in \mathcal{N}^{\pm}$, $P_{k}(\psi)$ has uniform bounds, which play a vital role in obtaining the blow-up and scattering results. In Section 4, using the upper bound of $P_{k}(\psi)$ for $\psi\in \mathcal{N}^{-}$ and adopting the method of Du-Wu-Zhang [@DWZ], we establish the blow-up part of Theorem \[scattering1\]. The global part of Theorem \[scattering1\] can be obtained from the lower bound of $P_{k}(\psi)$ for $\psi\in \mathcal{N}^{+}$ and the local well-posedness, part (i) of Theorem \[localwellposedness\]. 
In the last section, we show the scattering part of Theorem \[scattering1\] in two steps. In Step 1, by contradiction, if scattering fails, then a critical element must exist. In Step 2, we utilize the lower bound of $P_{k}(\psi)$ for $\psi\in \mathcal{N}^{+}$ to preclude the critical element. Putting the last two sections together completes the proof of Theorem \[scattering1\]. **Notations:** We fix notations used throughout the paper. In what follows, we write $A\lesssim B$ to signify that there exists a constant $c$ such that $A\leq cB$, while we denote $A\sim B$ when $A\lesssim B\lesssim A$. Given a real number $\alpha$, $\alpha-=\alpha-\epsilon$ for $0<\epsilon\ll 1$. Let $L_{I}^{q}L_{x}^{r}$ be the space of measurable functions from an interval $I\subset {\bf R}$ to $L_{x}^{r}$ whose $L_{I}^{q}L_{x}^{r}$-norm $ \|\cdot\|_{L_{I}^{q}L_{x}^{r}}$ is finite, where $$\begin{aligned} \|u\|_{L_{I}^{q}L_{x}^{r}}=\Big(\displaystyle\int_{I}\|u(t)\|_{L_{x}^{r}}^{q}dt\Big)^{\frac{1}{q}}.\end{aligned}$$ When $I={\bf R}$, we may write $L_{t}^{q}L_{x}^{r}$ instead of $L_{I}^{q}L_{x}^{r}$. In particular, when $q=r$, we may simply write $L_{t,x}^{q}$. Moreover, the Fourier transform on ${\bf R}^{3}$ is defined by $\hat{f}(\xi)=(2\pi)^{-\frac{3}{2}}\displaystyle\int_{{\bf R}^{3}} e^{-ix\cdot\xi}f(x)dx$. 
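The mixed norm just defined can be approximated on a grid. A small sketch (one space dimension for simplicity; grids and the separable Gaussian test function are arbitrary choices), validated against the exact value $\|e^{-t^{2}}e^{-x^{2}}\|_{L^{2}_{t}L^{2}_{x}}=\sqrt{\pi/2}$:

```python
import math

def mixed_norm(u, ts, xs, q, r):
    """Discrete L^q_t L^r_x norm by the trapezoidal rule:
       ( int_I ( int |u(t,x)|^r dx )^{q/r} dt )^{1/q}."""
    def trapz(vals, grid):
        return sum((vals[i] + vals[i + 1]) * (grid[i + 1] - grid[i]) / 2
                   for i in range(len(grid) - 1))
    inner = []
    for t in ts:
        row = [abs(u(t, x)) ** r for x in xs]
        inner.append(trapz(row, xs) ** (q / r))   # = ||u(t)||_{L^r_x}^q
    return trapz(inner, ts) ** (1.0 / q)

# sanity check: ||e^{-t^2} e^{-x^2}||_{L^2_t L^2_x} = sqrt(pi/2) on R x R
ts = [-6 + 12 * i / 600 for i in range(601)]
xs = [-6 + 12 * i / 600 for i in range(601)]
val = mixed_norm(lambda t, x: math.exp(-t ** 2) * math.exp(-x ** 2),
                 ts, xs, 2, 2)
assert abs(val - math.sqrt(math.pi / 2)) < 1e-3
```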
Define the inhomogeneous Sobolev space $H^{s}({\bf R}^{3})$ and the homogeneous Sobolev space $\dot{H}^{s}({\bf R}^{3})$, respectively, by norms $$\|f\|_{H^{s}({\bf R}^{3})}=\|(1+|\xi|^{2})^{\frac{s}{2}}\hat{f}(\xi)\|_{L^{2}({\bf R}^{3})}=\|(1-\Delta)^{\frac{s}{2}} f\|_{L^{2}({\bf R}^{3})}$$ and $$\|f\|_{\dot{H}^{s}({\bf R}^{3})}=\||\xi|^{s}\hat{f}(\xi)\|_{L^{2}({\bf R}^{3})}=\|(-\Delta)^{\frac{s}{2}} f\|_{L^{2}({\bf R}^{3})}.$$ Denote the inhomogeneous Sobolev space and homogeneous Sobolev space adapted to $H_{\alpha}$ by ${\mathcal{H}}_{k}^{s, p}({\bf R}^{3})$ and ${\mathcal{\dot H}}_{k}^{s, p}({\bf R}^{3})$, respectively, with norms $$\|f\|_{{\mathcal{H}}_{k}^{s, p}({\bf R}^{3})}=\|(1+H_{\alpha})^{\frac{s}{2}} f\|_{L^{p}({\bf R}^{3})}$$ and $$\|f\|_{{\mathcal{\dot H}}_{k}^{s, p}({\bf R}^{3})}=\|H_{\alpha}^{\frac{s}{2}} f\|_{L^{p}({\bf R}^{3})}.$$ ${\mathcal{H}}_{k}^{s}({\bf R}^{3})$ and ${\mathcal{\dot H}}_{k}^{s}({\bf R}^{3})$ are shorthand for ${\mathcal{H}}_{k}^{s, 2}({\bf R}^{3})$ and ${\mathcal{\dot H}}_{k}^{s, 2}({\bf R}^{3})$, respectively. Given $p\geq 1$, let $p'$ be the conjugate of $p$, that is, $\frac{1}{p}+\frac{1}{p'}=1$. [**Acknowledgement**]{}  The first author is financially supported by the China National Science Foundation (No. 11301564, 11771469), the second author is financially supported by the China National Science Foundation (No. 11771165 and 11571131), and the third author is financially supported by the China National Science Foundation (No. 11771165). Preliminaries {#sec-2} ============= As mentioned in the introduction, the heat kernel associated with $H_{\alpha}$ satisfies \eqref{heatkernel}, so the Mikhlin multiplier theorem holds, which implies that for all $1<p<\infty$, $$\begin{aligned} \label{Lpbound} \|f\|_{L^{p}}\lesssim \|(1+H_{\alpha})f\|_{L^{p}}.\end{aligned}$$ And we have the following Hardy type inequality for $H_{\alpha}$ (e.g., see [@KMVZS] for $\alpha=2$). 
$$\begin{aligned} \label{Hardy} \Big\||x|^{-s}f\Big\|_{L^{p}({\bf R}^{3})}\lesssim \|H_{\alpha}^{\frac{s}{2}}f\|_{L^{p}({\bf R}^{3})}\lesssim \|(1+H_{\alpha})^{\frac{s}{2}}f\|_{L^{p}({\bf R}^{3})},\end{aligned}$$ where $0<s<3$ and $1<p<\frac{3}{s}$. Using \eqref{Hardy} and Stein's complex interpolation yields the following Sobolev norm equivalence (see [@H] for $V\geq 0$ and $V\in L^{\frac{3}{2}}$ and [@ZZ; @KMVZS; @MZZ] for $\alpha=2$). \[Sobolev\] Let $k>0$, $0<\alpha< 2$, $1<p<\frac{3}{s}$ and $0\leq s\leq 2$. Then $$\begin{aligned} \label{ihomogeneous} \|(1+H_{\alpha})^{\frac{s}{2}}f\|_{L^{p}({\bf R}^{3})}\sim \|(1-\Delta)^{\frac{s}{2}}f\|_{L^{p}({\bf R}^{3})}.\end{aligned}$$ As the heat kernel associated with $H_{\alpha}$ satisfies \eqref{heatkernel}, the kernel of the Riesz potential $(1+H_{\alpha})^{-\frac{s}{2}}$ $$(1+H_{\alpha})^{-\frac{s}{2}}(x, y)=\frac{1}{\Gamma(\frac{s}{2})}\displaystyle\int_{0}^{+\infty}e^{-t(1+H_{\alpha})}(x, y)t^{\frac{s}{2}-1}dt$$ satisfies $$|(1+H_{\alpha})^{-\frac{s}{2}}(x, y)|\lesssim |x-y|^{s-3},$$ which, by the Hardy-Littlewood-Sobolev inequality, implies that $$\begin{aligned} \label{HLS} \|(1+H_{\alpha})^{-\frac{s}{2}}f\|_{L^{\frac{3p}{3-ps}}}\lesssim \|f\|_{L^{p}}.\end{aligned}$$ It suffices to prove that \eqref{ihomogeneous} with $s=2$ and $1<p<\frac{3}{2}$ holds. Indeed, if it is true, then \eqref{ihomogeneous} follows from Stein's complex interpolation and the $L^{p}$-boundedness of $(1+H_{\alpha})^{iy}$ for all $y\in {\bf R}$ and $1<p<+\infty$ (which can be obtained from \eqref{heatkernel} and Sikora-Wright [@SW]). Let $\chi(x)$ be a smooth compactly supported function such that $\chi(x)=1$ for $|x|\leq 1$ and $\chi(x)=0$ for $|x|\geq 2$. 
On one hand, using the Hölder inequality and Sobolev embedding yields that $$\begin{aligned} \|(1+H_{\alpha})f\|_{L^{p}}&\leq \|(1-\Delta)f\|_{L^{p}}+k\Big\|\frac{1}{|x|^{\alpha}}f\Big\|_{L^{p}}\nonumber\\ &\leq \|(1-\Delta)f\|_{L^{p}}+k\Big\|\frac{1}{|x|^{\alpha}}\chi f\Big\|_{L^{p}}+k\Big\|\frac{1}{|x|^{\alpha}}(1-\chi) f\Big\|_{L^{p}}\nonumber\\ &\lesssim \|(1-\Delta)f\|_{L^{p}}+\Big\||x|^{-\alpha}\chi\Big\|_{L^{\frac{3}{2}}}\|f\|_{L^\frac{3p}{3-2p}}+\|f\|_{L^{p}}\nonumber\\ &\lesssim \|(1-\Delta)f\|_{L^{p}}.\end{aligned}$$ On the other hand, using the Hölder inequality and \eqref{HLS} with $s=2$ gives that $$\begin{aligned} \|(1-\Delta)f\|_{L^{p}}&\leq \|(1+H_{\alpha})f\|_{L^{p}}+k\Big\|\frac{1}{|x|^{\alpha}}f\Big\|_{L^{p}}\nonumber\\ &\leq \|(1+H_{\alpha})f\|_{L^{p}}+k\Big\|\frac{1}{|x|^{\alpha}}\chi f\Big\|_{L^{p}}+k\Big\|\frac{1}{|x|^{\alpha}}(1-\chi) f\Big\|_{L^{p}}\nonumber\\ &\lesssim \|(1+H_{\alpha})f\|_{L^{p}}+\Big\||x|^{-\alpha}\chi\Big\|_{L^{\frac{3}{2}}}\|f\|_{L^\frac{3p}{3-2p}}+\|f\|_{L^{p}}\nonumber\\ &\lesssim \|(1+H_{\alpha})f\|_{L^{p}}.\end{aligned}$$ Thus, we get \eqref{ihomogeneous} with $s=2$ and $1<p<\frac{3}{2}$, and then conclude the proof. Recently, Mizutani [@M] proved that the solution to free Schrödinger equations with a class of slowly decaying repulsive potentials including $k|x|^{-\alpha}$ with $k>0$ and $0<\alpha<2$ satisfies global-in-time Strichartz estimates for any admissible pairs. Besides, it is well known that Strichartz estimates for free Schrödinger equation with inverse-square potentials were established by Burq-Planchon-Stalker-Tahvildar-Zadeh [@BPSTZ]. Hence, we have the following global-in-time Strichartz estimate. [@BPSTZ; @M] \[Strichartz\] Let $k>0$ and $0<\alpha\leq 2$. 
Then the solution $u$ of $iu_{t}-H_{\alpha}u=F$ with initial data $u_{0}$ obeys $$\begin{aligned} \label{Strichartzestimates} \|u\|_{L_{t}^{q}L_{x}^{r}}\lesssim \|u_{0}\|_{L_{x}^{2}}+\|F\|_{L_{t}^{\tilde{q}'}L_{x}^{\tilde{r}'}}\end{aligned}$$ for any $2\leq q, \tilde{q}\leq \infty$ with $\frac{2}{q}+\frac{3}{r}=\frac{2}{\tilde{q}}+\frac{3}{\tilde{r}}=\frac{3}{2}$. Once we have Strichartz estimates Lemma \[Strichartz\] and Sobolev norm equivalence Lemma \[Sobolev\], the local well-posedness Theorem \[localwellposedness\] and stability result Lemma \[stability\] for ($\rm{NLS_{k}}$) can be obtained by the same proofs as in Theorem 2.15 and Theorem 2.17 of [@KMVZ], respectively. [@KMVZ] \[stability\] Let $k>0$ and $0<\alpha\leq 2$. Let $\tilde{u}$ be the solution of $$\label{perturbation} \left\{ \begin{aligned} i&\tilde{u}_{t}-H_{\alpha}\tilde{u}+|\tilde{u}|^{2}\tilde{u}=e,\;\;(t,x) \in {I\times{\bf{R}}^{3}}, \\ \tilde{u}&(0, x)=\tilde{u}_{0}(x)\in H^{1}({\bf{R}}^{3}), \end{aligned}\right.$$ for some ‘error’ $e$. Let $u_{0}\in H^{1}({\bf{R}}^{3})$ and assume $$\begin{aligned} \label{twoupperbound} \|u_{0}\|_{H^{1}}+\|\tilde{u}_{0}\|_{H^{1}}\leq A \;\;\text{and}\;\; \|\tilde{u}\|_{L_{t,x}^{5}}\leq M\end{aligned}$$ for some $A$, $M>0$. 
For any given $\frac{1}{2}\leq s<1$, there exists $\epsilon_{0}=\epsilon_{0}(A, M)$ such that if $0<\epsilon<\epsilon_{0}$ and $$\begin{aligned} \label{error} \|u_{0}-\tilde{u}_{0}\|_{H^{s}}+\Big\|(1-\Delta)^{\frac{s}{2}}e\Big\|_{N(I)}<\epsilon,\end{aligned}$$ where $$N(I):= L_{I,x}^{\frac{10}{7}}+L_{I}^{\frac{5}{3}}L_{x}^{\frac{30}{23}}+L_{I}^{1}L_{x}^{2},$$ then there exists a solution $u$ of ($\rm{NLS_{k}}$) such that $$\begin{aligned} \label{difference} \|u-\tilde{u}\|_{S_{\alpha}^{s}(I)}\lesssim_{A, M}\epsilon,\end{aligned}$$ $$\begin{aligned} \label{oneupperbound} \|u\|_{S_{\alpha}^{1}(I)}\lesssim_{A, M}1,\end{aligned}$$ where $$S_{\alpha}^{s}(I)=L_{I}^{2}{\mathcal{H}}_{\alpha}^{s,6}\cap L_{I}^{\infty}{\mathcal{H}}_{\alpha}^{s}.$$ Since the Mikhlin multiplier theorem for $H_{\alpha}$ holds, we naturally have the Littlewood-Paley theory associated with $H_{\alpha}$ (see [@KMVZS] for $\alpha=2$). We first give the definition of Littlewood-Paley projection via the heat kernel as follows: For $N\in 2^{\bf{Z}}$, $$\begin{aligned} \label{projection} P_{N}:=e^{-\frac{1}{N^{2}}H_{\alpha}}-e^{-\frac{4}{N^{2}}H_{\alpha}}.\end{aligned}$$ We next state Littlewood-Paley decomposition, square function estimate and Bernstein estimates (see [@KMVZS] for $\alpha=2$). \[LPdecompostion\] Let $1<p<\infty$. If $k>0$ and $0<\alpha\leq 2$, then $$\begin{aligned} \label{decompostioneq} f=\sum_{N\in 2^{{\bf{Z}}}}P_{N}f\end{aligned}$$ as elements of $L^{p}({\bf R}^{3})$. In particular, the sum converges in $L^{p}({\bf R}^{3})$. \[square\] Let $0\leq s<2$ and $1<p<+\infty$. If $k>0$ and $0<\alpha\leq 2$, then $$\begin{aligned} \label{squareestimate} \Big\|(\sum_{N\in 2^{{\bf{Z}}}}N^{2s}|P_{N}f|^{2})^{\frac{1}{2}}\Big\|_{L^{p}({\bf R}^{3})}\sim \|(H_{\alpha})^{\frac{s}{2}}f\|_{L^{p}({\bf R}^{3})}.\end{aligned}$$ \[Bernstein\] Let $1<p\leq q<+\infty$ and $s\in {\bf R}$. 
If $k>0$ and $0<\alpha\leq 2$, then \(i) $$\begin{aligned} \label{Bernsteinpp} \|P_{N}f\|_{L^{p}({\bf R}^{3})}\lesssim \|f\|_{L^{p}({\bf R}^{3})}.\end{aligned}$$ \(ii) $$\begin{aligned} \label{Bernsteinpq} \|P_{N}f\|_{L^{q}({\bf R}^{3})}\lesssim N^{3(\frac{1}{p}-\frac{1}{q})}\|f\|_{L^{p}({\bf R}^{3})}.\end{aligned}$$ \(iii) $$\begin{aligned} \label{Bernsteinsim} \|(H_{\alpha})^{\frac{s}{2}}P_{N}f\|_{L^{p}({\bf R}^{3})}\sim N^{s}\|f\|_{L^{p}({\bf R}^{3})}.\end{aligned}$$ In order to establish the linear profile decomposition associated with $e^{-itH_{\alpha}}$ and find a critical element, we apply the argument of [@KMVZE] to get some convergence results. For convenience, define two operators: Let $\{x_{n}\}_{n=1}^{+\infty}\subset {\bf R}^{3}$, $$\begin{aligned} \label{operatortranslation} H_{\alpha}^{n}:=-\Delta +V(x+x_{n})\;\;\text{and}\;\; H_{\alpha}^{\infty}:=\left\{ \begin{array}{rcl} -\Delta+V(x+\bar{x})& &{ x_{n}\rightarrow \bar{x}\in {\bf{R}}^{3}}\\ -\Delta\quad\quad\quad& &{ |x_{n}|\rightarrow +\infty} \end{array}\right.\end{aligned}$$ Obviously, $\tau_{x_{n}}H_{\alpha}^{n}\psi=H_{\alpha}\tau_{x_{n}}\psi$, where $\tau_{y}\psi(x)=\psi(x-y)$. Note that $H_{\alpha}$ does not commute with $\tau_{x}$. $H_{\alpha}^{\infty}$ can be regarded as the limit of $H_{\alpha}^{n}$ in the following sense (see [@KMVZE] for $\alpha=2$). \[limit\] Let $k>0$ and $1<\alpha< 2$. Assume $t_{n}\rightarrow\bar{t}\in {\bf R}$ and $x_{n}\rightarrow\bar{x}\in{\bf R}^{3}$ or $|x_{n}|\rightarrow+\infty$. 
Then $$\begin{aligned} \label{operatorlimit} \lim_{n\rightarrow+\infty}\|(H_{\alpha}^{n}-H_{\alpha}^{\infty})\psi\|_{H^{-1}}=0,\;\;\forall \psi\in H^{1}.\end{aligned}$$ $$\begin{aligned} \label{grouplimit} \lim_{n\rightarrow+\infty}\|(e^{-it_{n}H_{\alpha}^{n}}-e^{-i\bar{t}H_{\alpha}^{\infty}})\psi\|_{H^{-1}}=0,\;\;\forall \psi\in H^{-1}.\end{aligned}$$ $$\begin{aligned} \label{fractionaloperatorlimit} \lim_{n\rightarrow+\infty}\|((1+H_{\alpha}^{n})^{\frac{1}{2}}-(1+H_{\alpha}^{\infty})^{\frac{1}{2}})\psi\|_{L^{2}}=0,\;\;\forall \psi\in H^{1}.\end{aligned}$$ For any $2<q\leq+\infty$ and $r$ satisfying $\frac{2}{q}+\frac{3}{r}=\frac{3}{2}$, $$\begin{aligned} \label{Strichartzlimit} \lim_{n\rightarrow+\infty}\|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t}^{q}L_{x}^{r}}=0,\;\;\forall \psi\in L^{2}.\end{aligned}$$ If $\bar{x}\neq 0$, then for any $t>0$, $$\begin{aligned} \label{heatlimit} \lim_{n\rightarrow+\infty}\|(e^{-tH_{\alpha}^{n}}-e^{-tH_{\alpha}^{\infty}})\delta_{0}\|_{H^{-1}}=0.\end{aligned}$$ Here we only give the proof of \eqref{Strichartzlimit}, because the others can be obtained by using the same method as Lemma 3.3 in [@KMVZE]: in the proofs of \eqref{operatorlimit}--\eqref{fractionaloperatorlimit} we need to replace $\dot{H}^{1}$, $\dot{H}^{-1}$, $H_{\alpha}^{n}$, $H_{\alpha}^{\infty}$ and the homogeneous Sobolev equivalence Theorem 2.2 in [@KMVZE] with $H^{1}$, $H^{-1}$, $1+H_{\alpha}^{n}$, $1+H_{\alpha}^{\infty}$ and the inhomogeneous Sobolev equivalence Lemma \[Sobolev\], respectively, and in the proof of \eqref{heatlimit} we need to replace $\dot{H}^{-1}$ by $H^{-1}$. It suffices to prove \eqref{Strichartzlimit} in the case $(q, r)=(\infty, 2)$, since the general case can be obtained by interpolating with the end-point Strichartz estimates (i.e., $(q, r)=(2, 6)$).
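The interpolation remark above is elementary exponent arithmetic: every pair on the segment between $(q, r)=(\infty, 2)$ and the endpoint pair $(2, 6)$ stays on the admissibility line $\frac{2}{q}+\frac{3}{r}=\frac{3}{2}$. A minimal numerical sketch (the helper name is ours, not from the paper):

```python
# Interpolating (1/q, 1/r) between the pairs (infty, 2) and (2, 6):
# (1/q, 1/r) = theta*(1/2, 1/6) + (1-theta)*(0, 1/2), 0 <= theta <= 1.
def interpolated_pair(theta):
    inv_q = theta * 0.5
    inv_r = theta / 6 + (1 - theta) * 0.5
    return inv_q, inv_r

# Every interpolated pair satisfies 2/q + 3/r = 3/2.
for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
    inv_q, inv_r = interpolated_pair(theta)
    assert abs(2 * inv_q + 3 * inv_r - 1.5) < 1e-12
```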
By density and the Strichartz estimates, we may assume that $\psi$ is a smooth function with compact support, so that for every $M>0$, $$\begin{aligned} \label{dispersive} |(e^{-it\Delta}\psi)(x)|\lesssim_{\psi}\langle t\rangle^{-\frac{3}{2}}\Big(1+\frac{|x|}{\langle t\rangle}\Big)^{-M}.\end{aligned}$$ By the definition of $H_{\alpha}^{\infty}$, we consider two cases: $|x_{n}|\rightarrow+\infty$ and $x_{n}\rightarrow\bar{x}$. For the first case $|x_{n}|\rightarrow+\infty$, we have $$\begin{aligned} e^{it\Delta}\psi&=e^{-itH_{\alpha}^{n}}\psi+i\displaystyle\int_{0}^{t}e^{-i(t-s)H_{\alpha}^{n}}V(x+x_{n})e^{is\Delta}\psi ds\\ &=e^{-itH_{\alpha}^{n}}\psi+i\displaystyle\int_{0}^{t}e^{-i(t-s)H_{\alpha}^{n}}(\chi_{|x+x_{n}|\leq R}+\chi_{|x+x_{n}|> R})V(x+x_{n})e^{is\Delta}\psi ds,\end{aligned}$$ where $R$ is a sufficiently large number that will be chosen later and $\chi_{A}$ is the characteristic function of a set $A$. Hence, by the Strichartz estimates and the dispersive estimate \eqref{dispersive}, we find that $$\begin{aligned} \label{twocases} \|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t}^{\infty}L_{x}^{2}}&\lesssim \|\chi_{|x+x_{n}|\leq R}V(x+x_{n})e^{it\Delta}\psi\|_{L_{t,x}^{\frac{10}{7}}} +\|\chi_{|x+x_{n}|> R}V(x+x_{n})e^{it\Delta}\psi\|_{L_{t}^{q}L_{x}^{r}}\nonumber\\ &\lesssim \Big\|\chi_{|x+x_{n}|\leq R}V(x+x_{n})\langle t\rangle^{-\frac{3}{2}}\Big(1+\frac{|x|}{\langle t\rangle}\Big)^{-M}\Big\|_{L_{t,x}^{\frac{10}{7}}}\nonumber\\ &\quad+\Big\|\chi_{|x+x_{n}|> R}V(x+x_{n})\langle t\rangle^{-\frac{3}{2}}\Big(1+\frac{|x|}{\langle t\rangle}\Big)^{-M}\Big\|_{L_{t}^{q}L_{x}^{r}}\nonumber\\ &:=I_{1}+I_{2},\end{aligned}$$ where $(q,r)=(1,2)$ if $1<\alpha\leq \frac{3}{2}$ and $(q, r)=(\frac{10}{7}, \frac{10}{7})$ if $\frac{3}{2}<\alpha\leq 2$. We note that $|x_{n}|\geq R$ for $n$ large enough. So for the first part $I_{1}$, $R<|x|\sim |x_{n}|$. If $|x|\leq \langle t\rangle$, then $|x_{n}|\lesssim \langle t\rangle$.
Therefore, $$\begin{aligned} \label{nlarge1} I_{1}\lesssim \|V(x+x_{n})\|_{L_{x}^{\frac{10}{7}}(|x+x_{n}|\leq R)}\Big\||t|^{-\frac{3}{2}}\Big\|_{L_{t}^{\frac{10}{7}}(|t|\gtrsim |x_{n}|)}\lesssim R^{-\alpha+\frac{21}{10}}|x_{n}|^{-\frac{4}{5}}.\end{aligned}$$ If $|x|> \langle t\rangle$, $$I_{1}\lesssim \|\chi_{|x+x_{n}|\leq R}V(x+x_{n})\langle t\rangle^{-\frac{3}{2}+M}|x|^{-M}\|_{L_{t,x}^{\frac{10}{7}}}.$$ When $|t|\leq 1$, it is easy to get $$\begin{aligned} \label{nlarge2} I_{1}\lesssim R^{-\alpha+\frac{21}{10}}|x_{n}|^{-M}.\end{aligned}$$ When $|t|>1$ and $M$ is sufficiently large, $$\|\langle t\rangle^{-\frac{3}{2}+M}\|_{L_{t}^{\frac{10}{7}}(|t|\lesssim |x|)}\lesssim |x|^{-\frac{3}{2}+M+\frac{7}{10}},$$ so $$\begin{aligned} \label{nlarge3} I_{1}\lesssim R^{-\alpha+\frac{21}{10}}|x_{n}|^{-\frac{4}{5}}.\end{aligned}$$ Putting \eqref{nlarge1}, \eqref{nlarge2} and \eqref{nlarge3} together gives that $I_{1}$ tends to $0$ as $n$ approaches $+\infty$, regardless of $R>0$. For the second part $I_{2}$, we consider two subcases: $1<\alpha\leq\frac{3}{2}$ and $\frac{3}{2}<\alpha\leq 2$. If $1<\alpha\leq \frac{3}{2}$, $(q,r)=(1,2)$. When $|x|\leq \langle t\rangle$, using the Hölder inequality, we have $$\begin{aligned} \label{Rlarge1} I_{2}\lesssim \|V(x+x_{n})\|_{L_{x}^{\frac{3}{\alpha-}}(|x+x_{n}|> R)}\Big\||t|^{-\frac{3}{2}}\Big\|_{L_{t}^{1}L_{x}^{\frac{6}{3-2(\alpha-)}}(|x|\leq\langle t\rangle)}\lesssim R^{1-\frac{\alpha}{\alpha-}}.\end{aligned}$$ When $|x|> \langle t\rangle$, $$\begin{aligned} \label{Rlarge2} I_{2}\lesssim \|V(x+x_{n})\|_{L_{x}^{\frac{3}{\alpha-}}(|x+x_{n}|> R)}\Big\||t|^{-\frac{3}{2}}|x|^{-M}\langle t\rangle^{M}\Big\|_{L_{t}^{1}L_{x}^{\frac{6}{3-2(\alpha-)}}(|x|>\langle t\rangle)}\lesssim R^{1-\frac{\alpha}{\alpha-}}.\end{aligned}$$ It follows from \eqref{Rlarge1} and \eqref{Rlarge2} that $I_{2}$ can be made arbitrarily small if we take $R$ sufficiently large. If $\frac{3}{2}<\alpha\leq 2$, $(q, r)=(\frac{10}{7}, \frac{10}{7})$.
When $|x|\leq \langle t\rangle$, using the Hölder inequality, we have $$\begin{aligned} \label{Rlarge3} I_{2}\lesssim \|V(x+x_{n})\|_{L_{x}^{2}(|x+x_{n}|> R)}\Big\||t|^{-\frac{3}{2}}\Big\|_{L_{t}^{\frac{10}{7}}L_{x}^{5}(|x|\leq\langle t\rangle)}\lesssim R^{-\alpha+\frac{3}{2}}.\end{aligned}$$ When $|x|> \langle t\rangle$, $$\begin{aligned} \label{Rlarge4} I_{2}\lesssim \|V(x+x_{n})\|_{L_{x}^{2}(|x+x_{n}|> R)}\Big\||t|^{-\frac{3}{2}}|x|^{-M}\langle t\rangle^{M}\Big\|_{L_{t}^{\frac{10}{7}}L_{x}^{5}(|x|>\langle t\rangle)}\lesssim R^{-\alpha+\frac{3}{2}}.\end{aligned}$$ It follows from \eqref{Rlarge3} and \eqref{Rlarge4} that $I_{2}$ can be made arbitrarily small if we take $R$ sufficiently large. So we obtain \eqref{Strichartzlimit} in the case $|x_{n}|\rightarrow+\infty$. Now we turn to the other case $x_{n}\rightarrow\bar{x}$. Here we obtain that $$\begin{aligned} e^{-itH_{\alpha}^{\infty}}\psi&=e^{-itH_{\alpha}^{n}}\psi+i\displaystyle\int_{0}^{t}e^{-i(t-s)H_{\alpha}^{n}}(V(x+x_{n})-V(x+\bar{x}))e^{-isH_{\alpha}^{\infty}}\psi ds.\end{aligned}$$ Hence, by the Strichartz estimates, we have $$\begin{aligned} \|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t}^{\infty}L_{x}^{2}}&\lesssim \|(V(x+x_{n})-V(x+\bar{x}))e^{-itH_{\alpha}^{\infty}}\psi\|_{L_{t}^{2}L_{x}^{\frac{6}{5}}}\nonumber\\ &=\|(V(x+x_{n}-\bar{x})-V(x))e^{-itH_{\alpha}}\tau_{\bar{x}}\psi\|_{L_{t}^{2}L_{x}^{\frac{6}{5}}}.\end{aligned}$$ Replacing $x_{n}-\bar{x}$ by $x_{n}$ and $\tau_{\bar{x}}\psi$ by $\psi$, we may suppose that $\bar{x}=0$. Hence, for every $\epsilon>0$, $|x_{n}|<\epsilon$ when $n$ is large enough.
Besides, by the Newton-Leibniz formula, $$|V(x+x_{n})-V(x)|\lesssim |x_{n}|\displaystyle\int_{0}^{1}|x+(1-t)x_{n}|^{-\alpha-1}dt.$$ Using the Hölder inequality and the Strichartz estimates yields that $$\begin{aligned} \|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t}^{\infty}L_{x}^{2}} &\lesssim \|(V(x+x_{n})-V(x))\|_{L_{x}^{\frac{15}{11}}(|x|\leq 2\epsilon)}\|e^{-itH_{\alpha}}\psi\|_{L_{t}^{2}L_{x}^{10}}\nonumber\\ &\quad+\|(V(x+x_{n})-V(x))\|_{L_{x}^{\frac{3}{2}}(|x|> 2\epsilon)}\|e^{-itH_{\alpha}}\psi\|_{L_{t}^{2}L_{x}^{6}}\nonumber\\ &\lesssim\epsilon^{\frac{1}{5}}\|\psi\|_{{\mathcal H}_{k}^{\frac{1}{10}}}+|x_{n}|\epsilon^{-(\alpha-1)},\end{aligned}$$ which implies that $$\begin{aligned} \lim_{n\rightarrow+\infty}\|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t}^{\infty}L_{x}^{2}}\lesssim \epsilon^{\frac{1}{5}}.\end{aligned}$$ Since $\epsilon$ can be chosen arbitrarily small, $$\begin{aligned} \lim_{n\rightarrow+\infty}\|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t}^{\infty}L_{x}^{2}}=0.\end{aligned}$$ Thus, we obtain \eqref{Strichartzlimit} in the case $x_{n}\rightarrow\bar{x}$. Once \eqref{Strichartzlimit} and \eqref{heatlimit} are obtained, we follow the same proof as Corollary 3.4 for $\alpha=2$ in [@KMVZE], replacing $\dot{H}^{1}$ and $H_{\alpha}^{n}$ with $H^{1}$ and $1+H_{\alpha}^{n}$, respectively, in the procedure of the proof, to get the following decay estimates. \[decay\] Let $k>0$, $1<\alpha<2$. Let $\psi\in H^{1}$, $t_{n}\rightarrow\pm\infty$ and let $\{x_{n}\}_{n=1}^{+\infty}\subset {\bf R}^{3}$ be any sequence. Then $$\begin{aligned} \label{decay6} \lim_{n\rightarrow+\infty}\|e^{-it_{n}H_{\alpha}^{n}}\psi\|_{L_{x}^{6}}=0.\end{aligned}$$ Moreover, for $2<p<\infty$, $$\begin{aligned} \label{decayp} \lim_{n\rightarrow+\infty}\|e^{-it_{n}H_{\alpha}^{n}}\psi\|_{L_{x}^{p}}=0.\end{aligned}$$ Using \eqref{Strichartzlimit}, the Sobolev equivalence Lemma \[Sobolev\] and interpolation yields the following convergence (see the same result and the detailed proof for $\alpha=2$ in [@KMVZ]).
\[Strichartzlimitlemma\] Let $x_{n}\rightarrow\bar{x}\in {\bf R}^{3}$ or $|x_{n}|\rightarrow+\infty$. Then for every $\psi\in H^{1}$, $k>0$, $1<\alpha<2$, $$\begin{aligned} \label{Strichartzlimit5} \lim_{n\rightarrow+\infty}\|(e^{-itH_{\alpha}^{n}}-e^{-itH_{\alpha}^{\infty}})\psi\|_{L_{t,x}^{5}}=0.\end{aligned}$$ To show that the parameters of the linear profile decomposition are asymptotically orthogonal, we finally need two weak convergence results, Lemma \[weak1\] and Lemma \[weak2\]. Since they are direct consequences of Lemma \[limit\] and detailed proofs can be found in Lemma 3.8 and Lemma 3.9 for $\alpha=2$ in [@KMVZE], with a small modification replacing $\dot{H}^{1}$ and $\Delta$ with $H^{1}$ and $1+\Delta$, respectively, and using the inequality $\|\sqrt{H_{\alpha}^{n}}\psi_{n}\|_{L^{2}}\lesssim \|\psi_{n}\|_{H^{1}}$, where the implicit constant is independent of $n$, we omit their proofs here. \[weak1\] Let $\psi_{n}\in H^{1}({\bf R}^{3})$ satisfy $\psi_{n}\rightharpoonup 0$ in ${H}^{1}({\bf R}^{3})$ and let $t_{n}\rightarrow\bar{t}\in {\bf R}$. Then for $k>0$, $1<\alpha\leq2$, $$\begin{aligned} \label{twiceweak} e^{-it_{n}H_{\alpha}^{n}}\psi_{n}\rightharpoonup 0 \;\;\text{in}\;\; {H}^{1}({\bf R}^{3}).\end{aligned}$$ \[weak2\] Let $\psi\in H^{1}({\bf R}^{3})$ and let $\{(t_{n}, y_{n})\}\subset {\bf R}\times{\bf R}^{3}$ with $|t_{n}|\rightarrow\infty$ or $|y_{n}|\rightarrow\infty$. Then for $k>0$, $1<\alpha\leq2$, $$\begin{aligned} \label{onceweak} (e^{-it_{n}H_{\alpha}^{n}}\psi)(\cdot+y_{n})\rightharpoonup 0 \;\;\text{in}\;\; {H}^{1}({\bf R}^{3}).\end{aligned}$$ Using Lemma \[limit\], Lemma \[decay\], Lemma \[weak1\], Lemma \[weak2\] and the Littlewood-Paley theory (Lemma \[LPdecompostion\]--Lemma \[Bernstein\]) and following the proof of Proposition 5.1 for $\alpha=2$ in [@KMVZ], we obtain the following linear profile decomposition.
Similar to the above, we need to use ${\mathcal{H}}_{k}^{1}$, $1+H_{\alpha}^{n}$, $1+H_{\alpha}^{\infty}$ and $H^{-1}$ in place of ${\mathcal{\dot{H}}}_{k}^{1}$, $H_{\alpha}^{n}$, $H_{\alpha}^{\infty}$ and $\dot{H}^{-1}$, respectively, in some appropriate places. \[linearprofile\] Let $\{\phi_{n}\}_{n=1}^{+\infty}$ be a uniformly bounded sequence in $H^{1}({\bf R}^{3})$. Then there exist $M^{*}\in {\bf N}\cup\{+\infty\}$ and a subsequence of $\{\phi_{n}\}_{n=1}^{+\infty}$, still denoted by itself, such that for $k>0$, $1<\alpha\leq2$, the following statements hold. \(1) For each $1\leq j\leq M\leq M^{*}$, there exist a profile $\psi^{j}$ in $H^{1}({\bf R}^{3})$ (fixed in $n$), a sequence (in $n$) of time shifts $t_{n}^{j}$ and a sequence (in $n$) of space shifts $x_{n}^{j}$, and there exists a sequence (in $n$) of remainders $W_{n}^{M}$ in $H^{1}({\bf R}^{3})$ such that $$\begin{aligned} \label{decomposition} \phi_{n}=\sum_{j=1}^{M}e^{it_{n}^{j}H_{\alpha}}\tau_{x_{n}^{j}}\psi^{j}+W_{n}^{M}:=\sum_{j=1}^{M}\psi_{n}^{j}+W_{n}^{M}.\end{aligned}$$ \(2) For each $1\leq j\leq M$, $$\begin{aligned} \label{zeroinfty} \text{either}\; t_{n}^{j}=0\; \text{for any}\; n\in {\bf N}\;\;\; \text{or}\; \lim_{n\rightarrow+\infty}t_{n}^{j}=\pm\infty\\ \text{either}\; x_{n}^{j}=0\; \text{for any}\; n\in {\bf N}\;\;\; \text{or}\; \lim_{n\rightarrow+\infty}|x_{n}^{j}|=+\infty.\end{aligned}$$ \(3) The time and space sequences have a pairwise divergence property.
Namely, for $1\leq j\neq k\leq M$, $$\begin{aligned} \label{divergence} \lim_{n\rightarrow+\infty}(|t_{n}^{j}-t_{n}^{k}|+|x_{n}^{j}-x_{n}^{k}|)=+\infty.\end{aligned}$$ \(4) The remainder sequence has the following asymptotic smallness property and weak convergence property: $$\begin{aligned} \label{smallness} \lim_{M\rightarrow M^{*}}\overline{\lim}_{n\rightarrow+\infty}\|e^{-itH_{\alpha}}W_{n}^{M}\|_{L_{t,x}^{5}}=0\end{aligned}$$ and $$\begin{aligned} \label{remainderweak} \tau_{-x_{n}^{M}}e^{-it_{n}^{M}H_{\alpha}}W_{n}^{M}\rightharpoonup 0 \;\;\text{in}\;\; H^{1},\;\;\text{as}\;\; n\rightarrow+\infty.\end{aligned}$$ \(5) For each fixed $M$, we have the asymptotic Pythagorean expansions as follows: $$\begin{aligned} \label{expansion0} \|\phi_{n}\|_{L^{2}}^{2}=\sum_{j=1}^{M}\|\psi^{j}\|_{L^{2}}^{2}+\|W_{n}^{M}\|_{L^{2}}^{2}+o_{n}(1),\end{aligned}$$ $$\begin{aligned} \label{expansion1} \|\phi_{n}\|_{{\mathcal{\dot{H}}}_{k}^{1}}^{2}=\sum_{j=1}^{M}\|\tau_{x_{n}^{j}}\psi^{j}\|_{{\mathcal{\dot{H}}}_{k}^{1}}^{2}+\|W_{n}^{M}\|_{{\mathcal{\dot{H}}}_{k}^{1}}^{2}+o_{n}(1)\end{aligned}$$ and $$\begin{aligned} \label{expansion4} \|\phi_{n}\|_{L^{4}}^{4}=\sum_{j=1}^{M}\|\psi_{n}^{j}\|_{L^{4}}^{4}+\|W_{n}^{M}\|_{L^{4}}^{4}+o_{n}(1),\end{aligned}$$ where $o_{n}(1)\rightarrow 0$ as $n\rightarrow+\infty$. In particular, $$\begin{aligned} \label{expansions} S_{k}(\phi_{n})=\sum_{j=1}^{M}S_{k}(\psi_{n}^{j})+S_{k}(W_{n}^{M})+o_{n}(1)\end{aligned}$$ and $$\begin{aligned} \label{expansioni} I_{k}(\phi_{n})=\sum_{j=1}^{M}I_{k}(\psi_{n}^{j})+I_{k}(W_{n}^{M})+o_{n}(1).\end{aligned}$$ Following the proof of Theorem 6.1 for $\alpha=2$ in [@KMVZ], using Theorem \[scattering0\], Lemma \[stability\], Lemma \[limit\] and Lemma \[Strichartzlimitlemma\], and replacing $\frac{1}{|x|^{2}}$ and homogeneous Sobolev spaces with $\frac{1}{|x|^{\alpha}}$ and inhomogeneous Sobolev spaces, respectively, in some appropriate places, gives the following lemma on nonlinear profiles when $|x_{n}|\rightarrow+\infty$.
\[nonlinearprofile\] Let $k>0$, $1<\alpha\leq2$, $\psi\in H^{1}({\bf R}^{3})$ and let the time sequence satisfy $t_{n}\equiv 0$ or $t_{n}\rightarrow\pm\infty$ such that $$\begin{aligned} \label{time0} \text{if}\;\; t_{n}\equiv 0,\;\;S_{0}(\psi)<n_{0}\;\;\text{ and}\;\; P_{0}(\psi)\geq 0\end{aligned}$$ and $$\begin{aligned} \label{timeinfty} \text{if}\;\; t_{n}\rightarrow\pm\infty,\;\;\frac{1}{2}\|\psi\|_{H^{1}}^{2}<n_{0}.\end{aligned}$$ Let $$\psi_{n}=e^{-it_{n}H_{\alpha}}\tau_{x_{n}}\psi,$$ where the space sequence $x_{n}$ satisfies $|x_{n}|\rightarrow+\infty$. Then, taking $n$ large enough, the solution $u(t):=NLS_{k}(t)\psi_{n}$ of ($\rm{NLS_{k}}$) with initial data $u_{0}=\psi_{n}$ is global and satisfies $$\|NLS_{k}(t)\psi_{n}\|_{S_{\alpha}^{1}(I)}\lesssim_{\|\psi\|_{H^{1}}}1.$$ Moreover, for every $\epsilon >0$, there exist a positive number $N=N(\epsilon)$ and a smooth compactly supported function $\chi_{\epsilon}$ on ${\bf R}\times {\bf R}^{3}$ satisfying $$\begin{aligned} \label{dense} \|NLS_{k}(t)\psi_{n}(x)-\chi_{\epsilon}(t+t_{n}, x-x_{n})\|_{Z}<\epsilon\;\;\text{for}\;\;n\geq N,\end{aligned}$$ where the norm $$\|f\|_{Z}:=\|f\|_{L_{t,x}^{5}}+\|f\|_{L_{t,x}^{\frac{10}{3}}}+\|f\|_{L_{t}^{\frac{30}{7}}L_{x}^{\frac{90}{31}}}+\|f\|_{L_{t}^{\frac{30}{7}}{\mathcal{H}}_{k}^{\frac{31}{60},\frac{90}{31}}}.$$ Variational characterization {#section3} ============================ We start by proving the positivity of $K_{k}^{a, b}$ near $0$ in $H^{1}({\bf R}^{3})$. \[positive\] If the sequence $\varphi_{n}\in H^{1}({\bf R}^{3})\setminus \{0\}$ is uniformly bounded in $L^{2}$ and satisfies $\lim_{n\rightarrow+\infty}\|\nabla \varphi_{n}\|_{L^{2}}=0$, then for sufficiently large $n\in {\bf N}$, $K_{k}^{a, b}(\varphi_{n})>0$.
By the fact that $-2V\leq x\cdot\nabla V\leq 0$ and $V\geq 0$, we always have, for $n$ large enough, $$\begin{aligned} K_{k}^{a, b}(\varphi_{n}) &=\frac{2a+b}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi_{n}(x)|^{2}dx +\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}V|\varphi_{n}(x)|^{2}dx +\frac{b}{2}\displaystyle\int_{{\bf R}^{3}}x\cdot\nabla V |\varphi_{n}(x)|^{2}dx\\ &\;\;+\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}|\varphi_{n}(x)|^{2}dx -\frac{4a+3b}{4}\displaystyle\int_{{\bf R}^{3}}|\varphi_{n}(x)|^{4}dx\\ &\geq \frac{2a+b}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi_{n}(x)|^{2}dx +\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}|\varphi_{n}(x)|^{2}dx -\frac{4a+3b}{4}\displaystyle\int_{{\bf R}^{3}}|\varphi_{n}(x)|^{4}dx\\ &\geq \frac{2a+b}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi_{n}(x)|^{2}dx -\frac{4a+3b}{4}\displaystyle\int_{{\bf R}^{3}}|\varphi_{n}(x)|^{4}dx\\ &\geq \frac{2a+b}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi_{n}(x)|^{2}dx -\frac{4a+3b}{4}c\Big(\displaystyle\int_{{\bf R}^{3}}|\varphi_{n}(x)|^{2}dx\Big)^{\frac{1}{2}}\Big(\displaystyle\int_{{\bf R}^{3}}|\nabla\varphi_{n}(x)|^{2}dx\Big)^{\frac{3}{2}}\\ &\geq \frac{2a+b}{4}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi_{n}(x)|^{2}dx>0,\end{aligned}$$ where we have used the Gagliardo-Nirenberg inequality in the second-to-last line, and the assumption $\lim_{n\rightarrow+\infty}\|\nabla \varphi_{n}\|_{L^{2}}=0$ together with the uniform boundedness of $\|\varphi_{n}\|_{L^{2}}$ in the last line. A simple computation gives $$\|\nabla \varphi_{\lambda}^{a, b}\|_{L^{2}}=e^{\frac{1}{2}(2a+b)\lambda}\|\nabla \varphi\|_{L^{2}},$$ which implies that $$\begin{aligned} \label{gradient0} \lim_{\lambda\rightarrow-\infty}\|\nabla \varphi_{\lambda}^{a, b}\|_{L^{2}}=0.\end{aligned}$$ So it follows from Lemma \[positive\] that for sufficiently small $\lambda<0$, $$\begin{aligned} \label{positiveK} K_{k}^{a, b}(\varphi_{\lambda}^{a, b})>0.\end{aligned}$$ For brevity, let $\overline{\mu}=2a+b$ and $\underline{\mu}=2a+3b$.
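The Gagliardo-Nirenberg step in the proof above, $\int|\varphi|^{4}dx\lesssim \big(\int|\varphi|^{2}dx\big)^{\frac{1}{2}}\big(\int|\nabla\varphi|^{2}dx\big)^{\frac{3}{2}}$ in ${\bf R}^{3}$, can be sanity-checked numerically: for Gaussians the scale-invariant ratio equals $(3\pi)^{-3/2}$. A sketch (the Gaussian test functions and quadrature parameters are our choices, not from the paper):

```python
import math

def radial_quad(g, rmax=30.0, n=150000):
    """Approximate 4*pi * int_0^rmax g(r) r^2 dr by the midpoint rule."""
    dr = rmax / n
    return 4 * math.pi * dr * sum(g((i + 0.5) * dr) * ((i + 0.5) * dr) ** 2
                                  for i in range(n))

def gn_ratio(sigma):
    """||f||_4^4 / (||f||_2 * ||grad f||_2^3) for f(x) = exp(-|x|^2 / (2 sigma^2))."""
    f  = lambda r: math.exp(-r * r / (2 * sigma * sigma))
    df = lambda r: (r / sigma**2) * math.exp(-r * r / (2 * sigma * sigma))
    l2   = math.sqrt(radial_quad(lambda r: f(r) ** 2))
    l4_4 = radial_quad(lambda r: f(r) ** 4)
    grad = math.sqrt(radial_quad(lambda r: df(r) ** 2))
    return l4_4 / (l2 * grad ** 3)

# The ratio is dilation invariant and equals (3*pi)^{-3/2} for every Gaussian.
for sigma in (0.5, 2.0):
    assert abs(gn_ratio(sigma) - (3 * math.pi) ** -1.5) < 1e-4
```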
Next we introduce the functional $$J_{k}^{a,b}(\varphi):=\frac{1}{\overline{\mu}}(\overline{\mu}-{\mathcal{L}}^{a,b})S_{k}(\varphi) =\frac{1}{\overline{\mu}}(\overline{\mu}S_{k}(\varphi)-K_{k}^{a, b}(\varphi)).$$ The following lemma establishes the positivity of $J_{k}^{a,b}(\varphi)$ and the monotonicity of $J_{k}^{a,b}(\varphi_{\lambda}^{a, b})$ in $\lambda$. \[monotonicity\] For any $\varphi\in H^{1}({\bf R}^{3})$, we have the following two identities: $$\begin{aligned} \label{positiveJ} \overline{\mu}J_{k}^{a,b}(\varphi)&=(\overline{\mu}-{\mathcal{L}}^{a,b})S_{k}(\varphi)\nonumber\\ &=-b\displaystyle\int_{{\bf R}^{3}}V|\varphi|^{2}dx-\frac{b}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V)|\varphi|^{2}dx-b\|\varphi\|_{L^{2}}^{2}+\frac{a+b}{2}\|\varphi\|_{L^{4}}^{4}\end{aligned}$$ and $$\begin{aligned} \label{monotonicityJ} ({\mathcal{L}}^{a,b}-\overline{\mu})(\underline{\mu}-{\mathcal{L}}^{a,b})S_{k}(\varphi)=b^{2}\displaystyle\int_{{\bf R}^{3}}(-3x\cdot\nabla V-x\nabla^{2} Vx^{T})|\varphi|^{2}dx +(a+b)a\|\varphi\|_{L^{4}}^{4},\end{aligned}$$ where $\nabla^{2} V$ is the Hessian matrix of $V$. By simple computations, we have $${\mathcal{L}}^{a,b}\|\nabla\varphi\|_{L^{2}}^{2}=\overline{\mu}\|\nabla\varphi\|_{L^{2}}^{2},\;\; {\mathcal{L}}^{a,b}\|\varphi\|_{L^{2}}^{2}=\underline{\mu}\|\varphi\|_{L^{2}}^{2},\;\; {\mathcal{L}}^{a,b}\|\varphi\|_{L^{4}}^{4}=(4a+3b)\|\varphi\|_{L^{4}}^{4},$$ $${\mathcal{L}}^{a,b}\Big(\displaystyle\int_{{\bf R}^{3}}V|\varphi|^{2}dx\Big)=\underline{\mu}\displaystyle\int_{{\bf R}^{3}}V|\varphi|^{2}dx+b\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V)|\varphi|^{2}dx,$$ and $${\mathcal{L}}^{a,b}\Big(\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V)|\varphi|^{2}dx\Big)=2(a+b)\displaystyle\int_{{\bf R}^{3}}x\cdot\nabla V|\varphi|^{2}dx +b\displaystyle\int_{{\bf R}^{3}}x\nabla^{2} Vx^{T}|\varphi|^{2}dx,$$ which imply \eqref{positiveJ} and \eqref{monotonicityJ}. We conclude the proof.
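The action of ${\mathcal{L}}^{a,b}$ on the potential term can be checked numerically for the homogeneous potential. Assuming, as earlier in the paper, that $\varphi_{\lambda}^{a,b}(x)=e^{a\lambda}\varphi(e^{-b\lambda}x)$ and $V(x)=|x|^{-\alpha}$ (so that $x\cdot\nabla V=-\alpha V$), the identity above gives $\frac{d}{d\lambda}\big|_{\lambda=0}\int V|\varphi_{\lambda}^{a,b}|^{2}dx=(2a+3b-\alpha b)\int V|\varphi|^{2}dx$. A radial-quadrature sketch with a Gaussian (all numerical parameters are our choices):

```python
import math

a, b, alpha = 3.0, -2.0, 1.5       # sample values; V(x) = |x|^{-alpha}
phi2 = lambda r: math.exp(-r * r)  # |phi(r)|^2 for the Gaussian phi(x) = e^{-|x|^2/2}

def F(lam, rmax=20.0, n=200000):
    """int V |phi_lam|^2 dx for phi_lam(x) = e^{a*lam} phi(e^{-b*lam} x), radial midpoint rule."""
    dr = rmax / n
    s = sum(((i + 0.5) * dr) ** (2 - alpha) * phi2(math.exp(-b * lam) * (i + 0.5) * dr)
            for i in range(n))
    return 4 * math.pi * math.exp(2 * a * lam) * dr * s

h = 1e-4
numeric = (F(h) - F(-h)) / (2 * h)          # d/dlam of the potential term at lam = 0
exact = (2 * a + 3 * b - alpha * b) * F(0)  # (2a+3b) int V|phi|^2 + b int (x.grad V)|phi|^2
assert abs(numeric - exact) < 1e-3 * abs(exact)
```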
\[positive-monotonicity\] By the definition of $V$, we have $2V+x\cdot\nabla V\geq 0$ and $3x\cdot\nabla V+x\nabla^{2}Vx^{T}\leq 0$, which together with \eqref{positiveJ} and \eqref{monotonicityJ} yield that $J_{k}^{a,b}(\varphi)> 0$ and ${\mathcal{L}}^{a,b}J_{k}^{a,b}(\varphi)\geq 0$ for any $\varphi\in H^{1}({\bf R}^{3})\setminus\{0\}$, which implies that the function $\lambda \mapsto J_{k}^{a,b}(\varphi_{\lambda}^{a, b})$ is increasing. Using the functional $K_{k}^{a,b}$ with $(a, b)$ satisfying the required condition, we introduce a general minimizing problem: $$\begin{aligned} \label{thresholdab} n_{k}^{a,b}=\inf\{S_{k}(\varphi):\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}, K_{k}^{a,b}(\varphi)=0\}.\end{aligned}$$ In particular, when $(a,b)=(3,-2)$, it coincides with $n_{k}$. In fact, we shall show that $$\begin{aligned} \label{threshold4} n_{k}^{a,b}=n_{0}^{a,b}=n_{k}=n_{0}.\end{aligned}$$ To this end, we introduce another general minimizing problem in terms of the positive functional $J_{0}^{a,b}$ for any $(a, b)$ satisfying the same condition: $$\begin{aligned} \label{thresholdJ} j^{a,b}=\inf\{J_{0}^{a,b}(\varphi):\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}, K_{0}^{a,b}(\varphi)\leq 0\}.\end{aligned}$$ In the following lemma, we shall show a relation between the two minimizers $n_{0}^{a, b}$ in \eqref{thresholdab} (i.e., with $k=0$) and $j^{a,b}$ in \eqref{thresholdJ}. \[j=n0\] $$\begin{aligned} \label{jab=n0} j^{a,b}=n_{0}^{a, b}.\end{aligned}$$ On one hand, for any $\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}$ with $K_{0}^{a,b}(\varphi)\leq 0$, there are two possibilities: $K_{0}^{a,b}(\varphi)= 0$ and $K_{0}^{a,b}(\varphi)< 0$. When $K_{0}^{a,b}(\varphi)=0$, $S_{0}(\varphi)=J_{0}^{a,b}(\varphi)$ by the definition of $J_{k}^{a, b}$.
Hence, $$\begin{aligned} \label{onehand} n_{0}^{a, b}\leq J_{0}^{a,b}(\varphi).\end{aligned}$$ When $K_{0}^{a,b}(\varphi)=K_{0}^{a,b}(\varphi_{0}^{a, b})< 0$, the continuity of $K_{0}^{a, b}(\varphi_{\lambda}^{a, b})$ in $\lambda$, together with the fact that $K_{0}^{a, b}(\varphi_{\lambda}^{a, b})>0$ for sufficiently small $\lambda<0$ (which holds by \eqref{positiveK}), yields that there exists a $\lambda_{0}<0$ such that $K_{0}^{a, b}(\varphi_{\lambda_{0}}^{a, b})=0$. Using Remark \[positive-monotonicity\], we get $$S_{0}(\varphi_{\lambda_{0}}^{a, b})=J_{0}^{a, b}(\varphi_{\lambda_{0}}^{a, b})\leq J_{0}^{a,b}(\varphi_{0}^{a, b})=J_{0}^{a,b}(\varphi).$$ Hence, we still have \eqref{onehand}. Altogether, for any $\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}$ with $K_{0}^{a,b}(\varphi)\leq 0$, we have \eqref{onehand}. By the definition of $j^{a, b}$, we have $n_{0}^{a,b}\leq j^{a,b}$. On the other hand, consider any $\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}$ with $K_{0}^{a,b}(\varphi)=0$. Of course, $K_{0}^{a,b}(\varphi)\leq 0$ and $S_{0}(\varphi)=J_{0}^{a,b}(\varphi)$, which implies that $j^{a, b}\leq n_{0}^{a,b}$. Thus, we complete the proof. Next we shall apply Lemma \[j=n0\] to get $n_{k}^{a,b}=n_{0}^{a,b}$. \[n=n0\] $$\begin{aligned} \label{nab=n0} n_{k}^{a,b}=n_{0}^{a, b}.\end{aligned}$$ On one hand, for any $\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}$ with $K_{k}^{a,b}(\varphi)= 0$, we have $S_{k}(\varphi)=J_{k}^{a,b}(\varphi)$ by the definition of $J_{k}^{a, b}$.
Since $V\geq 0$ and $2V+x\cdot\nabla V\geq 0$, we get $K_{0}^{a,b}(\varphi)\leq K_{k}^{a,b}(\varphi)=0$ and $J_{0}^{a,b}(\varphi)\leq J_{k}^{a,b}(\varphi)$, which together with \eqref{jab=n0} implies that $$n_{0}^{a, b}=j^{a,b}\leq J_{k}^{a,b}(\varphi).$$ Since $\varphi$ with $K_{k}^{a,b}(\varphi)= 0$ is arbitrary, taking the infimum on both sides of the above inequality yields that $$n_{0}^{a, b}\leq n_{k}^{a, b}.$$ On the other hand, recall that $Q$ is the positive radial exponentially decaying solution of the corresponding elliptic equation, so there exists a constant $c$ such that $Q(x)\lesssim e^{-c|x|}$ for any $x\in {\bf R}^{3}$ (see, e.g., Theorem 8.1.1 in [@C]; here we only need the decay of $Q$, not necessarily exponential decay). Let $x_{n}$ be a sequence satisfying $|x_{n}|\rightarrow+\infty$. Hence, for any given $R>0$, there exists $N=N(R)>0$ such that $|x_{n}|\geq 2R$ for any $n\geq N$. We claim that $$\begin{aligned} \label{V01} \displaystyle\int_{{\bf R}^{3}}V(x)|Q(x-x_{n})|^{2}dx\rightarrow 0\;\;\text{as}\;\; n\rightarrow +\infty.\end{aligned}$$ Indeed, $$\begin{aligned} \displaystyle\int_{{\bf R}^{3}}V(x)|Q(x-x_{n})|^{2}dx &\leq \displaystyle\int_{|x|\leq R}V(x)|Q(x-x_{n})|^{2}dx +\displaystyle\int_{|x|> R}V(x)|Q(x-x_{n})|^{2}dx\\ &:=I_{1}+I_{2}.\end{aligned}$$ For the first part $I_{1}$, when $n\geq N$, $|x-x_{n}|\geq \frac{|x_{n}|}{2}$ and then $$|Q(x-x_{n})|\leq \sup_{|x|\geq\frac{|x_{n}|}{2}}|Q(x)|:=C(|x_{n}|)\rightarrow 0$$ as $n$ tends to $+\infty$. Therefore, $$\begin{aligned} I_{1}\lesssim C(|x_{n}|)\displaystyle\int_{|x|\leq R}V(x)dx\lesssim R^{3-\alpha}C(|x_{n}|).\end{aligned}$$ For the second part $I_{2}$, $V(x)\lesssim R^{-\alpha}$, which implies that $$I_{2}\lesssim R^{-\alpha}\displaystyle\int_{{\bf R}^{3}}|Q(x)|^{2}dx.$$ Combining the above two parts yields that the claim holds true.
Since $x\cdot\nabla V=-\alpha V$, we also have $$\begin{aligned} \label{nablaV0} \displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V)|Q(x-x_{n})|^{2}dx\rightarrow 0\;\;\text{as}\;\; n\rightarrow +\infty.\end{aligned}$$ Using the definition of $K_{k}^{a, b}$ and $K_{0}^{a, b}(Q)=0$ and taking $\theta$ sufficiently large, we have $$K_{k}^{a, b}(\tau_{x_{n}}Q)>K_{0}^{a, b}(\tau_{x_{n}}Q)=K_{0}^{a, b}(Q)=0$$ and $$K_{k}^{a, b}(\theta\tau_{x_{n}}Q)<0,$$ from which it follows that there must be $\theta_{n}>1$ satisfying $K_{k}^{a, b}(\theta_{n}\tau_{x_{n}}Q)=0$. We claim that $$\begin{aligned} \label{theta1} \lim_{n\rightarrow +\infty}\theta_{n}=1.\end{aligned}$$ In fact, by $K_{0}^{a, b}(Q)=0$, we have $$\frac{2a+b}{2}\displaystyle\int_{{\bf R}^{3}}|\nabla Q(x)|^{2}dx +\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}|Q(x)|^{2}dx =\frac{4a+3b}{4}\displaystyle\int_{{\bf R}^{3}}|Q(x)|^{4}dx,$$ which implies that $$\begin{aligned} K_{k}^{a, b}(\theta_{n}\tau_{x_{n}}Q)&=\theta_{n}^{2}\Big[\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}V|Q(x-x_{n})|^{2}dx +\frac{b}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |Q(x-x_{n})|^{2}dx\\ &\;\;+\frac{4a+3b}{4}(1-\theta_{n}^{2})\displaystyle\int_{{\bf R}^{3}}|Q(x)|^{4}dx\Big]=0.\end{aligned}$$ Hence, $$\frac{2a+3b}{2}\displaystyle\int_{{\bf R}^{3}}V|Q(x-x_{n})|^{2}dx +\frac{b}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |Q(x-x_{n})|^{2}dx +\frac{4a+3b}{4}(1-\theta_{n}^{2})\displaystyle\int_{{\bf R}^{3}}|Q(x)|^{4}dx=0.$$ Taking the limit in the above equality and using \eqref{V01} and \eqref{nablaV0} gives the claim \eqref{theta1}. Using \eqref{V01}, \eqref{nablaV0} and \eqref{theta1} yields that $$\lim_{n\rightarrow+\infty}S_{k}(\theta_{n}\tau_{x_{n}}Q)=S_{0}(Q)=n_{0}^{a,b},$$ which together with $K_{k}^{a, b}(\theta_{n}\tau_{x_{n}}Q)=0$ implies that $$n_{k}^{a, b}\leq n_{0}^{a,b}.$$ We conclude the proof. Lemma \[n=n0\] shows that $n_{k}^{a,b}=n_{0}^{a,b}=S_{0}(Q)$, so $n_{k}^{a,b}$ and $n_{0}^{a,b}$ do not depend on the parameters $a$ and $b$. Therefore, \eqref{threshold4} holds true.
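The identity $x\cdot\nabla V=-\alpha V$ used above is Euler's relation for the homogeneous potential $V(x)=k|x|^{-\alpha}$; a quick finite-difference check (the constants $k$ and $\alpha$ are arbitrary sample values):

```python
import math

k, alpha = 2.0, 1.5  # arbitrary sample constants; V(x) = k|x|^{-alpha}

def V(x):
    return k * math.sqrt(sum(c * c for c in x)) ** (-alpha)

def x_dot_gradV(x, h=1e-6):
    """x . grad V(x) via central differences in each coordinate."""
    total = 0.0
    for i in range(3):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += x[i] * (V(xp) - V(xm)) / (2 * h)
    return total

# Euler's relation for a (-alpha)-homogeneous function: x . grad V = -alpha * V.
for x in [(1.0, 0.0, 0.0), (0.3, -0.7, 1.2)]:
    assert abs(x_dot_gradV(x) + alpha * V(x)) < 1e-6
```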
It is known that $n_{0}$ is attained by $Q$. However, the following lemma shows that $n_{k}$ is never attained for any $k>0$. \[attain\] $n_{k}$ is never attained for any $k>0$. Suppose that there exists a $\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}$ such that $n_{k}$ is attained by $\varphi$. Namely, $P_{k}(\varphi)=0$ and $S_{k}(\varphi)=n_{k}$. As $\varphi\in H^{1}({\bf R}^{3})\setminus \{0\}$, $\lim_{|x|\rightarrow +\infty}\varphi(x)=0$. Following the proof of \eqref{V01}, we have $$\begin{aligned} \label{V0} \displaystyle\int_{{\bf R}^{3}}V(x)|\varphi(x-x_{n})|^{2}dx\rightarrow 0\;\;\text{as}\;\; n\rightarrow +\infty,\end{aligned}$$ where $|x_{n}|\rightarrow+\infty$ as $n\rightarrow +\infty$. Hence, we have $$\begin{aligned} -\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\tau_{x_{n}}\varphi|^{2}dx\; \text{ is positive and tends to zero as}\;\; n\rightarrow +\infty\end{aligned}$$ and $$\begin{aligned} 2\displaystyle\int_{{\bf R}^{3}}V|\tau_{x_{n}}\varphi|^{2}dx +\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V )|\tau_{x_{n}}\varphi|^{2}dx\; \text{ is positive and tends to zero as}\;\; n\rightarrow +\infty.\end{aligned}$$ Therefore, for $n$ sufficiently large, we have $$\begin{aligned} \label{K0} P_{k}(\tau_{x_{n}}\varphi)<P_{k}(\varphi)=0\end{aligned}$$ and $$\begin{aligned} \label{J} J_{k}^{3, -2}(\tau_{x_{n}}\varphi)<J_{k}^{3,-2}(\varphi)=S_{k}(\varphi)=n_{k}.\end{aligned}$$ By \eqref{positiveK}, for sufficiently small $\lambda<0$, we have $P_{k}((\tau_{x_{n}}\varphi)_{\lambda}^{3, -2})>0$, which combined with \eqref{K0} implies that there exists a $\lambda_{0}<0$ such that $P_{k}((\tau_{x_{n}}\varphi)_{\lambda_{0}}^{3, -2})=0$. Using Remark \[positive-monotonicity\] and \eqref{J}, we get $$n_{k}=S_{k}((\tau_{x_{n}}\varphi)_{\lambda_{0}}^{3, -2})=J_{k}^{3, -2}((\tau_{x_{n}}\varphi)_{\lambda_{0}}^{3, -2}) < J_{k}^{3,-2}((\tau_{x_{n}}\varphi)_{0}^{3, -2})=J_{k}^{3,-2}(\tau_{x_{n}}\varphi)<n_{k},$$ which is impossible. Thus we complete the proof.
To show that $P_{k}(\varphi)$ and $I_{k}(\varphi)$ have the same sign under the condition $S_{k}(\varphi)<n_{0}$ with $\varphi\in H^{1}({\bf R}^{3})$, we introduce ${\mathcal{N}}_{a,b}^{\pm}\subset H^{1}({\bf R}^{3})$ defined by $$\begin{aligned} \label{nab+} {\mathcal{N}}_{a,b}^{+}:=\{\varphi\in H^{1}({\bf R}^{3}): S_{k}(\varphi)<n_{0}, K_{k}^{a,b}(\varphi)\geq 0\}\end{aligned}$$ and $$\begin{aligned} \label{nab-} {\mathcal{N}}_{a,b}^{-}:=\{\varphi\in H^{1}({\bf R}^{3}): S_{k}(\varphi)<n_{0}, K_{k}^{a,b}(\varphi)< 0\}.\end{aligned}$$ It is easy to see that ${\mathcal{N}}_{3,-2}^{\pm}={\mathcal{N}}^{\pm}$. The above fact can be obtained if we show that ${\mathcal{N}}_{a,b}^{\pm}$ are independent of $(a, b)$ (i.e., ${\mathcal{N}}_{a,b}^{\pm}={\mathcal{N}}^{\pm}$), which follows from the contractivity of ${\mathcal{N}}_{a,b}^{+}$. \[independentofab\] Let $(a, b)$ satisfy the required condition; then ${\mathcal{N}}_{a,b}^{\pm}$ are independent of $(a, b)$. The proof is similar to that of Lemma 2.9 in [@IMN] (see also Lemma 2.15 in [@II]), so we omit it. The following lemma shows that for any element $\varphi$ of ${\mathcal{N}}^{+}$, $S_{k}(\varphi)\sim \|\varphi\|_{{\mathcal{H}}_{k}^{1}}\sim \|\varphi\|_{H^{1}}$. \[equivalentSH\] Let $\varphi\in {\mathcal{N}}^{+}$, then $$\begin{aligned} \label{equivalenceSH} \frac{1}{4}\|\varphi\|_{{\mathcal{H}}_{k}^{1}}^{2}\leq S_{k}(\varphi)\leq\frac{1}{2}\|\varphi\|_{{\mathcal{H}}_{k}^{1}}^{2}.\end{aligned}$$ By Lemma \[independentofab\], we have $I_{k}(\varphi)\geq 0$, which implies that $$\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx\leq \|\varphi\|_{{\mathcal{H}}_{k}^{1}}^{2}.$$ Thus, we have $$\frac{1}{2}\|\varphi\|_{{\mathcal{H}}_{k}^{1}}^{2}\geq S_{k}(\varphi)=\frac{1}{2}\|\varphi\|_{{\mathcal H}_{k}^{1}}^{2}-\frac{1}{4}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx \geq \frac{1}{4}\|\varphi\|_{{\mathcal{H}}_{k}^{1}}^{2},$$ which is precisely \eqref{equivalenceSH}. We complete the proof.
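The algebra behind \eqref{equivalenceSH} is elementary: writing $X=\|\varphi\|_{{\mathcal{H}}_{k}^{1}}^{2}$ and $Y=\int|\varphi|^{4}dx$, the constraint $0\leq Y\leq X$ coming from $I_{k}(\varphi)\geq 0$ forces $X/4\leq X/2-Y/4\leq X/2$. A trivial check (the sample values are arbitrary):

```python
# X stands for the H_k^1-norm squared, Y for the L^4-norm to the fourth power;
# I_k >= 0 gives 0 <= Y <= X, and the action S = X/2 - Y/4 is then pinched.
def action(X, Y):
    return X / 2 - Y / 4

for X in (0.5, 1.0, 3.7, 10.0):
    for frac in (0.0, 0.3, 0.9, 1.0):
        Y = frac * X
        assert X / 4 <= action(X, Y) <= X / 2
```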
Finally, we obtain the corresponding uniform bounds on $P_{k}(\varphi)$ when $\varphi\in {\mathcal{N}}^{\pm}$, which play a vital role in the proof of Theorem \[scattering1\]. \[uniformbounds\] 1\. Let $\varphi\in {\mathcal{N}}^{-}$, then $$\begin{aligned} \label{upperbound} P_{k}(\varphi)\leq -4\Big(n_{0}-S_{k}(\varphi)\Big).\end{aligned}$$ 2\. Let $\varphi\in {\mathcal{N}}^{+}$, then $$\begin{aligned} \label{lowerbound} P_{k}(\varphi)\geq \min\Big\{4\Big(n_{0}-S_{k}(\varphi)\Big), \frac{2}{5}\Big(\|\nabla\varphi\|_{L^{2}}^{2}-\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\varphi(x)|^{2}dx\Big)\Big\}.\end{aligned}$$ For any $\varphi\in H^{1}({\bf R}^{3})$, define $s(\lambda):=S_{k}(\varphi_{\lambda}^{3,-2})$; then $$\begin{aligned} &s(\lambda)=\frac{1}{2}e^{4\lambda}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx +\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}V(e^{-2\lambda}x)|\varphi(x)|^{2}dx\nonumber\\ &\quad\quad\quad +\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{2}dx -\frac{1}{4}e^{6\lambda}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx,\nonumber\\ &s'(\lambda)=P_{k}(\varphi_{\lambda}^{3,-2})=2e^{4\lambda}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx -e^{-2\lambda}\displaystyle\int_{{\bf R}^{3}}\Big[x\cdot (\nabla V)(e^{-2\lambda}x)\Big]|\varphi(x)|^{2}dx\label{s'}\\ &\quad\quad\quad-\frac{3}{2}e^{6\lambda}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx\nonumber\\ &\quad\quad\geq 2e^{4\lambda}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx -\frac{3}{2}e^{6\lambda}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx,\nonumber\\ &s''(\lambda)=8e^{4\lambda}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx +2e^{-2\lambda}\displaystyle\int_{{\bf R}^{3}}\Big[x\cdot (\nabla V)(e^{-2\lambda}x)\Big]|\varphi(x)|^{2}dx \label{s''}\\ &\quad\quad\quad+2e^{-4\lambda}\displaystyle\int_{{\bf R}^{3}}\Big[x(\nabla^{2} V)(e^{-2\lambda}x)x^{T}\Big]|\varphi(x)|^{2}dx -9e^{6\lambda}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx\nonumber\\ &\quad\quad\leq 8e^{4\lambda}\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx -4e^{-2\lambda}\displaystyle\int_{{\bf R}^{3}}\Big[x\cdot (\nabla V)(e^{-2\lambda}x)\Big]|\varphi(x)|^{2}dx\nonumber\\ &\quad\quad\quad-9e^{6\lambda}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx\nonumber\\ &\quad\quad= 4s'(\lambda)-3e^{6\lambda}\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx\leq 4s'(\lambda)\nonumber,\end{aligned}$$ where we have used the inequalities $x\cdot\nabla V\leq 0$ and $3x\cdot\nabla V+x\nabla^{2} Vx^{T}\leq 0$. 1\. If $\varphi\in {\mathcal{N}}^{-}$, then it follows from \eqref{s'} that $s'(0)=P_{k}(\varphi)<0$ and $s'(\lambda)>0$ for sufficiently small $\lambda<0$. Thus, by the continuity of $s'(\lambda)=P_{k}(\varphi_{\lambda}^{3,-2})$ in $\lambda$, there exists a $\lambda_{0}<0$ such that $$\begin{aligned} s'(\lambda_{0})=P_{k}(\varphi_{\lambda_{0}}^{3,-2})=0\;\;\text{and}\;\;s'(\lambda)<0\;\;\text{for all}\;\; \lambda\in (\lambda_{0},0].\end{aligned}$$ By the definition of $n_{0}$, $s(\lambda_{0})=S_{k}(\varphi_{\lambda_{0}}^{3,-2})\geq n_{0}$. Integrating the inequality $s''(\lambda)\leq 4s'(\lambda)$ over $[\lambda_{0}, 0]$ yields that $$P_{k}(\varphi)=s'(0)=s'(0)-s'(\lambda_{0})\leq 4(s(0)-s(\lambda_{0}))\leq 4(S_{k}(\varphi)-n_{0})=-4(n_{0}-S_{k}(\varphi)).$$ Thus, we complete the proof of \eqref{upperbound}. 2\. If $\varphi\in {\mathcal{N}}^{+}$, we consider two cases: one is $8P_{k}(\varphi)\geq 3\|\varphi\|_{L^{4}}^{4}$ and the other is $8P_{k}(\varphi)< 3\|\varphi\|_{L^{4}}^{4}$.
For the case $8P_{k}(\varphi)\geq 3\|\varphi\|_{L^{4}}^{4}$, it follows from the definition of $P_{k}$ that $$2P_{k}(\varphi)=4\displaystyle\int_{{\bf R}^{3}}|\nabla \varphi(x)|^{2}dx -2\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\varphi(x)|^{2}dx -3\displaystyle\int_{{\bf R}^{3}}|\varphi(x)|^{4}dx,$$ and then we have $$10P_{k}(\varphi)\geq 4(\|\nabla\varphi\|_{L^{2}}^{2}-\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\varphi(x)|^{2}dx),$$ that is, $$\begin{aligned} \label{lowerbound1} P_{k}(\varphi)\geq \frac{2}{5}\Big(\|\nabla\varphi\|_{L^{2}}^{2}-\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |\varphi(x)|^{2}dx\Big).\end{aligned}$$ For the other case $8P_{k}(\varphi)< 3\|\varphi\|_{L^{4}}^{4}$, by , we have $$\begin{aligned} \label{s'decreasing} 0<8s'(\lambda)<3e^{6\lambda}\|\varphi\|_{L^{4}}^{4}\;\;\text{and then}\;\;s''(\lambda)\leq 4s'(\lambda)-3e^{6\lambda}\|\varphi\|_{L^{4}}^{4}< -4s'(\lambda)\end{aligned}$$ at $\lambda=0$. Also, as $s'(\lambda)$ and $s''(\lambda)$ are continuous, $s'(\lambda)$ decreases as $\lambda$ increases until $s'(\lambda_{1})=0$ for some $0<\lambda_{1}<+\infty$, and remains true over $[0, \lambda_{1}]$. Since $P_{k}(\varphi_{\lambda_{1}}^{3,-2})=s'(\lambda_{1})=0$, by the definition of $n_{0}$ , $s(\lambda_{1})=S_{k}(\varphi_{\lambda_{1}}^{3,-2})\geq n_{0}$. Integrating the second inequality in over $[0, \lambda_{1}]$, we have $$\begin{aligned} -P_{k}(\varphi)=s'(\lambda_{1})-s'(0)<-4(s(\lambda_{1})-s(0))\leq -4\Big(n_{0}-S_{k}(\varphi)\Big),\end{aligned}$$ which is $$\begin{aligned} \label{lowerbound2} P_{k}(\varphi)\geq 4\Big(n_{0}-S_{k}(\varphi)\Big).\end{aligned}$$ Putting and together yields . Criteria for global well-posedness and blow-up ============================================== In this section, we will give the criteria for global well-posedness and blow-up for the solution $u$ of ($\rm{NLS_{k}}$), which are partial results of Theorem \[scattering1\].
The proof of the blow-up part is based on the argument of [@DWZ]. \[globalvsblowup\] Let $u$ be the solution of ($\rm{NLS_{k}}$) on $(-T_{min}, T_{max})$, where $(-T_{min}, T_{max})$ is the maximal life-span. \(i) If $u_{0}\in {\mathcal{N}}^{+}$, then $u$ is globally well-posed and $u(t)\in {\mathcal{N}}^{+}$ for any $t\in {\bf R}$. \(ii) If $u_{0}\in {\mathcal{N}}^{-}$, then $u(t)\in {\mathcal{N}}^{-}$ for any $t\in (-T_{min}, T_{max})$ and one of the following four statements holds true: \(a) $T_{max}<+\infty$ and $\lim_{t\uparrow T_{max}}\|\nabla u(t)\|_{L^{2}}=+\infty$. \(b) $T_{min}<+\infty$ and $\lim_{t\downarrow -T_{min}}\|\nabla u(t)\|_{L^{2}}=+\infty$. \(c) $T_{max}=+\infty$ and there exists a sequence $\{t_{n}\}_{n=1}^{+\infty}$ such that $t_{n}\rightarrow+\infty$ and $\lim_{n\rightarrow+\infty}\|\nabla u(t_{n})\|_{L^{2}}=+\infty$. \(d) $T_{min}=+\infty$ and there exists a sequence $\{t_{n}\}_{n=1}^{+\infty}$ such that $t_{n}\rightarrow-\infty$ and $\lim_{n\rightarrow+\infty}\|\nabla u(t_{n})\|_{L^{2}}=+\infty$. $(i)$ Define $$I^{+}=\{t\in (-T_{min}, T_{max}): u(t)\in {\mathcal{N}}^{+}\}.$$ Obviously, $0\in I^{+}$, so $I^{+}\neq \emptyset$. On one hand, since $S_{k}(u(t))=S_{k}(u_{0})<n_{0}$ and $P_{k}(u(t))$ is continuous in $t$, $I^{+}$ is a closed subset of $(-T_{min}, T_{max})$. On the other hand, by , we further obtain that $I^{+}$ is an open subset of $(-T_{min}, T_{max})$. Therefore, $I^{+}=(-T_{min}, T_{max})$. Namely, for any $t\in (-T_{min}, T_{max})$, $u(t)\in {\mathcal{N}}^{+}$. It follows from that for any $t\in (-T_{min}, T_{max})$, $$\|u(t)\|_{H^{1}}^{2}\leq \|u(t)\|_{{\mathcal{H}}_{k}^{1}}^{2}\leq 4S_{k}(u(t))\leq 4n_{0}.$$ So by the local well-posedness result $(i)$ of Theorem \[localwellposedness\], we have $(-T_{min}, T_{max})={\bf R}$, which implies that $u$ is globally well-posed and $u(t)\in {\mathcal{N}}^{+}$ for any $t\in {\bf R}$. Thus, we complete the proof of $(i)$.
$(ii)$ Similarly to $(i)$, we can show that $u(t)\in {\mathcal{N}}^{-}$ for any $t\in (-T_{min}, T_{max})$ by replacing with . In the sequel, we only consider the positive time because the negative time can be dealt with similarly. If $T_{max}<+\infty$, we naturally have $\lim_{t\uparrow T_{max}}\|\nabla u(t)\|_{L^{2}}=+\infty$. If $T_{max}=+\infty$, we shall prove that $\|\nabla u(t)\|_{L^{2}}$ is unbounded on $[0, +\infty)$ by contradiction. Assume we have $$C_0=\sup_{t\in\mathbb R^+}\|\nabla u(t)\|_{L^2}<\infty.$$ Consider the localized Virial identity and define $$\label{vf} I(t):=\int_{{\bf R}^{3}}\phi(x)|u(t,x)|^2dx,$$ then by direct computation, we obtain that for any $\phi\in C^4(\mathbb R^3)$ (e.g., see Proposition 7.1 in [@H]) $$I'(t)=2{\operatorname{Im}}\int_{{\bf R}^{3}}\nabla\phi\cdot\nabla u\bar udx;$$ $$\begin{aligned} I''(t)=\int_{{\bf R}^{3}}4{\operatorname{Re}}\nabla\bar u\nabla^2\phi\nabla udx -\int_{{\bf R}^{3}}2\nabla\phi\cdot\nabla V|u|^2+\Delta\phi |u|^4dx -\int_{{\bf R}^{3}}\Delta^2\phi|u|^2dx.\end{aligned}$$ In particular, if $\phi$ is a radial function, $$\begin{aligned} \label{I'} I'(t)=2{\operatorname{Im}}\int_{{\bf R}^{3}}\phi'(r)\frac{x\cdot\nabla u}r\bar udx,\end{aligned}$$ $$\begin{aligned} \label{I''0} &I''(t)=4\int_{{\bf R}^{3}}\frac{\phi'}r|\nabla u|^2dx+4\int_{{\bf R}^{3}}\left(\frac{\phi''}{r^2}-\frac{\phi'}{r^3}\right)|x\cdot\nabla u|^2dx\\ &-2\int_{{\bf R}^{3}}\frac{\phi'}{r}x\cdot\nabla V|u|^2dx-\int_{{\bf R}^{3}}\left(\phi''(r)+\frac{2}r\phi'(r)\right) |u|^4dx\nonumber -\int_{{\bf R}^{3}}\Delta^2\phi|u|^2dx.\end{aligned}$$ [**$L^2$ estimate in the exterior ball**]{} Given $R\gg 1$, which will be determined later.
Take $\phi$ in such that $$\phi=\begin{cases}0,&0\leq r\leq\frac R2;\\1,&r\geq R,\end{cases}$$ and $$0\leq\phi\leq1,\ \ 0\leq\phi'\leq\frac4R.$$ By and the Hölder inequality, there holds that $$\begin{aligned} I(t)=&I(0)+\int_0^tI'(\tau)d\tau \leq I(0)+t\|\phi'\|_{L^\infty}M(u_{0}) C_0\\ \leq&\int_{|x|\geq\frac R2}|u_0|^2dx+\frac{4M(u_{0}) C_0t}R.\end{aligned}$$ Note that $$\int_{|x|\geq\frac R2}|u_0|^2dx=o_R(1),$$ and $$\int_{|x|\geq R}|u(t,x)|^2dx\leq I(t).$$ So for given $\eta_0>0$, if $$t\leq\frac{\eta_0R}{4M(u_{0}) C_0},$$ then we have that $$\begin{aligned} \label{outermass} \int_{|x|\geq\frac R2}|u(t,x)|^2dx\leq\eta_0+o_R(1).\end{aligned}$$ [**Localized Virial identity**]{} $I''(t)$ can be rewritten as $$\begin{aligned} \label{I''} I''(t)=4P_{k}(u)+R_1+R_2+R_3+R_4,\end{aligned}$$ where $$R_1=4\int_{{\bf R}^{3}}\left(\frac{\phi'}r-2\right)|\nabla u|^2dx+4\int_{{\bf R}^{3}} \left(\frac{\phi''}{r^2}-\frac{\phi'}{r^3}\right)|x\cdot\nabla u|^2dx,$$ $$R_2=-\int_{{\bf R}^{3}}\left(\phi''+\frac{2}r\phi'(r)-6\right)| u|^4dx,$$ $$R_3=-2\int_{{\bf R}^{3}}\left(\frac{\phi'}{r}-2\right)(x\cdot\nabla V)|u|^2dx,$$ $$R_4=-\int_{{\bf R}^{3}}\Delta^2\phi|u|^2dx.$$ At this stage, we choose another radial function $\phi$ such that $$\begin{aligned} \label{radialfunction} 0\leq\phi\leq r^2,\ \ \phi''\leq2,\ \ \phi^{(4)}\leq\frac4{R^2},\;\; \text{and}\;\; \phi=\begin{cases}r^2,&0\leq r\leq R;\\0,&r\geq 2R\end{cases}.\end{aligned}$$ First, we show that $R_1\leq0$. If $\phi''\leq r^{-1}\phi'$, by $\phi'\leq2r$, it is easy to see that $R_1\leq0$. If $\phi''\geq r^{-1}\phi'$, by $\phi''\leq2$, it holds that $$R_1\leq 4\int_{{\bf R}^{3}}\left(\frac{\phi'}r-2\right)|\nabla u|^2dx+4\int_{{\bf R}^{3}} \left(\phi''-\frac{\phi'}r\right)|\nabla u|^2dx=4\int_{{\bf R}^{3}}\left(\phi''-2\right)|\nabla u|^2dx\leq 0.$$ Secondly, let $\chi^{4}(r)=|\phi''+\frac{2}r\phi'(r)-6|$.
It is easy to see that ${\rm supp}\chi\subset[R,\infty).$ So by the Gagliardo-Nirenberg inequality, $$R_2\leq \displaystyle\int_{{\bf R}^{3}}\chi^{4}|u|^{4}dx\lesssim \|\nabla(\chi u)\|_{L^{2}}^{3}\|\chi u\|_{L^{2}} \leq C(M(u_{0}), C_0) \|u\|_{L^2(|x|>R)}^{\frac{1}{4}}.$$ By the properties of $\phi$, $$R_4\leq CR^{-2}\|u\|_{L^2(|x|>R)}^2.$$ Finally, by $x\cdot\nabla V\leq0$ and $\phi'\leq 2r$, we obtain $R_3\leq0$. Putting all the above estimates together, there holds that for $R\gg 1$, $$\begin{aligned} \label{I''upperbound} I''(t)\leq 4P_{k}(u)+\tilde C\|u\|_{L^2(|x|>R)},\end{aligned}$$ where $\tilde C>0$ depends on $M(u_{0})$ and $C_0$. Using and yields that for $t\leq T:=\eta_0R/(4M(u_{0}) C_0)$, $$I''(t)\leq 4P_{k}(u)+\tilde C(\eta_0^{1/2}+o_R(1)).$$ As $u(t)\in {\mathcal{N}}^{-}$, it follows from that there exists $$\beta_{0}:=-4(n_{0}-S_{k}(u(t)))=-4(n_{0}-S_{k}(u_{0}))<0$$ such that $$\begin{aligned} \label{I''upperbound1} I''(t)\leq 4\beta_{0}+\tilde C(\eta_0^{1/2}+o_R(1)).\end{aligned}$$ Choose $\eta_0$ sufficiently small and take $R$ sufficiently large such that $$4\beta_{0}+\tilde C(\eta_0^{1/2}+o_R(1))<\beta_{0}.$$ Integrating over $[0, T]$ twice, we obtain that $$\begin{aligned} I(T)&\leq I(0)+I'(0)T+\int_0^T\int_0^t\beta_{0}\,ds\,dt\\ &\leq I(0)+I'(0)T+\beta_{0}\frac{T^2}2.\end{aligned}$$ Hence for $T=\eta_0R/(4M(u_{0})C_0)$, we obtain that $$\begin{aligned} \label{IT} I(T)\leq I(0)+I'(0)\eta_0R/(4M(u_{0}) C_0)+\alpha_0R^2,\end{aligned}$$ where the constant $$\alpha_0=\beta_0\eta_0^2/(4M(u_{0}) C_0)^2<0$$ is independent of $R$. At the same time, we note that $$\begin{aligned} \label{I0} I(0)=o_R(1)R^2,\ \ \ I'(0)=o_R(1)R.\end{aligned}$$ In fact, $$\begin{aligned} I(0)&\leq\int_{|x|<\sqrt{R}}|x|^2|u_0|^2dx+\int_{\sqrt{R}<|x|<2R}|x|^2|u_0|^2dx\\ &\leq RM(u_{0})+R^2\int_{|x|>\sqrt{R}}|u_0|^2dx=o_R(1)R^2.\end{aligned}$$ Similarly, we obtain the second estimate and then prove .
Putting and together and choosing $R$ sufficiently large, we find that $$I(T)\leq (o_R(1)+\alpha_0)R^2\leq\frac14\alpha_0R^2<0,$$ which is impossible since $I\geq 0$. Thus, we conclude the proof of the blow-up part. Scattering result ================= In this section, we shall show the remaining part of Theorem \[scattering1\]. In the previous section, we have obtained that the solution $u(t)$ of ($\rm{NLS_{k}}$) is global and belongs to $\mathcal{N}^{+}$ if $u_{0}\in \mathcal{N}^{+}$. To get the scattering result, by $(iv)$ of Theorem \[localwellposedness\], it suffices to obtain . To this end, we introduce a definition. \[SC\] We say that $SC(u_0)$ holds if for $u_0\in H^1({\bf R}^{3})$ satisfying $u_{0}\in \mathcal{N}^{+}$, the corresponding global solution $u$ of ($\rm{NLS_{k}}$) satisfies . We first note that for $u_{0}\in \mathcal{N}^{+}$, there exists $\delta>0$ such that if $S_{k}(u_{0})<\delta$, then holds. In fact, by , $\|u_{0}\|_{H^{1}}^{2}\lesssim S_{k}(u_{0})$. Therefore, by $(ii)$ of Theorem \[localwellposedness\], taking $\delta>0$ sufficiently small gives . Now for each $\delta>0$, we define the set $S_\delta$ as follows: $$\begin{aligned} \label{Sdelta} S_\delta=\{u_0\in H^1({\bf R}^{3}):\ \ S_{k}(u_{0})<\delta \ \ and \ \ u_{0}\in{\mathcal{N}^{+}} \Rightarrow \ \eqref{scatteringbound}\ holds\}.\end{aligned}$$ We also define $$\begin{aligned} \label{nc} n_c=\sup\{\delta:\ \ u_0\in S_\delta\Rightarrow SC(u_0)\ \ holds \}.\end{aligned}$$ Hence, $0<n_{c}\leq n_{0}$. Next we shall prove that $n_{c}<n_{0}$ is impossible, which implies that $n_{c}=n_{0}$.
Thus, we now assume $$n_{c}<n_{0}.$$ By the definition of $n_{c}$, we can find a sequence of solutions $u_{n}$ of ($\rm{NLS_{k}}$) with initial data $\phi_{n}\in {\mathcal{N}^{+}}$ such that $S_{k}(\phi_{n})\rightarrow n_{c}$ and $$\begin{aligned} \label{L5} \|u_{n}\|_{L_{{\bf R}^{+},x}^{5}}=+\infty\;\;\text{and}\;\;\|u_{n}\|_{L_{{\bf R}^{-},x}^{5}}=+\infty.\end{aligned}$$ In the subsequent subsection, our goal is to prove the existence of a critical element $u_{c}\in H^1({\bf R}^{3})$, which is a global solution of ($\rm{NLS_{k}}$) with initial data $u_{c,0}$ such that $S_{k}(u_{c,0})=n_{c}$, $u_{c,0}\in {\mathcal{N}^{+}}$ and $SC(u_{c,0})$ does not hold. Moreover, we prove that if $\|u_{c}\|_{L_{t,x}^{5}}=+\infty$, then $K:=\{u_c(t): t\in {\bf R}\}$ is precompact in $H^1({\bf R}^{3})$. Before showing the existence and compactness of the critical element $u_{c}$, we need a lemma related to the linear profile decomposition Lemma \[linearprofile\]. \[whole-partial\] Let $M\in {\bf N}$ and $\psi^{j}\in H^{1}({\bf R}^{3})$ for any $0\leq j\leq M$. Suppose that there exist some $\delta>0$ and $\epsilon>0$ with $2\epsilon<\delta$ such that $$\begin{aligned} \sum_{j=0}^{M}S_{k}(\psi^{j})-\epsilon\leq S_{k}\Big(\sum_{j=0}^{M}\psi^{j}\Big)\leq n_{0}-\delta,\:\:\;\; -\epsilon\leq I_{k}\Big(\sum_{j=0}^{M}\psi^{j}\Big)\leq \sum_{j=0}^{M}I_{k}(\psi^{j})+\epsilon.\end{aligned}$$ Then $\psi^{j}\in \mathcal{N}^{+}$ for any $0\leq j\leq M$. Suppose that for some $0\leq l\leq M$, $I_{k}(\psi^{l})<0$. By with $(a, b)=(3,0)$, we have $I_{k}\Big((\psi^{l})_{\lambda}^{3,0}\Big)>0$ for sufficiently negative $\lambda<0$. Thus, by the continuity of $I_{k}\Big((\psi^{l})_{\lambda}^{3,0}\Big)$ in $\lambda$, there exists $\lambda_{2}<0$ such that $I_{k}\Big((\psi^{l})_{\lambda_{2}}^{3,0}\Big)=0$.
As $n_{k}^{3,0}=n_{0}$, by the increasing property of $J_{k}^{3,0}\Big((\psi^{l})_{\lambda}^{3,0}\Big)$ in $\lambda$, we have $$J_{k}^{3,0}(\psi^{l})\geq J_{k}^{3,0}\Big((\psi^{l})_{\lambda_{2}}^{3,0}\Big)=S_{k}\Big((\psi^{l})_{\lambda_{2}}^{3,0}\Big)\geq n_{0}.$$ By the nonnegativity of $J_{k}^{3,0}(\psi^{j})$ for any $0\leq j\leq M$ and $2\epsilon<\delta$, we have $$\begin{aligned} n_{0}&\leq J_{k}^{3,0}(\psi^{l})\leq \sum_{j=0}^{M}J_{k}^{3,0}(\psi^{j})=\sum_{j=0}^{M}\Big(S_{k}(\psi^{j})-\frac{1}{2}I_{k}(\psi^{j})\Big)\\ &\leq S_{k}\Big(\sum_{j=0}^{M}\psi^{j}\Big)+\epsilon-\frac{1}{2}I_{k}\Big(\sum_{j=0}^{M}\psi^{j}\Big)+\frac{1}{2}\epsilon\\ &\leq n_{0}-\delta+2\epsilon<n_{0},\end{aligned}$$ which is impossible. Hence, for each $0\leq j\leq M$, we obtain $$I_{k}(\psi^{j})\geq 0.$$ So $$S_{k}(\psi^{j})=J_{k}^{3,0}(\psi^{j})+\frac{1}{2}I_{k}(\psi^{j})\geq 0,$$ which together with $$\sum_{j=0}^{M}S_{k}(\psi^{j})\leq S_{k}\Big(\sum_{j=0}^{M}\psi^{j}\Big)+\epsilon\leq n_{0}-\delta+\epsilon<n_{0}$$ yields that $S_{k}(\psi^{j})<n_{0}$ for each $0\leq j\leq M$. By Lemma \[independentofab\], $\psi^{j}\in \mathcal{N}^{+}$ for any $0\leq j\leq M$. Existence and compactness of critical element --------------------------------------------- \[criticalelement\] There exists a $u_{c,0}$ in $H^1({\bf R}^{3})$ with $S_{k}(u_{c,0})=n_{c}$, $u_{c,0}\in {\mathcal{N}^{+}}$ such that if $u_c$ is the corresponding global solution of ($\rm{NLS_{k}}$) with the initial data $u_{c,0}$, then $\|u_c\|_{L_{t,x}^{5}({\bf R}\times{\bf R}^{3})}=+\infty$ and $K$ is precompact in $H^1({\bf R}^{3})$. We first note that $\{\phi_{n}\}_{n=1}^{+\infty}$ is a uniformly bounded sequence in $H^{1}({\bf R}^{3})$.
In fact, since $\phi_{n}\in \mathcal{N}^{+}$ for any $n\in {\bf N}$, by , $$\|\phi_{n}\|_{H^{1}}^{2}\leq \|\phi_{n}\|_{{\mathcal{H}}_{k}^{1}}^{2}\leq 4S_{k}(\phi_{n})<4n_{0}.$$ We apply Lemma \[linearprofile\] to $\phi_{n}$ to get that for each $M\leq M^{*}$, $$\begin{aligned} \label{M} \phi_{n}=\sum_{j=1}^{M}\psi_{n}^{j}+W_{n}^{M},\end{aligned}$$ $$\begin{aligned} S_{k}(\phi_{n})=\sum_{j=1}^{M}S_{k}(\psi_{n}^{j})+S_{k}(W_{n}^{M})+o_{n}(1),\end{aligned}$$ and $$\begin{aligned} I_{k}(\phi_{n})=\sum_{j=1}^{M}I_{k}(\psi_{n}^{j})+I_{k}(W_{n}^{M})+o_{n}(1),\end{aligned}$$ which together with $\phi_{n}\in \mathcal{N}^{+}$ yields that there exist some $\delta>0$ and $\epsilon>0$ with $2\epsilon<\delta$ such that $$\begin{aligned} \sum_{j=1}^{M}S_{k}(\psi_{n}^{j})+S_{k}(W_{n}^{M})-\epsilon\leq S_{k}(\phi_{n})\leq n_{0}-\delta,\:\:\;\; -\epsilon\leq I_{k}(\phi_{n})\leq \sum_{j=1}^{M}I_{k}(\psi_{n}^{j})+I_{k}(W_{n}^{M})+\epsilon.\end{aligned}$$ According to Lemma \[whole-partial\], we have that for large $n$ and each $1\leq j\leq M$, $\psi_{n}^{j}$, $W_{n}^{M}\in {\mathcal{N}^{+}}$, and then $$\begin{aligned} \label{M=1} 0\leq \overline{\lim_{n\rightarrow+\infty}}S_{k}(\psi_{n}^{j}) \leq \overline{\lim_{n\rightarrow+\infty}}S_{k}(\phi_{n})=n_{c},\end{aligned}$$ where if equality holds in the last inequality for some $j$, we must have $M^{*}=1$ and $W_{n}^{1}\rightarrow 0$ in $H^{1}$. We claim that if equality holds in the last inequality of for some $j$ (w.l.o.g. let $j=1$), then $u_{c,0}$ is precisely $\psi^{1}$.
Indeed, at this time we have $$\begin{aligned} \label{psi1} \phi_{n}=\psi_{n}^{1}+W_{n}^{1}=e^{it_{n}^{1}H_{\alpha}}\tau_{x_{n}^{1}}\psi^{1}+W_{n}^{1},\end{aligned}$$ $$\begin{aligned} \overline{\lim_{n\rightarrow+\infty}}S_{k}(\psi_{n}^{1})=n_{c}\end{aligned}$$ and $$\begin{aligned} \label{wn1} W_{n}^{1}\rightarrow 0 \;\;\text{in}\;\; H^{1}.\end{aligned}$$ Our target is to prove that $$\begin{aligned} \label{xt} x_{n}^{1}\equiv 0\;\;\text{ and}\;\; t_{n}^{1}\equiv 0.\end{aligned}$$ If is true, then we have $$\begin{aligned} \phi_{n}=\psi^{1}+W_{n}^{1},\;\; S_{k}(\psi^{1})=n_{c},\;\; \psi^{1}\in\mathcal{N}^{+}\end{aligned}$$ and $$\begin{aligned} \lim_{n\rightarrow+\infty}\|\phi_{n}-\psi^{1}\|_{H^{1}}=0.\end{aligned}$$ Here $\psi^{1}$ is precisely the required $u_{c,0}$. Let $u_c$ be the solution of ($\rm{NLS_{k}}$) with the initial data $u_{c,0}=\psi^{1}$; then $u_{c}$ is global and $S_{k}(u_{c})=S_{k}(u_{c,0})=n_{c}$. Using Lemma \[stability\], it holds that $\|u_c\|_{L_{t,x}^{5}({\bf R}\times{\bf R}^{3})}=+\infty$. Otherwise, $\|u_n\|_{L_{t,x}^{5}}<+\infty$, which contradicts . If is false, then either $|x_{n}^{1}|\rightarrow+\infty$ or $t_{n}^{1}\rightarrow\pm\infty$. We will see that this leads to $\|u_n\|_{L_{t,x}^{5}}<+\infty$ or $\|u_{n}\|_{L_{{\bf R}^{\pm},x}^{5}}<+\infty$, contradicting . For the case $|x_{n}^{1}|\rightarrow+\infty$, by , we have $$\begin{aligned} \label{H1} \lim_{n\rightarrow +\infty}\|\psi_{n}^{1}\|_{{\mathcal H}_{k}^{1}}=\|\psi^{1}\|_{H^{1}}>0,\end{aligned}$$ which implies that when $t_{n}^{1}\equiv 0$, $$S_{0}(\psi^{1})=\overline{\lim_{n\rightarrow+\infty}}S_{k}(\psi_{n}^{1})=n_{c}<n_{0}\;\;\text{and} \;\;I_{0}(\psi^{1})=\overline{\lim_{n\rightarrow+\infty}}I_{k}(\psi_{n}^{1})\geq 0.$$ By , $P_{0}(\psi^{1})\geq 0$. Hence, when $t_{n}^{1}\equiv 0$, $\psi^{1}$ satisfies the condition .
When $t_{n}^{1}\rightarrow\pm\infty$, apply and to get $$\frac{1}{2}\|\psi^{1}\|_{H^{1}}^{2}=\overline{\lim_{n\rightarrow+\infty}}S_{k}(\psi_{n}^{1})=n_{c}<n_{0},$$ that is, $\psi^{1}$ satisfies the condition . Using Theorem \[nonlinearprofile\] yields that the solution $NLS_{k}(t)\psi_{n}^{1}$ of ($\rm{NLS_{k}}$) with initial data $\psi_{n}^{1}$ is global and satisfies $$\|NLS_{k}(t)\psi_{n}^{1}\|_{S_{\alpha}^{1}(I)}\lesssim_{\|\psi^{1}\|_{H^{1}}}1.$$ We know that $W_{n}^{1}\rightarrow 0$ in $H^{1}$, that is, $$\begin{aligned} \lim_{n\rightarrow+\infty}\|\phi_{n}-\psi_{n}^{1}\|_{H^{1}}=0.\end{aligned}$$ Using Lemma \[stability\] again, we obtain $\|u_n\|_{L_{t,x}^{5}}<+\infty$. For the other case $t_{n}^{1}\rightarrow\pm\infty$, we only treat $t_{n}^{1}\rightarrow-\infty$ since $t_{n}^{1}\rightarrow+\infty$ can be handled similarly. Apply with $x_{n}^{1}\equiv 0$, , Strichartz estimates Lemma \[Strichartz\] and norm equivalence Lemma \[Sobolev\] to get that $$\begin{aligned} \lim_{n\rightarrow+\infty}\|e^{-itH_{\alpha}}\phi_{n}\|_{L_{{\bf R}^{+},x}^{5}} &\leq \lim_{n\rightarrow+\infty}\|e^{-i(t-t_{n}^{1})H_{\alpha}}\psi^{1}\|_{L_{{\bf R}^{+},x}^{5}} +\lim_{n\rightarrow+\infty}\|e^{-itH_{\alpha}}W_{n}^{1}\|_{L_{{\bf R}^{+},x}^{5}}\\ &\lesssim \lim_{n\rightarrow+\infty}\|W_{n}^{1}\|_{H^{1}} +\lim_{n\rightarrow+\infty}\|e^{-itH_{\alpha}}\psi^{1}\|_{L_{(-t_{n}^{1}, +\infty),x}^{5}}=0,\end{aligned}$$ which immediately implies that $\lim_{n\rightarrow+\infty}\|u_{n}\|_{L_{{\bf R}^{+},x}^{5}}=0$ by $(ii)$ of Theorem \[localwellposedness\]. Thus, we obtain that holds true. Next we turn to the other situation, in which equality does not hold in the last inequality of for any $1\leq j\leq M$.
So for each $1\leq j\leq M$ and $\psi_{n}^{j}\in \mathcal{N}^{+}$, there exists $\delta=\delta_{j}>0$ such that $$\begin{aligned} \overline{\lim_{n\rightarrow+\infty}}S_{k}(\psi_{n}^{j}) \leq n_{c}-2\delta,\;\; P_{k}(\psi_{n}^{j})\geq 0\;\;\text{and}\;\;I_{k}(\psi_{n}^{j})\geq 0.\end{aligned}$$ We shall use $\psi_{n}^{j}$ to construct approximate solutions of $u_{n}$ under three cases: $|x_{n}^{j}|\rightarrow+\infty$; $x_{n}^{j}\equiv 0$ and $t_{n}^{j}\equiv 0$; $x_{n}^{j}\equiv 0$ and $t_{n}^{j}\rightarrow\pm\infty$, and then apply Lemma \[stability\] to get a contradiction. For some $j$ such that $|x_{n}^{j}|\rightarrow+\infty$, still holds for $\psi_{n}^{j}$. Using the same argument after , we obtain that $\psi^{j}$ satisfies or . Therefore, using Theorem \[nonlinearprofile\], we can construct a global solution $v_{n}^{j}(t):=NLS_{k}(t)\psi_{n}^{j}$ of ($\rm{NLS_{k}}$) with initial data $\psi_{n}^{j}$ such that $$\|v_{n}^{j}\|_{L_{t,x}^{5}}\leq \|NLS_{k}(t)\psi_{n}^{j}\|_{S_{\alpha}^{1}(I)}\lesssim_{\|\psi^{j}\|_{H^{1}}}1.$$ For some $j$ such that $x_{n}^{j}\equiv 0$ and $t_{n}^{j}\equiv 0$, we apply $\psi^{j}\in \mathcal{N}^{+}$ to construct a global solution $v_{n}^{j}(t):=NLS_{k}(t)\psi^{j}$ of ($\rm{NLS_{k}}$) with initial data $\psi^{j}$.
For some $j$ such that $x_{n}^{j}\equiv 0$ and $t_{n}^{j}\rightarrow\pm\infty$, by $(iii)$ of Theorem \[localwellposedness\], there exists $\tilde{\psi}^{j}\in H^{1}$ such that $$\begin{aligned} \label{initialdata} \|NLS_{k}(t_{n}^{j})\tilde{\psi}^{j}-e^{it_{n}^{j}H_{\alpha}}\psi^{j}\|_{{\mathcal{H}}_{k}^{1}} \sim \|NLS_{k}(t_{n}^{j})\tilde{\psi}^{j}-e^{it_{n}^{j}H_{\alpha}}\psi^{j}\|_{H^{1}}\rightarrow 0\;\;\text{as}\;\;n\rightarrow+\infty,\end{aligned}$$ which implies that for each $1\leq j\leq M$ and $n$ large enough, $$\begin{aligned} S_{k}\Big(NLS_{k}(t_{n}^{j})\tilde{\psi}^{j}\Big) \leq n_{c}-\delta,\;\; P_{k}\Big(NLS_{k}(t_{n}^{j})\tilde{\psi}^{j}\Big)\geq 0\;\;\text{and}\;\;I_{k}\Big(NLS_{k}(t_{n}^{j})\tilde{\psi}^{j}\Big)\geq 0.\end{aligned}$$ We set $v_{n}^{j}(0)=NLS_{k}(t_{n}^{j})\tilde{\psi}^{j}$. Then according to the definition of $n_{c}$ and $v_{n}^{j}(0)\in\mathcal{N}^{+} $, we obtain that the solution $v_{n}^{j}(t):=NLS_{k}(t+t_{n}^{j})\tilde{\psi}^{j}$ of ($\rm{NLS_{k}}$) with initial data $v_{n}^{j}(0)$ is global and satisfies uniform space-time bounds: $\|v_{n}^{j}\|_{L_{t,x}^{5}}<+\infty$. 
As a result, we can construct approximate solutions of ($\rm{NLS_{k}}$): $$\tilde{u}_{n}(t):=\sum_{j=1}^{M}v_{n}^{j}+e^{-itH_{\alpha}}W_{n}^{M}$$ and set $$e:=(i\partial_{t}-H_{\alpha})\tilde{u}_{n}+|\tilde{u}_{n}|^{2}\tilde{u}_{n}.$$ By and , we have $$\begin{aligned} \label{approximate1} \|\phi_{n}-\tilde{u}_{n}(0)\|_{H^{1}}=\|u_{n}(0)-\tilde{u}_{n}(0)\|_{H^{1}}\rightarrow 0\;\;\text{as}\;\;n\rightarrow+\infty,\end{aligned}$$ which implies that $$\begin{aligned} \label{approximate2} \overline{\lim}_{n\rightarrow+\infty}\|\tilde{u}_{n}(0)\|_{H^{1}}\;\;\text{ has a uniform bound independent of}\;\; M.\end{aligned}$$ Using the same argument as in Lemma 7.3 of [@KMVZ] and replacing $H_{\alpha}$ and the homogeneous fractional operator (e.g., $|\nabla|^{\frac{1}{2}}$) with $1+H_{\alpha}$ and the inhomogeneous fractional operator (e.g., $(1+\Delta)^{\frac{1}{4}}$), respectively, we also obtain the same results as there for $\tilde{u}_{n}(t)$, that is, $$\begin{aligned} \label{approximate3} \overline{\lim}_{n\rightarrow+\infty}\|\tilde{u}_{n}\|_{L_{t,x}^{5}} \;\;\text{has a uniform bound independent of }\;\;M\end{aligned}$$ and $$\begin{aligned} \label{approximate4} \lim_{M\rightarrow M^{*}}\overline{\lim}_{n\rightarrow+\infty} \|(1+\Delta)^{\frac{1}{4}}e\|_{L_{t,x}^{\frac{10}{7}}}=0.\end{aligned}$$ Applying - to Theorem \[stability\] gives $\|u_n\|_{L_{t,x}^{5}}<+\infty$, which contradicts . Thus, we have completed the proof of the existence of the critical element $u_{c}$. Finally, we consider the precompactness of $K$ in $H^{1}$.
We recall that $u_{c}$ satisfies the following properties: $$\begin{aligned} S_{k}(u_{c}(t))=n_{c},\;\; u_{c}(t)\in {\mathcal{N}^{+}}\;\;\text{ for all}\;\; t\in {\bf R}\;\;\text{ and}\;\; \|u_c\|_{L_{t,x}^{5}}=+\infty.\end{aligned}$$ In particular, for any time sequence $\{t_{n}\}_{n=1}^{+\infty}$, the sequence $\{u_{c}(t_{n})\}_{n=1}^{+\infty}$ also satisfies $$\begin{aligned} S_{k}(u_{c}(t_{n}))=n_{c},\;\; u_{c}(t_{n})\in {\mathcal{N}^{+}}\;\;\text{ for all}\;\; n\in {\bf N}\;\;\text{ and}\;\; \|u_c\|_{L_{(-\infty, t_{n}),x}^{5}}=\|u_c\|_{L_{( t_{n}, +\infty),x}^{5}}=+\infty.\end{aligned}$$ Hence, regarding $u_{c}(t_{n})$ as the foregoing $\phi_{n}$ and noting the fact that $\phi_{n}$ converges to $\psi^{1}$ in $H^{1}$ yields that $K$ is precompact in $H^{1}$. Thus, we complete the whole proof. Precluding the critical element ------------------------------- In this subsection, we shall apply the localized Virial identities and to preclude the critical element $u_{c}$. First, by the precompactness of $K$, we have uniform localization of $u_{c}$: For each $\epsilon>0$, there exists $R=R(\epsilon)>0$ independent of $t$ such that $$\begin{aligned} \label{localization} \int_{|x|>R}\Big(|\nabla u_{c}(t,x)|^2+|u_{c}(t,x)|^2+|u_{c}(t,x)|^4\Big)dx\leq\epsilon.\end{aligned}$$ We next claim that there exists a constant $c$ such that for any $t\in {\bf R}$, $$\begin{aligned} \label{control} \|\nabla u_{c}(t)\|_{L^{2}}\geq c\|u_{c}(t)\|_{L^{2}}.\end{aligned}$$ Indeed, if it is false, then there exists a time sequence $\{t_{n}\}_{n=1}^{+\infty}$ such that $$\|\nabla u_{c}(t_{n})\|_{L^{2}}\leq \frac{1}{n}\|u_{c}(t_{n})\|_{L^{2}}=\frac{1}{n}\|u_{c}(0)\|_{L^{2}},$$ which means that $u_{c}(t_{n})\rightarrow 0$ in $\dot{H}^{1}$. However, $\{u_{c}(t_{n})\}_{n=1}^{+\infty}$ is precompact in $H^{1}$. Hence, there exists a subsequence (still denoted by itself) $u_{c}(t_{n})\rightarrow 0$ in $H^{1}$.
As $u_{c}(t_{n})\in {\mathcal{N}^{+}}$, by , $n_{c}=\lim_{n\rightarrow+\infty}S_{k}(u_{c}(t_{n}))=0$, which is impossible. Now we use the localized Virial identities and again, with $u_{c}$ in place of $u$, where we still choose the radial function $\phi$ satisfying . For $R_{1}$, $R_{2}$, $R_{3}$ and $R_{4}$ in , by , we have $$\begin{aligned} \label{importanterror} |R_{1}+R_{2}+R_{3}+R_{4}|\lesssim &\displaystyle\int_{|x|\geq R}\Big(|\nabla u_{c}(t,x)|^{2}+|u_{c}(t,x)|^{4}+\frac{1}{R^{2}}|u_{c}(t,x)|^{2} +\frac{1}{R^{\alpha}}|u_{c}(t,x)|^{2}\Big)dx\nonumber\\ &\rightarrow 0\;\;\text{as}\;\; R\rightarrow+\infty.\end{aligned}$$ And for $P_{k}(u_{c})$ in , by , $x\cdot \nabla V\leq 0$, and , we have $$\begin{aligned} \label{importantlowerbound} P_{k}(u_{c}(t))&\geq \min\Big\{4\Big(n_{0}-S_{k}(u_{c}(t))\Big), \frac{2}{5}\Big(\|\nabla u_{c}(t)\|_{L^{2}}^{2}-\frac{1}{2}\displaystyle\int_{{\bf R}^{3}}(x\cdot\nabla V) |u_{c}(t)|^{2}dx\Big)\Big\}\nonumber\\ &\gtrsim \|\nabla u_{c}(t)\|_{L^{2}}^{2}\gtrsim \|u_{c}(t)\|_{H^{1}}^{2}\gtrsim S_{k}(u_{c}(t))=n_{c}.\end{aligned}$$ It follows from and that there exists $\delta_{0}>0$ such that for $R$ large enough, $$I''(t)\geq \delta_{0},$$ which means that $\lim_{t\rightarrow+\infty}I'(t)=+\infty$. However, this is impossible since $I'(t)$ is bounded. In fact, by , $$|I'(t)|\lesssim R.$$ Thus, we complete the proof of the scattering part of Theorem \[scattering1\]. [99]{} T. Akahori and H. Nawa, Blowup and scattering problems for the nonlinear Schrödinger equations, *Kyoto J. Math.*, 53(2013), pp. 629-672. N. Burq, F. Planchon, J. Stalker and A. S. Tahvildar-Zadeh, Strichartz estimates for the wave and Schrödinger equations with the inverse-square potential, *J. Funct. Anal.*, 203(2003), pp. 519-549. T. Cazenave, *Semilinear Schrödinger equations*, Courant Lecture Notes in Mathematics, 10. New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 2003. D. Du, Y. Wu and K.
Zhang, On blow-up criterion for the nonlinear Schrödinger equation, *Discrete Contin. Dyn. Syst.*, 36(2016), pp. 3639-3650. T. Duyckaerts, J. Holmer and S. Roudenko, Scattering for the non-radial 3D cubic nonlinear Schrödinger equation, *Math. Res. Lett.*, 15(2008), pp. 1233-1250. D. Fang, J. Xie and T. Cazenave, Scattering for the focusing energy-subcritical nonlinear Schrödinger equation, *Sci. China Math.*, 54(2011), pp. 2037-2062. J. Holmer and S. Roudenko, A sharp condition for scattering of the radial 3D cubic nonlinear Schrödinger equation, *Comm. Math. Phys.*, 282(2008), pp. 435-467. Y. Hong, Scattering for a nonlinear Schrödinger equation with a potential, *Commun. Pure Appl. Anal.*, 15(2016), pp. 1571-1601. S. Ibrahim, N. Masmoudi and K. Nakanishi, Scattering threshold for the focusing nonlinear Klein-Gordon equation, *Anal. PDE*, 4(2011), pp. 405-460. M. Ikeda and T. Inui, Global dynamics below the standing waves for the focusing semilinear Schrödinger equation with a repulsive Dirac delta potential. C. Kenig and F. Merle, Global well-posedness, scattering and blow-up for the energy-critical, focusing, non-linear Schrödinger equation in the radial case, *Invent. Math.*, 166(2006), pp. 645-675. R. Killip, C. Miao, M. Visan, J. Zhang and J. Zheng, Multipliers and Riesz transforms for the Schrödinger operator with inverse-square potential, arXiv:1503.02716. R. Killip, C. Miao, M. Visan, J. Zhang and J. Zheng, The energy-critical NLS with inverse-square potential, arXiv:1509.05822. R. Killip, J. Murphy, M. Visan and J. Zheng, The focusing cubic NLS with inverse-square potential in three space dimensions, arXiv:1603.08912. C. Miao, J. Zhang and J. Zheng, Nonlinear Schrödinger equation with Coulomb potential, arXiv:1809.06685. H. Mizutani, Strichartz estimates for Schrödinger equations with slowly decaying potential, arXiv:1808.06987. A. Sikora and J. Wright, Imaginary powers of Laplace operators, *Proc. Amer. Math. Soc.*, 129(2001), pp. 1745-1754. B.
Simon, Schrödinger semigroups, *Bull. Amer. Math. Soc.*, 7(1982), pp. 447-526. J. Zhang and J. Zheng, Scattering theory for nonlinear Schrödinger equations with inverse-square potential, *J. Funct. Anal.*, 267(2014), pp. 2907-2932.
INTRODUCTION ============ The search for correlations between protein folding kinetics and native state equilibrium properties (i.e., chain length and stability) presents a major challenge for those working in the field of protein folding, both in theory and experiment. Progress has been significantly hindered by the difficulty of analyzing the folding of protein molecules larger than about 100 amino acids, whose kinetics is widely believed to be based on some multiexponential mechanism [[@ALLAN]]{}. By contrast, for smaller proteins whose folding kinetics is close to single exponential, there seems to be some consensus as to the dependence of the folding time, $t$, on native state stability [@PLAXCO1; @PLAXCO; @THOMAS; @PFN]. Experiment and theory appear to be at odds with each other over the dependence of the folding time on the number $N$ of amino acids in the folding unit. Recent Monte Carlo (MC) simulations [[@PFN]]{} of a simple lattice model have proposed that, for two-state proteins, a scaling law of the type $t\approx N^{\lambda}$, $\lambda \approx 5$, appropriately describes the dependence of the folding time on the chain length, $N$; a weaker dependence ($\lambda \approx 4$) has been previously reported in Ref. 6 for the same model Hamiltonian and distribution of contact energies, and in Ref. 7 for a two-letter alphabet model that, apart from the commonly used isotropic contact interactions, also considers orientation-dependent interactions. However, the available experimental data show no correlation between $t$ and $N$ [@PLAXCO1; @PLAXCO; @DEMCHENKO]. We examine here the influence of the native state geometric properties on protein folding kinetics in the context of MC simulation. One simple parameter of the geometry which has already attracted attention is the contact order, measuring the average length of the backbone loops connecting contacting pairs of residues in the structure [[@PLAXCO1]]{}.
Formally, the relative contact order, $CO$, is defined as $$CO=\frac{1}{LN}\sum_{i,j}^N \Delta_{i,j}\vert i-j \vert,$$ where $N$ is the total number of amino acid residues in the protein, $L$ is the total number of contacts, and $\Delta_{i,j}=1$ if residues $i$ and $j$ are in contact and is 0 otherwise. $\vert i -j \vert$ is the backbone separation between residues $i$ and $j$. High values of $CO$ are associated with protein structures where amino acid residues interact on average with others that are far away in sequence (long-range interactions), while those displaying predominantly local interactions are of low contact order. A high correlation was found between the $CO$ parameter, and the folding rates for the protein set considered in Ref.3: proteins with ’low’ contact order tend to fold faster than proteins with ’high’ contact order. This finding strongly supports the view that native geometry strongly influences the kinetics of the rate-limiting step in the two-state mechanism of small protein molecules ($N < 100$), determining their folding rates. The connection between $CO$ and the dominant range of residue interactions brings back the controversial issue of the importance of local (and non-local) contacts in the dynamics of protein folding. An argument against local contacts is that they might increase the ’roughness’ of the energy landscape, and therefore the stability of the unfolded state [[@ABKEVICH]]{}. On the other hand, an argument supporting local interactions is based on the idea that they might provide the ideal substrate for the development of nucleation or initiation sites, small local sequence substructures forming in an early stage of folding and driving the subsequent pathway [[@Wetlaufer]]{}. Moreover, the formation of non-local contacts in early folding is entropically costly as it restricts the number of conformations available to the folding unit [[@BAKER]]{}. 
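For concreteness, the relative contact order defined above can be computed directly from a contact map. This is an illustrative sketch, not code from the paper; the function name and the `(i, j)` pair-list layout are our own conventions:

```python
def relative_contact_order(contacts, n_residues):
    """CO = (1 / (L * N)) * sum over contacts of |i - j|, following the
    definition above; 'contacts' is a list of (i, j) residue-index pairs."""
    L = len(contacts)
    if L == 0:
        return 0.0
    return sum(abs(i - j) for i, j in contacts) / (L * n_residues)
```

For example, a 10-residue chain whose only contacts are (0, 9) and (1, 8) gives CO = (9 + 7) / (2 × 10) = 0.8, a strongly non-local structure, whereas contacts (0, 2) and (1, 3) give a low CO of 0.2.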
The limited amount of experimental information available [@PLAXCO1; @PLAXCO; @LINDBERG] suggests that investigating this problem within the scope of theoretical models could give more insight.

MODEL AND METHODS
=================

To achieve this goal we consider a simple three-dimensional lattice model of a protein molecule whose Hamiltonian is given by the contact approximation, $$H(\lbrace \sigma_{i} \rbrace,\lbrace \vec{r_{i}} \rbrace)=\sum_{i>j}^N \epsilon(\sigma_{i},\sigma_{j})\Delta(\vec{r_{i}}-\vec{r_{j}}), \label{eq:no1}$$ where $\lbrace \sigma_{i} \rbrace$ stands for an amino acid sequence ($\sigma_{i}$ being the chemical identity of bead $i$) while $\lbrace \vec{r_{i}} \rbrace$ is the set of bead coordinates that define a certain conformation. The contact function, $\Delta$, equals $1$ if beads $i$ and $j$ are in contact but not covalently linked, and is $0$ otherwise. We follow many previous studies in taking the interaction parameters $\epsilon$ from the $20 \times 20$ Miyazawa-Jernigan matrix derived from the distribution of contacts in native proteins [[@MJ]]{}. Our folding simulations follow the standard MC Metropolis algorithm [[@METROPOLIS]]{} and the kink-jump MC move set (end-move, corner-flip, and crankshaft) [[@BINDER]]{}.

  $N$    $<E>$        $\sigma$
  ------ ------------ ----------
  $36$   $-15.8722$   $0.0194$
  $48$   $-23.1407$   $0.0331$
  $54$   $-28.8028$   $0.0158$
  $64$   $-35.0811$   $0.0477$
  $80$   $-46.2858$   $0.0375$

  \[tab:tab1\]

NUMERICAL RESULTS
=================

Contact order and homopolymer kinetics \[sec:3a\]
-------------------------------------------------

We have explored the distribution of the relative contact order parameter over a population of 2000 maximally compact target geometries found by homopolymer relaxation [@GUTIN]. This distribution is shown in Figures 1(a)-(e) for each of the studied chain lengths $N=36$, $48$, $54$, $64$ and $80$, which are all commensurate with folding to fill a simple cuboid.
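As an illustration of the contact Hamiltonian of Eq. (\ref{eq:no1}), the energy of a cubic-lattice conformation can be evaluated as follows. This is a minimal sketch: a toy two-letter interaction dictionary stands in for the full $20 \times 20$ Miyazawa-Jernigan matrix, and the data layout is our own assumption:

```python
def contact_energy(coords, sequence, eps):
    """Evaluate H = sum over contacts of eps(sigma_i, sigma_j).
    coords: list of (x, y, z) integer lattice positions of the beads;
    sequence: list of bead types; eps: dict mapping a sorted type pair
    (a, b) to its contact energy."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1:      # covalently linked neighbours do not count
                continue
            dx = sorted(abs(coords[i][k] - coords[j][k]) for k in range(3))
            if dx == [0, 0, 1]:  # nearest lattice neighbours -> contact
                a, b = sorted((sequence[i], sequence[j]))
                E += eps[(a, b)]
    return E
```

For a four-bead "U"-shaped conformation on the lattice, only the two chain ends form a (non-covalent) contact, so the energy reduces to the single matrix element for that pair of bead types.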
There is only a slight shift in the modal contact order with chain length, from $0.19$ for $N=36$, $48$ down to $0.17$ for higher $N$. However, the target fraction in the histogram tail ($CO\geq 0.22$) is significantly smaller for $N\geq 54$ than for shorter chains. Interestingly, the values of $CO$ found for these lattice proteins span approximately the same range as those found in real kinetically characterized single-domain proteins ($0.0745 \leq CO \leq 0.2120$) [[@PLAXCO]]{}. We have also checked the intrinsic kinetic accessibility of the compact configurations obtained, by measuring the time $t_{col}$ for these configurations to be reached under homopolymer relaxation. Figure 1(f) shows there is no evident correlation between $t_{col}$ and $CO$.

Finding the optimal folding temperature
---------------------------------------

To investigate the relationship between protein folding and contact order, (at least) 20 target conformations for each chain length were selected so as to sample uniformly across the range of contact order. For each target an ensemble of 100 designed sequences was prepared by using the design method developed by Shakhnovich and Gutin [[@SG]]{}, based on random heteropolymer theory and simulated annealing techniques. The average trained sequence energy, $<E>$, is shown in Table I along with the standard deviation of the energy distribution, $\sigma$. Except for $N=54$, the chemical composition of the designed sequences was the same as the one used in Ref.5. In that study it was shown that the optimal folding temperature, $T_{fold}(N)$, defined as the temperature that minimizes the folding time, is close to a self-averaging parameter. Since the Shakhnovich and Gutin design scheme preserves the overall sequence chemical composition, we can safely use, for the 36- and 48-bead sequences studied here, the $T_{fold}(36)$ and $T_{fold}(48)$ found in Ref.5.
For the longer $N$, and most particularly $N=80$, foldicity, defined as the fraction of successful folding runs over the total number of attempted runs, was for the vast majority of the targets less than unity. This forced us to define the optimal folding temperature, $T_{fold}(N)$, in such cases as the temperature which optimized foldicity rather than the temperature which minimized the folding time. The case $N=48$ sits at the margin and provides confirmation that the two approaches to $T_{fold}(N)$ are not in conflict, as shown in Figure 2.

Contact order and folding kinetics
----------------------------------

After determining $T_{fold}(N)$ we ran an MC folding simulation for every designed sequence. The simulations proceeded until $\tau_{max}(N)$ MC steps or until folding was observed. The value of $\tau_{max}(N)$ was chosen such that it was much longer than the typical folding time of the studied sequences. Fig. 3 shows the dependence of the folding time, $t$, on the contact order parameter for chain lengths $N=36$ and $N=48$. The folding time was computed as the mean first passage time averaged over 100 simulation runs. In either case the points are close to uniformly distributed, suggesting no correlation between $CO$ and the folding time for these chain lengths. Here and elsewhere error bars indicate $\pm$ one standard error in the mean. Figures 4(a)-(c) show the dependence of foldicity on $CO$ for $N=54$, $64$ and $N=80$ respectively. The results presented in Figures 4(d)-(f) show, for the same chain lengths, the dependence of the estimated folding time on the relative contact order parameter. Two distinct scenarios emerge from the analysis of the graphs:

1. [For $CO \leq 0.17$ there is no correlation between foldicity (or folding time) and the relative contact order parameter;]{}

2. [For $CO>0.17$, a general trend towards decreasing foldicity with increasing relative contact order can be observed.
In this regime, a considerably strong positive correlation of $r=0.70,0.70$ and $0.79$, between $t$ and $CO$, shows up for chain lengths $N=54,64$ and $80$ respectively.]{} DISCUSSION AND CONCLUSIONS ========================== The ’turning point’ value of $CO=0.17$ is actually the peak of the homopolymer relaxation histogram distribution as previously discussed. This means that $CO$ and folding time are positively correlated only for proteins with predominantly non-local contacts. We interpret this result as a consequence of the properties of the move set used to explore the conformational space together with the ruggedness of the energy landscape. As seen in section \[sec:3a\], kink-jump dynamics does not favour the formation of high $CO$ structures in homopolymers. In proteins, when the native structure is of high $CO$, it will be difficult to escape from kinetic traps associated with local energy minima and structures of lower $CO$. This confirms and explains our previous findings [[@PFN]]{} according to which the folding performance achievable is strongly sensitive to target conformation for chain lengths $N \geq 80$. The comparison of the simulation’s results with the experimental data set of 24 two-state proteins, with chain length ranging from 41 to 154 amino acids, reported in Ref.3 is hindered by the fact that the proteins considered in Ref.3 fail to exhibit the scaling of folding time with chain length which is typical of lattice model simulations. However, a strong correlation ($r=0.80$) is also found between $CO$ and the folding times. Moreover, this correlation is considerably improved ($r=0.97$) if only long protein chains ($N \geq 80$) are considered. As a general conclusion, we might say that results on lattice models encourage the idea that the contact order of the native structure plays a significant role in determining the folding rate. 
The match with the correlation between the $CO$ and the folding time found from the analysis of experimental data suggests that lattice polymer dynamics with local moves does capture the key dynamical features of real protein folding. It would be interesting to know if similar results can be found in the scope of ’off-lattice’ models, where one would expect proteins with high helical content to be better folders.
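The kinetic observables used throughout the Results section — foldicity (the fraction of runs that reach the native state within $\tau_{max}$), the folding time as a mean first passage time over the successful runs, and the Pearson coefficient $r$ quoted for the $t$–$CO$ correlations — can be sketched as follows. These are illustrative helper functions under our own conventions, not the authors' code:

```python
def folding_statistics(first_passage_times, tau_max):
    """first_passage_times: one entry per MC run; None means the run did
    not fold within tau_max steps. Returns (foldicity, mean folding time)."""
    folded = [t for t in first_passage_times if t is not None and t < tau_max]
    foldicity = len(folded) / len(first_passage_times)
    mean_fpt = sum(folded) / len(folded) if folded else float("inf")
    return foldicity, mean_fpt

def pearson_r(x, y):
    """Linear (Pearson) correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5
```

For instance, four runs with first passage times 100, 200, 300 and one unfolded run give a foldicity of 0.75 and a mean folding time of 200 MC steps.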
--- abstract: 'Interferometric observations of stars in late stages of stellar evolution and the impact of VLTI observations are discussed. Special attention is paid to the spectral information that can be derived from these observations and on the corresponding astrophysical interpretation of the data by radiative transfer modelling. It is emphasized that for the robust and non-ambiguous construction of dust-shell models it is essential to take diverse and independent observational constraints into account. Apart from matching the spectral energy distribution, the use of spatially resolved information plays a crucial role for obtaining reliable models. The combination of long-baseline interferometry data with high-resolution single-dish data (short baselines), as obtained, for example, by bispectrum speckle interferometry, provide complementary information and will improve modelling and interpretation.' address: 'Max-Planck-Institut für Radioastronomie, Bonn, Germany' author: - 'T. Blöcker' - 'K.-H. Hofmann$^{1}$' - 'G. Weigelt$^{1}$' title: | Spectral Observations of Envelopes around\ Stars in Late Stages of Stellar Evolution --- Introduction: Late stages of stellar evolution ============================================== The Very Large Telescope Interferometer (VLTI; see Glindemann, this volume) of the European Southern Observatory with its four 8.2m unit telescopes (UTs) and three 1.8m auxiliary telescopes (ATs) will certainly establish a new era of studying the late stages of stellar evolution within the next few years. With a maximum baseline of up to more than 200m, the VLTI will allow observations with unprecedented resolution opening up new vistas to a better understanding of the physics of evolved stars and thus of stellar evolution. Stars in late stages of stellar evolution form therefore an important group among the VLTI key targets. 
During the Red Giant phase, strong winds erode the stellar surfaces leading to the formation of circumstellar shells which absorb an increasing fraction of the visible light and re-emit it in the infrared regime. Accordingly, most of these evolved stars are bright infrared objects. The heavy mass loss leads to the chemical enrichment of the interstellar medium and therefore plays a crucial role for the understanding of the galactic chemodynamical evolution. The vast majority of all stars, which have left their main sequence phase and become Red Giants, are of low and intermediate mass and finally evolve along the Asymptotic Giant Branch (AGB). These luminous, frequently pulsating and heavily mass-losing AGB stars form an important stellar population which contributes considerably to light, chemistry and dynamics of galaxies. The envelopes of AGB stars are the major factories of cosmic dust. Accordingly, AGB stars are often heavily enshrouded by dust exposing high fluxes in the infrared and are ideal laboratories to investigate the interplay between various physical and chemical processes. Most dust shells around AGB stars are known to be spherically symmetric on larger scales, whereas most objects in the immediate successive stage of proto-planetary nebulae appear in axisymmetric geometry. Evidence is growing that this break of symmetry takes place already at the very end of the AGB evolution. Mass loss is also one of the dominant effects during the evolution of massive stars, virtually leading to an almost complete peeling of the star. Circumstellar dust shells found around evolved massive supergiants often show features of non-spherical outflows. Observing and modelling the circumstellar shells surrounding these stars, unveil details of evolution as, for instance, mass-loss rates. The presence of fossil shells even gives clues for the evolutionary history. 
Dust formation around evolved stars can even continue beyond the Red Giant stage, as, e.g., in RCrB stars or late-type Wolf-Rayet stars. The production of dust in such hostile environments is still challenging to theory. In the case of Wolf-Rayet stars, wind collision due to binarity is one of the favored scenarios. High-resolution interferometric observations reveal details of disks and dust shells of evolved stars and thus improve our knowledge of, for example, the mass-loss process and its evolution. In the following sections, we discuss high spatial resolution observations and their interpretation by radiative transfer calculations for some prominent evolved stars.

Bispectrum speckle interferometry
=================================

The refractive index variations in the atmosphere of the earth restrict the angular resolution of large ground-based telescopes to $\sim 0.5''$, which is much worse than the theoretical diffraction limit ($\sim 0.01''$ for an 8m telescope at optical wavelengths). However, the atmospheric image degradation can be overcome and diffraction-limited images can be obtained either by adaptive optics (see, e.g., Tyson et al. 2002, Brandner et al. 2002) or bispectrum speckle interferometry (Weigelt [@Wei77], Lohmann et al. [@LohWeiWir83], Hofmann & Weigelt [@HofWei86]). Speckle interferograms are images recorded with exposure times of $\sim 50$ms in order to “freeze” the atmospheric turbulence. They consist of many small bright dots, called speckles, which are interference maxima of the incident light. The speckles are typically of the size of the theoretical Airy pattern of the aberration-free telescope. Fig. \[Fspeckle\] shows a speckle interferogram of $\gamma$ Ori (6m telescope, $\lambda \sim 500$nm) for illustration.
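The central quantity of the method is the bispectrum, the triple product $F(u)F(v)F^{*}(u{+}v)$ of the Fourier transform of each short-exposure frame, averaged over many frames; atmospheric phase errors cancel in this product while the object phase information is preserved. A minimal one-dimensional numerical sketch (our own illustration, not the actual reduction pipeline):

```python
import numpy as np

def average_bispectrum_1d(frames):
    """Frame-averaged bispectrum B(u, v) = <F(u) F(v) conj(F(u+v))> of a
    stack of short-exposure 1-D signals (shape: n_frames x n_pix)."""
    n_frames, n_pix = frames.shape
    u = np.arange(n_pix)
    B = np.zeros((n_pix, n_pix), dtype=complex)
    for frame in frames:
        F = np.fft.fft(frame)
        # triple product on the (u, v) grid; frequency indices wrap mod n_pix
        B += F[:, None] * F[None, :] * np.conj(F[(u[:, None] + u[None, :]) % n_pix])
    return B / n_frames
```

A point source gives a flat bispectrum, and, crucially, a translated point source gives exactly the same bispectrum: the linear phase ramp introduced by the shift cancels in the triple product, which is why random atmospheric tilts drop out of the average.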
Bispectrum speckle interferometry consists, in principle, of four steps: (i) calculation of the average bispectrum of all speckle interferograms; (ii) compensation of the photon bias in the bispectrum; (iii) compensation of the speckle transfer function, which can be derived from the speckle interferograms of a point source; and (iv) derivation of modulus and phase of the object Fourier transform from the bispectrum. In other words, bispectrum speckle interferometry with a single large telescope covers simultaneously all baselines up to the single telescope’s aperture size.

Interpreting the observations: Radiative transfer models
========================================================

As discussed above, evolved stars are often surrounded by dust shells. Radiative transfer calculations are an appropriate tool to interpret interferometric observations of dust shells. In order to handle the numerical effort to solve this complex problem, various assumptions usually have to be made with regard to the dust-shell geometry (spherical/2d/3d), dust formation (instantaneous/chemical network), and hydrodynamics (stationary/time-dependent). For example, the spherical radiative transfer problem can be solved by utilizing the self-similarity and scaling behaviour of IR emission from radiatively heated dust (Ivezić & Elitzur [@IveEli97]). To tackle this problem including absorption, emission and scattering, several properties of the central source and its surrounding envelope are required, viz. (i) the spectral shape of the central source’s radiation; (ii) the dust properties, i.e. the grains’ optical constants and the grain size distribution, as well as the dust temperature at the inner boundary; (iii) the relative thickness of the envelope, i.e. the ratio of outer to inner shell radius, and the density distribution; and (iv) the total optical depth at a given reference wavelength.
A two-dimensional approach to the radiative transfer problem is applied in, e.g., Men’shchikov et al. (2001); a hydrodynamic approach with explicit consideration of dust nucleation is given in Winters et al. (2000). Radiative-transfer models often have to rely only on the comparison with the observed spectral energy distribution. However, high-resolution spatial information has proven to be an essential and complementary ingredient of dust-shell modelling. Only if such information is available can reliable, i.e. non-ambiguous, radiative-transfer models be constructed and sound conclusions on the mass-loss process drawn (see e.g. Blöcker et al. 1999, 2001). In the following sections, we give three examples (IRC+10216, CIT3, and IRC+10420) for such modelling.

The carbon star IRC+10216
=========================

The carbon star IRC+10216 is a long-period AGB star suffering from a strong stellar wind (several $10^{-5}$ M$_{\odot}$/yr; Loup et al. 1993) which has led to an almost complete obscuration of the star by dust. Due to the high mass-loss rate, long period of $P=649$d (Le Bertre 1992), and carbon-rich chemistry of the dust shell, IRC+10216 is obviously in a very advanced stage of its AGB evolution. High-resolution near-infrared imaging of IRC+10216 has revealed that on sub-arcsecond scales (100mas) its dust shell is clumpy, bipolar, and changing on a time scale of only $\sim$1yr (Weigelt et al. 1997, 1998, Haniff & Buscher 1998, Osterbart et al. 2000, Tuthill et al. 2000, Weigelt et al. 2002). Since most dust shells around AGB stars are known to be spherically symmetric, whereas most proto-planetary nebulae (PPN) show an axisymmetric geometry (Olofsson 1996), it appears likely that IRC+10216 has already entered the transition phase to the PPN stage. This suggests that the break of the dust-shell symmetry between the AGB and post-AGB phase already takes place at the end of the AGB evolution.
Bispectrum speckle-interferometry observations of IRC+10216 were carried out with the SAO 6m telescope in the $J$, $H$, and $K$ band by Osterbart et al. (2000) and Weigelt et al. (2002) covering eight epochs between 1995 and 2001. Fig. \[FJHKcont\] illustrates the different appearance of the dusty environment of IRC+10216 in the $J$, $H$, and $K$ bands. Fig. \[FKima\] shows the reconstructed $K$-band images of the innermost region of IRC+10216 in 1996, 1998 and 2000. The dust shell consists of several compact components, at the beginning within a radius of 200mas, which steadily change in shape and brightness. For instance, the apparent separation of the two initially brightest components A and B increased from 201 mas in 1996 to 320 mas in 2000. At the same time, component B is fading and has almost disappeared in 2000 whereas the initially faint components C and D have become brighter. In 2001, the intensity level of component C has increased to almost 40% of the peak intensity of component A. Both components appear to have started merging in 2000. These changes of the dust-shell appearance can be related to changes of the optical depths caused, e.g., by mass-loss variations. The present monitoring, covering more than 3 pulsational periods, shows that the structural variations are not related to the stellar pulsational cycle in a simple way. This is consistent with the predictions of hydrodynamical models that enhanced dust formation takes place on a timescale of several pulsational cycles (Fleischer et al. 1995). Recent two-dimensional radiative transfer modelling (Men’shchikov et al. 2001) has shown that the star is surrounded by an optically thick dust shell with polar cavities of a full opening angle of $36^{\rm o}$, which are inclined by $40^{\rm o}$ pointing with the southern lobe towards the observer. 
The bright and compact component A is not the direct light from the underlying central star but the southern lobe of this bipolar structure dominated by scattered light. Instead, the carbon star is at the position of the fainter northern component B. The oxygen-rich AGB star CIT3 ============================= CIT3 is an oxygen-rich long-period variable star evolving along the AGB with extreme infrared properties. Due to substantial mass loss it is surrounded by an optically thick dust shell which absorbs almost all visible light radiated by the star and finally re-emits it in the infrared regime. The first near infrared bispectrum speckle-interferometry observations of CIT3 in the $J$-, $H$-, and $K^{\prime}$-band (resolution: 48mas, 56mas, and 73mas) were obtained with the SAO 6m telescope by Hofmann et al. (2001). While CIT3 appears almost spherically symmetric in the $H$- and $K^{\prime}$-band it is clearly elongated in the $J$-band along a symmetry axis of position angle $-28^{\rm o}$. Two structures can be identified: a compact elliptical core and a fainter north-western fan-like structure. Extensive radiative transfer calculations have been carried out and confronted with the spectral energy distribution ranging from 1$\mu$m to 1mm, with the 1.24$\mu$m, 1.65$\mu$m and 2.12$\mu$m visibility functions, as well as with 11$\mu$m ISI interferometry. The best model found to match the observations refers to a cool central star with $T_{\rm eff}=2250$K which is surrounded by an optically thick dust shell with $\tau (0.55\mu m) = 30$ (see Fig. \[FvisiCIT3\]). The central-star diameter is 10.9mas and the inner dust shell diameter 71.9mas. The inner dust-shell rim at $r_{1}= 6.6 R_{\ast}$ has a temperature of $T_{1}=900$K. 
A two-component model consisting of an inner uniform-outflow shell region ($\rho \sim 1/r^{2}$, $r < 20.5 r_{1}$) and an outer region where the density declines more shallowly as $\rho \sim 1/r^{1.5}$ proved to give the best overall match of the observations. Provided the outflow velocity stayed constant, the shallower density distribution in the outer shell indicates that the mass loss of CIT3 has decreased with time in the past. Adopting $v_{\rm exp}=20$km/s, the termination of that mass-loss decrease and the beginning of the uniform-outflow phase took place 87yr ago. The present-day mass-loss rate can be determined to be $\dot{M} = (1.3-2.1) \cdot 10^{-5}$M$_{\odot}$/yr for $d=500-800$pc. A full description of these observations and models is given in Hofmann et al. (2001). CIT3 proved to be among the most interesting far-evolved AGB stars due to its infrared properties. Moreover, the aspherical appearance of its dust shell in the $J$-band puts it in one line with the few AGB stars known to exhibit near-infrared asphericities in their dust shells. The development of such asphericities close to the central star suggests that CIT3 is at the very end of the AGB evolution or even in transition to the proto-planetary nebula phase where most objects are observed in axisymmetric geometry (Olofsson 1996). However, in contrast to other objects (such as IRC+10216), CIT3 shows these deviations from spherical symmetry only in the $J$-band, which is almost completely dominated by scattered light. This suggests that CIT3 has just started to form aspherical structures and is in this regard still at the beginning of its final AGB phase. If so, CIT3 is one of the earliest representatives of this dust-shell transformation phase known so far.
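The 87 yr figure quoted above follows from simple kinematics: the break radius $20.5\,r_{1}$ (with $r_{1}$ half the 71.9 mas inner dust-shell diameter) is converted to a physical length at the adopted distance and divided by $v_{\rm exp}$. A back-of-envelope check, assuming the lower distance of 500 pc from the quoted range:

```python
# Kinematic age t = theta * d / v_exp for the end of the mass-loss decrease.
# Assumed inputs from the model described in the text: r1 = 71.9/2 mas,
# density break at 20.5 * r1, d = 500 pc, v_exp = 20 km/s.
AU_KM = 1.495979e8   # km per astronomical unit
YEAR_S = 3.1557e7    # seconds per Julian year

def kinematic_age_yr(theta_mas, distance_pc, v_km_s):
    # theta[arcsec] * d[pc] gives the projected radius in AU
    r_au = (theta_mas / 1000.0) * distance_pc
    return r_au * AU_KM / v_km_s / YEAR_S

age = kinematic_age_yr(20.5 * 71.9 / 2.0, 500.0, 20.0)  # ~87 yr
```

The result, about 87 yr, reproduces the value quoted in the text; at the upper end of the distance range the kinematic age would scale up proportionally.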
The rapidly evolving hypergiant IRC+10420 ========================================= The star IRC+10420 is an outstanding object for the study of stellar evolution since it is the only object currently being observed in its rapid transition from the red supergiant stage to the Wolf-Rayet phase. Its spectral type changed from F8I$_{\rm a}^{+}$ in 1973 (Humphreys et al. 1973) to mid-A today (Oudmaijer et al. 1996) corresponding to an increase of its effective temperature of 1000-2000K within only 25yr. It is heavily obscured by circumstellar dust due to strong mass loss with rates typically of the order of several $10^{-4}$M$_{\odot}$/yr. IRC+10420 can be classified as a luminous hypergiant with a mass of initially $\sim 20$ to 40 M$_{\odot}$. Diffraction-limited 73mas bispectrum speckle interferometry of IRC+10420 (Blöcker et al. 1999) shows that the $K$-band visibility drops to 0.6 and then stays constant for frequencies $>4$cycles/arcsec revealing that the central star contributes $\sim$ 60% and the dust shell $\sim$ 40% to the total flux. To interpret these observations in more detail, radiative transfer calculations were conducted taking into account both SED and visibility. Again, single-shell models failed to reproduce the observations and a two-component shell was introduced assuming that IRC+10420 had passed through a superwind phase in its history as can be expected from its evolutionary status. A previous superwind phase leads to changes in the density distribution, i.e. there is a region in the dusty shell which shows a density enhancement over the normal $r^{-2}$ distribution. The best model for both SED and visibility was found for a dust shell with a dust temperature of 1000 K at its inner radius of $r_{1}=69 R_{\ast}$. At a distance of $308 R_{\ast}$ ($Y=r/r_{1}=4.5$), where the dust temperature has dropped to 480 K, the density was enhanced by a factor of $S=40$ and its slope within the shell changed from $1/r^{2}$ to $1/r^{1.7}$. 
The angular diameters of these components are 69 mas and 311 mas (stellar diameter $\sim$ 1 mas for $d=5$ kpc). This can be interpreted in terms of a termination of an enhanced mass-loss phase roughly 90 years ago. The mass-loss rates of the components can be determined to be $\dot{M}_{1}= 7.0\times 10^{-5}$$M_{\odot}/{\rm yr}$ and $\dot{M}_{2}= 1.1\times 10^{-3}$$M_{\odot}/{\rm yr}$. We refer to Blöcker et al. ([@BloeEtal99]) for a full description of the model grid.

Simulating VLTI observations of IRC+10420
=========================================

The above data (and model) of IRC+10420 rely on observations with the SAO 6m telescope. Fig. \[FsimvisATH\] shows the visibility model predictions for different superwind amplitudes $S$ up to a baseline of 110m. Obviously, the various models can be best distinguished at longer baselines. Taking the corresponding model intensity distributions as input, Przygodda et al. (2001) presented computer simulations of interferometric imaging with VLTI/AMBER (ATs, wide-field mode) for IRC+10420. These simulations consider light propagation from the object to the detector as well as photon noise and detector read-out noise and show the dependence of the visibility error bar on various observational parameters. The results are shown in Fig. \[FsimvisATH\]. Different seeing conditions for object and reference star turn out to be more crucial than, e.g., residual tip-tilt errors. With these simulations at hand one can immediately see under which conditions the visibility data quality would allow us to discriminate between the different model assumptions (here: the size of the superwind amplitude $S$). Inspection of Fig. \[FsimvisATH\] shows that in all studied cases the observations will give clear preference to one particular model. Therefore, observations with VLTI will certainly be well suited to test theoretical predictions and to improve our current knowledge of this outstanding object.
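The two-component superwind density law inferred above — an $r^{-2}$ outflow out to the break at $Y=r/r_{1}=4.5$, then an enhancement by the superwind amplitude $S$ and a shallower $r^{-1.7}$ decline — can be written compactly. The normalization (density set to 1 at the inner dust radius) and the function name are our own illustrative choices:

```python
def superwind_density(y, S=40.0, y_break=4.5, p_in=2.0, p_out=1.7):
    """rho(y) with y = r/r1, normalized to 1 at the inner dust radius y = 1;
    beyond y_break the density jumps by the factor S over the extrapolated
    inner law and falls off with the shallower exponent p_out."""
    if y < y_break:
        return y ** (-p_in)
    return S * y_break ** (-p_in) * (y / y_break) ** (-p_out)
```

Profiles of this kind are what the radiative transfer grid varies through the amplitude $S$; the simulated VLTI visibilities discussed above test how well different values of $S$ can be told apart at long baselines.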
Evolved Stars and the VLTI
==========================

VLTI observations with AMBER (see Petrov, this volume) and MIDI (see Perrin, this volume) will certainly have a large impact on the study of stars in late stages of stellar evolution, revealing, for example, details of dust-shell structures and the mass-loss process. Radiative transfer calculations are an appropriate and efficient tool for interpreting these observations. However, for the robust and non-ambiguous construction of dust-shell models it is essential to take diverse and independent observational constraints into account. Apart from matching the spectral energy distribution, the consideration of spatially resolved information plays a crucial role for obtaining a reliable model. Generally speaking, as many pieces of spectral information as possible have to be taken into account. Visibilities at various wavelengths greatly constrain modelling, probing, for instance, scattering ($J$ band) and thermal emission of hot dust ($H$, $K$ band) and cool dust ($N$ band). Near-infrared visibilities can serve as sensitive indicators of the grain size. Furthermore, the combination of long-baseline interferometry data with high-resolution data at short baselines, as obtained, e.g., by bispectrum speckle interferometry, will provide complementary information and will be of utmost value for modelling and interpretation (see, e.g., the case of IRC+10420). For example, a near-infrared survey of dusty AGB stars with VLTI-AMBER should be based on a combination of various selection criteria comprising the existence of high-resolution single-dish observations, and/or prominent location in color-color (e.g. $J{-}K$ vs. $K$-$[12]$) and magnitude-color (e.g. $K$ vs. $K$-$[12]$) diagrams indicative of the presence of dust, i.e. close vicinity to already resolved dusty objects. Dusty AGB objects resolved by speckle interferometry mostly show $K{-}[12]\ga$5.
Due to their thick dust shells, most of these stars are bright in $K$ and observable with the VLTI without fringe tracking.
--- abstract: 'Provided that the cavities are initially in a Greenberger-Horne-Zeilinger (GHZ) entangled state, we show that GHZ states of $N$-group qubits distributed in $N$ cavities can be created via a 3-step operation. The GHZ states of the $N$-group qubits are generated by using $N$-group qutrits placed in the $N$ cavities. Here, “qutrit" refers to a three-level quantum system with the two lowest levels representing a qubit, while the third level acts as an intermediate state necessary for the GHZ state creation. This proposal does not depend on the architecture of the cavity-based quantum network or on the way the cavities are coupled. The operation time is independent of the number of qubits. The GHZ states are prepared deterministically, because no measurement on the states of the qutrits or cavities is needed. In addition, the third energy level of the qutrits is only virtually excited during the entire operation, so decoherence from higher energy levels is greatly suppressed. This proposal is quite general and can in principle be applied to create GHZ states of many qubits using different types of physical qutrits (e.g., atoms, quantum dots, NV centers, various superconducting qutrits, etc.) distributed in multiple cavities. As a specific example, we further discuss the experimental feasibility of preparing a GHZ state of four-group transmon qubits (each group consisting of three qubits) distributed in four one-dimensional transmission line resonators arranged in an array.' address: - '$^1$Quantum Information Research Center, Shangrao Normal University, Shangrao 334001, China' - '$^2$Department of Physics, Hangzhou Normal University, Hangzhou 311121, China ' - '$^3$School of Physics, Nanjing University, Nanjing 210093, China' author: - 'Tong Liu$^{1}$' - 'Qi-Ping Su$^{2}$' - 'Yu Zhang$^{3}$' - 'Yu-Liang Fang$^{1}$' - 'Chui-Ping Yang$^{1}$' date: - - title: Generation of quantum entangled states of multiple groups of qubits distributed in multiple cavities --- **I.
INTRODUCTION AND MOTIVATION** Large-scale quantum information processing (QIP) has drawn much attention \[1-3\]. A large number of qubits are typically involved in large-scale QIP. The size of QIP with qubits in multiple cavities can be larger when compared to QIP with qubits in a single cavity. For instance, given that the number of qubits in each cavity is $m$, the number of qubits placed in $n$ cavities is $n\times m$, which is $n$ times the number $m$ of qubits placed in a single cavity. Therefore, large-scale QIP based on cavity or circuit QED may require distributing qubits in different cavities. In such an architecture, quantum state engineering and manipulation may involve not only qubits in the same cavity *but also qubits distributed in different cavities* \[4,5\]. The ability to prepare quantum entangled states of qubits located in different cavities and to perform nonlocal quantum operations on qubits in different cavities is a prerequisite for realizing large-scale QIP based on cavity or circuit QED \[6,7\]. Greenberger-Horne-Zeilinger (GHZ) entangled states play a key role in quantum communication and QIP. To give just a few examples, QIP \[8\], quantum communication \[9-11\], error-correction protocols \[12,13\], quantum metrology \[14\], and high-precision spectroscopy \[15,16\] require entangling quantum systems in a GHZ state. New systems and methods for preparing and measuring GHZ states have therefore been sought intensively for a long time, and this remains a very active field of research. To date, GHZ states of 10 or more qubits have been experimentally demonstrated in various systems. For example, experiments have reported the generation of GHZ states with 14 ionic qubits \[17\], 20 atomic qubits \[18\], 12 photonic qubits via a linear optical setup \[19\], 18 qubits with six photons’ three degrees of freedom \[20\], and 10 superconducting (SC) qubits coupled to a single microwave resonator \[21\].
Moreover, GHZ states of 18 SC qubits coupled to a single cavity or resonator have recently been produced in experiments \[22\] (hereafter, the terms cavity and resonator are used interchangeably). Theoretically, based on cavity or circuit QED, a large number of theoretical methods have been presented for creating multi-qubit GHZ states with various quantum systems (e.g., atoms, quantum dots, SC qutrits, NV centers, etc.), which are placed in a single cavity or coupled to a single resonator \[23-31\]. Moreover, proposals have been presented to entangle qubits distributed in different cavities \[32-42\]. Note that the previous methods for entangling qubits in a single cavity or resonator may not be applicable to qubits that are distributed in different cavities, and the previous proposals for entangling qubits in different cavities are not universal: they depend on the specific cavity-system architecture and the way in which the cavities are connected. Motivated by the above, we present an efficient method to prepare GHZ states of $N$-group qubits distributed in an $N$-cavity system. The multi-qubit GHZ states are generated by using qutrits (three-level quantum systems) placed in cavities or embedded in resonators. Here, the two logic states of a qubit are represented by the two lowest levels of a qutrit placed in a cavity, while the third, higher energy level of each qutrit is utilized to facilitate the coherent manipulation. By using this proposal, we show that given that the initial GHZ state of the cavities is prepared, the $N$-group qubits can be deterministically prepared in a GHZ state with a 3-step operation only. The procedure for creating the GHZ state of qubits works for a 1D (one-dimensional), 2D, or 3D cavity-based quantum network (Fig. 1). Moreover, it does not depend on the way in which the cavities are connected (e.g., via optical fibers or other auxiliary systems).
This proposal is quite general and can be used to create GHZ states of multiple groups of qubits, using natural atoms or artificial atoms (e.g., quantum dots, NV centers, various SC qutrits, etc.) distributed in different cavities. Other advantages of this proposal are: (i) The GHZ state is prepared in a deterministic way, because neither measurement on the state of the qutrits nor measurement on the state of the cavities is needed; (ii) The GHZ-state preparation time is independent of the number of qubits and thus does not increase with the number of qubits; and (iii) The third level $\left\vert f\right\rangle $ of the qutrits is not occupied during the entire operation, so decoherence from the higher energy levels of the qutrits is greatly suppressed. As an example, we further discuss the experimental feasibility of the proposal, based on circuit QED. Our numerical simulations show that within current circuit QED technology, it is feasible to produce GHZ states of four groups of SC transmon qubits, each group containing three transmon qubits, with the four groups distributed in four one-dimensional transmission line resonators (TLRs) arranged in an array. By increasing the number of resonators, GHZ states of more groups of SC qubits can be created experimentally. This paper is organized as follows. Sec. II introduces the basic theory. Sec. III shows how to generate GHZ states of $N$-group qubits distributed in $N$ cavities. Sec. IV investigates the experimental feasibility of preparing GHZ states of four-group SC transmon qubits distributed in four TLRs arranged in an array. A concluding summary is given in Sec. V. **II. BASIC THEORY** ![(color online) (a) 1D cavity-based quantum network. (b) 2D cavity-based quantum network. (c) 3D cavity-based quantum network. In (a,b,c), each short line represents an optical fiber or other auxiliary system, which is used to couple two adjacent cavities.
In addition, each cavity is a 1D or 3D cavity, hosting one group of qutrits (red dots).[]{data-label="fig:1"}](fig1.eps){width="12.0"} ![(color online) (a) Illustration of the dispersive interaction between cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $, with coupling constant $g_{l}$ and detuning $\Delta _{l}=\protect\omega _{fe}-\protect\omega _{c_{l}}>0$. Here, $\protect\omega_{fe}$ is the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition frequency of the qutrits and $\protect\omega _{c_{l}}$ is the frequency of cavity $l$. (b) Illustration of the resonant interaction between cavity $l$ and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $m_{l}$ with coupling constant $g_{r,l}$. (c) Illustration of the resonant interaction between a classical pulse and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l$. Note that the level structures in (a), (b), and (c) are different. The level spacings of the qutrits in (a) are adjusted such that the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition is dispersively coupled to cavity $l$. The level spacings in (b) are adjusted such that the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition is resonant with cavity $l$. The level spacings in (c) are adjusted such that the qutrits are decoupled from cavity $l$ during the pulse. A blue double-arrow vertical line in (a) and (b) represents the frequency of cavity $l$, while a blue double-arrow vertical line in (c) represents the pulse frequency.[]{data-label="fig:2"}](fig2.eps){width="10.5"} Consider $N$ cavities ($1,2,...,N$), each hosting a group of qutrits (Fig. 1).
For simplicity, assume that each group contains $m$ qutrits. The $m$ qutrits hosted in cavity $l$ ($l=1,2,...,N$) are labelled as $1_{l},$ $2_{l},...,$ and $m_{l}$. The three levels of each qutrit are denoted as $\left\vert g\right\rangle ,$ $\left\vert e\right\rangle $ and $\left\vert f\right\rangle $ (Fig. 2). As shown in the next section, the GHZ state preparation requires: (i) Cavity $l$ dispersively interacting with the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of each of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l,$ (ii) Cavity $l$ resonantly interacting with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $m_{l}$ in cavity $l$, and (iii) A classical pulse resonantly interacting with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of each of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l$ ($l=1,2,...,N$). In the following, we give a brief introduction to the state evolution under each of these types of interaction. **A.** **Qutrit-cavity dispersive interaction** Suppose that cavity $l$ is dispersively coupled to the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of each of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ with coupling strength $g_{l}$ and detuning $\Delta _{l}=\omega _{fe}-\omega _{c_{l}}>0$, while highly detuned from (i.e., decoupled from) other energy level transitions \[Fig. 2(a)\]. Here, $\omega _{fe}$ and $\omega _{c_{l}}$ are the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition frequency of each qutrit and the frequency of cavity $l,$ respectively. This condition can be met by prior adjustment of the qutrits’ level spacings or the frequency of cavity $l$.
For instance, the level spacings of superconducting qutrits can be rapidly (within $1\sim 3$ ns) tuned \[43,44\]; the level spacings of NV centers can be readily adjusted by changing the external magnetic field applied along the crystalline axis of each NV center \[45,46\]; and the level spacings of atoms/quantum dots can be adjusted by changing the voltage on the electrodes around each atom/quantum dot \[47\]. In addition, the frequency of an optical cavity can be changed in experiments \[48\], and the frequency of a microwave cavity can be rapidly adjusted within a few nanoseconds \[49,50\]. Under the above assumptions, the Hamiltonian of the whole system in the interaction picture and after the rotating wave approximation (RWA) is given by (assuming $\hbar =1$) $$H_{1}=\sum\limits_{l=1}^{N}g_{l}e^{i\Delta _{l}t}\hat{a}_{l}S_{fe,l}^{+}+\text{H.c.},$$where $S_{fe,l}^{+}=\sum\limits_{j=1}^{m-1}\left\vert f\right\rangle _{j_{l}}\left\langle e\right\vert $, and $\hat{a}_{l}$ is the photon annihilation operator of cavity $l$ ($l=1,2,...,N$). In Eq.
(1), we assume that the coupling strength $g_{l}$ between cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition is the same for all of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} .$ Under the large detuning condition $\Delta _{l}\gg g_{l}\ (l=1,2,...,N),$ we can obtain the following effective Hamiltonian \[51–53\] $$H_{\mathrm{eff}}=\sum\limits_{l=1}^{N}\lambda _{l}\left( S_{f,l}\hat{a}_{l}\hat{a}_{l}^{+}-S_{e,l}\hat{a}_{l}^{+}\hat{a}_{l}+\sum_{j,k=1;j\neq k}^{m-1}\left\vert f\right\rangle _{j_{l}}\left\langle e\right\vert \otimes \left\vert e\right\rangle _{k_{l}}\left\langle f\right\vert \right)$$where $S_{f,l}=\sum\limits_{j=1}^{m-1}\left\vert f\right\rangle _{j_{l}}\left\langle f\right\vert ,$ $S_{e,l}=\sum\limits_{j=1}^{m-1}\left\vert e\right\rangle _{j_{l}}\left\langle e\right\vert ,$ and $\lambda _{l}=g_{l}^{2}/\Delta _{l}.$ Here, the first (second) term is an ac-Stark shift of the level $\left\vert f\right\rangle $ ($\left\vert e\right\rangle $) induced by cavity $l$. The last term represents the dipole coupling between the $j$th and the $k$th qutrits in cavity $l$, mediated by cavity $l$.
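As a quick numerical sanity check on the size of $\lambda _{l}$ (not part of the original derivation), one can compare the exact level shift with $g^{2}/\Delta $ for a single qutrit and a single photon: on the two-state manifold $\{\left\vert e\right\rangle \left\vert 1\right\rangle ,\left\vert f\right\rangle \left\vert 0\right\rangle \}$ the coupling Hamiltonian is a $2\times 2$ matrix whose lower eigenvalue approaches $-g^{2}/\Delta $ when $\Delta \gg g$. A minimal sketch, with illustrative values of $g$ and $\Delta $:

```python
# Check that the ac-Stark shift of |e>|1> approaches -lambda = -g^2/Delta
# in the large-detuning regime. On the manifold {|e>|1>, |f>|0>} the
# coupling Hamiltonian is [[0, g], [g, Delta]] (illustrative units).
import numpy as np

g, Delta = 0.05, 1.0                      # Delta >> g
H = np.array([[0.0, g], [g, Delta]])
shift = np.linalg.eigvalsh(H)[0]          # exact shift of the |e>|1> level
print(shift, -g**2 / Delta)               # nearly equal for Delta >> g
```

The second-order estimate is accurate to $O(g^{4}/\Delta ^{3})$; the agreement degrades as $g/\Delta $ grows, which is why the scheme requires $\Delta _{l}\gg g_{l}$.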
When the level $\left\vert f\right\rangle $ of each qutrit is not occupied, the Hamiltonian (2) reduces to $$H_{\mathrm{eff}}=-\sum\limits_{l=1}^{N}\lambda _{l}S_{e,l}\hat{a}_{l}^{+}\hat{a}_{l}.$$Under this Hamiltonian, one can easily find that the following state evolution $$\begin{array}{c} \left\vert g\right\rangle _{j_{l}}\left\vert 0\right\rangle _{c_{l}} \\ \left\vert e\right\rangle _{j_{l}}\left\vert 0\right\rangle _{c_{l}} \\ \left\vert g\right\rangle _{j_{l}}\left\vert 1\right\rangle _{c_{l}} \\ \left\vert e\right\rangle _{j_{l}}\left\vert 1\right\rangle _{c_{l}}\end{array}\rightarrow \begin{array}{c} \left\vert g\right\rangle _{j_{l}}\left\vert 0\right\rangle _{c_{l}} \\ \left\vert e\right\rangle _{j_{l}}\left\vert 0\right\rangle _{c_{l}} \\ \left\vert g\right\rangle _{j_{l}}\left\vert 1\right\rangle _{c_{l}} \\ e^{i\lambda _{l}t}\left\vert e\right\rangle _{j_{l}}\left\vert 1\right\rangle _{c_{l}}\end{array}$$applies to each of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l$ simultaneously ($l=1,2,...,N$). Note that the subscript $j_l$ involved in Eq. (4) is $1_l,2_l,...,$ or $(m-1)_l$ $(l=1,2,...,N)$. **B. Qutrit-cavity resonant interaction** Consider that cavity $l$ is resonant with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $m_{l}$ $(l=1,2,...,N)$ \[Fig. 2(b)\].
The Hamiltonian in the interaction picture and after the RWA is given by $$H_{2}=g_{r,l}\hat{a}_{l}\left\vert e\right\rangle _{m_{l}}\left\langle g\right\vert +\text{H.c.},$$where $g_{r,l}$ is the resonant coupling constant of cavity $l$ with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $m_{l}.$ Under this Hamiltonian, we can obtain the state evolution $$\left\vert g\right\rangle _{m_{l}}\left\vert 1\right\rangle _{c_{l}}\rightarrow \cos g_{r,l}t\left\vert g\right\rangle _{m_{l}}\left\vert 1\right\rangle _{c_{l}}-i\sin g_{r,l}t\left\vert e\right\rangle _{m_{l}}\left\vert 0\right\rangle _{c_{l}},$$while the state $\left\vert g\right\rangle _{m_{l}}\left\vert 0\right\rangle _{c_{l}}$ remains unchanged. **C.** **Qutrit-pulse resonant interaction** Assume that a classical pulse is resonant with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of each of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l$ \[Fig. 2(c)\]. The Hamiltonian in the interaction picture and after making the RWA is given by $$H_{3}=\Omega _{l}e^{-i\phi }S_{eg,l}^{+}+\text{H.c.},$$where $S_{eg,l}^{+}=\sum\limits_{j=1}^{m-1}\left\vert e\right\rangle _{j_{l}}\left\langle g\right\vert ,$ $\phi $ is the pulse initial phase and $\Omega _{l}$ is the pulse Rabi frequency. Under this Hamiltonian, we can easily obtain the following state rotation $$\begin{aligned} \left\vert g\right\rangle _{j_{l}} &\rightarrow &\cos \Omega _{l}t\left\vert g\right\rangle _{j_{l}}-ie^{-i\phi }\sin \Omega _{l}t\left\vert e\right\rangle _{j_{l}}, \notag \\ \left\vert e\right\rangle _{j_{l}} &\rightarrow &-ie^{i\phi }\sin \Omega _{l}t\left\vert g\right\rangle _{j_{l}}+\cos \Omega _{l}t\left\vert e\right\rangle _{j_{l}},\end{aligned}$$for qutrit $j_{l}$ ($j=1,2,...,m-1$). The results (4), (6) and (8) will be applied to the GHZ state preparation, as shown in the next section. **III.
PREPARATION OF GHZ STATES OF** $\mathbf{N}$**-GROUP QUBITS IN** $\mathbf{N}$** CAVITIES** Assume that the $N$ cavities are initially prepared in a GHZ state $\alpha \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}...\left\vert 0\right\rangle _{c_{N}}+\beta \left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}...\left\vert 1\right\rangle _{c_{N}}$ ($\left\vert \alpha \right\vert ^{2}+\left\vert \beta \right\vert ^{2}=1,\alpha \neq 0,$ $\beta \neq 0$). In addition, assume that qutrit $m_{l}$ in cavity $l$ is in the state $\left\vert g\right\rangle $ while each of the remaining qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l$ is in the state $\frac{1}{\sqrt{2}}\left( \left\vert g\right\rangle +\left\vert e\right\rangle \right) $, which can be prepared by applying a classical $\pi /2$ pulse resonant with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of the qutrits, each initially in the state $\left\vert g\right\rangle .$ Hereafter, define $\left\vert \pm \right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert g\right\rangle \pm \left\vert e\right\rangle \right) .$ The initial state of the whole system is thus given by$$\begin{aligned} &&\left( \alpha \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}...\left\vert 0\right\rangle _{c_{N}}+\beta \left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}...\left\vert 1\right\rangle _{c_{N}}\right) \notag \\ &&\otimes \prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{1}}\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{N}}\otimes \left\vert g\right\rangle _{m_{1}}\left\vert g\right\rangle _{m_{2}}...\left\vert g\right\rangle _{m_{N}},\end{aligned}$$where the subscripts $j_{1},j_{2},...,j_{N}$ represent the $j$th qutrit in cavity $1$, cavity $2$, ..., cavity $N$ respectively; and $m_{1},m_{2},...,m_{N}$ represent
the $m$-th qutrit (i.e., qutrit $m$) in cavity $1$, cavity $2$, ..., cavity $N$ respectively. ![(color online) (a) Sequence of operations for step 1. (b) Sequence of operations for step 2. (c) Sequence of operations for step 3. Here, $\protect\tau _{1}$ and $\protect\tau _{2}$ are the qutrit-cavity interaction times, while $\protect\tau _{3}$ is the qutrit-pulse interaction time, as described in the text. In addition, $\protect\tau _{d}$ is the typical time required to adjust the qutrit level spacings. Note that the operation sequence in (a)-(c) runs from left to right.[]{data-label="fig:3"}](fig3.eps){width="12.5"} All qutrits are initially decoupled from their respective cavities. The procedure for preparing the $N$-group qubits in a GHZ state is listed below: Step 1. Keep qutrit $m_{l}$ decoupled from cavity $l$, but adjust the level spacings of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ in cavity $l$ to obtain an effective Hamiltonian described by Eq. (3). According to Eq. (4), the state (9) evolves as follows $$\begin{aligned} &&\left[ \alpha \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}...\left\vert 0\right\rangle _{c_{N}}\otimes \prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{1}}\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{N}}\right. \notag \\ &&\left.
+\beta \left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}...\left\vert 1\right\rangle _{c_{N}}\prod\limits_{j=1}^{m-1}\frac{\left( \left\vert g\right\rangle _{j_{1}}+e^{i\lambda _{1}t}\left\vert e\right\rangle _{j_{1}}\right) }{\sqrt{2}}\prod\limits_{j=1}^{m-1}\frac{\left( \left\vert g\right\rangle _{j_{2}}+e^{i\lambda _{2}t}\left\vert e\right\rangle _{j_{2}}\right) }{\sqrt{2}}...\prod\limits_{j=1}^{m-1}\frac{\left( \left\vert g\right\rangle _{j_{N}}+e^{i\lambda _{N}t}\left\vert e\right\rangle _{j_{N}}\right) }{\sqrt{2}}\right] \notag \\ &&\otimes \left\vert g\right\rangle _{m_{1}}\left\vert g\right\rangle _{m_{2}}...\left\vert g\right\rangle _{m_{N}}.\end{aligned}$$By setting $\lambda _{1}=\lambda _{2}=...=\lambda _{N}=\lambda $ and for $t=\tau _{1}=\pi /\lambda ,$ the state (10) becomes $$\begin{aligned} &&\left( \alpha \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}...\left\vert 0\right\rangle _{c_{N}}\otimes \prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{1}}\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{N}}\right. \notag \\ &&\left. +\beta \left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}...\left\vert 1\right\rangle _{c_{N}}\prod\limits_{j=1}^{m-1}\left\vert -\right\rangle _{j_{1}}\prod\limits_{j=1}^{m-1}\left\vert -\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m-1}\left\vert -\right\rangle _{j_{N}}\right) \notag \\ &&\otimes \left\vert g\right\rangle _{m_{1}}\left\vert g\right\rangle _{m_{2}}...\left\vert g\right\rangle _{m_{N}}.\end{aligned}$$ Then, adjust the level spacings of qutrits $\left\{ 1_{l},2_{l},...,\left( m-1\right) _{l}\right\} $ such that they are decoupled from cavity $l$. The operation sequence for this step is illustrated in Fig. 3(a). Step 2.
Adjust the level spacing of qutrit $m_{l}$ in cavity $l$ such that the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $m_{l}$ is resonant with cavity $l$ (with a resonant coupling constant $g_{r,l}$). After an interaction time $\tau _{2}=\pi /\left( 2g_{r,l}\right) $, we have $\left\vert 1\right\rangle _{c_{l}}\left\vert g\right\rangle _{m_{l}}\rightarrow -i\left\vert 0\right\rangle _{c_{l}}\left\vert e\right\rangle _{m_{l}}$ according to Eq. (6). Thus, the state (11) becomes $$\begin{aligned} &&\left( \alpha \prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{1}}\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m-1}\left\vert +\right\rangle _{j_{N}}\otimes \left\vert g\right\rangle _{m_{1}}\left\vert g\right\rangle _{m_{2}}...\left\vert g\right\rangle _{m_{N}}\right. \notag \\ &&\left. +\left( -i\right) ^{N}\beta \prod\limits_{j=1}^{m-1}\left\vert -\right\rangle _{j_{1}}\prod\limits_{j=1}^{m-1}\left\vert -\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m-1}\left\vert -\right\rangle _{j_{N}}\otimes \left\vert e\right\rangle _{m_{1}}\left\vert e\right\rangle _{m_{2}}...\left\vert e\right\rangle _{m_{N}}\right) \notag \\ &&\otimes \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}...\left\vert 0\right\rangle _{c_{N}}.\end{aligned}$$To maintain the state (12), one should adjust the level spacing of qutrit $m_l$ such that it is decoupled from cavity $l.$ The operation sequence for this step is illustrated in Fig. 3(b). Step 3. Apply a classical $\pi /2$ pulse (with an initial phase $\phi =\pi /2$) to qutrit $j_{l}$ ($j=1,2,...,m-1$).
The pulse is resonant with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $j_{l}$ for a duration $\tau _{3}=\pi /\left( 4\Omega _{l}\right) ,$ resulting in $\left\vert +\right\rangle _{j_{l}}\rightarrow \left\vert g\right\rangle _{j_{l}}$ and $\left\vert -\right\rangle _{j_{l}}\rightarrow -\left\vert e\right\rangle _{j_{l}}$ according to Eq. (8). The state (12) thus becomes $$\alpha \prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{1}}\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{N}}+e^{i\phi }\beta \prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{1}}\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{N}},$$where $\phi =\left( m-3/2\right) N\pi .$ This state is a GHZ entangled state of the $N$-group qubits in the $N$ cavities, with the two logic states of a qubit being represented by the two lowest levels $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $ of a qutrit. For $\left\vert \alpha \right\vert =\left\vert \beta \right\vert =1/\sqrt{2},$ the state (13) is a standard GHZ state with maximal entanglement. The operation sequence for this step is illustrated in Fig. 3(c). In the above, we have set $\lambda _{1}=\lambda _{2}=...=\lambda _{N}$, which translates into $$\frac{g_{1}^{2}}{\Delta _{1}}=\frac{g_{2}^{2}}{\Delta _{2}}=...=\frac{g_{N}^{2}}{\Delta _{N}}.$$Condition (14) can be readily met by adjusting the qutrits’ positions in the cavities, the qutrits’ level spacings \[43-47\] or the cavity frequencies \[48-50\].
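The three steps can be checked numerically for the smallest nontrivial case, $N=2$ cavities with $m=2$ qutrits each (this check is our own sketch, not part of the original derivation). Since $\left\vert f\right\rangle $ is never populated and each cavity holds at most one photon, every subsystem is truncated to two levels, and steps 1-3 are applied as the analytic maps (4), (6) and (8); the result reproduces Eq. (13) with $\phi =(m-3/2)N\pi =\pi $. A sketch, with illustrative $\alpha ,\beta $:

```python
# Simulate the 3-step protocol for N = 2 cavities, m = 2 qutrits per cavity.
# |f> is never populated and each cavity holds at most one photon, so all
# subsystems are truncated to two levels. Factor order of the state vector:
# cavity 1, cavity 2, qutrit 1_1, qutrit 2_1 (= m_1), qutrit 1_2, qutrit 2_2 (= m_2).
import numpy as np
from functools import reduce

def kron(*ops):
    return reduce(np.kron, ops)

g, e = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # qutrit {|g>, |e>}
v0, v1 = g, e                                       # cavity Fock {|0>, |1>}
plus = (g + e) / np.sqrt(2)
I = np.eye(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # projectors |0><0|, |1><1|
a = np.array([[0.0, 1.0], [0.0, 0.0]])              # |0><1| (truncated annihilation)

alpha, beta = 0.6, 0.8                              # any |alpha|^2 + |beta|^2 = 1
psi = (alpha * kron(v0, v0, plus, g, plus, g)       # initial state, Eq. (9)
       + beta * kron(v1, v1, plus, g, plus, g))

# Step 1, Eq. (4) at tau_1 = pi/lambda: phase e^{i pi} on |e>_{1_l}|1>_{c_l}.
D = np.diag(kron(P1, I, P1, I, I, I) + kron(I, P1, I, I, P1, I))
psi = np.exp(1j * np.pi * D) * psi

# Step 2, Eq. (6) at tau_2 = pi/(2 g_r): |1>_{c_l}|g>_{m_l} -> -i |0>_{c_l}|e>_{m_l}.
U2 = (kron(P0, I, I, P0, I, I) + kron(P1, I, I, P1, I, I)
      - 1j * (kron(a, I, I, a.T, I, I) + kron(a.T, I, I, a, I, I)))
U2 = U2 @ (kron(I, P0, I, I, I, P0) + kron(I, P1, I, I, I, P1)
           - 1j * (kron(I, a, I, I, I, a.T) + kron(I, a.T, I, I, I, a)))
psi = U2 @ psi

# Step 3, Eq. (8): the resonant pulse maps |+> -> |g> and |-> -> -|e>
# on qutrits 1_1 and 1_2.
R = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
psi = kron(I, I, R, I, R, I) @ psi

# Expected final state, Eq. (13) with phi = (m - 3/2) N pi = pi for m = N = 2.
target = (alpha * kron(v0, v0, g, g, g, g)
          + np.exp(1j * np.pi) * beta * kron(v0, v0, e, e, e, e))
print(np.allclose(psi, target))   # True
```

Tracking the $\beta $ branch by hand gives the same answer: step 2 contributes $(-i)^{N}$ and step 3 contributes $(-1)^{(m-1)N}$, whose product is $e^{i(m-3/2)N\pi }$.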
From the above description, one can see: \(i) Because the same detuning $\Delta _{l}$ is set for each of qutrits $1_{l},2_{l},...,\left( m-1\right) _{l}$ in cavity $l$ ($l=1,2,...,N$), the level spacings of qutrits $1_{l},2_{l},...,\left( m-1\right) _{l}$ can be synchronously adjusted, e.g., via changing common external parameters. \(ii) During the entire operation, the level $\left\vert f\right\rangle $ of the qutrits in each cavity is not occupied. Thus, decoherence due to energy relaxation and dephasing of this higher energy level is greatly suppressed. \(iii) Suppose that $g_{r,1},g_{r,2},...,g_{r,N}$ and $\Omega _{1},\Omega _{2},...,\Omega _{N}$ differ from cavity to cavity. Then the total operation time is $$t_{op}=\pi /\lambda +\max \{\frac{\pi }{2g_{r,1}},\frac{\pi }{2g_{r,2}},...,\frac{\pi }{2g_{r,N}}\}+\max \{\frac{\pi }{4\Omega _{1}},\frac{\pi }{4\Omega _{2}},...,\frac{\pi }{4\Omega _{N}}\}+4\tau _{d},$$which is independent of the number of qubits and thus does not increase with the number of qubits. Note that $\tau _{d}$ is the typical time required for adjusting the level spacings of the qutrits. \(iv) This proposal does not require measurement on the state of the qutrits or the cavities. Thus, the GHZ state is created deterministically. \(v) The above operations are independent of the manner in which the cavities are connected. In this sense, the method presented here can be applied to create GHZ states of qubits distributed in a 1D, 2D, or 3D cavity-based quantum network (Fig. 1), where the cavities can be connected with optical fibers or other auxiliary systems.
\(vi) When the $N$ cavities are initially prepared in another type of symmetric GHZ state $\alpha \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}...\left\vert 0\right\rangle _{c_{s}}\left\vert 1\right\rangle _{c_{s+1}}\left\vert 1\right\rangle _{c_{s+2}}...\left\vert 1\right\rangle _{c_{N}}+\beta \left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}...\left\vert 1\right\rangle _{c_{s}}\left\vert 0\right\rangle _{c_{s+1}}\left\vert 0\right\rangle _{c_{s+2}}...\left\vert 0\right\rangle _{c_{N}},$ it is straightforward to show that by following the procedure described above, the $N$-group qubits distributed in the $N$ cavities will be prepared in the following GHZ state $$\begin{aligned} &&\alpha \prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{1}}\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{s}}\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{s+1}}\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{s+2}}...\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{N}} \notag \\ &&+\beta \prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{1}}\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{2}}...\prod\limits_{j=1}^{m}\left\vert e\right\rangle _{j_{s}}\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{s+1}}\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{s+2}}...\prod\limits_{j=1}^{m}\left\vert g\right\rangle _{j_{N}}.\end{aligned}$$ \(vii) The procedure described above can also be applied to create GHZ states of $N$-group qubits distributed in $N$ cavities in the case where the number of qutrits in each group is different. In fact, condition (14) is not strictly necessary. For the case of $\lambda _{1}\neq \lambda _{2}\neq ...\neq \lambda _{N}$, the state (11) resulting from the operation of step 1 described above cannot be achieved by switching the effective couplings of the qutrits to the $N$ cavities on and off simultaneously.
However, this state (11) can be obtained by modifying the operation of step 1 as follows. First, switch on the effective dispersive interaction of the qutrits $\{1_{l},2_{l},...,\left( m-1\right) _{l}\}$ with cavity $l$ at the time $\tau _{l}=t_{\max }-t_{l},$ by tuning the frequency of the qutrits $\{1_{l},2_{l},...,\left( m-1\right) _{l}\}$ or the frequency of cavity $l$ to obtain the proper $\Delta _{l},$ where $t_{\max }=\max \{\pi /\lambda _{1},\pi /\lambda _{2},...,\pi /\lambda _{N}\}$ and $t_{l}=\pi /\lambda _{l}$. Then, switch off all the effective interactions of the qutrits with the $N$ cavities at the time $t_{\max },$ by tuning the frequency of the qutrits or the frequency of the $N$ cavities such that the qutrits are decoupled from the $N$ cavities. In the above discussion, we have assumed that the coupling strength $g_{l}$ is identical for all of qutrits $\{1_{l},2_{l},...,\left( m-1\right) _{l}\}$ in cavity $l$ ($l=1,2,...,N)$. For the case of $g_{l}$ varying between qutrits in cavity $l,$ this proposal is still valid as long as the large detuning condition holds for each individual qutrit, but the procedure may become more complex because one will need to adjust the frequencies of individual qutrits separately. Therefore, to simplify the experiments, it is strongly suggested to design the sample with identical qutrit-cavity coupling strength for the qutrits in the same cavity. To prepare the cavities in the GHZ state, two key ingredients are required. One is the coupling between neighboring cavities. For optical cavities, this can be obtained by using optical fibers to connect the neighboring cavities. In addition, for microwave cavities or resonators, this can be achieved by using solid-state auxiliary systems (e.g., superconducting qubits/qutrits, quantum dots, or NV centers) to connect the neighboring cavities.
The other is decoupling of the intra-cavity atoms from the cavities. This can be realized by adjusting the level spacings of the atoms or the frequencies of the cavities such that the cavities are highly detuned (decoupled) from the transitions between any two levels of the atoms. As discussed previously, both the level spacings of natural or artificial atoms and the cavity frequencies can be adjusted in experiments \[43-50\]. ![(color online) 1D quantum network consisting of four one-dimensional transmission line resonators (TLRs) arranged in an array. Each TLR hosts three SC transmon qutrits (red dots), and adjacent TLRs are coupled through SC transmon qutrits ($q_1,q_2,q_3$).[]{data-label="fig:4"}](fig4.eps){width="11.5"} **IV. POSSIBLE EXPERIMENTAL IMPLEMENTATION** In the above, a general type of qubit was considered, with a qubit formed by the two lowest levels of a qutrit. Circuit QED, which consists of microwave cavities and superconducting (SC) qubits, is an analogue of cavity QED and has been considered one of the leading candidates for QIP \[54-60\]. As an example, let us consider a setup that consists of four TLRs, each hosting three SC transmon qutrits, connected through the coupler SC transmon qutrits ($q_{1},q_{2},q_{3}$), and arranged in an array (Fig. 4). The three SC transmon qutrits placed in cavity $l$ are labelled as $1_l,2_l$, and $3_l$ ($l=1,2,3,4$). In the following, we discuss the experimental feasibility of preparing a GHZ state of the four-group SC transmon qubits distributed in the four TLRs (Fig. 4). Let us first explain transmon qutrits and transmon qubits. A transmon qutrit has a ladder-type three-level structure as shown in Fig. 2, while a transmon qubit considered here is formed by the two lowest levels $\left\vert g\right\rangle $ and $\left\vert e\right\rangle $ of a transmon qutrit. In other words, when the third level $\left\vert f\right\rangle $ of a transmon qutrit is dropped off (Fig.
2), the transmon qutrit reduces to a transmon qubit. As is well known, a transmon qubit is an artificial two-level atom, whose Hamiltonian takes the same form as the Hamiltonian of a natural two-level atom, i.e., $H=\omega _{0}\sigma _{z}$, where $\omega _{0}$ is the transition frequency of the atom and $\sigma _{z}=\left\vert e\right\rangle \left\langle e\right\vert -\left\vert g\right\rangle \left\langle g\right\vert $ is the Pauli operator. Based on this discussion, one can see that the three transmon qutrits (red dots in Fig. 4) placed in a TLR correspond to three transmon qubits (i.e., one group of qubits). Thus, the four groups of transmon qutrits placed in the four TLRs correspond to the four groups of SC transmon qubits. For convenience, in the following we will use [*the terms “cavity" and “resonator"*]{} interchangeably. ![(color online) (a) Dispersive interaction between cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $ with coupling strength $g_{l}$ and detuning $\Delta _{l}=\protect\omega _{fe}-\protect\omega _{c_{l}}>0$, as well as the unwanted off-resonant interaction between cavity $l$ and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $ with coupling strength $\widetilde{g}_{l}$ and detuning $\widetilde{\Delta }_{l}=\protect\omega _{eg}-\protect\omega _{c_{l}}>0$. (b) Resonant interaction between cavity $l$ and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrit $3_{l}$ with coupling constant $g_{r,l}$, as well as the unwanted off-resonant interaction between cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrit $3_{l}$ with coupling constant $\widetilde{g}_{r,l}$ and detuning $\Delta _{r,l}$. 
(c) Resonant interaction between a classical pulse and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $ with Rabi frequency $\Omega _{l}$, as well as the unwanted off-resonant interaction between the pulse and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $ with Rabi frequency $\widetilde{\Omega }_{l}$ and detuning $\Delta _{p}=\omega_{fe}-\omega_p$. Here, $\omega_p$ is the pulse frequency.[]{data-label="fig:5"}](fig5.eps){width="10.5"} From the description given in the previous section, one can see that three basic interactions are used in the preparation of the GHZ states, namely, those described by the Hamiltonians $H_{1}$, $H_{2}$, and $H_{3}$ given above. When the unwanted interactions and the inter-cavity crosstalk are taken into account, these Hamiltonians are modified as follows: \(i) $H_{1}^{\prime }=H_{1}+\delta \!H_{1}+\varepsilon $, where $\delta \!H_{1}$ describes the unwanted interaction of cavity $l$ with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $ in cavity $l$ ($l=1,2,3,4$) \[Fig. 5(a)\]. 
The expression of $\delta \!H_{1}$ is given by $$\delta \!H_{1}=\sum\limits_{l=1}^{4}\widetilde{g}_{l}e^{i\widetilde{\Delta }_{l}t}\hat{a}_{l}S_{eg,l}^{+}+\text{H.c.,}$$where $S_{eg,l}^{+}=\sum\limits_{j=1}^{2}\left\vert e\right\rangle _{j_{l}}\left\langle g\right\vert $, $\widetilde{g}_{l}$ is the coupling strength between cavity $l$ and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $, and $\widetilde{\Delta }_{l}=\omega _{eg}-\omega _{c_{l}}$ is the detuning between the frequency of cavity $l$ and the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition frequency of qutrits $\left\{ 1_{l},2_{l}\right\} $. In addition, $\varepsilon $ describes the inter-cavity crosstalk between the adjacent cavities, which is given by $$\varepsilon =g_{12}e^{i\Delta _{12}t}\hat{a}_{1}^{+}\hat{a}_{2}+g_{23}e^{i\Delta _{23}t}\hat{a}_{2}^{+}\hat{a}_{3}+g_{34}e^{i\Delta _{34}t}\hat{a}_{3}^{+}\hat{a}_{4}+\text{H.c.},$$where $\Delta _{j(j+1)}=\omega _{c_{j}}-\omega _{c_{j+1}}=\Delta _{j+1}-\Delta _{j}$ $(j=1,2,3)$ and $g_{j(j+1)}$ is the crosstalk strength between the two neighboring cavities $j$ and $j+1$ $(j=1,2,3).$ Note that, compared with the crosstalk between the adjacent cavities, the crosstalk between non-adjacent cavities (i.e., cavities $1$ and $3$, cavities $1$ and $4$, and cavities $2$ and $4$) is negligible. \(ii) $H_{2}^{\prime }=H_{2}+\delta \!H_{2}+\varepsilon ,$ where $\delta \!H_{2}$ describes the unwanted interaction between cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrit $3_{l}$ in cavity $l$ ($l=1,2,3,4$) \[Fig. 
5(b)\]. The expression of $\delta \!H_{2}$ is given by $$\delta \!H_{2}=\widetilde{g}_{r,l}e^{i\Delta _{r,l}t}\hat{a}_{l}\left\vert f\right\rangle _{3_{l}}\left\langle e\right\vert +\text{H.c.,}$$where $\widetilde{g}_{r,l}$ is the off-resonant coupling strength between cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrit $3_{l}$ in cavity $l,$ and $\Delta _{r,l}=\omega _{fe}-\omega _{c_{l}}$ is the detuning between the frequency of cavity $l$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition frequency of qutrit $3_{l}.$ \(iii) $H_{3}^{\prime }=H_{3}+\delta \!H_{3}+\varepsilon ,$ where $\delta \!H_{3}$ describes the unwanted interaction between the pulse and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of qutrits $\left\{ 1_{l},2_{l}\right\} $ ($l=1,2,3,4$) \[Fig. 5(c)\]. The expression of $\delta \!H_{3}$ is given by $$\delta \!H_{3}=\widetilde{\Omega }_{l}e^{-i\phi }e^{-i\Delta _{p}t}S_{fe,l}^{+}+\text{H.c.,}$$where $S_{fe,l}^{+}=\sum\limits_{j=1}^{2}\left\vert f\right\rangle _{j_{l}}\left\langle e\right\vert ,$ $\widetilde{\Omega }_{l}$ is the pulse Rabi frequency associated with the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition of the qutrits, and $\Delta _{p}=\omega _{fe}-\omega _{p}=\omega _{fe}-\omega _{eg}$ is the detuning between the pulse frequency $\omega_p$ and the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition frequency of the qutrits. It should be mentioned that the $\left\vert g\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition induced by the pulse or the cavities is negligible because $\omega _{eg},\omega _{fe}\ll \omega _{fg}$ (Fig. 2). 
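The inter-cavity detunings entering the crosstalk term $\varepsilon$ follow directly from the dispersive detunings, $\Delta_{j(j+1)}=\Delta_{j+1}-\Delta_{j}$. A one-line sketch (illustrative function name) reproducing the values quoted later in this section:

```python
def crosstalk_detunings(deltas):
    """Inter-cavity detunings Delta_{j,j+1} = omega_{c_j} - omega_{c_{j+1}}
    = Delta_{j+1} - Delta_j, which appear in the crosstalk term epsilon."""
    return [deltas[j + 1] - deltas[j] for j in range(len(deltas) - 1)]

# Section IV parameter choice (Delta_l / 2*pi, in MHz):
# Delta_1 = Delta_3 = 100 MHz, Delta_2 = Delta_4 = 80 MHz
dij = crosstalk_detunings([100, 80, 100, 80])   # Delta_12, Delta_23, Delta_34
```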
For simplicity, we also assume that the effect of the qutrit decoherence and the cavity decay during the adjustment of the qutrit level spacings is negligible, because for transmon qutrits the level spacings can be rapidly adjusted. After taking into account the qutrit decoherence and the cavity decay, the system dynamics, under the Markovian approximation, is determined by the master equation $$\begin{aligned} \frac{d\rho }{dt} &=&-i\left[ H_{k}^{\prime },\rho \right] +\sum\limits_{l=1}^{4}\kappa _{l}\mathcal{L}\left[ \hat{a}_{l}\right] \notag \\ &&+\gamma _{eg}\sum\limits_{l=1}^{4}\sum\limits_{j=1}^{3}\mathcal{L}\left[ \sigma _{eg,j_{l}}^{-}\right] +\gamma _{fe}\sum\limits_{l=1}^{4}\sum\limits_{j=1}^{3}\mathcal{L}\left[ \sigma _{fe,j_{l}}^{-}\right] +\gamma _{fg}\sum\limits_{l=1}^{4}\sum\limits_{j=1}^{3}\mathcal{L}\left[ \sigma _{fg,j_{l}}^{-}\right] \notag \\ &&+\gamma _{\varphi ,e}\sum\limits_{l=1}^{4}\sum\limits_{j=1}^{3}\left( \sigma _{ee,j_{l}}\rho \sigma _{ee,j_{l}}-\sigma _{ee,j_{l}}\rho /2-\rho \sigma _{ee,j_{l}}/2\right) \notag \\ &&+\gamma _{\varphi ,f}\sum\limits_{l=1}^{4}\sum\limits_{j=1}^{3}\left( \sigma _{ff,j_{l}}\rho \sigma _{ff,j_{l}}-\sigma _{ff,j_{l}}\rho /2-\rho \sigma _{ff,j_{l}}/2\right) ,\end{aligned}$$where $H_{k}^{\prime }$ (with $k=1,2,3$) are the modified Hamiltonians $H_{1}^{\prime },$ $H_{2}^{\prime },$ and $H_{3}^{\prime }$ given above, $\mathcal{L}\left[ \Lambda \right] =\Lambda \rho \Lambda ^{+}-\Lambda ^{+}\Lambda \rho /2-\rho \Lambda ^{+}\Lambda /2$ (with $\Lambda =\hat{a}_{l},\sigma _{fe,j_{l}}^{-},\sigma _{eg,j_{l}}^{-},\sigma _{fg,j_{l}}^{-}$), $\sigma _{fe,j_{l}}^{-}=\left\vert e\right\rangle _{j_{l}}\left\langle f\right\vert ,$ $\sigma _{eg,j_{l}}^{-}=\left\vert g\right\rangle _{j_{l}}\left\langle e\right\vert ,$ $\sigma _{fg,j_{l}}^{-}=\left\vert g\right\rangle _{j_{l}}\left\langle f\right\vert ,$ $\sigma _{ee,j_{l}}=\left\vert e\right\rangle _{j_{l}}\left\langle e\right\vert $, and $\sigma _{ff,j_{l}}=\left\vert 
f\right\rangle _{j_{l}}\left\langle f\right\vert .$ In addition, $\kappa _{l}$ is the decay rate of cavity $l$; $\gamma _{eg}$ is the energy relaxation rate for the level $\left\vert e\right\rangle $ associated with the decay path $\left\vert e\right\rangle \rightarrow \left\vert g\right\rangle $; $\gamma _{fe}$ ($\gamma _{fg}$) is the relaxation rate for the level $\left\vert f\right\rangle $ related to the decay path $\left\vert f\right\rangle \rightarrow \left\vert e\right\rangle $ ($\left\vert f\right\rangle \rightarrow \left\vert g\right\rangle $); and $\gamma _{\varphi ,e}$ ($\gamma _{\varphi ,f}$) is the dephasing rate of the level $\left\vert e\right\rangle $ ($\left\vert f\right\rangle $). The fidelity of the operation is given by $\mathcal{F}=\sqrt{\left\langle \psi _{id}\right\vert \rho \left\vert \psi _{id}\right\rangle },$ where $\left\vert \psi _{id}\right\rangle $ is the ideal output state given by $$\frac{1}{\sqrt{2}}\left( \prod\limits_{j=1}^{3}\left\vert g\right\rangle _{j_{1}}\prod\limits_{j=1}^{3}\left\vert g\right\rangle _{j_{2}}\prod\limits_{j=1}^{3}\left\vert e\right\rangle _{j_{3}}\prod\limits_{j=1}^{3}\left\vert e\right\rangle _{j_{4}}+\prod\limits_{j=1}^{3}\left\vert e\right\rangle _{j_{1}}\prod\limits_{j=1}^{3}\left\vert e\right\rangle _{j_{2}}\prod\limits_{j=1}^{3}\left\vert g\right\rangle _{j_{3}}\prod\limits_{j=1}^{3}\left\vert g\right\rangle _{j_{4}}\right) \otimes \prod\limits_{l=1}^{4}\left\vert 0\right\rangle _{c_{l}},$$when the four TLRs are initially in the GHZ state $\frac{1}{\sqrt{2}}\left( \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}\left\vert 1\right\rangle _{c_{4}}+\left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert 0\right\rangle _{c_{4}}\right) $ (see the appendix for the details of preparing the four TLRs in this GHZ state), while $\rho $ is the final density matrix obtained by numerically solving the 
master equation. We now numerically calculate the fidelity. For transmon qutrits, level-spacing anharmonicities of $100\sim 720$ MHz have been reported in experiments \[61\]. As an example, consider $\Delta _{r,l}/2\pi =\Delta _{p}/2\pi =-\left( \widetilde{\Delta }_{l}-\Delta _{l}\right) /2\pi =-0.7$ GHz. By choosing $\Delta _{1}/2\pi =\Delta _{3}/2\pi =100$ MHz and $\Delta _{2}/2\pi =\Delta _{4}/2\pi =80$ MHz, we have $\Delta _{12}/2\pi =-20$ MHz, $\Delta _{23}/2\pi =20$ MHz, and $\Delta _{34}/2\pi =-20$ MHz. With this choice of $\Delta _{1},\Delta _{2},\Delta _{3},\Delta _{4}$, one has $g_{2}=g_{4}=\sqrt{\frac{4}{5}}g_{1}$ and $g_{3}=g_{1}$ according to Eq. (14). For transmon qutrits \[62\], $\widetilde{g}_{l}=g_{l}/\sqrt{2},$ $\widetilde{g}_{r,l}=\sqrt{2}g_{r,l},$ and $\widetilde{\Omega }_{l}=\sqrt{2}\Omega _{l}.$ For simplicity, we assume $g_{r,l}=\widetilde{g}_{l}.$ In addition, we choose $g_{12},g_{23},g_{34}=0.01\max \{g_{1},g_{2},g_{3}\},$ which is achievable in experiments by a prior design of the sample with appropriate capacitances $c_{11},c_{12},c_{22},c_{23},c_{33},c_{34}$ \[63\]. Other parameters used in the numerical simulation are: (i) $\gamma _{eg}^{-1}=60$ $\mu $s, $\gamma _{fg}^{-1}=150$ $\mu $s \[64\], $\gamma _{fe}^{-1}=30$ $\mu $s, and $\gamma _{\varphi ,e}^{-1}=\gamma _{\varphi ,f}^{-1}=20$ $\mu $s; (ii) $\Omega _{l}/2\pi =45$ MHz. Here, we consider a rather conservative case for the decoherence times of the transmon qutrits \[65,66\]. For simplicity, we assume $\kappa _{l}=\kappa $ in our numerical simulation ($l=1,2,3,4$). By numerically solving the master equation (21), we plot Fig. 6 for $\kappa ^{-1}=10$ $\mu $s, which shows the fidelity versus $g_{1}.$ From Fig. 6, one can see that for $g_{1}/2\pi \sim 14.15$ MHz, a high fidelity $\sim 90\%$ can be obtained. 
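The master equation (21) is a standard Lindblad equation (the pure-dephasing terms are also of Lindblad form, with $\Lambda=\sigma_{ee}$ or $\sigma_{ff}$), and the fidelity is a simple expectation value. A minimal NumPy sketch of both, checked on a toy two-level system rather than the full four-cavity model (function names are illustrative):

```python
import numpy as np

def lindblad_rhs(rho, H, collapse_ops):
    """Right-hand side of a Markovian master equation of the form of Eq. (21):
    drho/dt = -i[H, rho] + sum_k ( L_k rho L_k^+ - {L_k^+ L_k, rho}/2 ),
    where each L_k is a rate-weighted operator such as sqrt(kappa_l)*a_l,
    sqrt(gamma)*sigma^-, or sqrt(gamma_phi)*sigma_ee."""
    drho = -1j * (H @ rho - rho @ H)
    for L in collapse_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def fidelity(psi_id, rho):
    """F = sqrt(<psi_id| rho |psi_id>) for a pure target state psi_id."""
    psi = np.asarray(psi_id, dtype=complex)
    return float(np.sqrt(np.real(psi.conj() @ rho @ psi)))

# sanity check: a single two-level system decaying at unit rate
sm = np.array([[0, 1], [0, 0]], dtype=complex)      # sigma^- = |g><e|
rho_e = np.diag([0.0, 1.0]).astype(complex)         # starts in |e>
drho = lindblad_rhs(rho_e, np.zeros((2, 2), complex), [sm])
# population flows |e> -> |g>: d(rho_gg)/dt = +1, d(rho_ee)/dt = -1
```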
For the value of $g_{1}$ here, $g_{2}/2\pi ,g_{4}/2\pi \sim 12.65$ MHz; $g_{3}/2\pi \sim 14.15$ MHz; $g_{r,1}/2\pi ,g_{r,3}/2\pi \sim 10$ MHz; and $g_{r,2}/2\pi ,g_{r,4}/2\pi \sim 8.95$ MHz, which are readily available in experiments because a coupling strength $g/2\pi \sim 360$ MHz has been reported for a transmon qutrit coupled to a TLR \[67,68\]. ![(color online) Fidelity versus $g_{1}$. The parameters used in the numerical simulation are given in the text.[]{data-label="fig:6"}](fig6.eps){width="11.5"} ![(color online) Fidelity versus $\protect\kappa ^{-1}$ for $g_{1}/2\protect\pi =14.15$ MHz and $\Omega _{l}/2\protect\pi =45$ MHz. Other parameters used in the numerical simulation are the same as those used in Fig. 6.[]{data-label="fig:7"}](fig7.eps){width="11.5"} To see how the fidelity changes with the cavity decay rate, we plot Fig. 7, which shows the fidelity versus $\kappa ^{-1}$ for $g_{1}/2\pi =14.15$ MHz and $\Omega _{l}/2\pi =45$ MHz. Fig. 7 demonstrates that the fidelity strongly depends on the photon lifetime of the cavities. For $\kappa ^{-1}=20$ $\mu $s, a high fidelity $>90\%$ can be achieved. We remark that the fidelity can be further increased by improving the system parameters. The operation time is $\sim 0.27$ $\mu $s, which is much shorter than the decoherence times of the transmon qutrits used in our numerical simulations. For a transmon qutrit, the typical transition frequency between two neighboring levels is $1-20$ GHz. As an example, we consider $\omega _{eg}/2\pi \sim 6.7$ GHz and $\omega _{fe}/2\pi \sim 6.0$ GHz for the case of the transmon qutrits being dispersively coupled to their cavities. Thus, for the values of $\Delta _{1},\Delta _{2},\Delta _{3},\Delta _{4}$ chosen above, one has $\omega _{c_{1}}/2\pi =\omega _{c_{3}}/2\pi =6.6$ GHz and $\omega _{c_{2}}/2\pi =\omega _{c_{4}}/2\pi =6.62$ GHz. 
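The coupling values quoted above can be reproduced numerically if, as the text suggests, Eq. (14) (not reproduced in this excerpt) equalizes the effective dispersive rates $\lambda_l = g_l^2/\Delta_l$ across cavities, giving $g_l = g_1\sqrt{\Delta_l/\Delta_1}$. A sketch under that assumption (the function name is illustrative):

```python
import math

# hedged assumption: Eq. (14) fixes a common effective rate
# lambda = g_l**2 / Delta_l, hence g_l = g_1 * sqrt(Delta_l / Delta_1)
def coupling(g1, delta_l, delta_1):
    return g1 * math.sqrt(delta_l / delta_1)

g1 = 14.15                        # g_1 / 2*pi in MHz, as quoted in the text
g2 = coupling(g1, 80, 100)        # = sqrt(4/5)*g_1, ~12.65 MHz
g3 = coupling(g1, 100, 100)       # = g_1
gr2 = g2 / math.sqrt(2)           # g_{r,l} = g_tilde_l = g_l/sqrt(2), ~8.95 MHz
```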
For the cavity frequencies here and $\kappa ^{-1}=10$ $\mu $s, the quality factors of the four cavities are $Q_{1},Q_{3}\sim 4.14\times 10^{5}$ and $Q_{2},Q_{4}\sim 4.16\times 10^{5}$, which are available because TLRs with a loaded quality factor $Q\sim 10^{6}$ have been experimentally demonstrated \[69,70\]. The analysis given above shows that high-fidelity creation of GHZ states of four groups of SC qubits distributed in four cavities is feasible with the present circuit QED technology. Further investigation of the experimental feasibility of creating GHZ states of more qubits distributed in different cavities would be necessary. However, we note that the numerical simulations become rather lengthy and complex as the number of qubits increases, which is beyond the scope of this theoretical work. **V. CONCLUSION** We have presented an approach to generate Greenberger-Horne-Zeilinger (GHZ) entangled states of multiple groups of qubits distributed in multiple cavities. From the above description, one can see that as long as the cavities are initially prepared in a GHZ state, all qubits in the cavities can be entangled via only a three-step operation, no matter what type of architecture the cavity-based quantum network adopts and no matter how the cavities are coupled. This proposal also has the additional advantages stated in the introduction. Our numerical simulation shows that high-fidelity preparation of GHZ states of four groups of SC qubits, each group containing three qubits and the four groups distributed in four cavities, is feasible with current circuit QED technology. By increasing the number of resonators, GHZ states of more groups of SC qubits distributed in multiple cavities can be created. This work opens a way for quantum state engineering with many qubits distributed in different cavity nodes of a quantum network. We hope that it will stimulate experimental activities in the near future. 
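The quoted quality factors follow from $Q=\omega_c/\kappa=2\pi f_c\,\kappa^{-1}$; a one-line check reproducing them to rounding (illustrative function name):

```python
import math

def quality_factor(f_c_ghz, kappa_inv_us):
    """Q = omega_c / kappa = 2*pi*f_c * kappa^{-1}."""
    return 2 * math.pi * (f_c_ghz * 1e9) * (kappa_inv_us * 1e-6)

q13 = quality_factor(6.60, 10)   # cavities 1 and 3: ~4.1e5
q24 = quality_factor(6.62, 10)   # cavities 2 and 4: ~4.2e5
```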
As a final note, it should be stressed that this proposal is based on the prerequisite that the cavities are initially prepared in a GHZ state. Nevertheless, this work is of interest, because it may be easier to entangle the cavities than to directly entangle a large number of qubits distributed in different cavities without the aid of an initial cavity GHZ state, and because the proposal works for a 1D, 2D, or 3D quantum network composed of cavities. **ACKNOWLEDGMENTS** This work was partly supported by the Key R$\&$D Program of Guangdong province (2018B030326001), the National Natural Science Foundation of China (NSFC) (11074062, 11374083, 11774076), the NKRDP of China (2016YFA0301802), and the Jiangxi Natural Science Foundation (20192ACBL20051). [99]{} S. Seidelin, J. Chiaverini, R. Reichle, J. J. Bollinger, D. Leibfried, J. Britton, J. H. Wesenberg, R. B. Blakestad, R. J. Epstein, D. B. Hume, W. M. Itano, J. D. Jost, C. Langer, R. Ozeri, N. Shiga, and D. J. Wineland, Microfabricated surface-electrode ion trap for scalable quantum information processing, Phys. Rev. Lett. **96**, 253003 (2006). K. Nemoto, M. Trupke, S. J. Devitt, A. M. Stephens, B. Scharfenberger, K. Buczak, T. Nöbauer, M. S. Everitt, J. Schmiedmayer, and W. J. Munro, Photonic Architecture for Scalable Quantum Information Processing in Diamond, Phys. Rev. X **4**, 031022 (2014). X. Qiang, X. Zhou, J. Wang, C. M. Wilkes, T. Loke, S. O’Gara, L. Kling, G. D. Marshall, R. Santagati, T. C. Ralph, J. B. Wang, J. L. O’Brien, M. G. Thompson, and J. C. F. Matthews, Large-scale silicon quantum photonics implementing arbitrary two-qubit processing, Nature Photonics **12**, 534 (2018). C. P. Yang, Q. P. Su, S. B. Zheng, and S. Han, Generating entanglement between microwave photons and qubits in multiple cavities coupled by a superconducting qutrit, Phys. Rev. A **87**, 022320 (2013). M. Mariantoni, F. Deppe, A. Marx, R. Gross, F. K. Wilhelm, and E. 
Solano, Two-resonator circuit quantum electrodynamics: A superconducting quantum switch, Phys. Rev. B **78**, 104508 (2008). M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, England, 2001). P. W. Shor, *in Proceedings of the 35th Annual Symposium on Foundations of Computer Science* (IEEE Computer Society Press, Santa Fe, NM, 1994). M. Hillery, V. Buzék, and A. Berthiaume, Quantum secret sharing, Phys. Rev. A **59**, 1829 (1999). S. Bose, V. Vedral, and P. L. Knight, Multiparticle generalization of entanglement swapping, Phys. Rev. A **57**, 822 (1998). R. Cleve, D. Gottesman, and H. K. Lo, How to Share a Quantum Secret, Phys. Rev. Lett. **83**, 648 (1999). C. P. Yang, Shih I. Chu, and S. Han, Efficient many-party controlled teleportation of multiqubit quantum, Phys. Rev. A **70**, 022329 (2004). D. P. DiVincenzo and P. W. Shor, Fault-Tolerant Error Correction with Efficient Quantum Codes, Phys. Rev. Lett. **77**, 3260 (1996). J. Preskill, Reliable quantum computers, Proc. R. Soc. London A **454**, 385 (1998). V. Giovannetti, S. Lloyd, and L. Maccone, Quantum-enhanced measurements: Beating the standard quantum Limit, Science **306**, 1330 (2004). J. J. Bollinger, W. M. Itano, D. J. Wineland, and D. J. Heinzen, Optimal frequency measurements with maximally correlated states, Phys. Rev. A **54**, 4649 (1996). S. F. Huelga, C. Macchiavello, T. Pellizzari, A. K. Ekert, M. B. Plenio, and J. I. Cirac, Improvement of frequency standards with quantum entanglement, Phys. Rev. Lett. **79**, 3865 (1997). T. Monz, P. Schindler, J. T. Barreiro, M. Chwalla, D. Nigg, W. A. Coish, M. Harlander, W. Hansel, M. Hennrich, and R. Blatt, 14-Qubit Entanglement: Creation and Coherence, Phys. Rev. Lett. **106**, 130506 (2011). A. Omran, H. Levine, A. Keesling, G. Semeghini, T. T. Wang, S. Ebadi, H. Bernien, A. S. Zibrov, H. Pichler, S. 
Choi *et al.*, Generation and manipulation of Schrödinger cat states in Rydberg atom arrays, Science **365**, 570 (2019). H. S. Zhong, Y. Li, W. Li, L. C. Peng, Z. E. Su, Y. Hu, Y. M. He, X. Ding, W. J. Zhang, Hao Li, et al., 12-photon entanglement and scalable scattershot boson sampling with optimal entangled-photon pairs from parametric down-conversion, Phys. Rev. Lett. **121**, 250505 (2018). X.-L. Wang, Y.-H. Luo, H.-L. Huang, M.-C. Chen, Z.-E. Su, C. Liu, C. Chen, W. Li, Y.-Q. Fang, X. Jiang, et al., 18-qubit entanglement with six Photons’ three degrees of freedom, Phys. Rev. Lett. **120**, 260502 (2018). C. Song, K. Xu, W. Liu, C.-p. Yang, S.-B. Zheng, H. Deng, Q. Xie, K. Huang, Q. Guo, L. Zhang, et al., 10-qubit entanglement and parallel logic operations with a superconducting circuit, Phys. Rev. Lett. **119**, 180511 (2017). C. Song, K. Xu, H. Li, Y. Zhang, X. Zhang, W. Liu, Q. Guo, Z. Wang, W. Ren, J. Hao, H. Feng, H. Fan, D. Zheng, D. Wang, H. Wang, and S. Zhu, Observation of multi-component atomic Schrödinger cat states of up to 20 qubits, Science **365**, 574 (2019). J. I. Cirac and P. Zoller, Preparation of macroscopic superpositions in many-atom systems, Phys. Rev. A **50**, R2799 (1994). C. C. Gerry, Preparation of multiatom entangled states through dispersive atom–cavity-field interactions, Phys. Rev. A **53**, 2857 (1996). S. B. Zheng, One-Step Synthesis of Multiatom Greenberger-Horne-Zeilinger States, Phys. Rev. Lett. **87**, 230404 (2001). S. B. Zheng, Quantum-information processing and multiatom-entanglement engineering with a thermal cavity, Phys. Rev. A **66**, 060303 (2002). L. M. Duan and H. Kimble, Efficient Engineering of Multiatom Entanglement through Single-Photon Detections, Phys. Rev. Lett. **90**, 253601 (2003). X. Wang, M. Feng, and B. C. Sanders, Multipartite entangled states in coupled quantum dots and cavity QED, Phys. Rev. A **67**, 022302 (2003). S. L. Zhu, Z. D. Wang, and P. 
Zanardi, Geometric quantum computation and multiqubit entanglement with superconducting qubits inside a cavity, Phys. Rev. Lett. **94**, 100502 (2005). W. Feng, P. Wang, X. Ding, L. Xu, and X. Q. Li, Generating and stabilizing the Greenberger-Horne-Zeilinger state in circuit QED: Joint measurement, Zeno effect, and feedback, Phys. Rev. A **83**, 042313 (2011). S. Aldana, Y. D. Wang, and C. Bruder, Greenberger-Horne-Zeilinger generation protocol for N superconducting transmon qubits capacitively coupled to a quantum bus, Phys. Rev. B **84**, 134519 (2011). J. Cho, D. G. Angelakis, and S. Bose, Heralded generation of entanglement with coupled cavities, Phys. Rev. A **78**, 022323 (2008). S. B. Zheng, C. P. Yang, and F. Nori, Arbitrary control of coherent dynamics for distant qubits in a quantum network, Phys. Rev. A **82**, 042327 (2010) C. P. Yang, Q. P. Su, and F. Nori, Entanglement generation and quantum information transfer between spatially-separated qubits in different cavities, New J. Phys. **15**, 115003 (2013). X. L. He, Q. P. Su, F. Y. Zhang, and C. P. Yang, Generating multipartite entangled states of qubits distributed in different cavities, Quantum Inf. Process. **13**, 1381 (2014). S. Liu, R. Yu, J. Li, and Y. Wu, Generation of a multi-qubit W entangled state through spatially separated semiconductor quantum-dot-molecules in cavity-quantum electrodynamics arrays, J. Applied Phys. **115**, 134312 (2014). X. B. Huang, Z. R. Zhong, and Y. H. Chen, Generation of multi-atom entangled states in coupled cavities via transitionless quantum driving, Quantum Inf. Process. **14**, 4475 (2015). C. P. Yang, Q. P. Su, S. B. Zheng, and F. Nori, Entangling superconducting qubits in a multi-cavity system, New J. Phys. **18**, 013025 (2016). X. B. Huang, Y. H. Chen, and Z. Wang, Fast generation of three-qubit Greenberger-Horne-Zeilinger state based on the Lewis-Riesenfeld invariants in coupled cavities, Sci Rep. **6**, 25707 (2016). M. Izadyari, M. Saadati-Niari, R. 
Khadem-Hosseini, and M. Amniat-Talab, Creation of N-atom GHZ state in atom-cavity-fiber system by multi-state adiabatic passage, Opt. Quant. Electron. **48**, 71 (2016). Y. H. Kang, Y. H. Chen, Q. C. Wu, B. H. Huang, J. Song, and Y. Xia, Fast generation of W states of superconducting qubits with multiple Schrödinger dynamics, Sci Rep. **6**, 36737 (2016). X. T. Mo and Z. Y. Xue, Single-step multipartite entangled states generation from coupled circuit cavities, Frontiers of Physics **14**, 31602 (2019). P. J. Leek, S. Filipp, P. Maurer, M. Baur, R. Bianchetti, J. M. Fink, M. Goppl, L. Steffen, and A. Wallraff, Using sideband transitions for two-qubit operations in superconducting circuits, Phys. Rev. B **79**, 180511 (2009). M. Neeley, M. Ansmann, R. C. Bialczak, M. Hofheinz, N. Katz, E. Lucero, A. O’Connell, H. Wang, A. N. Cleland, and J. M. Martinis, Process tomography of quantum memory in a Josephson-phase qubit coupled to a two-level state, Nat. Phys. **4**, 523 (2008). Z. L. Xiang, X. Y. Lu, T. F. Li, J. Q. You, and F. Nori, Hybrid quantum circuit consisting of a superconducting flux qubit coupled to a spin ensemble and a transmission-line resonator, Phys. Rev. B **87**, 144516 (2013). P. Neumann, et al., Excited-state spectroscopy of single NV defects in diamond using optically detected magnetic resonance, New J. Phys. **11**, 013017 (2009). P. Pradhan, M. P. Anantram, and K. L. Wang, Quantum computation by optically coupled steady atoms/quantum-dots inside a quantum electro-dynamic cavity, arXiv:quant-ph/0002006. M. Brune, E. Hagley, J. Dreyer, X. Maitre, A. Maali, C. Wunderlich, J. M. Raimond, and S. Haroche, Observing the progressive decoherence of the Meter in a quantum measurement, Phys. Rev. Lett. **77**, 4887 (1996). M. Sandberg, C. M. Wilson, F. Persson, T. Bauch, G. Johansson, V. Shumeiko, T. Duty, and P. Delsing, Tuning the field in a microwave resonator faster than the photon lifetime, Appl. Phys. Lett. **92**, 203501 (2008). Z. L. Wang, Y. P. 
Zhong, L. J. He, H. Wang, J. M. Martinis, A. N. Cleland, and Q. W. Xie, Quantum state characterization of a fast tunable superconducting resonator, Appl. Phys. Lett. **102**, 163503 (2013). S. B. Zheng and G. C. Guo, Efficient scheme for two-atom entanglement and quantum information processing in cavity QED, Phys. Rev. Lett. **85**, 2392 (2000). A. Sørensen and K. Mølmer, Quantum computation with ions in thermal motion, Phys. Rev. Lett. **82**, 1971 (1999). D. F. V. James and J. Jerke, Effective Hamiltonian theory and its applications in quantum information, Can. J. Phys. **85**, 625 (2007). C. P. Yang, S. I. Chu, and S. Han, Possible realization of entanglement, logical gates, and quantum information transfer with superconducting-quantum-interference-device qubits in cavity QED, Phys. Rev. A **67**, 042311 (2003). J. Q. You and F. Nori, Quantum information processing with superconducting qubits in a microwave field, Phys. Rev. B **68**, 064509 (2003). A. Blais, R. S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation, Phys. Rev. A **69**, 062320 (2004). J. Clarke and F. K. Wilhelm, Superconducting quantum bits, Nature (London) **453**, 1031 (2008). J. Q. You and F. Nori, Atomic physics and quantum optics using superconducting circuits, Nature (London) **474**, 589 (2011). Z. L. Xiang, S. Ashhab, J. Q. You, and F. Nori, Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems, Rev. Mod. Phys. **85**, 623 (2013). X. Gu, A. F. Kockum, A. Miranowicz, Y. X. Liu, and F. Nori, Microwave photonics with superconducting quantum circuits, Phys. Rep. **718–719**, 1 (2017). I. C. Hoi, C. M. Wilson, G. Johansson, T. Palomaki, B. Peropadre, and P. Delsing, Demonstration of a single-photon router in the microwave regime, Phys. Rev. Lett. **107**, 073601 (2011). J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. 
Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, Charge-insensitive qubit design derived from the Cooper pair box, Phys. Rev. A **76**, 042319 (2007). C. P. Yang, Q. P. Su, and S. Han, Generation of Greenberger-Horne-Zeilinger entangled states of photons in multiple cavities via a superconducting qutrit or an atom through resonant interaction, Phys. Rev. A **86**, 022329 (2012). For a transmon qutrit, the $\left\vert 0\right\rangle \leftrightarrow \left\vert 2\right\rangle $ transition is much weaker than those of the $\left\vert 0\right\rangle \leftrightarrow \left\vert 1\right\rangle $ and $\left\vert 1\right\rangle \leftrightarrow \left\vert 2\right\rangle $ transitions. Thus, we have $\gamma _{20}^{-1}\gg \gamma _{10}^{-1},\gamma _{21}^{-1}$. C. Wang, Y. Y. Gao, P. Reinhold, R. W. Heeres, N. Ofek, K. Chou, C. Axline, M. Reagor, J. Blumoff, K. M. Sliwa, L. Frunzio, S. M. Girvin, L. Jiang, M. Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, A Schrödinger cat living in two boxes, Science **352**, 1087 (2016). M. J. Peterer, S. J. Bader, X. Jin, F. Yan, A. Kamal, T. J. Gudmundsen, P. J. Leek, T. P. Orlando, W. D. Oliver, and S. Gustavsson, Coherence and decay of higher energy levels of a superconducting transmon qubit Phys. Rev. Lett. **114**, 010501 (2015). M. Baur, A. Fedorov, L. Steffen, S. Filipp, M. P. da Silva, and A. Wallraff, Benchmarking a quantum teleportation protocol in superconducting circuits using tomography and an entanglement witness, Phys. Rev. Lett. **108**, 040502 (2012). A. Fedorov, L. Steffen, M. Baur, M. P. da Silva, and A. Wallraff, Implementation of a Toffoli gate with superconducting circuits, Nature (London) **481**, 170 (2012). W. Chen, D. A. Bennett, V. Patel, and J. E. Lukens, Substrate and process dependent losses in superconducting thin film resonators, Supercond. Sci. Technol. **21**, 075013 (2008). P. J. Leek, M. Baur, J. M. Fink, R. Bianchetti, L. Steffen, S. Filipp, and A. 
Wallraff, Cavity quantum electrodynamics with separate photon storage and qubit readout modes, Phys. Rev. Lett. **104**, 100504 (2010). **APPENDIX: PREPARATION OF THE GHZ STATE OF THE FOUR TLRs** The ladder-type three levels of each of the coupler qutrits ($q_{1},q_{2},q_{3}$) in Fig. 4 are labeled as $\left\vert g\right\rangle ,$ $\left\vert e\right\rangle ,$ and $\left\vert f\right\rangle $ with energy $E_{g}<E_{e}<E_{f}.$ Initially, $q_{2}$ is in the state $\left( \left\vert e\right\rangle +\left\vert f\right\rangle \right) /\sqrt{2},$ $q_{1}$ and $q_{3}$ are in the ground state $\left\vert g\right\rangle ,$ and each TLR is in a vacuum state. In addition, assume that $q_{1},$ $q_{2}$, and $q_{3}$ are decoupled from their neighboring TLRs. Previously, we have set $\omega _{c_{1}}=\omega _{c_{3}}$ and $\omega _{c_{2}}=\omega _{c_{4}}$ in Fig. 4, i.e., every two neighboring TLRs have different frequencies. The procedure for preparing the GHZ state $\left( \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}\left\vert 1\right\rangle _{c_{4}}+\left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert 0\right\rangle _{c_{4}}\right) /\sqrt{2}$ of the four TLRs is as follows: Step 1: Adjust the level spacings of $q_{2}$ such that TLR $2$ is resonant with the $\left\vert g\right\rangle \leftrightarrow \left\vert e\right\rangle $ transition of $q_{2},$ with a coupling constant $\mu _{1}.$ After an interaction time $\pi /\left( 2\mu _{1}\right) $ (i.e., half a Rabi oscillation), the state $\left\vert e\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}$ changes to $-i\left\vert g\right\rangle _{q_{2}}\left\vert 1\right\rangle _{c_{2}}.$ Hence, the initial state $\frac{1}{\sqrt{2}}\left( \left\vert e\right\rangle _{q_{2}}+\left\vert f\right\rangle _{q_{2}}\right) \left\vert 0\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}$ of the system, 
composed of ($q_{2}$, TLR $2$ and TLR $3$), becomes $$\frac{1}{\sqrt{2}}\left( -i\left\vert g\right\rangle _{q_{2}}\left\vert 1\right\rangle _{c_{2}}+\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}\right) \left\vert 0\right\rangle _{c_{3}}.$$ ([*In the following, the normalization factor $\frac{1}{\sqrt{2}}$ will be omitted for simplicity*]{}). Then, adjust the level spacings of $q_{2}$ such that $q_{2}$ is decoupled from TLR $2.$ Now apply a classical pulse (resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition) to $q_{2}$ to pump the state $\left\vert g\right\rangle $ back to the state $\left\vert e\right\rangle $. Thus, the state (23) changes to $$\left( -i\left\vert e\right\rangle _{q_{2}}\left\vert 1\right\rangle _{c_{2}}+\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}\right) \left\vert 0\right\rangle _{c_{3}}.$$

Step 2: Adjust the level spacings of $q_{2}$ such that TLR $2$ is resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition of $q_{2}$ again. After an interaction time $\pi /\left( 2\sqrt{2}\mu _{1}\right) $, we have the transformation $\left\vert e\right\rangle _{q_{2}}\left\vert 1\right\rangle _{c_{2}}$ $\rightarrow $ $-i\left\vert g\right\rangle _{q_{2}}\left\vert 2\right\rangle _{c_{2}}$ while the state $\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}$ remains unchanged.
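The interaction times quoted here encode the $\sqrt{n+1}$ scaling of the resonant Rabi frequency: for the Jaynes-Cummings doublet $\{|e,n\rangle ,|g,n+1\rangle \}$ the population oscillates at $\sqrt{n+1}\,\mu$, so a complete transfer takes $\pi /(2\sqrt{n+1}\,\mu )$. A minimal numerical check of this timing (the function name and the value of the coupling are our own illustrative choices):

```python
from math import pi, sqrt, cos, sin

def jc_amplitudes(n, mu, t):
    """Resonant Jaynes-Cummings evolution starting from |e, n>:
    amplitude cos(sqrt(n+1)*mu*t) remains on |e, n>,
    amplitude -i*sin(sqrt(n+1)*mu*t) builds up on |g, n+1>."""
    omega = sqrt(n + 1) * mu
    return cos(omega * t), -1j * sin(omega * t)

mu1 = 1.0   # coupling constant, arbitrary units

# Step 1 of the text: |e,0> -> -i|g,1> after t = pi/(2*mu1)
a0, b0 = jc_amplitudes(0, mu1, pi / (2 * mu1))

# Step 2: |e,1> -> -i|g,2> already after the shorter t = pi/(2*sqrt(2)*mu1)
a1, b1 = jc_amplitudes(1, mu1, pi / (2 * sqrt(2) * mu1))

print(abs(a0), b0, abs(a1), b1)   # both transfers are complete, with phase -i
```

The same $\sqrt{n+1}$ factor is what makes the photon-number-selective timings of Steps 4, 5 and 6 possible.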
Hence, the state (24) becomes $$\left(-\left\vert g\right\rangle _{q_{2}}\left\vert 2\right\rangle _{c_{2}}+\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}\right) \left\vert 0\right\rangle _{c_{3}}.$$ Then, adjust the level spacings of $q_{2}$ such that $q_{2}$ is decoupled from TLR $2.$

Step 3: Adjust the level spacings of $q_{2}$ such that TLR $3$ is resonant with the $\left\vert e\right\rangle \leftrightarrow $ $\left\vert f\right\rangle $ transition of $q_{2},$ with a coupling constant $\mu _{2}.$ After an interaction time $\pi /\left( 2\mu _{2}\right) $, the state $\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{3}}$ changes to $-i\left\vert e\right\rangle _{q_{2}}\left\vert 1\right\rangle _{c_{3}}.$ Thus, the state (25) becomes $$\left\vert g\right\rangle _{q_{2}}\left\vert 2\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}+i\left\vert e\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}.$$ Then, adjust the level spacings of $q_{2}$ such that $q_{2}$ is decoupled from TLR $3.$ Now apply a classical pulse (resonant with the $\left\vert e\right\rangle \leftrightarrow \left\vert f\right\rangle $ transition) to $q_{2}$ to pump the state $\left\vert e\right\rangle $ back to the state $\left\vert f\right\rangle $. Thus, the state (26) changes to $$\left\vert g\right\rangle _{q_{2}}\left\vert 2\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}+i\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}.$$

Step 4: Apply a classical pulse (resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition) to $q_{2}$ to pump the state $\left\vert g\right\rangle $ to the state $\left\vert e\right\rangle $.
Thus, the state (27) changes to $$\left\vert e\right\rangle _{q_{2}}\left\vert 2\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}+i\left\vert f\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}.$$ Then, adjust the level spacings of $q_{2}$ such that TLR $3$ is resonant with the $\left\vert e\right\rangle \leftrightarrow $ $\left\vert f\right\rangle $ transition of $q_{2}$ again. After an interaction time $\pi /\left( 2\sqrt{2}\mu _{2}\right) $, one has the transformation $\left\vert f\right\rangle _{q_{2}}\left\vert 1\right\rangle _{c_{3}}$ $\rightarrow $ $-i\left\vert e\right\rangle _{q_{2}}\left\vert 2\right\rangle _{c_{3}}$ while the state $\left\vert e\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{3}}$ remains unchanged. Thus, the state (28) changes to $$\left( \left\vert 2\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}+\left\vert 0\right\rangle _{c_{2}}\left\vert 2\right\rangle _{c_{3}}\right) \left\vert e\right\rangle _{q_{2}}.$$ Then, adjust the level spacings of $q_{2}$ such that $q_{2}$ is decoupled from TLR $3.$

From the description given above, one can see that TLR $2$ is decoupled from $q_{2}$ during the operation of steps (3) and (4). In addition, it is noted that the initial states of TLRs $\{1,4\}$ and coupler qutrits $\{q_{1},q_{3}\}$ in Fig. 4 remain unchanged because they are not involved during each operation of steps $(1)-(4)$ above. Thus, based on Eq.
(29), the state of the whole system after the above 4-step operation is $$\left( \left\vert 2\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}+\left\vert 0\right\rangle _{c_{2}}\left\vert 2\right\rangle _{c_{3}}\right) \left\vert e\right\rangle _{q_{2}}\left\vert g\right\rangle _{q_{1}}\left\vert g\right\rangle _{q_{3}}\left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{4}}.$$ The purpose of the remaining operations, described below, is to transfer one photon from TLR $2$ to TLR $1$ via $q_{1}$ and one photon from TLR $3$ to TLR $4$ via $q_{3}.$

Step 5: Adjust the level spacings of $q_{1}$ such that TLR $2$ is resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition of $q_{1},$ with a coupling constant $\mu _{3}.$ After an interaction time $\pi /\left( 2\sqrt{2}\mu _{3}\right) $, the state $\left\vert g\right\rangle _{q_{1}}\left\vert 2\right\rangle _{c_{2}}$ $\rightarrow $ $-i\left\vert e\right\rangle _{q_{1}}\left\vert 1\right\rangle _{c_{2}}$ while the state $\left\vert g\right\rangle _{q_{1}}\left\vert 0\right\rangle _{c_{2}}$ remains unchanged.
Thus, the state (30) becomes $$\left( -i\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert e\right\rangle _{q_{1}}+\left\vert 0\right\rangle _{c_{2}}\left\vert 2\right\rangle _{c_{3}}\left\vert g\right\rangle _{q_{1}}\right) \left\vert e\right\rangle _{q_{2}}\left\vert g\right\rangle _{q_{3}}\left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{4}}.$$ Then, adjust the level spacings of $q_{1}$ such that TLR $2$ is decoupled from $q_{1}$ but TLR $1$ is resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition of $q_{1},$ with a coupling constant $\mu _{4}.$ After an interaction time $\pi /\left( 2\mu _{4}\right) ,$ we have the transformation $\left\vert e\right\rangle _{q_{1}}\left\vert 0\right\rangle _{c_{1}}$ $\rightarrow $ $-i\left\vert g\right\rangle _{q_{1}}\left\vert 1\right\rangle _{c_{1}}$ while the state $\left\vert g\right\rangle _{q_{1}}\left\vert 0\right\rangle _{c_{1}}$ remains unchanged. Hence, the state (31) changes to $$\left( -\left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}+\left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 2\right\rangle _{c_{3}}\right) \left\vert g\right\rangle _{q_{1}}\left\vert e\right\rangle _{q_{2}}\left\vert g\right\rangle _{q_{3}}\left\vert 0\right\rangle _{c_{4}}.$$ Then, adjust the level spacings of $q_{1}$ such that both TLRs $1$ and $2$ are decoupled from $q_{1}.$

Step 6: Adjust the level spacings of $q_{3}$ such that TLR $3$ is resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition of $q_{3},$ with a coupling constant $\mu _{5}$.
After an interaction time $\pi /\left( 2\sqrt{2}\mu _{5}\right) $, the state $\left\vert g\right\rangle _{q_{3}}\left\vert 2\right\rangle _{c_{3}}$ $\rightarrow $ $-i\left\vert e\right\rangle _{q_{3}}\left\vert 1\right\rangle _{c_{3}}$ while the state $\left\vert g\right\rangle _{q_{3}}\left\vert 0\right\rangle _{c_{3}}$ remains unchanged. Thus, the state (32) becomes $$\left(\left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert g\right\rangle _{q_{3}}+i\left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}\left\vert e\right\rangle _{q_{3}}\right) \left\vert g\right\rangle _{q_{1}}\left\vert e\right\rangle _{q_{2}}\left\vert 0\right\rangle _{c_{4}}.$$ Then, adjust the level spacings of $q_{3}$ such that TLR $3$ is decoupled from $q_{3}$ but TLR $4$ is resonant with the $\left\vert g\right\rangle \leftrightarrow $ $\left\vert e\right\rangle $ transition of $q_{3},$ with a coupling constant $\mu _{6}.$ After an interaction time $\pi /\left( 2\mu _{6}\right) ,$ we have the transformation $\left\vert e\right\rangle _{q_{3}}\left\vert 0\right\rangle _{c_{4}}$ $\rightarrow $ $-i\left\vert g\right\rangle _{q_{3}}\left\vert 1\right\rangle _{c_{4}}$ while the state $\left\vert g\right\rangle _{q_{3}}\left\vert 0\right\rangle _{c_{4}}$ remains unchanged. Therefore, the state (33) becomes $$\left( \left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert 0\right\rangle _{c_{4}}+\left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}\left\vert 1\right\rangle _{c_{4}}\right) \left\vert g\right\rangle _{q_{1}}\left\vert e\right\rangle _{q_{2}}\left\vert g\right\rangle _{q_{3}}.$$ Then, adjust the level spacings of $q_{3}$ such that both TLRs $3$ and $4$ are decoupled from $q_{3}.$ Eq.
(34) shows that the four TLRs are prepared in the GHZ state $\left( \left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}\left\vert 1\right\rangle _{c_{4}}+\left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert 0\right\rangle _{c_{4}}\right) /\sqrt{2},$ while the three coupler qutrits ($q_{1},q_{2},q_{3}$) are disentangled from the four TLRs. Since each step of operation employs the resonant qutrit-cavity or qutrit-pulse interaction, the GHZ state of the four TLRs can be prepared within a short time.
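The whole six-step protocol can be verified by elementary amplitude bookkeeping. In the sketch below (all helper names are our own) each resonant interaction and each classical pulse is idealized as a complete transfer carrying the $-i$ phase quoted in the text; this is legitimate because every interaction time is tuned to the photon number actually present in the corresponding component. Propagating the initial state through Steps 1-6 recovers the GHZ state of Eq. (34):

```python
# Basis kets are tuples (q1, q2, q3, n_c1, n_c2, n_c3, n_c4);
# the state is a dict mapping kets to complex amplitudes.
from math import sqrt

def evolve(state, rule):
    out = {}
    for ket, amp in state.items():
        new_ket, phase = rule(ket)
        out[new_ket] = out.get(new_ket, 0) + amp * phase
    return out

def emit(qi, ci, lo, hi):
    """Complete resonant transfer |hi>|n>_c -> -i|lo>|n+1>_c."""
    def rule(ket):
        k = list(ket)
        if k[qi] == hi:
            k[qi], k[ci] = lo, k[ci] + 1
            return tuple(k), -1j
        return ket, 1
    return rule

def absorb(qi, ci, lo, hi):
    """Complete resonant transfer |lo>|n>_c -> -i|hi>|n-1>_c (n > 0)."""
    def rule(ket):
        k = list(ket)
        if k[qi] == lo and k[ci] > 0:
            k[qi], k[ci] = hi, k[ci] - 1
            return tuple(k), -1j
        return ket, 1
    return rule

def pulse(qi, a, b):
    """Classical pulse pumping level |a> to |b> (phase-free, as in the text)."""
    def rule(ket):
        k = list(ket)
        if k[qi] == a:
            k[qi] = b
            return tuple(k), 1
        return ket, 1
    return rule

Q1, Q2, Q3, C1, C2, C3, C4 = range(7)
state = {('g', 'e', 'g', 0, 0, 0, 0): 1 / sqrt(2),   # q2 in (|e>+|f>)/sqrt(2)
         ('g', 'f', 'g', 0, 0, 0, 0): 1 / sqrt(2)}

steps = [
    emit(Q2, C2, 'g', 'e'),    # Step 1: |e>|0>_c2 -> -i|g>|1>_c2
    pulse(Q2, 'g', 'e'),       #          pump |g> back to |e>
    emit(Q2, C2, 'g', 'e'),    # Step 2: |e>|1>_c2 -> -i|g>|2>_c2
    emit(Q2, C3, 'e', 'f'),    # Step 3: |f>|0>_c3 -> -i|e>|1>_c3
    pulse(Q2, 'e', 'f'),       #          pump |e> back to |f>
    pulse(Q2, 'g', 'e'),       # Step 4: pump |g> to |e>
    emit(Q2, C3, 'e', 'f'),    #          |f>|1>_c3 -> -i|e>|2>_c3
    absorb(Q1, C2, 'g', 'e'),  # Step 5: |g>|2>_c2 -> -i|e>|1>_c2
    emit(Q1, C1, 'g', 'e'),    #          |e>|0>_c1 -> -i|g>|1>_c1
    absorb(Q3, C3, 'g', 'e'),  # Step 6: |g>|2>_c3 -> -i|e>|1>_c3
    emit(Q3, C4, 'g', 'e'),    #          |e>|0>_c4 -> -i|g>|1>_c4
]
for rule in steps:
    state = evolve(state, rule)

print(state)   # two kets, each with amplitude 1/sqrt(2): the GHZ state of Eq. (34)
```

The final dictionary contains exactly the two components $\left\vert 1\right\rangle _{c_{1}}\left\vert 1\right\rangle _{c_{2}}\left\vert 0\right\rangle _{c_{3}}\left\vert 0\right\rangle _{c_{4}}$ and $\left\vert 0\right\rangle _{c_{1}}\left\vert 0\right\rangle _{c_{2}}\left\vert 1\right\rangle _{c_{3}}\left\vert 1\right\rangle _{c_{4}}$, with the qutrits in the product state $\left\vert g\right\rangle _{q_{1}}\left\vert e\right\rangle _{q_{2}}\left\vert g\right\rangle _{q_{3}}$.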
---
abstract: 'The paper comments on properties of the so-called “Unified approach to the construction of classical confidence intervals”, in which confidence intervals are computed in a Neyman construction using the likelihood ratio as ordering quantity. In particular, two of the main results of a paper by Feldman and Cousins (F&C) are discussed. It is shown that in the case of central intervals the so-called flip-flopping problem, occurring in the specific scenario where the experimenter decides to quote a standard upper limit or a confidence interval depending on the measurement, is due to an expectation which is not justified. The problem can be easily avoided by choosing appropriate confidence levels for the standard upper limits and confidence intervals. In the F&C paper “upper limit” is defined as the upper edge of a confidence interval, whose lower edge coincides with the physical limit. With this definition of upper limit (F&C limit), in an approach which uses the likelihood ratio as ordering quantity, two-sided confidence intervals automatically change over to “upper limits” as the signal becomes weaker (Unified approach). In the present paper it is pointed out that this behaviour is not a special property of this approach, because approaches with other ordering principles, like central intervals, symmetric intervals or highest-probability intervals, exhibit the same behaviour. The term “Unified approach” is therefore equally well justified for these approaches. The Unified approach is presented in the F&C paper as a solution to the flip-flopping problem. This might suggest that the F&C limit is a standard upper limit. Because the F&C limit can be easily misunderstood as standard upper limit, its coverage properties are investigated.
It is shown that the coverage of the F&C limit, if it is interpreted as standard upper limit, depends strongly on the parameter $\mu$ to be determined, with values around $(1+\alpha)/2$ or larger, where $\alpha$ is the confidence level of the confidence belt. Differences between the F&C limit and a standard upper limit were already pointed out in the F&C paper. In order to exclude any misunderstanding, it is proposed in the present paper to call the F&C limit “upper edge of the confidence interval”, even if its lower edge coincides with the physical limit.'
address: 'Max-Planck-Institut für Physik, D-80805 München, Germany.'
author:
- 'Wolfgang Wittek, Hendrik Bartko, Nicola Galante and Thomas Schweizer'
title: Comments on the Unified approach to the construction of Classical confidence intervals
---

[*Address for correspondence:*]{} Wolfgang Wittek, Max-Planck-Institut für Physik,\
D-80805 München, Germany. E-mail: wolfgang.wittek@gmx.net

Introduction
============

In 1998 the paper “Unified approach to the classical statistical analysis of small signals” \[@FaC1998\] appeared, in which the classical approach to the construction of confidence intervals is discussed in detail. The paper has received great attention and has stimulated the discussion about the calculation of confidence intervals. Two of the main results of the paper can be stated as follows:

- By using the likelihood ratio as ordering quantity one obtains confidence intervals which automatically change over from two-sided intervals to upper limits as the signal becomes statistically less significant (“Unified approach”).

- This eliminates undercoverage caused by basing this choice (of quoting an upper limit or a confidence interval) on the data (“flip-flopping”).

These attractive features have induced many experimenters to follow the approach proposed by Feldman and Cousins.
The purpose of the present paper is to point out possible misunderstandings of the above statements and to give alternative solutions to the flip-flopping problem. Another result of the paper \[@FaC1998\] is that an approach which uses the likelihood ratio as ordering quantity yields finite and physical intervals, in cases where central intervals are empty or unphysical. This subject is not discussed here.

Neyman construction of confidence intervals and coverage {#Section:Neyman}
========================================================

To be specific, the same example is considered as in \[@FaC1998\], a Poisson process with background, where the mean background is known: $$\begin{aligned}
P(n|\mu )\;=\;\dfrac{(\mu +b)^n}{n!}\times \exp[-(\mu +b)]
\label{eq:Pnmu}\end{aligned}$$

------------- --------------------------------------------------------------------------------
$n$           is the measured number of signal plus background events
$\mu$         is the average number of signal events
$b$           is the average number of background events, $b$ is assumed to be exactly known
$P(n|\mu )$   is the probability of measuring $n$, given $\mu$ and $b$
------------- --------------------------------------------------------------------------------

\
For the discussion in this Section and for later comparisons with the results from \[@FaC1998\], the likelihood ratio $$\begin{aligned}
R(n,\mu)\;=\;\dfrac{P(n|\mu )}{P(n|\mu_{best})}
\label{eq:likratio}\end{aligned}$$ is assumed as ordering principle, when constructing the acceptance region in $n$ for an individual $\mu$. $\mu_{best}$ is the value of $\mu$ which maximizes $P(n|\mu )$, at fixed $n$, where only physically allowed values of $\mu$ are considered. For a given $\mu$, values of $n$ are added to the [**acceptance region**]{} $[n_1(\mu),\;n_2(\mu)]$ in decreasing order of the ratio $R(n,\mu)$, until the sum $p\;=\;\sum_n P(n|\mu)$ agrees with the desired confidence level $\alpha$.
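This construction is easy to sketch for the discrete Poisson case; the continuous Gamma-function variant introduced below only smooths the resulting numbers. All names in the following sketch are our own:

```python
from math import exp, factorial

def pois(n, lam):
    """Discrete Poisson probability P(n|lam)."""
    return lam ** n / factorial(n) * exp(-lam)

def lr_acceptance(mu, b=3.0, alpha=0.90, n_max=100):
    """Acceptance region [n1, n2] for fixed mu: values of n are added in
    decreasing order of R(n, mu) = P(n|mu) / P(n|mu_best) until the summed
    probability reaches the confidence level alpha."""
    def ratio(n):
        mu_best = max(n - b, 0.0)          # only physical values mu >= 0
        return pois(n, mu + b) / pois(n, mu_best + b)
    accepted, p = [], 0.0
    for n in sorted(range(n_max), key=ratio, reverse=True):
        accepted.append(n)
        p += pois(n, mu + b)
        if p >= alpha:
            break
    return min(accepted), max(accepted), p

n1, n2, p = lr_acceptance(0.5)
print(n1, n2, p)   # a contiguous band in n holding at least 90% probability
```

The region comes out contiguous, as required for a confidence-belt construction, and for $\mu =0.5$, $b=3$ it agrees with the smoothed edges $n_1=0.00$, $n_2=6.01$ tabulated later in the paper.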
Such an ordering principle is naturally implied by the theory of likelihood ratio tests. For a given measurement $n$, the [**confidence interval**]{} $I\;=\; [\mu_1,\;\mu_2]$ for $\mu$ is found by including in $I$ all those $\mu$ which contain $n$ in their acceptance region. The limits $\mu_1$ and $\mu_2$ obviously depend on $n$. The set of confidence intervals $[\mu_1(n),\;\mu_2(n)]$ is called [**confidence belt**]{}. Upper limits of $\mu$ are determined in the same way as confidence intervals, except that the ordering quantity is now $R(n,\mu)\;=\;n$. The resulting “acceptance regions” are one-sided in this case: $[n_{low}(\mu), \infty]$. The upper limit $\mu_{up}(n)$ of $\mu$ at fixed $n$ is given by that $\mu$, for which the lower edge $n_{low}(\mu)$ of the acceptance region coincides with $n$. In general, upper limits of $\mu$ at a certain confidence level differ from the upper ends of confidence intervals at the same confidence level (see below).

Considering an ensemble of experiments with arbitrary but fixed $\mu$, this fixed value of $\mu$ will be contained in the confidence belt in a fraction $\alpha$ of all experiments. This follows from the way the confidence intervals are constructed: Given $n$, the confidence interval is the set of all those $\mu$ which contain $n$ in their acceptance region. Confidence intervals with this property are said to have exact coverage. Coverage is an important feature of Neyman-constructed confidence intervals \[@Neyman1937\]. For upper limits the corresponding statement reads: Given an ensemble of experiments with arbitrary but fixed $\mu$, this fixed value of $\mu$ will be less than $\mu_{up}(n)$ in a fraction $\alpha$ of all experiments. In the following, this definition of upper limit will be called “standard definition”, and the corresponding upper limit “[**standard upper limit**]{}”.
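For the Poisson example the standard upper limit follows from a one-dimensional inversion: $\mu_{up}(n)$ is the value of $\mu$ at which the probability of observing $n$ or fewer events has dropped to $1-\alpha$. A sketch using the discrete Poisson distribution (so the values differ somewhat from the smoothed ones tabulated below; the names are our own):

```python
from math import exp, factorial

def pois_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam)."""
    return sum(lam ** k / factorial(k) * exp(-lam) for k in range(n + 1))

def std_upper_limit(n, b=3.0, alpha=0.90, mu_hi=100.0):
    """Standard upper limit: the mu with P(X <= n | mu + b) = 1 - alpha,
    found by bisection and clipped to the physical region mu >= 0."""
    if pois_cdf(n, b) <= 1 - alpha:
        return 0.0                      # already excluded at mu = 0
    lo, hi = 0.0, mu_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pois_cdf(n, mid + b) > 1 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (0, 2, 5, 10):
    print(n, std_upper_limit(n))        # monotonically increasing in n
```

Note the clipping at $\mu =0$ for small $n$, which is the behaviour visible in the first rows of the later table of $\mu_{90\%}(n)$.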
It is evident that for both the confidence belt and for the standard upper limit the specification of the confidence level is essential. For the confidence belt, the ordering principle is relevant in addition. The ordering principle for the standard upper limit is fixed. In the present example small under- or overcoverage occurs due to the discreteness of $n$. In order to avoid this problem, in the calculations presented below the factorial in the discrete Poisson function $P(n|\mu )$ is replaced by Euler’s Gamma function, which is then normalized to 1. Euler’s Gamma function, which is defined for all real values of $n$, agrees with the discrete Poisson function at all integer arguments $n$. The normalization factor differs from 1 by less than 0.1% for $\mu\geq 3$, the maximum difference occurring at $\mu = 0$, where it is less than 2%. It should be stressed that it is not proposed to do this replacement in practical applications. It is applied here only for the purpose of this paper, to allow a discussion which is not affected by under- or overcoverage due to the discreteness of $n$.

In the general Neyman construction the ordering principle is not specified. In the present paper, besides the ordering principle based on the likelihood ratio (\[eq:likratio\]), the ordering principle based on central intervals is also considered.

Discussion of Neyman-constructed confidence intervals and standard upper limits {#section:neyman}
===============================================================================

In the previous Section the construction of confidence intervals and of standard upper limits is fully defined. In order to understand that this procedure of defining confidence intervals and standard upper limits ensures coverage, it is instructive to consider how coverage would be tested by Monte Carlo simulations: One would generate an ensemble of experiments with fixed $\mu_0$, throwing $n$ according to eq.(\[eq:Pnmu\]).
An acceptance region $[n_1(\mu_0),\;n_2(\mu_0)]$ would be defined as explained in the previous Section. Although the confidence belt is not yet completely defined by knowing the acceptance region for a single $\mu_0$, one can already say that the confidence belt would contain $\mu_0$ in a fraction $\alpha$ of all cases, because for those $n$ which lie in the acceptance region of $\mu_0$ (which happens in a fraction $\alpha$ of all cases) $\mu_0$ would be added to the confidence interval $[\mu_1(n),\;\mu_2(n)]$. These statements are valid for all physically allowed values of $\mu$. One can conclude that coverage is confirmed. Thus, the way these confidence intervals and standard upper limits are defined ensures exact coverage (disregarding effects which occur when the problem involves discrete values). Obviously, the construction is independent of the decision, whether to quote a standard upper limit, a confidence interval or both. Therefore, also coverage is guaranteed independent of such a decision. The experimenter may thus decide to quote a standard upper limit, a confidence interval or both, at equal or different confidence levels, without violating coverage. Here, “coverage” is understood as explained in Section \[Section:Neyman\]. Coverage in a very specific scenario is discussed in Section \[section:flip-flopping\]. 
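Such a Monte Carlo test is straightforward to carry out. The sketch below checks the coverage of the standard upper limit for one fixed $\mu_0$, using the discrete Poisson distribution, so a slight overcoverage is expected; all names and the chosen $\mu_0$ are illustrative:

```python
import random
from functools import lru_cache
from math import exp, factorial

def pois_cdf(n, lam):
    return sum(lam ** k / factorial(k) * exp(-lam) for k in range(n + 1))

@lru_cache(maxsize=None)
def std_upper_limit(n, b=3.0, alpha=0.90):
    """Smallest mu with P(X <= n | mu + b) <= 1 - alpha, by bisection."""
    if pois_cdf(n, b) <= 1 - alpha:
        return 0.0
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pois_cdf(n, mid + b) > 1 - alpha else (lo, mid)
    return lo

def sample_poisson(lam, rng):
    """Draw n ~ Poisson(lam) by inverting the CDF."""
    u, k, p, s = rng.random(), 0, exp(-lam), exp(-lam)
    while s < u:
        k += 1
        p *= lam / k
        s += p
    return k

rng = random.Random(12345)
mu0, b, trials = 4.0, 3.0, 20000
covered = sum(mu0 < std_upper_limit(sample_poisson(mu0 + b, rng), b)
              for _ in range(trials))
print(covered / trials)   # slightly above alpha = 0.90 (discreteness gives overcoverage)
```

Repeating the exercise for other values of $\mu_0$ confirms the statement of this Section: the coverage never falls below $\alpha$, independently of how the experimenter later decides to report the result.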
  $\mu$    $p_1$    $p$    $p_2$
 ------- ------- ------ -------
   0.0    0.000    0.9   0.100
   0.5    0.000    0.9   0.100
   1.0    0.016    0.9   0.084
   1.5    0.035    0.9   0.065
   2.0    0.046    0.9   0.054
   2.5    0.053    0.9   0.047
   3.0    0.056    0.9   0.044
   3.5    0.057    0.9   0.043
   4.0    0.057    0.9   0.042
   4.5    0.057    0.9   0.043
   5.0    0.056    0.9   0.044
   5.5    0.056    0.9   0.044
   6.0    0.056    0.9   0.044
   6.5    0.056    0.9   0.044
   7.0    0.056    0.9   0.044
   7.5    0.056    0.9   0.044

  : \[tab:prob\]Integrated probabilities $p_1$, $p$ and $p_2$ below, in, and above the acceptance regions, respectively, as functions of $\mu$. The confidence level chosen is $\alpha$ = 90%.

  $\mu$   $n_1$   $n_2$   $n_{90\%}(\mu)$   $n_{95\%}(\mu)$
 ------- ------- ------- ----------------- -----------------
   0.0    0.00    5.34         1.00              0.62
   0.5    0.00    6.01         1.28              0.84
   1.0    0.53    6.91         1.59              1.09
   1.5    1.13    7.92         1.92              1.36
   2.0    1.59    8.85         2.26              1.65
   2.5    2.01    9.70         2.61              1.96
   3.0    2.39   10.48         2.98              2.28
   3.5    2.75   11.20         3.35              2.61
   4.0    3.09   11.87         3.72              2.95
   4.5    3.43   12.51         4.10              3.29
   5.0    3.78   13.15         4.49              3.65
   5.5    4.14   13.79         4.87              4.00
   6.0    4.50   14.43         5.26              4.36
   6.5    4.87   15.06         5.66              4.73
   7.0    5.23   15.69         6.05              5.09
   7.5    5.61   16.32         6.45              5.47

  : \[tab:limits\]Lower and upper edge ($n_1$ and $n_2$) of the 90% c.l. acceptance region of $n$, 90% c.l. lower limit and 95% c.l. lower limit of $n$, as functions of $\mu$.
   $n$    $\mu_1$   $\mu_2$   $\mu_{90\%}(n)$   $\mu_{95\%}(n)$
 ------- --------- --------- ----------------- -----------------
   0.0      0.00      0.61         0.00              0.00
   0.5      0.00      0.98         0.00              0.00
   1.0      0.00      1.38         0.02              0.86
   1.5      0.00      1.90         0.87              1.75
   2.0      0.00      2.50         1.64              2.58
   2.5      0.00      3.16         2.36              3.35
   3.0      0.00      3.87         3.04              4.08
   3.5      0.00      4.59         3.71              4.81
   4.0      0.00      5.30         4.39              5.50
   4.5      0.00      6.01         5.04              6.21
   5.0      0.00      6.69         5.67              6.89
   5.5      0.11      7.36         6.31              7.56
   6.0      0.49      8.04         6.94              8.21
   6.5      0.81      8.69         7.56              8.89
   7.0      1.04      9.34         8.19              9.54
   7.5      1.29     10.01         8.81             10.19
   8.0      1.54     10.64         9.41             10.84
   8.5      1.81     11.29        10.03             11.47
   9.0      2.08     11.93        10.64             12.11
   9.5      2.37     12.56        11.24             12.74

  : \[tab:limitsofmu\]Lower and upper edge ($\mu_1$ and $\mu_2$) of the 90% c.l. confidence interval of $\mu$, 90% c.l. standard upper limit and 95% c.l. standard upper limit of $\mu$, as functions of $n$.

It should also be noted that coverage has to be obeyed at fixed $\mu_0$, and for coverage it is irrelevant how far $\mu_0$ is away from the edges $\mu_1(n)$ and $\mu_2(n)$ or whether $\mu_1(n)$ is given by the lowest physically allowed value of $\mu$. This is characteristic of a Frequentist approach. The discussion in Section \[section:unifiedapproach\] will refer to this feature.

![\[fig:FaC\]90% c.l. confidence belt (full circles), 90% c.l. standard upper limit (stars) and 95% c.l. standard upper limit (open circles) for the average number of signal events $\mu$ in the presence of a Poisson background with known mean $b=3.0$.](FaC2uplneu){width="48.00000%"}

Assuming a confidence level of $\alpha$ = 90%, setting $b$ equal to 3 and using the likelihood ratio as ordering quantity, one obtains for the present example the confidence belt shown in Fig.\[fig:FaC\] (full circles). Also shown in Fig.\[fig:FaC\] are the 90% c.l. standard upper limit (stars) and the 95% c.l. standard upper limit (open circles) of $\mu$. As can be seen, the 90% c.l.
standard upper limit and the 90% c.l. confidence belt in Fig.\[fig:FaC\] are perfectly compatible with the corresponding limit and confidence belt respectively in Figs. 5 and 7 of \[@FaC1998\]. Replacing the discrete Poisson distribution by the renormalized Euler’s function is therefore a valid procedure, for the purpose of this paper. Table \[tab:prob\] gives for each $\mu$ the integrated probabilities below, in, and above the acceptance region of $n$: $p_1=\int_{n<n_1}P(n|\mu )\,{\rm d}n$, $p=\int_{n_1\leq n \leq n_2}P(n|\mu )\,{\rm d}n$, $p_2=\int_{n>n_2}P(n|\mu )\,{\rm d}n$. By construction, $p$ is always equal to $\alpha$ = 90%. As a result of the ordering principle chosen, $p_1$ is generally different from $p_2$, $p_1$ being less than $p_2$ for $\mu\;<\;2.4$ and greater than $p_2$ for $\mu\;>\;2.4$. In Table \[tab:limits\] the lower and upper edge ($n_1$ and $n_2$) of the 90% c.l. acceptance region of $n$, the 90% c.l. lower limit $n_{90\%}(\mu)$ of $n$ and the 95% c.l. lower limit $n_{95\%}(\mu)$ of $n$ are given as functions of $\mu$. The inverse functions of $n_1(\mu)$, $n_2(\mu)$, $n_{90\%}(\mu)$ and $n_{95\%}(\mu)$ are the functions $\mu_2(n)$, $\mu_1(n)$, $\mu_{90\%}(n)$ and $\mu_{95\%}(n)$ respectively, which define the confidence intervals and standard upper limits of $\mu$ as functions of $n$. They are listed in Table \[tab:limitsofmu\]. By definition, for a fixed $\mu_0$, $n$ lies within the acceptance region $[n_1(\mu_0),\;n_2(\mu_0)]$ if and only if $\mu_0$ lies in the confidence belt $[\mu_1(n),\;\mu_2(n)]$. This means that the probability that $[\mu_1(n),\;\mu_2(n)]$ covers the value $\mu_0$ is equal to $p$ = $\alpha$. Also by construction, $n>n_1(\mu )$ if and only if $\mu < \mu_2(n)$, implying that the probability for $\mu < \mu_2(n)$ is equal to $p+p_2$. Thus $[\mu_1(n),\;\mu_2(n)]$ is a confidence interval at the $\alpha$-confidence level.
If $\mu_2(n)$ is interpreted as a standard upper limit, its coverage $(p+p_2)$ depends strongly on $\mu$, making this interpretation problematic. The situation is different for central intervals, for which $p_1= (1-\alpha )/2$, $p$ = $\alpha$ and $p_2=(1-\alpha )/2$. In the case of central intervals, $\mu_2$ is a standard upper limit at the confidence level $p+p_2\;=\;(1+\alpha)/2$, at all $\mu$.

The flip-flopping problem {#section:flip-flopping}
=========================

In this Section a specific scenario is considered, in which the experimenter decides to quote a standard upper limit at the c.l. $\beta$ if $n$ is below and a confidence interval at the c.l. $\alpha$ if $n$ is above a certain value $n_0$. It should be noted that this very specific scenario is different from the scenario which is assumed when constructing confidence intervals or standard upper limits (see Section \[Section:Neyman\]). In the latter scenario one talks independently about coverage for a Neyman-constructed confidence interval or coverage for a Neyman-constructed standard upper limit, and there is no condition involving the measurement $n$. Therefore, the coverages for the two scenarios are not expected to be identical. In addition, the coverage $\gamma$ for the specific scenario depends on the choice of $\alpha$ and $\beta$. For central intervals all coverages of interest can be directly given, without the need of complicated calculations. In \[@FaC1998\] the special case $\alpha = \beta = 90$ % is discussed for an approach with central intervals. One obtains a coverage of $\gamma = \beta -p_2=\alpha -p_2= (3\alpha -1)/2=85$ %, at low $\mu$. The fact that $\gamma\neq \alpha$ is called in \[@FaC1998\] violation of coverage, or flip-flopping problem. Talking of “violation of coverage” means that a coverage of $\gamma = \alpha$ is expected. As pointed out above, there is no justification for that. The actual problem is that $\gamma$ is different for different regions of $\mu$.
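The coverage loss in this specific scenario can be exhibited directly by simulation. The sketch below implements the flip-flopping experimenter for the Poisson example with central intervals (discrete distribution, so the exact $\gamma =85$ % of the smoothed case is only approximately reproduced; the values of $\mu_0$ and of the switch point $n_0$, like all names, are illustrative choices of ours):

```python
import random
from functools import lru_cache
from math import exp, factorial

ALPHA, B = 0.90, 3.0

def pois_cdf(n, lam):
    return sum(lam ** k / factorial(k) * exp(-lam) for k in range(n + 1))

def pois_sf(n, lam):
    """P(X >= n)."""
    return 1.0 - pois_cdf(n - 1, lam) if n > 0 else 1.0

def solve(f, target):
    """Bisection for the mu >= 0 with f(mu) = target (f monotone); 0 if none."""
    lo, hi = 0.0, 200.0
    if (f(lo) - target) * (f(hi) - target) > 0:
        return 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (f(lo) - target) * (f(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return lo

@lru_cache(maxsize=None)
def quoted_interval(n, n0=4):
    """Flip-flopping experimenter: 90% c.l. standard upper limit for n < n0,
    90% c.l. central confidence interval otherwise."""
    if n < n0:
        return 0.0, solve(lambda m: pois_cdf(n, m + B), 1 - ALPHA)
    mu1 = solve(lambda m: pois_sf(n, m + B), (1 - ALPHA) / 2)
    mu2 = solve(lambda m: pois_cdf(n, m + B), (1 - ALPHA) / 2)
    return mu1, mu2

def sample_poisson(lam, rng):
    u, k, p, s = rng.random(), 0, exp(-lam), exp(-lam)
    while s < u:
        k += 1
        p *= lam / k
        s += p
    return k

rng = random.Random(7)
mu0, trials = 0.9, 40000
hits = 0
for _ in range(trials):
    mu1, mu2 = quoted_interval(sample_poisson(mu0 + B, rng))
    hits += mu1 <= mu0 <= mu2
print(hits / trials)   # noticeably below alpha = 0.90, close to (3*alpha - 1)/2
```

The quoted set misses $\mu_0$ both for very small $n$ (where the 90% upper limit lies below $\mu_0$, although the 95% upper edge of the central belt would not) and for large $n$ (below the central lower edge), which is exactly the mechanism described above.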
If $\gamma$ were the same for all $\mu$, one could quote a unique confidence level ($=\gamma$) for the specific scenario and there would be no problem. The behaviour $\gamma\neq \alpha$ at low $\mu$ is due to the fact that with the chosen alternative between a standard upper limit (at a c.l. $\beta$=90 %) and a confidence interval (at the c.l. $\alpha$=90%) the upper edge $\mu_2(n)$ of the confidence belt, in the region $n<n_0$, is shifted. Knowing this, it is easy to find a way of avoiding the flip-flopping problem: One has to define alternatives which preserve the upper edge $\mu_2(n)$, at all $\mu$. In the case of central intervals the alternatives are a $(1+\alpha)/2=$ 95% c.l. standard upper limit and an $\alpha=$ 90% c.l. confidence interval. With this choice the upper edge $\mu_2(n)$ of the confidence belt is not changed, because for central intervals $\mu_2(n)$, if interpreted as standard upper limit, has a c.l. of 95%. Since the upper edge of the confidence belt is preserved, coverage is fulfilled. Thus the flip-flopping problem can be easily avoided by choosing appropriate confidence levels for the standard upper limit and the confidence intervals.

The approach by Feldman & Cousins {#section:unifiedapproach}
=================================

In the approach described in \[@FaC1998\] a confidence belt is determined according to a Neyman construction, where the likelihood ratio is used as ordering quantity. Coverage is therefore guaranteed for the confidence level $\alpha$ chosen: If an experiment with fixed $\mu_0$ were repeated many times one would obtain a set of confidence intervals $[\mu_1(n), \mu_2(n)]$, and the relation $\mu_1(n)<\mu_0 < \mu_2(n)$ would be correct in a fraction $\alpha$ of all cases.
Because for small $n$ the lower edge $\mu_1(n)$ coincides with the physical limit of $\mu$, the set of relations (which define the confidence belt) can also be written as $$\begin{aligned}
\qquad\qquad \mu_0\;<\;\mu_2(n)\qquad\qquad {\rm for}\;\; n\leq n_0 \qquad\qquad {\rm (a)} \nonumber\\
\label{eq:FCuplim} \\
\qquad\qquad \mu_1(n)\;<\;\mu_0\;<\;\mu_2(n)\qquad\qquad {\rm for}\;\; n>n_0 \qquad\qquad {\rm (b)} \nonumber\end{aligned}$$ where $n_0$ is the largest $n$ for which $\mu_1(n)$ coincides with the physical limit. For $n\leq n_0$, $\mu_2(n)$ is called “upper limit” in \[@FaC1998\], and it will be called F&C limit in the following: the F&C limit is defined as the upper edge of a confidence interval, whose lower edge coincides with the physical limit of $\mu$. Again, out of all relations (\[eq:FCuplim\]) a fraction $\alpha$ of them would be satisfied, and coverage would be fulfilled. However, one should not call the F&C limit an “$\alpha$ c.l. upper limit”, because the relation $\mu_0\;<\;\mu_2(n)$ is not satisfied in a fraction $\alpha$ of all cases, and, moreover, this fraction depends on $\mu_0$. In \[@FaC1998\] the presence of two kinds of intervals ((a) and (b)) is expressed as “This choice (of using the likelihood ratio as ordering quantity) yields intervals which automatically change over from upper limits to two-sided intervals as the signal becomes more statistically significant (Unified approach)”. This is correct when the term “upper limit” is understood as the F&C limit. One should note, however, that this is not a special feature of an approach which uses the likelihood ratio as ordering quantity. Approaches with other ordering principles, like central intervals, symmetric intervals or highest-probability intervals, exhibit the same behaviour. The term “Unified approach” is therefore equally well justified for these approaches.
The approach in \[@FaC1998\] is also presented as a solution to the flip-flopping problem (see Section \[section:flip-flopping\]). This could be understood such that the approach in \[@FaC1998\] provides an alternative between a standard upper limit and a confidence interval. However, this approach rather provides an alternative between the F&C limit and a confidence interval. The F&C limit can therefore be easily misunderstood as a standard upper limit. As noted in Section \[section:neyman\], in a Frequentist approach the value of the physical limit of $\mu$ is irrelevant, except that the acceptance regions for $n$ can only be determined for physically allowed values of $\mu$. The definition of “upper limit” as the upper edge of a confidence interval whose lower edge coincides with the physical limit, is therefore not a Frequentist type of definition and may lead to confusion. In order to exclude any misunderstanding it is recommended in the present paper to call the F&C limit “upper edge of the confidence belt”, even if the lower edge coincides with the physical limit of $\mu$. Differences between the F&C limit and the standard upper limit were also pointed out by the authors of \[@FaC1998\].

From this discussion it follows that the coverage properties of the F&C limit, if interpreted as a standard upper limit, are of great interest. From Table \[tab:prob\] one can see that (for $\alpha=90$%) this coverage $(p+p_2)$ varies between 94.4% and 100%. Compared to the confidence level $\alpha$ of the confidence belt, the F&C limit $\mu_2$, if interpreted as standard upper limit, is very conservative, with a coverage around $(1+\alpha)/2$=95% or an overcoverage around $(1+\alpha )/2-\alpha =(1-\alpha )/2$=5%. In the case of central intervals the upper edge of the confidence interval has the nice property that it is a standard upper limit with a well-defined confidence level, namely $(1+\alpha)/2$, at all $\mu$.
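The strong $\mu$ dependence of this coverage is easy to exhibit numerically: rebuilding the likelihood-ratio acceptance region for the discrete Poisson case and evaluating $p+p_2=1-p_1$ gives the coverage of $\mu_2$ when it is read as a standard upper limit (the names below are our own; the discrete values differ slightly from the smoothed ones in Table \[tab:prob\]):

```python
from math import exp, factorial

def pois(n, lam):
    return lam ** n / factorial(n) * exp(-lam)

def lr_acceptance(mu, b=3.0, alpha=0.90, n_max=100):
    """[n1, n2]: likelihood-ratio-ordered acceptance region for fixed mu."""
    def ratio(n):
        return pois(n, mu + b) / pois(n, max(n - b, 0.0) + b)
    acc, p = [], 0.0
    for n in sorted(range(n_max), key=ratio, reverse=True):
        acc.append(n)
        p += pois(n, mu + b)
        if p >= alpha:
            break
    return min(acc), max(acc)

def upper_edge_coverage(mu, b=3.0):
    """p + p2 = 1 - p1: probability that mu lies below the upper edge mu2,
    i.e. the coverage of the F&C limit read as a standard upper limit."""
    n1, _ = lr_acceptance(mu, b)
    return 1.0 - sum(pois(n, mu + b) for n in range(n1))

for mu in (0.0, 1.0, 2.5, 5.0):
    print(mu, round(upper_edge_coverage(mu), 3))   # 100% at mu = 0, smaller later
```

At $\mu =0$ the lower acceptance edge sits at $n_1=0$, so the coverage is exactly 100%; for larger $\mu$ it drops toward $(1+\alpha )/2$, which is the variation discussed in the text.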
Coming back to the flip-flopping problem: In the approach by F&C this problem is avoided, because introducing the alternative between an F&C limit and a confidence interval does not change the original confidence belt, for which coverage is fulfilled. This is analogous to the approach with central intervals, where the original confidence belt is preserved by choosing the alternative between a $(1+\alpha)/2$ c.l. standard upper limit and an $\alpha$ c.l. confidence interval (see Section \[section:flip-flopping\]).

Summary and Conclusions
=======================

The paper discusses two of the main results of \[@FaC1998\]. One of them concerns the so-called flip-flopping problem, which according to \[@FaC1998\] exists for central intervals in the specific scenario where the experimenter decides to quote a standard upper limit or a confidence interval, depending on the observed data. In the present paper it is shown that this problem is due to an expectation which is not justified. The problem can be easily avoided by choosing appropriate confidence levels for the standard upper limit and the confidence belt. Another result of \[@FaC1998\] concerns the Unified approach. In \[@FaC1998\] the term “upper limit” denotes the upper edge of a confidence interval whose lower edge coincides with the physical limit of $\mu$. With this definition of upper limit, upper ends of confidence intervals automatically change over to “upper limits” as the signal becomes weaker. This transition from confidence intervals to upper limits is not a special property of the approach using the likelihood ratio as ordering quantity, because the same transition takes place in approaches with different ordering principles, like central intervals, symmetric intervals or highest-probability intervals. Thus the term “Unified approach” is equally well justified for these approaches.
The approach with central intervals has the additional advantage that the upper end of the confidence interval is a standard upper limit at a well-defined confidence level, at all $\mu$. The Unified approach is presented in the F&C paper as a solution to the flip-flopping problem, where the experimenter quotes alternatively a standard upper limit or a confidence interval, depending on the measurement. The F&C limit can therefore easily be misunderstood as a standard upper limit. For this reason the coverage properties of the F&C limit are investigated. It is shown that the coverage of the F&C limit, if it is interpreted as standard upper limit, depends strongly on the parameter $\mu$ to be determined. Compared to the confidence level $\alpha$ of the confidence belt, the upper limit is very conservative, with a coverage around $(1+\alpha)/2$ or an overcoverage around $(1+\alpha)/2-\alpha=(1-\alpha)/2$. More precisely, for $\alpha=90$%, the coverage of the upper limit varies between 94.4% and 100%, and a unique confidence level cannot be assigned to it. Differences between the F&C limit and a standard upper limit were already pointed out in \[@FaC1998\]. In order to exclude any misunderstanding, it is proposed in the present paper to call the F&C limit “upper edge of the confidence interval”, even if its lower edge coincides with the physical limit. In this paper only one example was discussed in detail. The conclusions and recommendations, however, are also valid for any other application in which confidence and upper limits are determined using the likelihood ratio as ordering quantity. The applications are, however, restricted to cases with only one unknown parameter ($\mu$) and no nuisance parameters, except when the nuisance parameter ($b$) is exactly known. Finally, it should be emphasized that the criticism expressed in this paper does not refer to the Neyman construction of confidence intervals itself or to the use of the likelihood ratio as ordering quantity.
G.J. Feldman and R.D. Cousins (1998). Phys. Rev. D 57, 3873.

J. Neyman (1937). Phil. Trans. Royal Soc. London, Series A, 236, 333.
--- abstract: 'This paper presents a randomization-based framework for estimating causal effects under interference between units. We develop the case of estimating average unit-level causal effects from a randomized experiment with interference of arbitrary but known form. We illustrate and assess empirical performance with a naturalistic simulation using network data from American high schools. We discuss other applications and sketch approaches for situations where there is uncertainty about the form of interference.' author: - 'Peter M. Aronow and Cyrus Samii[^1]' bibliography: - '/Users/nlsamii/Dropbox/bib/all.bib' title: Estimating Average Causal Effects Under Interference Between Units --- Introduction ============ Experimental and observational studies often involve treatments with effects that “interfere” [@cox58] across units through spillover or other forms of dependency. Such interference is sometimes considered a nuisance, and researchers may strive to design studies that isolate units as much as possible from interference. However, such designs are not always possible. Furthermore, researchers may be interested in estimation of the spillover effects themselves, as these effects may be of substantive importance. Treatments may be applied to individuals in a social network, and we may wish to study how effects transmit to peers in the network. An urban renewal program applied to one town may divert capital from other towns, in which case the overall effect of the program may be ambiguous. Treatment effects may carry over from one time period to another and units have some chance of receiving treatment at any one of a set of points in time. In these cases, we need a method to estimate effects of both direct and [*indirect*]{} exposure to a treatment. This paper presents a general, randomization-based framework for estimating causal effects under these and other forms of interference. 
Interference represents a departure from the traditional causal inference scenario wherein units are assigned directly to treatment or control and the potential outcomes that would be observed for a unit in either the treatment or control condition are fixed and do not depend on the overall set of treatment assignments. The latter condition is what @rubin1990 refers to as the “stable unit treatment value assumption” (SUTVA). In the examples above, the traditional scenario is clearly an inadequate characterization, as SUTVA would be violated. A more sophisticated characterization of treatment exposure and associated potential outcomes must be specified. Our estimation framework consists of three components: (i) the experimental (or quasi-experimental) “design,” which characterizes precisely the probability distribution of treatments [*assigned*]{}, (ii) an “exposure mapping,” which relates treatments assigned to exposures [*received*]{}, and (iii) a set of causal estimands selected to make maximal use of the experiment to answer questions of substantive interest. For the case of a randomized experiment under arbitrary but known forms of interference, we provide unbiased estimators of unit-level average causal effects induced by treatment exposure. We also provide estimators for the randomization variance of the estimated average causal effects. These variance estimators are assured of being conservative (that is, nonnegatively biased). We establish conditions for consistency and large-$N$ confidence intervals based on a normal approximation. We propose ratio estimator and covariate-adjusted refinements for increased efficiency. Finally, we sketch how one could apply this framework for more irregular forms of interference, alternative estimands, observational data, and situations when there is uncertainty about the form of interference. 
Related literature and our contribution ======================================= Our framework extends from the foundational work of @hudgens_halloran08, who study two-stage, hierarchical randomized trials in which some groups are randomly assigned to host treatments, treatments are then assigned at random to units within the selected groups, and interference is presumed to operate only within groups. Hudgens and Halloran provide randomization-based estimators for group-average causal effects, conditional on assignment strategies that determine the density of treatment within groups. @tchetgen_vanderweele2010 extend Hudgens and Halloran’s results, providing conservative variance estimators, a framework for finite sample inference with binary outcomes, and extensions to observational studies. Related to these contributions is work by @rosenbaum07_interference, which provides methods for inference with exact tests under partial interference. Under hierarchical treatment assignment and partial interference, estimation and inference can proceed assuming independence across groups. In some settings, however, the hierarchical structuring may not be valid, as with experiments carried out over networks of actors that share links as a result of a complex, endogenous process. A key contribution of this paper is to go beyond the setting of hierarchical experiments with partial interference and generalize estimation and inference theory to settings that exhibit arbitrary forms of interference and treatment assignment dependencies. In addition, our framework allows the analyst to work with different estimands, including both the types of group-average causal effects defined by the authors above as well as unit-level causal effects. Unit-level causal effects are often the estimand of primary interest, as is the case, for example, when exploring unit-level characteristics that moderate the magnitude of treatment effects. 
Treatment assignment and exposure mappings\[sec:exposure\_mapping\]
===================================================================

In this section, we define the first two components of our analytical framework: the experiment design and exposure mapping. We focus on the case of a randomized experiment with an arbitrary but known exposure mapping. The first step is to distinguish between (i) treatment assignments over the set of experimental units and (ii) each unit’s treatment exposure under a given assignment. Treatment assignments can be manipulated arbitrarily with the experimental design. However, treatment exposures may be constrained on the basis of the varying potential for interference of different experimental units. For example, interference or spillover effects may spread over a spatial gradient. If so, different treatment assignments may result in different patterns of interference depending on where treatments are applied on the spatial plane. Formally, suppose we have a finite population $U$ of units indexed by $i=1,...,N$ on which a randomized experiment is performed. Define a treatment assignment vector, ${\mathbf{z}}= (z_1, ..., z_N)'$, where $z_i \in \{1,...,M\}$ specifies which of $M$ possible treatment values unit $i$ receives. An [*experimental design*]{} specifies a plan for randomly selecting a particular value of ${\mathbf{z}}$ from the $M^N$ different possibilities with predetermined probability $p_{\mathbf{z}}$. Restricting our attention only to treatment assignments that can be generated by a given experimental design, define $\Omega = \{{\mathbf{z}}: p_{\mathbf{z}}> 0\}$, so that ${\mathbf{Z}}= (Z_1, ..., Z_N)'$ is a random vector with support $\Omega$ and $\Pr({\mathbf{Z}}= {\mathbf{z}}) = p_{\mathbf{z}}$.
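As a concrete instance of this notation, complete randomization of $m$ treated units among $N$ can be written down by enumerating its support $\Omega$ and the (here uniform) probabilities $p_{\mathbf{z}}$. This is a sketch of our own; the helper name and the encoding of treatment values are illustrative.

```python
from itertools import combinations

def complete_randomization(N, m):
    # Omega: every assignment vector with exactly m of N units treated
    # (z_i = 2 for treated, 1 for control), each equally likely.
    omega = [tuple(2 if i in treated else 1 for i in range(N))
             for treated in combinations(range(N), m)]
    p = 1.0 / len(omega)
    return omega, {z: p for z in omega}

omega, p_z = complete_randomization(N=4, m=2)
print(len(omega), p_z[omega[0]])   # C(4, 2) = 6 assignments, each with p_z = 1/6
```

Any design with known assignment probabilities fits the same representation: a list of support points and a probability for each.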
We define an [*exposure mapping*]{} as a unit-specific onto function that maps an assignment vector and unit specific traits to an exposure value: $f : \Omega \times \Theta \rightarrow \Delta$, where $\theta_i \in \Theta$ quantifies relevant traits of unit $i$.[^2] The codomain $\Delta$ contains all of the possible treatment exposures that might be induced in the experiment. The contents of $\Delta$ depend on the nature of interference or treatment heterogeneity. These exposures may be represented as vectors, discrete classes, or real numbers. As we will show formally below, each of the distinct exposures in $\Delta$ may give rise to distinct potential outcomes for each unit in $U$. The estimation of causal effects under interference or treatment heterogeneity amounts to using information about treatment [*assignments*]{}, which come from the experiment’s design, to estimate effects defined in terms of [*treatment exposures*]{}, which result from the interaction of the design (captured by ${\mathbf{Z}}$) and other underlying features of the population (captured by $f$ and the $\theta_i$s). To make things more concrete, consider some examples of exposure mappings. The Neyman-Rubin causal model under SUTVA corresponds to assuming an exposure mapping in which we set $\Delta = \{1,...,M\}$ and $f({\mathbf{z}}, \theta_i) =f({\mathbf{z}}) = z_i$ for all $i$. This model has been a workhorse for much of the causal inference literature [@neyman23; @rubin78; @holland86; @imbens_rubin11]. An exposure mapping that allowed for completely arbitrary interference or treatment heterogeneity would be one for which $|\Delta| = |\Omega| \times N$, in which case each unit has a unique type of exposure under each treatment assignment, and $f({\mathbf{z}}, \theta_i)$ would be unique for each ${\mathbf{z}}$. If such an exposure mapping were valid, then it is clear that there would be no meaningful way to use the results of the experiment. 
Instead, the analyst must use substantive judgment about the extent of interference to fix a mapping somewhere between the traditional randomized experiment and completely arbitrary exposure mappings in order to carry out analyses under interference or treatment heterogeneity. For example, @hudgens_halloran08 consider a setting that allows unit $i$’s exposure to vary with each possible treatment assignment within $i$’s group, but conditional on the assignment for $i$’s group, $i$’s exposure does not vary in the treatment assignments of other groups. Then, $\theta_i$ would be unit $i$’s group index, and $|\Delta|$ would equal the largest number of assignment possibilities for any group. In the simulation study below and illustrative applications, we provide more examples of exposure mappings. Units’ probabilities of falling into one or another exposure condition are crucial for the estimation strategy that we develop below. Define $D_i = f({\mathbf{Z}}, \theta_i)$, a random variable with support $\Delta_i \subseteq \Delta$ and for which $\Pr(D_i = d) = \pi_i(d)$. Note that because $|\Delta| \le |\Omega| \times N$, $\Delta$ is a finite set of $K \le |\Omega|\times N$ values, such that $\Delta=\{d_1, ..., d_K\}$. Then for each unit, $i$, we have a vector of probabilities, $(\pi_i(d_1),...,\pi_i(d_K))' = {\boldsymbol{\pi}}_i$. Invoking @imbens00’s [*generalized propensity score*]{}, we call ${\boldsymbol{\pi}}_i$ the [*generalized probability of exposure*]{} for $i$. A unit $i$’s generalized probability of exposure tells us the probability of $i$ being subject to each of the possible exposures in $\{d_1,...,d_K\}$. 
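One concrete possibility of the intermediate kind just described is a four-level network exposure, in which a unit's exposure depends on its own treatment and on whether any of its neighbors is treated. The sketch below is our own illustration; the level names and the small graph are hypothetical.

```python
# theta_i is taken to be the set of unit i's neighbors; exposure depends on
# own treatment and on whether any neighbor is treated (the exposure labels
# d11/d10/d01/d00 are our own naming).
def exposure(z, i, neighbors_i):
    own = z[i] == 1
    spill = any(z[j] == 1 for j in neighbors_i)
    return {(True, True): "d11", (True, False): "d10",
            (False, True): "d01", (False, False): "d00"}[(own, spill)]

neighbors = {0: [1], 1: [0, 2], 2: [1]}     # line graph 0-1-2
z = (1, 0, 0)                               # only unit 0 treated
print([exposure(z, i, neighbors[i]) for i in range(3)])
```

Here $|\Delta| = 4$ regardless of $N$, so the same exposure can recur across many assignments, which is what makes estimation possible.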
Because $f$ is onto, $$\pi_i(d_k) = \sum_{{\mathbf{z}}\in \Omega} {\mathbf{I}}(f({\mathbf{z}},\theta_i)=d_k)\Pr({\mathbf{Z}}={\mathbf{z}}) = {\sum_{{\mathbf{z}}\in \Omega} p_{\mathbf{z}}{\mathbf{I}}(f({\mathbf{z}},\theta_i)=d_k)}.$$ Given an experiment in which the design is known exactly (that is, $\Pr({\mathbf{Z}}= {\mathbf{z}})$ for all $z \in \Omega$ is known exactly), the generalized probability of exposure for unit $i$ is also known exactly. Each component probability, $\pi_i(d_k)$, is equal to the expected proportion of treatment assignments that induce exposure $d_k$ for unit $i$. Below, we will refer to joint exposure probabilities when discussing variance estimators. That is, we define $\pi_{ij}(d_k)$ as the probability of the joint event that both units $i$ and $j$ are subject to exposure $d_k$, and we define $\pi_{ij}(d_k, d_l)$ as the probability of the joint event that units $i$ and $j$ are subject to exposures $d_k$ and $d_l$, respectively. To compute both individual and joint exposure probabilities from the experiment’s design, first define the $N \times |\Omega|$ matrix $$\begin{array}{l} {\mathbf{I}}_{k} = [{\mathbf{I}}(f({\mathbf{z}},\theta_i)=d_k)]_{\stackrel{{\mathbf{z}}\in \Omega}{i = 1,...,N}} = \\ \\ \qquad \qquad \left[ \begin{array}{cccc} {\mathbf{I}}(f({\mathbf{z}}_1,\theta_1)=d_k) & {\mathbf{I}}(f({\mathbf{z}}_2,\theta_1)=d_k) & \hdots & {\mathbf{I}}(f({\mathbf{z}}_{|\Omega|},\theta_1)=d_k) \\ {\mathbf{I}}(f({\mathbf{z}}_1,\theta_2)=d_k) & {\mathbf{I}}(f({\mathbf{z}}_2,\theta_2)=d_k) & \hdots & {\mathbf{I}}(f({\mathbf{z}}_{|\Omega|},\theta_2)=d_k) \\ \vdots & \vdots & \ddots & \\ {\mathbf{I}}(f({\mathbf{z}}_1,\theta_N)=d_k) & {\mathbf{I}}(f({\mathbf{z}}_2,\theta_N)=d_k) & & {\mathbf{I}}(f({\mathbf{z}}_{|\Omega|},\theta_N)=d_k) \end{array} \right], \end{array}$$ which is a matrix of indicators for whether units are in exposure condition $k$ over possible assignment vectors. 
Define the $|\Omega| \times |\Omega|$ diagonal matrix ${\mathbf{P}}= \text{diag}(p_{{\mathbf{z}}_1}, p_{{\mathbf{z}}_2}, ..., p_{{\mathbf{z}}_{|\Omega|}})$. Then $${\mathbf{I}}_k {\mathbf{P}}{\mathbf{I}}_k' = \left[\begin{array}{cccc} \pi_1(d_k) & \hdots \\ \pi_{12}(d_k) & \pi_{2}(d_k) & \hdots \\ \vdots & \vdots & \ddots & \hdots\\ \pi_{1N}(d_k) & \pi_{2N}(d_k) & & \pi_{N}(d_k) \\ \end{array} \right],$$ is an $N \times N$ symmetric matrix with individual exposure probabilities, the $\pi_{i}(d_k)$’s, on the diagonal and joint exposure probabilities, the $\pi_{ij}(d_k)$’s, on the off-diagonals. The non-symmetric $N \times N$ matrix $${\mathbf{I}}_k {\mathbf{P}}{\mathbf{I}}_l' = \left[\begin{array}{cccc} 0 & \pi_{12}(d_k,d_l) & \hdots & \pi_{1N}(d_k,d_l) \\ \pi_{21}(d_k,d_l) & 0 & \hdots & \pi_{2N}(d_k,d_l) \\ \vdots & \vdots & \ddots & \\ \pi_{N1}(d_k,d_l) & \pi_{N2}(d_k,d_l) & & 0 \\ \end{array} \right],$$ yields all joint probabilities across exposure conditions $k$ and $l$. The zeroes on the diagonal are due to the fact that a unit cannot be subject to multiple exposure conditions at once.[^3] Average potential outcomes and causal effects ============================================= We develop the case of estimating average unit-level causal effects of exposures. An average unit level causal effect is defined in terms of a difference between the average of units’ potential outcomes under one exposure versus the average under another exposure. The starting point is the estimation of average potential outcomes under each of the exposure conditions. With that, the analyst is in principle free to compute a variety of causal quantities of interest, not just average unit-level causal effects. 
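The matrix products ${\mathbf{I}}_k {\mathbf{P}}{\mathbf{I}}_k'$ above can be evaluated entry by entry on a toy design. The following sketch uses a 3-unit line graph with a Bernoulli(1/2) design and a binary exposure rule; all of these choices are our own illustration, not part of the general framework.

```python
from itertools import product

# Toy setting: N = 3 units on a line graph 0-1-2, Bernoulli(1/2) assignment,
# exposure condition d_1 = "self or a neighbor treated".
N = 3
neighbors = {0: [1], 1: [0, 2], 2: [1]}
omega = list(product([0, 1], repeat=N))   # all 2^N assignments
p_z = 1.0 / len(omega)                    # uniform design

def in_d1(z, i):
    return z[i] == 1 or any(z[j] == 1 for j in neighbors[i])

# entry (i, j) of I_1 P I_1': the individual probability pi_i(d_1) when
# i = j, and the joint probability pi_ij(d_1) when i != j
M = [[sum(p_z for z in omega if in_d1(z, i) and in_d1(z, j))
      for j in range(N)] for i in range(N)]
for row in M:
    print([round(x, 4) for x in row])
```

For instance, the middle unit has both neighbors, so its exposure probability $\pi_2(d_1) = 1 - (1/2)^3 = 0.875$ exceeds the end units' $1 - (1/2)^2 = 0.75$, and the symmetric off-diagonal entries give the joint probabilities used by the variance estimators below.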
For example, one could consider effects that are defined as differences between the average of potential outcomes under one set of exposures versus the average under another set of exposures.[^4] Our focus on the average unit-level causal effect is due to it being the natural extension of the “average treatment effect” that is the focus of much current causal inference and program evaluation literature (e.g., [@imbens_wooldridge09]). Suppose all units have non-zero probability of being subject to each of the $K$ exposures: $0 < \pi_i(d_k) < 1$ for all $i$ and $k$.[^5] Then, each unit $i$ has $K$ potential outcomes, which we denote by $\{y_i(d_1),...,y_i(d_K)\}$, that do not depend on the value of ${\mathbf{Z}}$. We seek estimates for all $k$ of $\mu(d_k) = \frac{1}{N}\sum_{i=1}^N y_i(d_k)=\frac{1}{N}y^T(d_k)$, where $y^T(d_k)$ is the total of the potential outcomes under $d_k$.[^6] The number of units in the population, $N$, is fixed, but we cannot estimate $y^T(d_k)$ directly, as we only observe $y_i(d_k)$ for those with $D_i = d_k$. However, by design, the collection of units for which we observe $y_i(d_k)$ is an unequal-probability without-replacement sample from $\{y_1(d_k),...,y_N(d_k)\}$, with the sampling probabilities known exactly. By @horvitz_thompson, an unbiased estimator for $y^T(d_k)$ is the inverse probability weighted estimator $$\widehat{y^T_{HT}}(d_k) = \sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{y_i(d_k)}{\pi_i(d_k)}. 
\label{eq:ht_esimator}$$ With potential outcomes and the randomization plan fixed, the exact variance for $\widehat{y^T_{HT}}(d_k)$ is $$\begin{aligned} \Var[\widehat{y^T_{HT}}(d_k)] & = \sum_{i=1}^N \sum_{j=1}^N \Cov \left[{\mathbf{I}}(D_i=d_k),{\mathbf{I}}(D_j=d_k)\right]\frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_k)}{\pi_j(d_k)} \nonumber\\ & = \sum_{i=1}^N \pi_i(d_k)[1-\pi_i(d_k)] \left[ \frac{y_i(d_k)}{\pi_i(d_k)} \right]^2 \nonumber \\ & \hspace{1em} + \sum_{i=1}^N \sum_{j \ne i} [\pi_{ij}(d_k)- \pi_i(d_k)\pi_j(d_k)]\frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_k)}{\pi_j(d_k)}. \label{eq:total_variance}\end{aligned}$$ The estimator for the mean of all $N$ potential outcomes under exposure $d_k$ is thus $$\widehat{\mu_{HT}}(d_k) = (1/N)\widehat{y^T_{HT}}(d_k), \label{eq:exposure_mean}$$ with exact variance, $$\Var(\widehat{\mu_{HT}}(d_k)) = (1/N^2)\Var[\widehat{y^T_{HT}}(d_k)].\label{eq:var_of_exposure_mean}$$ This allows us to construct the difference in estimated means $$\widehat {\tau_{HT}}(d_k,d_l) = \widehat{\mu_{HT}}(d_k) - \widehat{\mu_{HT}}(d_l) = \frac{1}{N}\left[\widehat{y^T_{HT}}(d_k) - \widehat{y^T_{HT}}(d_l)\right] \label{eq:ht_causal_effect}$$ which is an unbiased estimate of $\tau(d_k,d_l) = \frac{1}{N}\sum_{i=1}^N\left[y_i(d_k)-y_i(d_l)\right]$, the average unit-level causal effect of exposure $k$ versus exposure $l$. The exact variance of the difference in estimated means is $$\begin{aligned} \Var(\widehat {\tau_{HT}}(d_k,d_l)) = & \frac{1}{N^2}\left\{\Var[\widehat{y^T_{HT}}(d_k)] + \Var[\widehat{y^T_{HT}}(d_l)] \right. \nonumber \\ & \left. 
\hspace{3em} - 2\Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)]\right\},\label{eq:tru_var}\end{aligned}$$ where [@wood08] $$\begin{aligned} \Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)] & = \sum_{i=1}^N \sum_{j=1}^N \Cov \left[{\mathbf{I}}(D_i=d_k),{\mathbf{I}}(D_j=d_l)\right]\frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_l)}{\pi_j(d_l)} \nonumber\\ & = \sum_{i=1}^N \sum_{j \ne i} \frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_l)}{\pi_j(d_l)} \left[\pi_{ij}(d_k,d_l)- \pi_i(d_k)\pi_j(d_l) \right] \nonumber \\ & \hspace{1em} - \sum_{i=1}^N y_i(d_k)y_i(d_l), \label{eq:totals_covariance}\end{aligned}$$ with $\pi_{ij}(d_k,d_l) = \Pr[D_i=d_k,\, D_j=d_l]$, and the last line follows from the fact that $\pi_{ii}(d_k,d_l) = 0$. The variance and covariance expressions above allow us to see the conditions under which exact variances are identified. So long as all joint exposure probabilities are non-zero (that is, $\pi_{ij}(d_k) > 0$ for all $i,j$), unbiased estimators for $\Var[\widehat{y^T_{HT}}(d_k)]$ are identified for the population $U$. Because we only observe one potential outcome for each unit, the last term in the covariance expression is always unidentified, and thus $\Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)]$ is always unidentified. This is a familiar problem in estimating the randomization variance for the average treatment effect (e.g., @neyman23 or @freedman_pisani_purves98 [A32-A34]). If $\pi_{ij}(d_k) = 0$ for some $i,j$, $\Var[\widehat{y^T_{HT}}(d_k)]$ is unidentified. Similarly, if $\pi_{ij}(d_k,d_l) = 0$ for some $i,j$, then additional components of $\Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)]$ are unidentified. Nonetheless, we can always identify estimators for $\Var[\widehat{y^T_{HT}}(d_k)]$ and $\Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)]$ that are guaranteed to have nonnegative bias. Thus, we can always identify a conservative approximation to the exact variances. We take this and related issues up in the next section.
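The unbiasedness of the inverse-probability-weighted estimator can be verified by brute-force enumeration over a toy design. The setup below (graph, design, exposure rule, and potential outcomes) is our own illustration; all individual exposure probabilities are strictly positive, as required.

```python
from itertools import product

# Toy example: N = 3 units on a line graph 0-1-2, Bernoulli(1/2) assignment,
# binary exposure d = 1 if the unit or a neighbor is treated, else d = 0.
N = 3
neighbors = {0: [1], 1: [0, 2], 2: [1]}
omega = list(product([0, 1], repeat=N))
p_z = 1.0 / len(omega)                      # uniform design

def f(z, i):                                # exposure mapping
    return 1 if z[i] == 1 or any(z[j] == 1 for j in neighbors[i]) else 0

# generalized exposure probabilities pi_i(d), computed from the design
pi = [{d: sum(p_z for z in omega if f(z, i) == d) for d in (0, 1)}
      for i in range(N)]

# hypothetical fixed potential outcomes y_i(d)
y = [{0: 1.0, 1: 3.0}, {0: 2.0, 1: 5.0}, {0: 0.0, 1: 4.0}]

def mu_hat(z, d):
    # (1/N) * sum_i I(D_i = d) * y_i(d) / pi_i(d)
    return sum(y[i][d] / pi[i][d] for i in range(N) if f(z, i) == d) / N

for d in (0, 1):
    true_mu = sum(y[i][d] for i in range(N)) / N
    e_mu = sum(p_z * mu_hat(z, d) for z in omega)   # expectation over the design
    print(d, true_mu, round(e_mu, 10))
```

Averaging the estimator over every assignment in $\Omega$ recovers $\mu(d)$ exactly for both exposure conditions, and the difference of the two estimates is the unbiased effect estimate $\widehat{\tau_{HT}}(d_1,d_0)$.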
Variance estimators {#varests} =================== We derive conservative estimators for both $\Var[\widehat{y^T_{HT}}(d_k)]$ and $\Var[\widehat {\tau_{HT}}(d_k,d_l)]$. Although not necessarily unbiased, the estimators we present here are guaranteed to have a nonnegative bias relative to the randomization distributions of the estimators. Given $\pi_{ij}(d_k) > 0$ for all $i,j$, the unbiased Horvitz-Thompson estimator for $\Var[\widehat{y^T_{HT}}(d_k)]$ is $$\begin{aligned} \widehat{\Var}[\widehat{y^T_{HT}}(d_k)] & = \sum_{i \in U} \sum_{j \in U} {\mathbf{I}}(D_i=d_k){\mathbf{I}}(D_j=d_k) \nonumber \\ & \hspace{4em} \times \frac{\Cov \left[{\mathbf{I}}(D_i=d_k),{\mathbf{I}}(D_j=d_k)\right]}{\pi_{ij}(d_k)}\frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_k)}{\pi_j(d_k)} \nonumber \\ & = \sum_{i \in U}{\mathbf{I}}(D_i=d_k)[1-\pi_{i}(d_k)] \left[ \frac{y_i(d_k)}{\pi_i(d_k)} \right]^2 \nonumber \\ & \hspace{1em} + \sum_{i \in U} \sum_{j \in U \backslash i} {\mathbf{I}}(D_i=d_k){\mathbf{I}}(D_j=d_k) \nonumber \\ & \hspace{4em} \times \frac{\pi_{ij}(d_k)-\pi_{i}(d_k)\pi_{j}(d_k)}{\pi_{ij}(d_k)}\frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_k)}{\pi_j(d_k)}. \label{eq:ht_variance_estimator}\end{aligned}$$ Then an unbiased estimator for the variance of $\widehat{\mu_{HT}}(d_k)$ is $$\widehat{\Var}[\widehat{\mu_{HT}}(d_k)] = (1/N^2)\widehat{\Var}[\widehat{y^T_{HT}}(d_k)].$$ In the case where $\pi_{ij}(d_k) = 0$ for some $i,j$, there exist no unbiased estimators for $\Var[\widehat{y^T_{HT}}(d_k)]$. As demonstrated in @aronow_samii_zeropairwise [Proposition 1], the bias of $\widehat{\Var}[\widehat{\mu_{HT}}(d_k)]$, is $$A= \sum_{i \in U} \sum_{j \in \{U \backslash i:\pi_{ij}(d_k)=0\}} y_i(d_k)y_j(d_k).$$ $\widehat{\Var}[\widehat{\mu_{HT}}(d_k)]$ is guaranteed to have nonnegative bias when $y_i(d_k)y_j(d_k) \ge 0$ for all $i,j$ with $\pi_{ij}(d_k) = 0$. 
The bias will be small when the terms in the sum tend to offset each other, as when the relevant $y_i(d_k)$ and $y_j(d_k)$ values are centered on 0 and have low correlation with each other. (This notation requires that we maintain the assumption that ${0}/{0} = 0$.) Another option is to use the following correction term (derived via Young’s inequality), $$\widehat{A_2}(d_k) = \sum_{i \in U}\sum_{j \in \{ U \backslash i : \pi_{ij}(d_k) = 0\}} \left[\frac{{\mathbf{I}}(D_i=d_k)y_i(d_k)^2}{2\pi_{i}(d_k)} + \frac{{\mathbf{I}}(D_j=d_k)y_j(d_k)^2}{2\pi_{j}(d_k)}\right],$$ noting that $\widehat{A_2}(d_k) = 0$ if $\pi_{ij}(d_k) > 0 \textrm{ for all } i,j$. By @aronow_samii_zeropairwise [Corollary 2], $$\E \left[ \widehat{\Var}[\widehat{y^T_{HT}}(d_k)] + \widehat{A_2}(d_k) \right] \ge \Var[\widehat{y^T_{HT}}(d_k)],$$ in which case, $$\widehat{\Var_A}[\widehat{\mu_{HT}}(d_k)] = (1/N^2)\left[\widehat{\Var}[\widehat{y^T_{HT}}(d_k)] + \widehat{A_2}(d_k)\right],$$ provides a conservative estimator for the variance of the estimated average of potential outcomes under exposure $d_k$. As discussed above, $\Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)]$ is unidentified, which is to say that there exist no unbiased or consistent estimators for this quantity. However, we can compute an approximation that is guaranteed to have expectation less than or equal to the true covariance, providing a conservative (here, nonnegatively biased) estimator for $\Var(\widehat {\tau_{HT}}(d_k,d_l))$. 
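A toy enumeration illustrates how the identified part of the variance estimator plus the correction $\widehat{A_2}$ dominates the exact variance when joint probabilities vanish. The example is our own: complete randomization of exactly one treated unit among three, with exposure taken as own treatment, so that $\pi_{ij}(\text{treated}) = 0$ for every pair $i \neq j$.

```python
# Complete randomization of one treated unit among N = 3; exposure = own
# treatment, so the off-diagonal joint probabilities are all zero and the
# Young's-inequality term A2 must stand in for the unidentified part.
N = 3
omega = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # the design's support
p_z = 1.0 / len(omega)
pi = [1.0 / 3.0] * N                         # pi_i(treated)
y = [3.0, 5.0, 4.0]                          # potential outcomes y_i(treated)

def yT_hat(z):
    # Horvitz-Thompson estimate of the total sum_i y_i(treated)
    return sum(y[i] / pi[i] for i in range(N) if z[i] == 1)

def var_hat(z):
    # identified part of the variance estimator: only pairs with
    # pi_ij > 0 contribute, which here is just i = j
    return sum((1 - pi[i]) * (y[i] / pi[i]) ** 2
               for i in range(N) if z[i] == 1)

def a2_hat(z):
    # Young's-inequality correction over ordered pairs with pi_ij = 0
    return sum((y[i] ** 2 / (2 * pi[i]) if z[i] == 1 else 0.0)
               + (y[j] ** 2 / (2 * pi[j]) if z[j] == 1 else 0.0)
               for i in range(N) for j in range(N) if i != j)

mean = sum(p_z * yT_hat(z) for z in omega)
true_var = sum(p_z * (yT_hat(z) - mean) ** 2 for z in omega)
e_est = sum(p_z * (var_hat(z) + a2_hat(z)) for z in omega)
print(true_var, e_est)   # the estimator's expectation dominates the truth
```

The bound is very loose in this example because the outcomes are far from centered on zero, consistent with the remark above that the bias is small when the relevant $y$ values are centered near zero.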
For the case where $\pi_{ij}(d_k,d_l) > 0$ for all $i,j$ such that $i \neq j$, we propose the Horvitz-Thompson-type estimator for the covariance $$\begin{aligned} \widehat{\Cov}[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)] & = \sum_{i \in U} \sum_{j \in U \backslash i} \frac{{\mathbf{I}}(D_i = d_k){\mathbf{I}}(D_j = d_l)}{\pi_{ij}(d_k,d_l)} \frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_l)}{\pi_j(d_l)} \nonumber \\ & \hspace{5.5em} \times [\pi_{ij}(d_k,d_l) - \pi_i(d_k)\pi_j(d_l)] \nonumber \\ & \hspace{1em} - \sum_{i \in U}\left[ \frac{{\mathbf{I}}(D_i = d_k) y_i(d_k)^2}{2\pi_i(d_k)} + \frac{{\mathbf{I}}(D_i = d_l) y_i(d_l)^2}{2\pi_i(d_l)}\right]. \label{eq:ht_cov_estimator}\end{aligned}$$ The subtracted sum in the last line of this estimator has expected value less than or equal to the corresponding term in the last line of the exact covariance expression, again via Young’s inequality. This estimator is exactly unbiased if, for all $i \in U$, $y_i(d_l) = y_i(d_k)$, implying no effect associated with condition $l$ relative to condition $k$. For the case where $\pi_{ij}(d_k,d_l) = 0$ for some $i,j$ and $k,l$, we can refine the exact covariance expression to $$\begin{aligned} \Cov[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)] = & \sum_{i \in U} \sum_{j \in \{U \backslash i : \pi_{ij}(d_k,d_l) > 0\}} \frac{y_i(d_k)}{\pi_i(d_k)}\frac{y_j(d_l)}{\pi_j(d_l)}\nonumber \\ & \hspace{4em} \times [\pi_{ij}(d_k,d_l) - \pi_i(d_k)\pi_j(d_l)] \nonumber \\ & \hspace{1em} - \sum_{i \in U} \sum_{j \in \{U : \pi_{ij}(d_k,d_l) = 0\}}y_i(d_k) y_j(d_l),\label{eq:totals_covariance_general}\end{aligned}$$ where the term on the last line subsumes the corresponding term in the last line of the original covariance expression.
This leads us to propose a more general estimator for the covariance $$\begin{aligned} \widehat{\Cov_A}[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)] & = \sum_{i \in U} \sum_{j \in \{ U \backslash i : \pi_{ij}(d_k,d_l) > 0 \}} \frac{{\mathbf{I}}(D_i = d_k){\mathbf{I}}(D_j = d_l)}{\pi_{ij}(d_k,d_l)} \nonumber \\ & \hspace{10em} \times \frac{y_i(d_k)}{\pi_i(d_k)} \frac{y_j(d_l)}{\pi_j(d_l)} \nonumber \\ & \hspace{10em} \times [\pi_{ij}(d_k,d_l) - \pi_i(d_k)\pi_j(d_l)] \nonumber \\ & \hspace{1em} - \sum_{i \in U} \sum_{j \in \{ U : \pi_{ij}(d_k,d_l) = 0\} }\left[ \frac{{\mathbf{I}}(D_i = d_k) y_i(d_k)^2}{2\pi_i(d_k)} \right. \nonumber \\ & \hspace{10em} \left. + \frac{{\mathbf{I}}(D_j = d_l) y_j(d_l)^2}{2\pi_j(d_l)}\right]. \label{eq:ht_cov_general_estimator}\end{aligned}$$ Again, the term in the last line of this estimator has expected value no greater than the term in the last line of the refined covariance expression, by Young’s inequality. Combining these expressions, we obtain a conservative variance estimator for $\Var(\widehat {\tau_{HT}}(d_k,d_l))$ as $$\begin{aligned} \widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)] & = \frac{1}{N^2} \left\{ \widehat{\Var}[\widehat{y^T_{HT}}(d_k)] + \widehat{A_2}(d_k) + \widehat{\Var}[\widehat{y^T_{HT}}(d_l)] + \widehat{A_2}(d_l) \right.\nonumber \\ & \left.\hspace{3.75em} - 2\widehat{\Cov_A}[\widehat{y^T_{HT}}(d_k),\widehat{y^T_{HT}}(d_l)] \right\} .\label{eq:ate_var_estimator}\end{aligned}$$

Asymptotics and intervals {#asymptot}
=========================

Consider a sequence of subpopulations, $U^{(b)}$, with $b=1,...,B$ [@brewer1979; @isaki_fuller82], where each subpopulation $U^{(b)}$ consists of $1 \leq n_b < \infty$ units. The $B$ subpopulations are collected into a population of $N_B =\sum_{b=1}^B n_b$ units, which we shall label $U_B$, and estimates are produced using values from this population. To define a notion of asymptotic growth, we let $B$ (and therefore $N_B$) tend to infinity, allowing for the design and exposure mapping to vary for each $U_B$.
Consistency and the asymptotic validity of Wald-type confidence intervals will then follow from restrictions on the growth process of the design and exposure mapping.

Consistency {#consist}
-----------

We first establish conditions for the estimator $\widehat{\tau_{HT}}(d_k,d_l)$ to converge to $\tau(d_k,d_l)$ as $N$ grows. We will show that, under two regularity conditions, $\widehat{\tau_{HT}}(d_k,d_l) - \tau(d_k,d_l) \overset{p}{\longrightarrow} 0$. The ratio of each potential outcome to its exposure probability is bounded, so that for all values $i$ and $d_k$, $|y_i(d_k)| / \pi_i(d_k) \leq c < \infty$. \[cond1\] Condition \[cond1\] can be relaxed, though condition \[cond2\] would need to be strengthened accordingly. Define $g_{ij} = 0$ if $\pi_{ij}(d_k,d_l) = \pi_i(d_k) \pi_j(d_l)$, else fix $g_{ij} = 1$. Then we require that $\sum_{i=1}^N \sum_{j=1}^N g_{ij} = o(N^{2})$. \[cond2\] Condition \[cond2\] entails that, as $N$ grows, the amount of clustering in exposure conditions induced by the design and exposure mapping is limited in scope. For example, in the case of a Bernoulli-randomized design, Condition 2 would be violated if changing one unit’s assigned treatment would affect the exposure received by all $N$ units. Consistency is straightforward to demonstrate when conditions \[cond1\] and \[cond2\] hold. A proof follows closely from the logic of @robinson82. $\widehat{\mu_{HT}}(d_k)$ is unbiased for $\mu(d_k)$, and thus we need only consider the variance. Substituting from the exact variance expression, $N^2 \Var(\widehat{\mu_{HT}}(d_k)) \leq c^2 N + c^2 \sum_{i=1}^N \sum_{j=1}^N g_{ij}.$ Consistency of $\widehat{\mu_{HT}}(d_k)$ for $\mu(d_k)$ is therefore ensured when $\sum_{i=1}^N \sum_{j=1}^N g_{ij} = o(N^{2})$, as this implies that $\widehat{\mu_{HT}}(d_k) - \mu(d_k) \overset{p}{\longrightarrow} 0$. Consistency of $\widehat{\tau_{HT}}(d_k,d_l)$ for $\tau(d_k,d_l)$ follows by Slutsky’s Theorem.
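The dependence count in condition \[cond2\] can be computed exactly on a small design. The sketch below uses our own toy line graph; on bounded-degree graphs of growing size the analogous count is $O(N) = o(N^2)$, whereas on this tiny 3-unit graph every pair turns out to be dependent.

```python
from itertools import product

# Count pairs (i, j) with g_ij = 1, i.e. whose joint exposure probability
# differs from the product of marginals (toy graph and design are ours).
N = 3
neighbors = {0: [1], 1: [0, 2], 2: [1]}
omega = list(product([0, 1], repeat=N))
p_z = 1.0 / len(omega)

def f(z, i):
    return 1 if z[i] == 1 or any(z[j] == 1 for j in neighbors[i]) else 0

def pr(i, d):
    return sum(p_z for z in omega if f(z, i) == d)

def pr_joint(i, j, dk, dl):
    return sum(p_z for z in omega if f(z, i) == dk and f(z, j) == dl)

dk = dl = 1
count = sum(1 for i in range(N) for j in range(N)
            if abs(pr_joint(i, j, dk, dl) - pr(i, dk) * pr(j, dl)) > 1e-12)
print(count)   # every (i, j) pair is dependent on this 3-unit graph
```

In a growing sequence of such graphs glued together as independent subpopulations, only within-subpopulation pairs contribute to the count, so the $o(N^2)$ requirement holds.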
Confidence intervals -------------------- We now establish conditions for the asymptotic validity of Wald-type confidence intervals under stricter conditions on the asymptotic growth process. Consistency for the variance estimators, asymptotic normality, and therefore confidence intervals follow straightforwardly when the growth process involves designs and exposure mappings that are independent across subpopulations, implying partial interference [@sobel06]. These results are based on conditions similar to those studied by @hudgens2012_asymptotics_interference. We shall assume that condition \[cond1\] holds, but will strengthen condition \[cond2\] as follows. Each of the $B$ subpopulations of size $n_b$ hosts its own ${\mathbf{Z}}^{(b)}$ and an application of the exposure mapping, generating $B$ separate ${\mathbf{I}}_k^{(b)}$, independent across the $B$ subpopulations. Condition 3 is inspired by @brewer1979, which describes a stricter asymptotic scaling produced by addition of independent subpopulations. Condition 2 is subsumed by condition 3, as $\sum_{i=1}^N \sum_{j=1}^N g_{ij} = O(N)$ when condition 3 holds. Define $\widehat{\tau_{HT}}_b(d_k,d_l)$ as the causal effect estimator as applied to the subpopulation indexed by $b$. Assume that, for all $b$, $n_b^2 \Var[\widehat{\tau_{HT}}_b(d_k,d_l)] \geq \epsilon$, for some $\epsilon > 0$. Condition 4 serves two purposes. First, given condition 1 (boundedness), each $n_b \widehat{\tau_{HT}}_b(d_k,d_l)$ is bounded and $\sum_{b=1}^B n_b^2 \Var[\widehat{\tau_{HT}}_b(d_k,d_l)] \rightarrow \infty$. Thus condition 4 ensures that the sequence of subpopulation estimators satisfies the Lindeberg condition. Second, condition 4 ensures that $N \Var [ \widehat {\tau_{HT}}(d_k,d_l)]$ converges to a positive constant. Our results now follow from conditions 1, 3 and 4. 
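Under condition 3 the ${\mathbf{I}}_k^{(b)}$, and hence the per-subpopulation estimators, are independent, so estimates and block-level variance estimates aggregate additively across subpopulations. A sketch (names illustrative):

```python
import numpy as np

def aggregate_blocks(tau_b, var_b, n_b):
    """Combine independent per-subpopulation HT effect estimates:
    tau_hat = sum_b (n_b / N) tau_b,  var_hat = (1 / N^2) sum_b n_b^2 var_b."""
    tau_b, var_b, n_b = (np.asarray(a, dtype=float) for a in (tau_b, var_b, n_b))
    N = n_b.sum()
    tau_hat = np.sum(n_b * tau_b) / N
    var_hat = np.sum(n_b ** 2 * var_b) / N ** 2
    return tau_hat, var_hat
```

The additivity of the variance terms here is exactly what independence across subpopulations buys.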
To establish normal approximation confidence intervals for the estimated causal effect, $\widehat{\tau_{HT}}(d_k, d_l)$, consider the limiting behavior of $\widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)]$. By independence of the ${\mathbf{I}}_k^{(b)}$, $$\begin{aligned} \widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)] &= \sum_{b=1}^B \widehat{\Var}_b [\frac{n_b}{N_B} \widehat{\tau_{HT}}_b(d_k,d_l)] = \frac{1}{N_B^2} \sum_{b=1}^B n_b^2 \widehat{\Var}_b [ \widehat{\tau_{HT}}_b(d_k,d_l)] .\end{aligned}$$ By the weak law of large numbers and condition 1 (boundedness), there exist $\bar{V}_U \geq \epsilon >0$ and $\bar N \geq 1$ such that $\sum_{b=1}^B n_b^2 \Var[\widehat{\tau_{HT}}_b(d_k,d_l)] / B \overset{p}{\longrightarrow} \bar{V}_U$ and $N_B/B = \sum_{b=1}^B n_b / B \overset{p}{\longrightarrow} \bar N$. Then, by Slutsky’s theorem, $$N_B \widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)] = \frac{ \sum_{b=1}^B n_b^2 \widehat{\Var}_b [ \widehat{\tau_{HT}}_b(d_k,d_l)] / {B}}{N_B/B} \overset{p}{\longrightarrow} \frac{\bar{V}_U}{\bar N},$$ where $0 < {\bar{V}_U}/ {\bar N} < \infty$. Since $\E\left[\widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)]\right] \geq {\Var}[\widehat {\tau_{HT}}(d_k,d_l)]$, we have thus established that $N$ times the variance estimator for the average causal effect converges to a quantity that is at least as large as $N$ times the true variance. Finally, define $$\begin{aligned} t &= \frac{\widehat {\tau_{HT}}(d_k,d_l)- {\tau_{HT}}(d_k,d_l)}{\sqrt{\Var[\widehat {\tau_{HT}}(d_k,d_l)]}} \left( \frac{\Var[\widehat {\tau_{HT}}(d_k,d_l)]}{\widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)]} \right)^{1/2}.\nonumber\end{aligned}$$ Under the given conditions, $\left(\widehat {\tau_{HT}}(d_k,d_l) - {\tau_{HT}}(d_k,d_l)\right)/\sqrt{\Var[\widehat {\tau_{HT}}(d_k,d_l)]}$ is asymptotically $\N(0,1)$ by the Lindeberg central limit theorem, while $( \Var[\widehat {\tau_{HT}}(d_k,d_l)]/\widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)])^{1/2}$ converges to a quantity no greater than one. 
Therefore, $t$ is asymptotically normal, and Wald-type intervals constructed as $$\widehat {\tau_{HT}}(d_k,d_l) \pm z_{1-\alpha/2}\sqrt{\widehat{\Var}[\widehat {\tau_{HT}}(d_k,d_l)]}$$ will tend to cover $\tau_{HT}(d_k,d_l)$ at least $100(1-\alpha)\%$ of the time for large $N$.[^7] Refinements =========== The mean and difference-in-means estimators presented thus far are unbiased by sampling-theoretic arguments, and we have derived conservative variance estimators. However, we may wish to improve efficiency by incorporating auxiliary covariate information. In addition, by analogy to results from the unequal probability sampling literature, ratio approximations of the Horvitz-Thompson estimator may significantly reduce mean square error with little cost in terms of bias [@sarndal_etal92 pp. 181-184]. We discuss such refinements here. Covariance adjustment --------------------- Auxiliary covariate information may help to improve efficiency. A first method of covariance adjustment is based on the so-called “difference estimator” [@raj65; @sarndal_etal92 Ch. 6]. Covariance adjustment of this variety can reduce the randomization variance of the estimated exposure means and average causal effects without compromising unbiasedness. In addition, the difference estimator addresses the problem of location non-invariance that afflicts Horvitz-Thompson-type estimators [@fuller09_samp_stat 9-10]. The estimator requires prior knowledge about how outcomes relate to covariates, perhaps obtained from analysis of auxiliary datasets. Assume an auxiliary covariate vector $\mathbf{x_i}$ is observed for each $i$. We have a predefined real-valued function $g\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right)$, where $\mathbf{\xi_i}(d_k)$ is a parameter vector. Ideally $g(.)$ is calibrated on auxiliary data to produce values that approximate $y_i(d_k)$. 
We assume $\Cov[g\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right),Z_i] = 0$ as a sufficient condition for unbiasedness.[^8] Define $$\label{eq:gen1} \widehat{y^{T}_{G}}(d_k) = \sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{y_i(d_k)}{\pi_i(d_k)} - \sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{g\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right)}{\pi_i(d_k)} + \sum\limits_{i=1}^Ng\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right),$$ which is unbiased for $y^T(d_k)$ by $$\E\left[- \sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{g\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right)}{\pi_i(d_k)} + \sum\limits_{i=1}^Ng\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right)\right] = 0.$$ Define $\epsilon_i(d_k) = y_i(d_k) - g\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right)$. Then, by substitution, $$\label{eq:gen2} \widehat{y^{T}_{G}}(d_k) = \sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{\epsilon_i(d_k)}{\pi_i(d_k)} + \sum\limits_{i=1}^Ng\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right).$$ Estimation proceeds as above using $\widehat{y^{T}_{G}}(d_k)$ in place of $\widehat{y^{T}}(d_k)$ to estimate $y^T(d_k)$. @middleton_aronow11 and @aronow_middleton11_unbiased demonstrate that $\widehat{y^{T}_{G}}(d_k)$ is location invariant. Variance estimation proceeds as in section \[varests\], using $\epsilon_i(d_k)$ in place of $y_i(d_k)$ so long as $g\left(\mathbf{x_i}, \mathbf{\xi_i}(d_k) \right)$ is fixed. An approximation to the difference estimator is given by regression adjustment using the sample at hand. Regression can be thought of as a way to automate selection of the parameters in the difference estimator. In doing so, unbiasedness is compromised although the regression estimator is typically consistent [@sarndal_etal92 pp. 225-239]. 
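The difference estimator of the second display is simple to compute: HT-weight the residuals $\epsilon_i(d_k)$ and add back the predicted total. In the sketch below (illustrative names), unbiasedness can be verified by enumerating every assignment of a small complete-randomization design, since the correction term has expectation zero.

```python
import numpy as np
from itertools import combinations

def difference_estimator_total(y, g, I_k, pi_k):
    """Difference estimator of the total y^T(d_k): the HT-weighted sum
    of residuals eps_i = y_i - g_i plus the predicted total sum(g)."""
    eps = y - g
    return np.sum(I_k * eps / pi_k) + np.sum(g)
```

Averaging the estimator over all assignments of a design with exposure equal to one's own treatment recovers the true total exactly, whatever fixed predictions $g_i$ are used.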
We may use weighted least squares to estimate a sensible parameter vector.[^9] Define an estimated parameter vector associated with exposure condition $d_k$: $$\mathbf{\widehat \xi}(d_k) = \arg\min\limits_{ \mathbf{\xi}(d_k)} \sum\limits_{i: D_i = d_k} \frac{1}{\pi_i(d_k)} \left[ y_i(d_k) - g\left(\mathbf{x_i}, \mathbf{\xi}(d_k) \right) \right]^2,$$ where $g(.)$ is the specification for the regression of $y_i(d_k)$ on ${\mathbf{I}}(D_i=d_k)$ and $\mathbf{x_i}$. Then the regression estimator for the total is $$\label{eq:regression} \widehat{y^{T}_{R}}(d_k) = \sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{y_i(d_k) - g\left(\mathbf{x_i}, \mathbf{\widehat \xi}(d_k) \right)}{\pi_i(d_k)} + \sum\limits_{i=1}^Ng\left(\mathbf{x_i}, \mathbf{\widehat \xi}(d_k) \right).$$ Estimation proceeds as above using $\widehat{y^{T}_{R}}(d_k)$ in place of $\widehat{y^{T}_{HT}}(d_k)$ to estimate $y^T(d_k)$. Under weak regularity conditions on $g(.)$, a variance estimator based on a Taylor linearization of $\widehat{y^{T}_{R}}(d_k)$ is consistent [@sarndal_etal92 236-237]. The linearized variance estimator can be computed by substituting the residuals, $e_i = y_i(d_k) - g(\mathbf{x_i}, \mathbf{\widehat \xi}(d_k))$, for the $y_i(d_k)$ terms in constructing the variance estimator given in expression . Hajek ratio estimation via weighted least squares ------------------------------------------------- The @hajek71 ratio estimator is a refinement of the standard Horvitz-Thompson estimator that often facilitates efficiency gains at the cost of some finite-$N$ bias and complications in variance estimation. Let us first consider the problem that the Hajek estimator is designed to resolve. The high variance of $\widehat{\mu_{HT}}(d_k)$ is often driven by the fact that some randomizations may yield an unusually large or small number of units or, depending on the nature of ${\mathbf{I}}_k$, an unusually large or small number of units with high values of the weights $1/\pi_i(d_k)$. 
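Before turning to the Hajek estimator, the regression estimator just defined can be sketched for a linear specification $g(\mathbf{x_i}, \mathbf{\xi}) = \xi_0 + \xi_1 x_i$; the specification and names are illustrative, not prescribed by the text.

```python
import numpy as np

def wls_regression_total(y, x, I_k, pi_k):
    """Regression estimator of the total y^T(d_k) with a linear
    specification, xi fitted by 1/pi-weighted least squares on the
    units with D_i = d_k (weight zero off-condition)."""
    X = np.column_stack([np.ones_like(x), x])
    w = I_k / pi_k
    xi = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    g = X @ xi
    return np.sum(I_k * (y - g) / pi_k) + np.sum(g)
```

When the outcomes are exactly linear in the covariate, the weighted fit has zero residuals on the exposed units and the estimator returns the true total exactly; in general it trades a small bias for variance reduction, as discussed above.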
The Hajek refinement allows the denominator of the estimator to vary according to the sum of the weights $1/\pi_i(d_k)$, thus shrinking the magnitude of the estimator when its value is large, and raising the magnitude of the estimator when its value is small. The Hajek ratio estimator is $$\widehat{\mu_{H}}(d_k) = \frac{\sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{y_i(d_k)}{\pi_i(d_k)}}{\sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{1}{\pi_i(d_k)}}. \label{eq:hajek}$$ Note that $\E[\sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{1}{\pi_i(d_k)}] = N$, so that the Hajek estimator is the ratio of two unbiased estimators. It is well known that the ratio of two unbiased estimators is not an unbiased estimator of the ratio. However, the bias will tend to be small relative to the estimator’s sampling variability, and we may place bounds on its magnitude. By @hartley54 and @sarndal_etal92 [176], $$\left|\E[\widehat{\mu_{H}}(d_k)]-\mu(d_k)\right| \leq \sqrt{\Var\left(\frac{1}{N}\sum_{i=1}^N {\mathbf{I}}(D_i=d_k)\frac{1}{\pi_i(d_k)}\right){\Var\left(\widehat{\mu_{H}}(d_k)\right)}}$$ Under the asymptotic growth process hypothesized in section \[asymptot\], both variances will converge to zero, and thus the bias ratio will converge to zero. Practically speaking, the Hajek estimator can be computed with weighted least squares, with covariance adjustment through weighted least squares residualization. Variance estimation proceeds via Taylor linearization [@sarndal_etal92 172-176]. The linearized variance estimator can be computed by substituting the residuals, $u_i = y_i(d_k) - \widehat{\mu_{H}}(d_k)$, for the $y_i(d_k)$ terms in constructing the variance estimator given in expression . A naturalistic simulation with social network data ================================================== We use a naturalistic simulation to illustrate how our framework may be applied and also to study operating characteristics of the proposed estimators in a finite sample. 
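Before proceeding, the Hajek estimator of the preceding display reduces to a few lines; the sketch below (illustrative names) also makes visible its location invariance, the property the plain HT estimator lacks: shifting every outcome by a constant shifts the estimate by exactly that constant.

```python
import numpy as np

def hajek_mean(y, I_k, pi_k):
    """Hajek ratio estimator of mu(d_k): HT total over HT estimate of N."""
    w = I_k / pi_k
    return np.sum(w * y) / np.sum(w)
```

Because the denominator is the HT estimator of $N$, the estimate is always a convex combination of the observed outcomes in condition $d_k$.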
We estimate direct and indirect effects of an experiment with individuals linked in a complex, undirected social network. We use friendship network data from American high school classes collected through the National Longitudinal Study of Adolescent Health (Add Health).[^10] The richness of these data makes Add Health a canonical dataset for methodological research related to social networks, as with @bramoulle_etal2009_peer_effects, @chung_etal2008_latent_transition_analysis, @goel_salganik10_assessing_rds, @goodreau_etal2009_ergm, @goodreau2007_ergms, @handcock_etal2007_network_clustering, and @hunter_etal2008_social_network_models_fit. We simulate experiments in which a treatment, ${\mathbf{Z}}$, is randomly assigned without replacement and with uniform probability to $1/10$ of individuals in a high school network. Indirect effects are transmitted only within a subject’s high school. This simulated experiment resembles various studies of network persuasion campaigns [@chen_etal2010_diffusion; @aral_walker2011_viral; @paluck2011_peer_pressure]. We define the exposure mapping as a vector-valued function, $f({\mathbf{z}}, \theta_i)$, such that the parameter, $\theta_i$, equals subject $i$’s row in a network adjacency matrix (modified such that we have zeroes on the diagonal). The cross product, ${\mathbf{z}}'\theta_i$, counts the number of subject $i$’s peers assigned to treatment. 
We use a simple exposure mapping that captures direct and indirect effects of the treatment, with indirect effects being transmitted to a subject’s immediate peers: $$\begin{aligned} f({\mathbf{z}}, \theta_i) = \left(\begin{array}{c} d_{11} \\ d_{10} \\ d_{01} \\ d_{00}\end{array} \right)& =\left(\begin{array}{c} z_i{\mathbf{I}}({\mathbf{z}}'\theta_i>0)\\ z_i {\mathbf{I}}({\mathbf{z}}'\theta_i=0)\\ (1-z_i){\mathbf{I}}({\mathbf{z}}'\theta_i>0) \\ (1-z_i){\mathbf{I}}({\mathbf{z}}'\theta_i=0) \end{array} \right)\nonumber \\ & =\left( \begin{array}{c} \text{``direct + indirect'' exposure} \\ \text{``isolated direct'' exposure}\\ \text{``indirect'' exposure}\\ \text{``control''} \end{array} \right),\nonumber \end{aligned}$$ where each unit falls into exactly one of the four exposure conditions (and so $(\begin{array}{cccc} 1 & 1& 1& 1\end{array})f({\mathbf{z}}, \theta_i) = 1$ for all $i$). This experiment is repeated independently across the 144 high school classes included in Add Health, with an average class size of 626 students. To ensure that our effect estimates all refer to the same underlying population, we dropped subjects that reported zero friendship ties (see footnote \[fn:isolates\_problem\]). We chose this exposure mapping because of its parsimony; the analyst is free to choose more complex mappings. ![Illustration of a treatment assignment (left) and then treatment-induced exposures (right) for one of the high school classes in the study. Each dot is a student, and each line represents an undirected friendship tie.\[fig:network\]](simexamplenetwork.pdf){width="100.00000%"} Figure \[fig:network\] illustrates a treatment assignment and corresponding treatment-induced exposures under this mapping. The figure illustrates two key issues that our methods address. First is the connection between a unit’s underlying traits, in this case its network degree, and propensity to fall into one or another exposure condition. 
The second is the irregular clustering that occurs in exposure conditions. Such irregular clustering is precisely what one must address in deriving variance estimates and intervals for estimated effects. We use as our outcome a variable in the dataset that records the number of after-school activities in which each student participates. This variable defines the $y_i(d_{00})$ values—that is, potential outcomes under the “control” exposure. This makes our simulation naturalistic not only in the networks that define the interference patterns, but also in the outcome data. The variable exhibits a high degree of right skew, with mean 2.14, standard deviation 2.64, and 0, .25, .5, .75, and 1 quantiles of 0, 2, 3, and 33, respectively. We consider a simple “dilated effects” scenario [@rosenbaum1999_dilated_effects_sensitivity] where potential outcomes are such that $y_i(d_{11})= 2\times y_i(d_{00}), y_i(d_{10})= 1.5 \times y_i(d_{00}), y_i(d_{01})= 1.25\times y_i(d_{00})$. We run 500 simulated replications of the experiment, applying five estimators in each scenario: - The Horvitz-Thompson estimator for the causal effect given in expression , with the associated conservative variance estimator, given in expression ; - The Hajek ratio estimator given in expression , with the associated linearized variance estimator; - The weighted least squares (WLS) estimator given in expression , adjusting for network degree as the sole covariate, with the associated linearized variance estimator; - An ordinary least squares (OLS) estimator that regresses the outcome on indicator variables for the exposure conditions, adjusting for network degree as a covariate, with @mackinnon_white1985_hc2’s finite sample adjusted “HC2” heteroskedasticity consistent variance estimator; - A simple difference in sample means (DSM) for the exposure conditions, also with the HC2 estimator. 
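The simulation components just described — the four-condition exposure mapping and the dilated-effects potential outcomes — can be sketched in vectorized form. Here `A` stands for the zero-diagonal adjacency matrix; the function and variable names are illustrative.

```python
import numpy as np

def exposure_conditions(z, A):
    """4 x N indicator array for (d11, d10, d01, d00): own treatment
    crossed with whether any peer (nonzero entry of A's row) is treated."""
    peers = A @ z                          # z' theta_i: treated peers of i
    t = z.astype(bool)
    return np.stack([t & (peers > 0),      # "direct + indirect"
                     t & (peers == 0),     # "isolated direct"
                     ~t & (peers > 0),     # "indirect"
                     ~t & (peers == 0)]).astype(int)  # "control"

def dilated_outcomes(y00):
    """Dilated-effects potential outcomes: y(d11), y(d10), y(d01)
    scale y(d00) by 2, 1.5, and 1.25, respectively."""
    y00 = np.asarray(y00, dtype=float)
    return {"d11": 2.0 * y00, "d10": 1.5 * y00,
            "d01": 1.25 * y00, "d00": y00}
```

As required of an exposure mapping, each unit falls into exactly one condition for every assignment vector, so the columns of the indicator array sum to one.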
With respect to point estimates, the Horvitz-Thompson estimator is unbiased but possibly unstable, while the Hajek and WLS estimators are consistent and expected to be more stable. The DSM estimator is expected to be biased because it totally ignores relationships between exposure probabilities and outcomes. The OLS estimator controls for network degree, which will remove bias due to correlation between exposure probabilities and outcomes. However, OLS is known to be biased in its aggregation of unit-level heterogeneity in causal effects [@angrist_krueger99_handbook]. With respect to standard error estimates and confidence intervals, the variance estimators for the Horvitz-Thompson, Hajek, and WLS estimators are expected to be conservative though informative. The variance estimators for OLS and DSM may be anti-conservative because they ignore the clustering in exposure conditions. Table \[tab:simresults\_dilated\] shows results of the simulation study, which conform to expectations. The Horvitz-Thompson, Hajek, and WLS estimators exhibit no perceivable bias. The Horvitz-Thompson estimator exhibits higher variability than the Hajek and WLS estimators. The OLS estimator and DSM estimator are heavily biased when considered relative to the variability of the effect estimates. The bias in OLS is expected because unit-level causal effects, defined in terms of differences, are heterogeneous from unit to unit when underlying potential outcomes are based on dilated effects. Thus OLS will suffer from an aggregation bias in addition to any biases due to inadequate conditioning on network degree. The standard error estimates for the Horvitz-Thompson, Hajek, and WLS estimators are informative but conservative, resulting in empirical coverage rates that exceed nominal levels. The intervals based on the OLS and DSM variance estimators badly undercover, primarily due to the bias in the point estimates rather than understatement of variability. 
  ----------- ----------------------- ------- ------ ------ ------- ---------- ----------
                                                             Mean    95% CI     90% CI
  Estimator   Estimand                Bias    S.D.   RMSE    S.E.    Coverage   Coverage

  HT          $\tau(d_{01},d_{00})$   0.00    0.04   0.04    0.05    0.960      0.924
              $\tau(d_{10},d_{00})$   0.00    0.10   0.10    0.19    0.986      0.970
              $\tau(d_{11},d_{00})$   0.00    0.13   0.13    0.28    0.990      0.970

  Hajek       $\tau(d_{01},d_{00})$   0.00    0.03   0.03    0.03    0.968      0.916
              $\tau(d_{10},d_{00})$   0.00    0.07   0.07    0.13    0.992      0.970
              $\tau(d_{11},d_{00})$   0.00    0.12   0.12    0.25    0.986      0.970

  WLS         $\tau(d_{01},d_{00})$   0.00    0.03   0.03    0.03    0.970      0.928
              $\tau(d_{10},d_{00})$   0.00    0.07   0.07    0.12    0.992      0.968
              $\tau(d_{11},d_{00})$   0.00    0.11   0.11    0.25    0.988      0.950

  OLS         $\tau(d_{01},d_{00})$   -0.02   0.03   0.03    0.02    0.842      0.768
              $\tau(d_{10},d_{00})$   -0.08   0.06   0.10    0.07    0.706      0.576
              $\tau(d_{11},d_{00})$   0.12    0.09   0.15    0.09    0.660      0.530

  DSM         $\tau(d_{01},d_{00})$   0.42    0.02   0.42    0.02    0.000      0.000
              $\tau(d_{10},d_{00})$   -0.08   0.06   0.10    0.07    0.726      0.614
              $\tau(d_{11},d_{00})$   0.56    0.09   0.57    0.09    0.000      0.000
  ----------- ----------------------- ------- ------ ------ ------- ---------- ----------

  : Results from high school friends’ network simulated experiment[]{data-label="tab:simresults_dilated"}

HT = Horvitz-Thompson estimator with conservative variance estimator.\
Hajek = Hajek estimator with linearized variance estimator.\
WLS = Least squares weighted by exposure probabilities with covariate adjustment for network degree and linearized variance estimator.\
OLS = Ordinary least squares with covariate adjustment for network degree and heteroskedasticity consistent variance estimator.\
DSM = Simple difference in sample means with no covariate adjustment and heteroskedasticity consistent variance estimator.\
S.D. = Empirical standard deviation from simulation; RMSE = Root mean square error; S.E. 
= standard error estimate; CI = Normal approximation confidence interval. Illustrative applications ========================= Our focus has been on the case of estimating average unit-level causal effects of exposures when interference is present. While our analytical framework can be readily applied to other inferential targets, the case developed in this paper provides a principled basis for estimation any time one indirectly randomizes the assignment of units to exposure conditions, a situation that arises in a broad range of scenarios of substantive interest. The high school “network experiment” example above was one application. We illustrate two other types of potential applications here. Spatial spillover in an environmental protection experiment ----------------------------------------------------------- An interesting set of applications arises when the effects of experimental treatments have the potential to transmit over space or through networks, and treatments are allocated to point locations in the space or network. For example, consider an environmental protection experiment in which forest monitoring stations are positioned at fixed points around the perimeter of a protected forest.[^11] The goal is to determine an optimal allocation of monitoring stations so as to reduce risks (such as illegal cutting) sufficiently while not committing excessive resources. In this case, the units of analysis are segments of the forest, and exposure might be defined in terms of whether the segment centroid is very close, moderately close, or far from the nearest monitoring station. In most cases, there will be irregularities in the places where stations could be established as well as irregularities in the spacing and orientation of the segments. For this reason, some segments may be in close proximity to multiple potential stationing points, whereas others may be in close proximity to only a few. 
Suppose the research design randomly selects $M$ out of $N$ potential locations to receive a monitoring station. Then, a forest segment’s probability of being very close, moderately close, or far from a monitoring station will be determined by the combination of this random assignment (${\mathbf{Z}}$) and the segment’s location relative to the different potential stationing sites ($\theta_i$). Using the methods above, one could generate the set of all stationing possibilities, record the exposure profile of the segments for each of the stationing possibilities, and then empirically determine the generalized probability of exposures for each segment. Dynamic experiments ------------------- Dynamic experiments have time-varying treatment assignments. Exposure in this context could be defined in terms of a unit’s treatment history. A prominent example of a dynamic experiment is the “stepped-wedge” design, in which there are a fixed number of periods, and in each period, some proportion of non-treated subjects are permanently assigned to treatment for all future periods. Outcomes are observed for all subjects in each period, so that the unit of inference is the subject-period. For analyzing per-period effects, one would want to account for possible interference due to effects from a subject’s assignment in previous periods carrying over into the current period. For example, consider a stepped-wedge experiment with three periods. An exposure mapping in this case might define $\Delta_1 = \{(1,1,1),(0,1,1), (0,0,1), (0,0,0)\}$, indicating treatment initiated in periods 1, 2, 3, or never, respectively. Then, simple random assignment to each of these four exposure conditions provides for very straightforward identification and inference. Suppose, however, that there is good reason to believe that carry-over effects are likely to last only one period. 
The analyst then may use an alternative exposure mapping, $\Delta_2 = \{(1,1),(0,1), (0,0) \}$, to indicate two consecutive periods of exposure, only one period of exposure, and no exposure, respectively. If the experiment randomly assigned the histories enumerated in $\Delta_1$, then the probabilities of assignment to the conditions in $\Delta_2$ would vary over the $\Delta_1$ conditions. @brown_lilford2006_steppedwedge review applications of stepped-wedge designs in medical research, and @gerber_etal2011_texas is an application from political science. Uncertainty about exposure mappings =================================== Some readers may have concerns about how the methods proposed here rely on an exposure mapping. Does this not introduce arbitrary modeling assumptions to the analysis? The question is misguided: there is no escaping specification of exposure mappings for causal analysis. Consider the classical approach to inference under the Neyman-Rubin model. Here, analysts typically assume a very specific exposure mapping—namely, one that assumes no interference relative to unit-level treatment assignments. This typical Neyman-Rubin model is nested within more general exposure mappings that allow for some forms of interference, which are in turn nested within other exposure mappings that place fewer restrictions on the form of the interference. Our approach permits estimation and testing under an arbitrarily general exposure mapping. Nonetheless, two types of uncertainty may complicate application of the methods proposed here. First is uncertainty over the correct way to map treatments to exposures. We can express this as “uncertainty about $f(.)$”. Second is uncertainty over the attributes that one needs to apply $f(.)$. We can express this as “uncertainty about the $\theta_i$s”. 
With respect to uncertainty over $f(.)$, unless $|\Delta| = |\Omega| \times N$, the analyst may always estimate average potential outcomes under a less restrictive exposure mapping that allows for additional forms of interference, thus allowing for the enumeration of additional potential outcomes. Then the analyst may test for significant differences between the hypothesized average potential outcomes and those associated with a nested mapping. Rejection of the null hypothesis of no mean difference between potential outcomes provides support for the more complex exposure mapping. While issues of model specification may be unavoidable, the proposed framework allows for inference under arbitrarily flexible (and testable) assumptions on the exposure mapping. With respect to uncertainty about the $\theta_i$s, a concrete example comes from the network experiment simulation above. Suppose we do not know for sure the links between subjects. Such uncertainty could be formalized in terms of a model that defines a probability distribution over the domain of possible adjacency matrices. For example, @chandrasekhar_lewis2012_sampled_networks demonstrate methods for completing partially observed peer networks in analyzing a micro-finance field experiment in India. A practical implementation of this procedure would be a multiple imputation approach: impute $M$ random draws of the adjacency matrix from a random graph model, estimate causal effects on each, and then aggregate estimates with the usual multiple imputation combination formulas [@rubin87]. Conclusion ========== This paper proposes an analytical framework for causal inference under interference. The framework integrates (i) an experimental design that defines the probability distribution for treatments assigned, (ii) an exposure mapping that relates treatments assigned to exposures received, and (iii) an estimand chosen to make maximal use of an experimental design to answer questions of substantive interest. 
Using this framework, we develop methods for estimating average unit-level causal effects of exposures from a randomized experiment. Our approach combines the known randomization process with the analyst’s definition of treatment exposure, thus permitting inference under clear and defensible assumptions. Importantly, the union of the design of the experiment and the exposure mapping may imply unequal probabilities of exposure and forms of dependence between units that may not be obvious ex ante. We develop estimators based on results from the literature on unequal probability sampling rooted in the foundational insights of @horvitz_thompson. The estimators are derived from the known sampling distribution of the “direct” treatment, ${\mathbf{Z}}$, and provide a basis for unbiased effect estimation and conservative variance estimation. Wald-type intervals based on a normal approximation provide a reasonable reflection of large $N$ behavior when clustering of exposure indicator values is limited. Nonetheless, it is well known that Horvitz-Thompson-type estimators may be volatile in cases where selection probabilities vary greatly or exhibit strong inverse correlation with outcome values [@basu1971_elephants]. Thus, we provide refinements that allow for variance control via covariance adjustment and Hajek estimation. In addition, we provide a method of variance estimation based on hypotheses about the nature of causal effects, which may be preferred when design-based estimators are unstable. Our approach combines minimal assumptions about restrictions on potential outcomes with randomization-based estimators and may be characterized as design-consistent. The framework is readily applicable to deriving estimators for estimands other than the average unit-level effect of exposures.[^12] The framework developed here represents an alternative to parametric approaches that are often employed with little substantive justification. 
The framework and resulting methods greatly extend the reach of randomization-based estimation of causal effects. [^1]: Peter M. Aronow is Ph.D. Candidate, Department of Political Science, Yale University, 77 Prospect St., New Haven, CT 06520 (Email: peter.aronow@yale.edu). Cyrus Samii is Assistant Professor, Department of Politics, New York University, 19 West 4th St., New York, NY 10012 (Email: cds2083@nyu.edu). The authors are grateful for helpful feedback from Bernd Beber, Jake Bowers, Dean Eckles, Don Green, Kosuke Imai, Brian Karrer, Luke Keele, Winston Lin, Joel Middleton, Elizabeth Ogburn, Allison Sovey, Eric Tchetgen-Tchetgen, Teppei Yamamoto, and participants at the JSM 2012, 2012 Atlantic Causal Inference Conference, 2012 NYU-CESS Experimental Political Science Conference, NYU Development Economics Workshop, Princeton Political Methodology Research Seminar, 2011 Workshop on Information in Networks, Johns Hopkins University Causal Inference Group Seminar, and Yale Field Experiments Seminar. [^2]: The exposure mapping construction is functionally equivalent to the “effective treatments” function used by @manski2012_identification_social (denoted $c_j(.)$ by Manski). We find it helpful however to denote separately the unit-specific attributes, $\theta_i$, that feed into the exposure mapping, $f(.)$. As discussed below, uncertainty about either of these has quite distinct implications for how the analysis should proceed. [^3]: In practice, $|\Omega|$ may be so large that it is impractical to construct $\Omega$ to compute the ${\boldsymbol{\pi}}_i$s and the joint probability matrices exactly. One may nonetheless approximate the ${\boldsymbol{\pi}}_i$s and joint probabilities with arbitrary precision through replication [@fattorini06]. That is, produce $R$ random replicate ${\mathbf{z}}$s based on the randomization plan. 
From these $R$ replicates, we can construct an $N \times R$ indicator matrix, $\widehat{{\mathbf{I}}}_k$, for each of the $k=1,...,K$ exposure conditions. Then an estimator for ${\mathbf{I}}_k {\mathbf{P}}{\mathbf{I}}_k'$ is $\widehat{{\mathbf{I}}}_k\widehat{{\mathbf{I}}}'_k/R$, and similarly for ${\mathbf{I}}_k {\mathbf{P}}{\mathbf{I}}_l'$. The replication procedure would be equivalent to drawing a random sample without replacement from $\Omega$ with probabilities of selection equal to those which are defined in the randomization plan. As such, the resulting exposure probability and joint probability estimates would be unbiased. @chen_etal2010_diffusion apply a similar approach. [^4]: \[fn:hudgens\] The direct, indirect and overall effects of @hudgens_halloran08 are defined in this way using the construction of the “individual average potential outcome.” The hierarchical designs that they consider are specifically tailored to ensure that estimators for such effects are non-parametrically identified. While our focus is on estimation of unit-level causal effects that are defined for arbitrary designs, such design-specific estimators can certainly be derived and analyzed using the framework developed here. [^5]: Cases may arise when some units have zero probability of being in one or another exposure condition. Then, when computing the causal effect of exposure condition $k$ versus $l$, design-based principles would have one only ever include units for which probability of exposure to both $k$ and $l$ are non-zero. This implies that different causal effects may be estimated on different subpopulations. This needs to be kept in mind when interpreting results and does not alter the approach to estimation and inference otherwise.\[fn:isolates\_problem\] [^6]: This construction owes greatly to Joel Middleton. [^7]: We conjecture that asymptotic normality follows under weaker conditions on the asymptotic growth process that do not imply partial interference. 
For example, suppose potential outcomes and exposure probabilities are bounded and dependence between any units’ exposure indicators is non-zero only if the distance between units is below some threshold. Then if the finite population grows in a manner that continually expands the population space while also remaining within a finite dimensional manifold, consistency of the mean and variance estimators and asymptotic normality would follow by existing central limit theorems for bounded $m$-dependent series [@hoeffding_robbins1948; @sajjan2000_cls_lattices; @christofides_mavrikou2003_cls_multidimensional_m_dependence; @jenish_prucha2009_clt_spatial_interaction; @harvey_etal2010_asymptotics_3d_lattice]. [^8]: This allows for the possibility that $\mathbf{\xi_i}(d_k)$ is a random variable. The condition $\mathbf{\xi_i}(d_k) {\protect\mathpalette{\protect\independenT}{\perp}}D_i$ is also a sufficient condition for unbiasedness. @aronow_middleton11_unbiased provide greater discussion of conditions for unbiased effect estimation and conservative variance estimation. [^9]: For some common experimental designs, the least squares criterion will be optimal [@lin11_freedmans_critique], and weighting by $1/\pi_i(d_k)$ ensures that the regression proceeds on a sample representative of the population of potential outcomes. With additional details on ${\mathbf{I}}_k$ and $g(.)$, it is possible to estimate optimal parameter vectors [@sarndal_etal92 219-244], though such values will typically be close to those produced by the weighted least squares estimator (barring unusual and extreme forms of clustering). [^10]: Study description and data are at <http://www.cpc.unc.edu/projects/addhealth>. [^11]: The example is based on an actual evaluation in which the authors have been involved. [^12]: See section \[sec:exposure\_mapping\] as well as footnote \[fn:hudgens\].
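The replicate-based approximation of exposure probabilities described in footnote 3 can be sketched as follows. This is an illustrative Monte Carlo sketch, not the authors' code: `draw_assignment` and `exposure` are hypothetical stand-ins for a concrete randomization plan and exposure mapping $f(.)$.

```python
import numpy as np

def estimate_exposure_probs(draw_assignment, exposure, N, K, R=10_000, seed=0):
    """Approximate generalized exposure probabilities by replication.

    draw_assignment: callable(rng) returning a length-N assignment vector z
    exposure: callable(z, i) -> exposure condition index in {0, ..., K-1}
    Returns (pi, joint), where pi[i, k] estimates Pr(unit i in condition k)
    and joint[k][l] is the N x N matrix estimating I_k P I_l' via
    I_k_hat I_l_hat' / R, as in footnote 3.
    """
    rng = np.random.default_rng(seed)
    I_hat = np.zeros((K, N, R))          # one N x R indicator matrix per condition
    for r in range(R):
        z = draw_assignment(rng)
        for i in range(N):
            I_hat[exposure(z, i), i, r] = 1.0
    pi = I_hat.mean(axis=2).T            # N x K marginal exposure probabilities
    joint = [[I_hat[k] @ I_hat[l].T / R for l in range(K)] for k in range(K)]
    return pi, joint
```

Because each unit falls in exactly one condition per replicate, the rows of `pi` sum to one, and the diagonal of `joint[k][k]` reproduces `pi[:, k]` exactly.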
--- author: - 'Xue Ao, Pulak K. Ghosh, Yunyun Li[^1], Gerhard Schmid, Peter Hänggi' - Fabio Marchesoni title: Active Brownian motion in a narrow channel --- Introduction {#intro} ============ Rectification of Brownian motion in a narrow, periodically corrugated channel has been the focus of a concerted effort [@ChemPhysChem; @RMP2009; @Denisov] aimed at establishing net particle transport in the absence of external biases. To this purpose two basic ingredients are required: a spatial asymmetry of the channel and a time correlation of the perturbations, random or deterministic, applied to the diffusing particles. The ensuing spontaneous current is a manifestation of the so-called ratchet effect [@RMP2009]. Typically, demonstrations of the ratchet effect had recourse to external [*unbiased*]{} time-dependent drives (rocked and pulsated ratchets). Rectification induced by time-correlated, or colored, non-equilibrium fluctuations (thermal ratchets) is conceptually feasible, but has been, so far, of limited practical use. The idea in itself, however, is appealing: The diffusing particles would harvest kinetic energy directly from their non-equilibrium environment, without requiring any externally applied field at all, and transport would ensue as an [*autonomous*]{} symmetry-directed particle flow. ![(Color online) Levogyre Janus particle in a narrow channel: (a) noiseless particle with velocity ${\vec v}_0$ and torque frequency $\Omega>0$, Eq. (\[LE\]), moving along a circular arc of radius $R_\Omega$ (dashed line); (b) upside-down asymmetric channel compartment, Eq. (\[wx\]) with $\phi=0$ and $\epsilon=0.25$. In (b) the open boundary-trajectories and a closed circular trajectory of radius $R_\Omega$ are drawn for explanatory purposes (see Sec. 
\[upsidedown\] for actual simulation data).[]{data-label="F1"}](MSlongFig1.pdf){width="0.95\linewidth"} To enhance rectification of time correlated diffusion in a modulated channel with zero drives, we recently proposed [@MSshort] to make use of a special type of diffusive tracers, namely of active, or self-propelled, Brownian particles. Self-propulsion is the ability of most living organisms to move, in the absence of external drives, thanks to an “engine” of their own [@Purcell]. Optimizing self-propulsion of micro- and nano-particles (artificial microswimmers) is a growing topic of today’s nanotechnology [@Schweitzer; @Rama; @Ebeling]. Recently, a new type of artificial microswimmers has been synthesized [@Granick; @Chen], where self-propulsion takes advantage of the local gradients asymmetric particles can generate in the presence of an external energy source (self-phoretic effects). Such particles, called Janus particles (JP), consist of two distinct “faces”, only one of which is chemically or physically active. Thanks to their functional asymmetry, these active particles can induce either concentration gradients (self-diffusiophoresis) by catalyzing a chemical reaction on their active surface [@Paxton1; @Gibbs; @Bechinger], or thermal gradients (self-thermophoresis), e.g., by inhomogeneous light absorption [@Sano] or magnetic excitation [@ASCNano2013JM]. A self-propulsion mechanism acts on a pointlike particle by means of a force and, possibly, a torque. In the absence of a torque, the line of motion is directed parallel to the self-phoretic force and the JP propels itself along a straight line, until it changes direction, due to gradient fluctuations [@Sen_propulsion] or random collisions against other particles or geometric boundaries [@Vicsek]. This is the highly stylized case mostly studied in the recent literature, where, for simplicity, the JPs are assumed to be rotationally symmetric around their line of motion (symmetric JPs).
In the presence of an additional torque the self-phoretic force and the line of motion are no longer aligned and the microswimmer tends to execute circular orbits [@Lowen; @Julicher]. Active chiral motion has long been known in biology [@Brokaw; @Julicher; @Volpe] and more recently observed in asymmetrically propelled micro- and nano-rods: A torque can be intrinsic to the propulsion mechanism, due to the presence of geometrical asymmetries in the particle fabrication, engineered or accidental (asymmetric JP’s) [@LowenKumm; @Ibele; @composite], or externally applied, for instance, by laser irradiation [@Sano] or hydrodynamic fields [@Stark]. In the finite damping regime, the Lorentz force exerted by a magnetic field on a charged active Brownian particle also amounts to an external torque [@Schimansky]; however, the effects of such magnetic torques vanish in the overdamped limit [@Kline]. In this paper we discuss the interplay of chiral propulsion and channel spatial asymmetry in controlling autonomous rectification. In Sec. \[model\] we recall that active Brownian motion is time correlated [*per se*]{}, which means that transport control of, say, a JP with assigned self-propulsion properties, can only be achieved by suitably tailoring the channel boundaries. In Sec. \[rectificaton\] we briefly review autonomous rectification of JP’s with and without external torque. In both cases the net on-plane drive exerted on the particle is null (unbiased diffusion). We distinguish between two categories of channels, left-right asymmetric channels, Sec. \[leftright\], where even nonchiral JP’s can be rectified, and upside-down asymmetric channels, Sec. \[upsidedown\], where particle ratcheting requires a nonzero torque. Based on numerical evidence, in Sec. \[asymmetry\] we establish the minimal asymmetry conditions that make rectification of active Brownian particles possible. Finally, in Sec.
\[diffusion\] we present new results on active diffusion in channels of various geometries. In particular, we show that diffusion of JP’s is not controlled by the channel geometry as much as by the angular asymmetry of the self-propulsion mechanism. The diffusion of nonchiral and chiral JP’s is investigated in Secs. \[diffnonchiral\] and \[diffchiral\], respectively. Ideas for future work are discussed in the concluding Sec. \[conclusions\]. ![(Color online) Logarithmic contour plots of the stationary particle density, $P(x,y)$, in a triangular compartment of size $x_L=y_L=1$ and pore width $\Delta=0.1$; the compartment acts as a funnel of angular width $2\alpha$. Other simulation parameters are: $D_\theta =0.006$, $v_0=1$ and (a) $D_0=0.03$, (b) $D_0=0$ (no thermal noise); sliding b.c. have been adopted throughout. Both densities are singular at the corners and along the side walls, where graphics resolution effects are apparent. \[F2\]](MSlongFig2a.pdf "fig:"){width="45.00000%"} ![(Color online) Logarithmic contour plots of the stationary particle density, $P(x,y)$, in a triangular compartment of size $x_L=y_L=1$ and pore width $\Delta=0.1$; the compartment acts as a funnel of angular width $2\alpha$. Other simulation parameters are: $D_\theta =0.006$, $v_0=1$ and (a) $D_0=0.03$, (b) $D_0=0$ (no thermal noise); sliding b.c. have been adopted throughout. Both densities are singular at the corners and along the side walls, where graphics resolution effects are apparent. \[F2\]](MSlongFig2b.pdf "fig:"){width="45.00000%"} Model ===== In order to avoid unessential complications, we restrict this report to the case of 2D channels and pointlike artificial microswimmers of the JP type [@Granick]. A chiral JP gets a continuous push from the suspension fluid, which in the overdamped regime amounts to a rotating self-propulsion velocity ${\vec v_0}$ with constant modulus $v_0$ and angular velocity $\Omega$.
Additionally, the self-propulsion direction varies randomly with time constant $\tau_\theta$, under the combined action of thermal noise and orientational fluctuations intrinsic to the self-propulsion mechanism. Accordingly, the microswimmer mean free self-propulsion path approximates a circular arc of radius $R_\Omega=v_0/|\Omega|$ and length $l_\theta=v_0\tau_\theta$ [@Lowen]. Chiral effects are prominent when $R_\Omega \lesssim l_\theta$, or equivalently, $|\Omega|\tau_{\theta} \gtrsim 1$ (strong chirality regime). The bulk dynamics of such an overdamped chiral JP obeys the Langevin equations (LE) [@Lowen] $$\begin{aligned} \label{LE} \dot x &=& v_0\cos \theta +\xi_x(t) \\ \nonumber \dot y &=& v_0\sin \theta +\xi_y(t) \\ \nonumber \dot \theta &=&\Omega +\xi_\theta(t),\end{aligned}$$ where ${\bf r}=(x,y)$ are the coordinates of a particle subject to the Gaussian noises $\xi_{i}(t)$, with $\langle \xi_{i}(t)\rangle=0$ and $\langle \xi_{i}(t)\xi_{j}(0)\rangle=2D_0\delta_{ij}\delta (t)$ for $i=x,y$, modeling the equilibrium thermal fluctuations in the suspension fluid. The channel is directed along the $x$ axis, the self-propulsion velocity is oriented at an angle $\theta$ with respect to it and the sign of $\Omega$ is chosen so as to coincide respectively with the positive (levogyre) and negative (dextrogyre) chirality of the swimmer, see Fig. \[F1\](a). The orientational fluctuations of the propulsion velocity are modeled by the Gaussian noise $\xi_\theta(t)$ with $\langle \xi_{\theta}(t)\rangle=0$ and $\langle \xi_{\theta}(t)\xi_{\theta}(0)\rangle=2D_{\theta}\delta(t)$, where the noise strength $D_\theta$ is the relaxation rate of the self-propulsion velocity, $D_{\theta}=2/\tau_{\theta}$, see Sec. \[diffnonchiral\] for more details. In this work the JP’s were assumed to be pointlike as we intended to focus on the causes of autonomous transport and its control. 
However, numerical and experimental evidence clearly shows that the dynamical parameters of a real JP in the bulk, i.e., its self-propulsion speed, friction coefficient, thermal and rotational diffusion coefficients [@finitesize] and effective shape, all depend on its size as well as on its shape [@Spagnolie]. These effects can be accounted for, at least qualitatively, by an appropriate choice of the free parameters introduced in our model Eqs. (\[LE\]). The simplifications introduced here are not limited to the dimensionality of the channel or the size of the particle. All noise sources in Eq. (\[LE\]) have been treated as independently tunable, although, strictly speaking, thermal and orientational fluctuations may be statistically correlated (see, e.g., [@Volpe]). Moreover, we ignored hydrodynamic effects, which not only favor clustering in dense mixtures of JP’s [@Ripoll; @Buttinoni], but may even cause their capture by the channel walls [@Takagi]. However, we made sure that the parameters used in our simulations are experimentally accessible, as apparent on expressing times in seconds and lengths in microns (see Refs. [@Bechinger; @Volpe] for a comparison). When confined to a channel directed along the $x$ axis, the particle transverse coordinate, $y$, is bounded by the wall functions $w_{\pm}(x)$, $w_{-}(x)\leq y \leq w_{+}(x)$. All wall geometries considered below are periodic with compartment length $x_L$, namely $w_{\pm}(x+x_L)=w_{\pm}(x)$. The channel compartments are connected by pores of width $\Delta$, much narrower than their maximum cross-section. Simulating a constrained JP requires defining its collisional dynamics at the boundaries. For the translational velocity $\vec{\dot r}$ we assumed elastic reflection.
Regarding the coordinate $\theta$ we considered two possibilities:\ [*(a) frictionless collisions, $\theta$ unchanged.*]{} The active particle slides along the walls for an average time of the order of $\tau_\theta$, until the $\theta (t)$ fluctuations redirect it toward the interior of the compartment. For simplicity, all simulation results presented in this report have been obtained for sliding b.c. The panels of Fig. \[F2\] show how the stationary particle probability density $P(x,y)$ accumulates along the boundaries; this effect is even stronger in the noiseless case, $D_0=0$;\ [*(b) rotation induced by a tangential friction, $\theta$ randomized.*]{} These b.c. cause the particle to diffuse away from the boundary, which, as discussed below, tends to weaken the rectification effect. Note that, as pointed out in Refs. [@inertia], should one assume elastic boundary reflection for both $\vec{\dot r}$ and $\vec{v}_0$, then the self-propelled motion of a JP would coincide with an ordinary Brownian motion with finite damping constant, $\gamma=2/\tau_\theta$, and self-diffusion constant $D_s$ defined in Sec. \[diffnonchiral\]. Being an equilibrium random process, it could not be rectified, no matter what the spatial asymmetry of the channel. We further stress that on modeling the boundary conditions we heavily relied on the pointlike particle assumption to neglect (i) the dependence of the self-propulsion mechanism on the particle distance from the walls; (ii) the hydrodynamic interactions between particle and walls [@Spagnolie]; and (iii) the ensuing particle accumulation against the walls [@Takagi]. Finally, Eqs. (\[LE\]) have been numerically integrated by using a standard Milstein algorithm [@MSshort] with a very short time step, $10^{-5} - 10^{-7}$, to ensure numerical stability. As initial conditions we have assumed that at $t=0$ the particle is uniformly distributed with random orientation in a channel compartment located between $x=0$ and $x= x_L$.
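Since all noise terms in Eqs. (\[LE\]) are additive, the Milstein scheme coincides here with the simpler Euler-Maruyama scheme. The following is a minimal illustrative sketch of the bulk (wall-free) dynamics, not the authors' production code; parameter defaults are illustrative values of the kind quoted in the figure captions.

```python
import numpy as np

def simulate_bulk_jp(v0=1.0, Omega=0.0, D0=0.03, D_theta=0.006,
                     dt=1e-4, n_steps=100_000, seed=0):
    """Euler-Maruyama integration of the bulk Langevin equations (LE).

    For additive noise, as here, Milstein and Euler-Maruyama coincide.
    Returns the (x, y) trajectory of a single chiral JP with no walls.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    y = np.zeros(n_steps)
    theta = rng.uniform(0.0, 2.0 * np.pi)      # random initial orientation
    s_xy = np.sqrt(2.0 * D0 * dt)              # translational noise amplitude
    s_th = np.sqrt(2.0 * D_theta * dt)         # orientational noise amplitude
    for n in range(1, n_steps):
        x[n] = x[n-1] + v0 * np.cos(theta) * dt + s_xy * rng.standard_normal()
        y[n] = y[n-1] + v0 * np.sin(theta) * dt + s_xy * rng.standard_normal()
        theta += Omega * dt + s_th * rng.standard_normal()
    return x, y
```

For a noiseless levogyre particle ($D_0=D_\theta=0$, $\Omega>0$) the trajectory closes onto a circle of radius $R_\Omega=v_0/|\Omega|$, as sketched in Fig. \[F1\](a).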
The total observation time was set to $10^4 \times \tau_\theta$, or $10^4 \times \Omega^{-1}$, or $10^4$, whichever is greater, so that effects due to the initial conditions and transient processes can be neglected. The results reported in the figures shown here have been obtained by ensemble averaging over $10^4 - 10^6$ trajectories, depending on the observable. ![(Color online) Rectification of a nonchiral JP in a triangular channel with compartment geometry as in Fig. \[F2\]. (a) rectification power, $\eta$ (solid symbols) vs $l_\theta=v_0 \tau_\theta$ for different $D_0$ and $v_0$; (b) $\eta$ vs. $D_0$ for $v_0=1$ and different $\tau_\theta$. The particle flow is oriented to the right, i.e., $\bar v>0$. \[F3\]](MSlongFig3a.pdf "fig:"){width="45.00000%"} ![(Color online) Rectification of a nonchiral JP in a triangular channel with compartment geometry as in Fig. \[F2\]. (a) rectification power, $\eta$ (solid symbols) vs $l_\theta=v_0 \tau_\theta$ for different $D_0$ and $v_0$; (b) $\eta$ vs. $D_0$ for $v_0=1$ and different $\tau_\theta$. The particle flow is oriented to the right, i.e., $\bar v>0$. \[F3\]](MSlongFig3b.pdf "fig:"){width="45.00000%"} Autonomous currents {#rectificaton} =================== We discuss first autonomous rectification of a JP in two different classes of 2D asymmetric channels. To characterize its drift in the absence of external biases, we introduce the [*rectification power*]{} $$\label{RP} \eta={|\bar v|}/v_0,$$ where the net drift velocity of the particle, $\bar v = \lim_{t\to \infty}\langle x(t)-x(0) \rangle/t$, is expressed in units of its self-propulsion velocity, $v_0$. Left-right asymmetric channels {#leftright} ------------------------------ The LE system of Eq. (\[LE\]) was first numerically simulated for a JP confined to a directed channel made of triangular compartments with dimensions $x_L \times y_L$ and pore size $\Delta$, see Fig. \[F2\].
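The rectification power of Eq. (\[RP\]) can be estimated from an ensemble of trajectories produced by any integrator of Eqs. (\[LE\]); the sketch below assumes hypothetical trajectory endpoints as inputs and is not the authors' code.

```python
import numpy as np

def rectification_power(x_final, x_initial, t_obs, v0):
    """Estimate the rectification power eta = |v_bar| / v0 of Eq. (RP).

    x_initial, x_final: arrays of particle x positions at times 0 and t_obs,
    one entry per trajectory; v_bar is the ensemble-averaged drift
    <x(t) - x(0)> / t for t large compared with all transients.
    """
    v_bar = np.mean(x_final - x_initial) / t_obs
    return abs(v_bar) / v0
```

The observation time `t_obs` must exceed the longest relaxation time in the problem (here $\tau_\theta$ or $\Omega^{-1}$) for the finite-time average to approximate the $t\to\infty$ limit.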
The compartment aspect ratio was kept constant, $r=x_L/y_L=1$, and by rescaling the coordinates $x$ and $y$ by an appropriate factor $\kappa$, $x \to x/\kappa$ and $y\to y/\kappa$, its dimensions can always be conveniently rescaled to $x_L=y_L=1$ (as done in Figs. \[F2\]-\[F5\] and \[F11\]). Analogously, by time rescaling, $t\to v_0 t/\kappa$, one can work with self-propulsion velocities of constant modulus, $v_0=1$. In conclusion, the output of our numerical analysis only depends on four characteristic lengths: the pore width, $\Delta$; the thermal length, $D_0/v_0$; the self-propulsion length, $l_{\theta}$; and the chiral radius, $R_\Omega$, all to be compared with the compartment dimensions set to one. Throughout our simulation work we assumed narrow pores and low thermal noise, so that the first two lengths play no key role in the discussion of our results \[see Eq. (\[furth\])\]. Equivalently, when appropriate, instead of the ratios $l_\theta/x_L$ and $R_\Omega/x_L$ one can make use of the dimensionless quantities $\tau_\theta/\tau_x$ and $|\Omega| \tau_\theta$, where $\tau_\theta$ is the self-propulsion time, $\tau_x=x_L/v_0$ a characteristic compartment crossing time, and $\Omega$ the chiral angular frequency. Note that, in view of their orientation, the triangular compartments of Fig. \[F2\] tend to funnel the particle to the right (easy-flow direction) with ${\bar v}>0$. The magnitude of this effect depends on the modulus and not on the sign of $\Omega$. ![(Color online) Rectification of a nonchiral JP with $v_0=1$ in asymmetric channels with different geometries. A typical compartment is sketched in the inset: $x_L$, $y_L$, and $\Delta$ are as in Fig. \[F2\], but the corners are shifted by $x_0$. Rectification power $\eta$ vs. $x_0$ for $D_0=0.03$, and different $D_\theta=2v_0/l_\theta$. The particle orientation is as in Fig. \[F3\] for all $x_0=0$.
\[F4\]](MSlongFig4.pdf){width="65.00000%"} [*Non-chiral Janus particles, $\Omega=0$.*]{} This is the case first reported in Ref. [@MSshort]. When the Janus self-propulsion length $l_\theta$ is larger than the compartment dimensions, the particle undergoes several collisions during the time interval $\tau_\theta$ (Knudsen regime [@Brenner]). As the Janus dynamics gets more sensitive to the compartment asymmetry, the curves ${\bar v}(\tau_\theta)$ increase monotonically with $\tau_\theta$, until they level off to an asymptotic upper bound [@tumble], see Fig. \[F3\](a). Most importantly, such asymptotic $\eta$ values are much larger than the rectification power of the thermal ratchets investigated in the earlier literature [@RMP2009]. The impact of thermal noise on the rectification of a JP can be summarized as follows. In Ref. [@MSshort] we showed that for $l_\theta \ll x_L$ the self-propulsion velocity changes orientation before the particle slides along a compartment side and through the pore. Thermal noise, by pulling the particle towards the central lane of the channel, acts as a lubricant. On the contrary, for $l_\theta \gg x_L$, thermal fluctuations help the particle overcome the blocking action of the compartment corners, see Fig. \[F2\], thus suppressing rectification. In the intermediate regime, where $\eta$ is the strongest, these two opposite actions of thermal noise coexist, as illustrated in Fig. \[F3\](b), thus defining an optimal thermal noise level. Finally, in view of practical applications, we tested the robustness of JP rectification in channels with variable degrees of asymmetry: (i) In Fig. \[F4\] we modified the compartment geometry by shifting the corner coordinate, $x_0$, in the range $[0,x_L/2)$ (see inset). One immediately sees that $\eta$ decreases by only a factor of 2 for $x_0$ up to 0.2; (ii) In our previous report [@MSshort] we studied the consequence of rescaling the $x$ and $y$ compartment dimensions by a factor $\kappa$.
We concluded that rectification is rather insensitive to $\kappa$ in the Knudsen regime, $l_\theta>\kappa x_L$. For exceedingly large $\kappa$, the intensity of the rescaled translational (thermal) noise, $D_0/\kappa v_0$, is suppressed with respect to the intensity of the rescaled propulsion noise, $\kappa D_\theta/v_0$, which means that the role of thermal fluctuations becomes negligible on increasing $\kappa$. As a consequence, in this limit $\eta$ approaches a constant, that is, ${\bar v}$ is inversely proportional to $\kappa$; (iii) In Ref. [@MSshort], we also reported that for narrow pores, $\Delta \ll y_L$, $\eta$ decreases slightly with decreasing $\Delta$. The explanation is simple. As the pore shrinks, the compartment sidewalls grow longer and the particle takes more time to slide along them up to the exit pore. On the other hand, the negative flow is blocked mostly at the compartment corners, regardless of the actual pore size. This result indicates that our numerical analysis can be readily extended to more realistic Janus swimmers of finite radius; (iv) On running our integration code for $\theta$ randomizing b.c. (not shown), we obtained substantially smaller $\eta$ values. This is a consequence of the fact that the persistence of the self-propulsion velocity is suppressed by the particle collisions against the walls. This effect gets more pronounced with increasing $D_0$, as thermal noise causes more wall collisions and, thus, stronger $\theta$ randomization at the boundaries [@MSshort]. ![(Color online) Rectification of a chiral JP in a triangular channel: (a) $\eta$ vs. $\tau_{\theta}$ for different $\Omega$; the same data sets are plotted in a semi-logarithmic (main panel) and bilogarithmic graph (inset); (b) $\eta$ vs. $\Omega$ for different $\tau_{\theta}$. Here, $\tau_x\equiv x_L/v_0$, $D_0=0.03$ and the compartment geometry is as in Fig. \[F2\]. Inset: $\eta$ vs. $\tau_{\theta}$ for large $\Omega$, see main panel (b).
\[F5\]](MSlongFig5a.pdf "fig:"){width="45.00000%"} ![(Color online) Rectification of a chiral JP in a triangular channel: (a) $\eta$ vs. $\tau_{\theta}$ for different $\Omega$; the same data sets are plotted in a semi-logarithmic (main panel) and bilogarithmic graph (inset); (b) $\eta$ vs. $\Omega$ for different $\tau_{\theta}$. Here, $\tau_x\equiv x_L/v_0$, $D_0=0.03$ and the compartment geometry is as in Fig. \[F2\]. Inset: $\eta$ vs. $\tau_{\theta}$ for large $\Omega$, see main panel (b). \[F5\]](MSlongFig5b.pdf "fig:"){width="45.00000%"} [*Chiral Janus particles, $\Omega>0$.*]{} An angular bias $\Omega$ affects the autonomous ratchet effect discussed so far only in the strong chirality regime. In Fig. \[F5\] we present numerical simulation results for levogyre JP’s, $\Omega>0$, diffusing in the triangular channel of Fig. \[F2\]. As expected, on increasing $\tau_\theta$ the rectification power approaches a horizontal asymptote [@tumble], see Fig. \[F5\](a). However, such an asymptote gets lower at higher $\Omega$, until the curves $\eta=\bar v/v_0$ and, therefore, $\bar v$ versus $\tau_\theta$ develop a distinct maximum, see insets of panels (a) and (b). This change marks the crossover between the regimes of weak and strong chirality. Indeed, the chiral nature of the JP dynamics can be fully appreciated when the autocorrelation time of its self-propulsion velocity, $\tau_\theta/2$, is of the order of the reciprocal of the cyclotron frequency, $\Omega$, namely for $\tau_\theta \simeq 2/\Omega$. This simple argument closely locates the maxima of $\bar v$ in both insets of Fig. \[F5\]. The dependence of the rectification power on $\Omega$ is illustrated in panel (b) of Fig. \[F5\]. Independently of the level of translational noise, $D_0$, $\eta$ is largely insensitive to $\Omega$ up to a certain value, after which it suddenly drops to zero.
Such a threshold value, termed here $\Omega_M$, can be estimated by noticing that on increasing $\Omega$ the chiral radius $R_\Omega=v_0/|\Omega|$ decreases, until the microswimmer can perform a full circular orbit inside the compartment, without touching the channel walls (actually a logarithmic spiral with exponentially small steps [@Lowen]). In the noiseless limit, this happens for $2R_\Omega \simeq x_L$, that is, $\Omega_M \simeq 2v_0/x_L$. Of course, this argument holds under the additional condition that $\Omega_M \tau_\theta>1$, to ensure a sufficiently long self-propulsion time. This estimate of $\Omega_M$ is in close agreement with the data of Fig. \[F5\](b) and will be used to explain the peaks developed by $\eta$ in Fig. \[F7\](b) of Sec. \[upsidedown\]. Another remarkable result of this section is reported in the inset of Fig. \[F5\](b): At very high $\Omega$, the horizontal asymptote of $\bar v$ changes sign. This is an instance of the current reversal phenomenon one often encounters in the ratchet literature [@RMP2009]. When the chiral radius grows so large that the chirality of the JP does not much affect its pore crossing, $R_\Omega \gtrsim \Delta$, self-propulsion generates an effective translational (colored) noise, with time constant $\tau_\theta/2$, which adds to the white noise of intensity $D_0$, as detailed in Secs. \[diffnonchiral\] and \[diffchiral\]. Under these circumstances, Eq. (\[LE\]) thus describes a thermal ratchet. On the contrary, the rectification mechanism of non-chiral particles described above is rather reminiscent of a rocked ratchet (this ratchet classification is reviewed in Ref. [@RMP2009]). For a given left-right asymmetric ratchet potential, rocked and thermal ratchets tend to generate opposite rectification currents. For this reason the current reversals depicted in the inset of Fig. \[F5\] are not totally unexpected. However, the magnitude of the currents involved is probably too small to be of practical use.
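The characteristic chiral scales used in this argument can be collected in a small helper; this is only the back-of-envelope estimate of the text ($\Omega_M \simeq 2v_0/x_L$ in the noiseless limit), not a substitute for simulation, and the function name is ours.

```python
def chirality_summary(v0, Omega, tau_theta, x_L):
    """Collect the characteristic chiral quantities used in the text.

    Returns the chiral radius R_Omega = v0/|Omega|, the self-propulsion
    length l_theta = v0*tau_theta, the chirality parameter |Omega|*tau_theta
    (strong chirality for values >~ 1), and the noiseless trapping threshold
    Omega_M ~ 2*v0/x_L, beyond which the swimmer can orbit inside a
    compartment without touching the walls (provided Omega_M*tau_theta > 1).
    """
    R_Omega = v0 / abs(Omega)
    l_theta = v0 * tau_theta
    Omega_M = 2.0 * v0 / x_L
    return {
        "R_Omega": R_Omega,
        "l_theta": l_theta,
        "chirality": abs(Omega) * tau_theta,
        "Omega_M": Omega_M,
        "trapped": abs(Omega) >= Omega_M and Omega_M * tau_theta > 1.0,
    }
```

For instance, with the compartment size set to one and $v_0=1$, any $|\Omega|\geq 2$ at long $\tau_\theta$ falls in the trapping regime where $\eta$ drops.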
![(Color online) Rectification of a levogyre JP in the channel of Eq. (\[wx\]): $\bar v/v_0$ vs. $\epsilon$ for $\Delta=0.12$, $D_0=0.01$ and different $\Omega$. We recall that for a right-left symmetric channel, $\bar v(-\Omega)=-\bar v(\Omega)$. Other simulation parameters are: $D_\theta=0.3$, $v_0=1$, and $x_L=y_L=1$. Inset: $\eta$ vs. $\phi$ for $\Omega=1$ and $\epsilon$ as in the legend. All other simulation parameters are as in the main panel. \[F6\]](MSlongFig6){width="65.00000%"} Upside-down asymmetric channels {#upsidedown} ------------------------------- The model Langevin equations (\[LE\]) have been integrated in Ref. [@SoftMatter] to study the net flow of a levogyre microswimmer with $\Omega > 0$, confined to the periodic channel of boundaries, $$\begin{aligned} \label{wx} w_+(x) &=& \frac{1}{2} \left [\Delta +\epsilon(y_L-\Delta)\sin^2\left(\frac{\pi}{x_L}x +\frac{\phi}{2} \right )\right ], \nonumber \\ w_-(x) &=& -\frac{1}{2} \left [\Delta +(y_L-\Delta)\sin^2\left(\frac{\pi}{x_L}x\right) \right ],\end{aligned}$$ where $x_L$ quantifies the compartment length, $\Delta$ the pore size, and $y_L$ the channel width. Two additional tunable geometrical parameters have been introduced in $w_+(x)$, namely, $\phi$ and $\epsilon$ with $\epsilon \geq 0$, respectively, to shift the position and tune the amplitude of the upper wall with respect to the lower one (a few examples of the corresponding channel compartments are drawn in Fig. \[F8\]). When confined to a channel compartment of size smaller than its self-propulsion length, $l_\theta>x_L$, a chiral microswimmer tends to align its velocity parallel to the walls [@Bechinger; @LowenKumm], thus generating two boundary flows oriented in opposite directions, see Fig. \[F1\](b). For $\Omega >0$, the JP is levogyre, which means that the upper and lower boundary flows are oriented, respectively, to the left and right.
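The boundaries of Eq. (\[wx\]) translate directly into code; the sketch below is a plain transcription, with illustrative default parameter values of the kind used in the figures ($\Delta=0.12$, $\epsilon=0.25$, $x_L=y_L=1$), not the authors' simulation code.

```python
import numpy as np

def w_plus(x, x_L=1.0, y_L=1.0, Delta=0.12, eps=0.25, phi=0.0):
    """Upper wall of Eq. (wx): amplitude scaled by eps, phase shifted by phi/2."""
    return 0.5 * (Delta + eps * (y_L - Delta)
                  * np.sin(np.pi * x / x_L + phi / 2.0) ** 2)

def w_minus(x, x_L=1.0, y_L=1.0, Delta=0.12):
    """Lower wall of Eq. (wx)."""
    return -0.5 * (Delta + (y_L - Delta) * np.sin(np.pi * x / x_L) ** 2)
```

At the pores ($x=0$ mod $x_L$, for $\phi=0$) the cross-section $w_+-w_-$ reduces to the pore width $\Delta$, and both walls are periodic with period $x_L$, as required.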
The net flow along the channel axis, $\bar v$, takes the sign of the flow along the least corrugated boundary, that is $w_+(x)$ for $\epsilon<1$ and $w_-(x)$ for $\epsilon>1$, and vanishes for $\epsilon=1$. This mechanism explains the current reversals shown in Fig. \[F6\]. The dependence of $\eta$ on the fluctuation parameters $D_0$ and $D_\theta$ is illustrated in Fig. \[F7\], adapted from [@SoftMatter], and in particular in panel (a). The rectification power is proportional to $(\Omega \tau_\theta)^{2}$ in the weak chirality regime, i.e., inversely proportional to $D_\theta^2$ \[see inset of Fig. \[F7\](a)\]. In the opposite limit of strong chirality, $\Omega \tau_\theta \gg 1$, $\eta$ approaches a maximum, which depends on $D_0$, the chiral radius $R_\Omega$ and the compartment geometry. The curves of $|\bar v|$ versus $D_0$ increase (decrease monotonically) at high (low) frequency; they all eventually decay to zero for $D_0 \gg v_0|\Omega|$, no matter what the value of $\Omega$. For an optimal value of $\Omega$ and low noise levels, the particle tends to accumulate against the walls [@Fily; @MSshort], with tangential velocities close to $\pm v_0$. This is the condition of strong chirality, $|\Omega| \tau_\theta \gg 1$, and low noise, $v_0|\Omega|/D_0 \gg 1$, where strong autonomous rectification was first reported [@SoftMatter]. Indeed, as the chiral radius exceeds the compartment dimensions, $R_\Omega \gg x_L$, a strongly chiral swimmer spends more time drifting between the upper and lower walls than sliding along them, thus weakening the boundary flows. On the other hand, when the chiral radius becomes too small, $R_\Omega \ll x_L$, diffusion occurs mostly away from the boundaries. As a consequence, in both $R_\Omega$ limits the torque exerted by $\Omega$ becomes ineffective and $\bar v$ tends to vanish. Of course, in the weak chirality regime, $|\Omega| \tau_\theta \ll 1$, chirality effects are negligible altogether.
![(Color online) Optimization of the rectification of a levogyre JP with $v_0=1$ in the channel of Eq. (\[wx\]) with $\epsilon=0.25$, $\phi=0$, and $x_L=y_L=1$: (a) role of noise: $\eta$ vs. $D_0$ for $\Delta = 0.08$, $D_\theta=0.1$, and different $\Omega$ (see legends). Inset: $\eta$ vs. $D_\theta$ for $D_0=0.05$, $\Delta=0.08$ and different $\Omega$; (b) role of frequency: $\eta$ vs. $\Omega$ for $\Delta=0.12$, $\tau_\theta=10$, $D_0=0.1$, $\tau_x=x_L/v_0$, and different $D_0$ (see legend). \[F7\]](MSlongFig7a.pdf "fig:"){width="49.00000%"} ![(Color online) Optimization of the rectification of a levogyre JP with $v_0=1$ in the channel of Eq. (\[wx\]) with $\epsilon=0.25$, $\phi=0$, and $x_L=y_L=1$: (a) role of noise: $\eta$ vs. $D_0$ for $\Delta = 0.08$, $D_\theta=0.1$, and different $\Omega$ (see legends). Inset: $\eta$ vs. $D_\theta$ for $D_0=0.05$, $\Delta=0.08$ and different $\Omega$; (b) role of frequency: $\eta$ vs. $\Omega$ for $\Delta=0.12$, $\tau_\theta=10$, $D_0=0.1$, $\tau_x=x_L/v_0$, and different $D_0$ (see legend). \[F7\]](MSlongFig7b.pdf "fig:"){width="49.00000%"} This behavior is confirmed by the rectification peaks of the curves $\eta$ versus $\Omega$ displayed in Fig. \[F7\](b). On decreasing $D_0$ the $\eta$ peak shifts toward a limiting value, where it is the most pronounced. This is the $\Omega$ threshold value, $\Omega_M$, introduced in the previous Sec. \[leftright\]. Indeed, for $\Omega \simeq \Omega_M$ we know already that the microswimmer can perform a circular orbit, without being captured by the boundary layers. Moreover, by closing its orbit inside a compartment, the swimmer gets trapped there, which explains the sudden drop of $\eta$ for $\Omega \geq \Omega_M$. The peak in the curves $\bar v$ versus $\epsilon$ for $\epsilon>1$ and constant $\Omega$ reported in Fig. \[F6\] can be interpreted in the same way [@SoftMatter]. 
We have already seen that thermal noise disrupts the boundary flows by kicking the particle inside the compartment. Moreover, it also perturbs its circular orbits by making them spiral faster and their centers diffuse. That is why, on increasing $D_0$, the $\Omega$-peak tends to shift to higher $\Omega$ (i.e., smaller $R_\Omega$) and diminish in height, as shown in Fig. \[F7\](b) and anticipated in Fig. \[F7\](a). ![(Color online) Compartments of periodically corrugated channels with different symmetry properties: (a) centro- or supersymmetric; (b) left-right symmetric; (c) asymmetric; and (d) upside-down symmetric. The channel walls, $w_{\pm}(x)$, in (a)-(c) are given by Eqs. (\[wx\]) for $\epsilon$ and $\phi$ as reported; (d) example of upside-down symmetric, left-right asymmetric channel with $w_{\pm}(x)$ biharmonic sinusoidal functions with components of period $x_L$ and $x_L/2$.[]{data-label="F8"}](MSlongFig8.pdf){width="80.00000%"} Channel asymmetry requirements {#asymmetry} ------------------------------ In our interpretation the rectification process is governed by the boundary flows, and therefore by the spatial symmetry of the channel walls [@Denisov]. This picture is consistent with rigorous symmetry arguments. First of all, we notice that the 2D channel compartments in Fig. \[F8\] can be asymmetric under inversion of either the $y$ axis ($y\to -y$, upside-down asymmetric), panel (b), or of the $x$ axis ($x\to -x$, right-left asymmetric), panel (d), or both, panels (a,c). Most remarkably, compartment (a), while both upside-down and right-left asymmetric, is invariant under the combined inversion of the $x$ and $y$ axes, namely, it is centro-symmetric. On combining the symmetry properties of the model dynamics, Eq. (\[LE\]), with those of the channel compartment, we arrive at a few interesting conclusions: \(i) With reference to the right-left symmetric compartment (b), we notice that Eqs. 
(\[LE\]) are invariant under the transformations $x\to -x$ and $\theta \to \pi - \theta$ or, equivalently, $\Omega \to -\Omega$, which leave the channel also invariant; hence $\bar v(-\Omega)= -\bar v (\Omega)$. As a consequence, nonchiral JP’s cannot be rectified in compartment (b), as, clearly, $\bar v(0)=0$. This last property holds in a wider sense, as discussed in item (v) below. \(ii) Analogously, for an upside-down symmetric channel one concludes that $\bar v(\Omega)= \bar v (-\Omega)$. The consequence of this last symmetry relation is that for the triangular channel of Sec. \[leftright\] $\bar v$ can only be a function of $\Omega^2$, which explains the flat branch of the $\eta$ curves with $\Omega < \Omega_M$, plotted in Fig. \[F5\](b). \(iii) By shifting the channel walls $w_\pm(x)$ in Eq. (\[wx\]) by a length $\phi$, one can easily prove the additional symmetry relations $\bar v(\phi, \Omega)=\bar v(-\phi,\Omega)$ for right-left symmetric compartments \[compare compartments (b) and (c) in Fig. \[F8\]\] and $\bar v(\phi, \Omega)=\bar v(-\phi,-\Omega)$ for upside-down symmetric compartments. As displayed in the inset of Fig. \[F6\], $\bar v$ is weakly modulated by a relative shift of the walls, $\phi$, and so are the boundary flows. \(iv) For a centro-symmetric compartment, (a), both parity relations hold simultaneously; hence, $\bar v(\Omega)=0$. Numerical simulations for the channel of Eq. (\[wx\]) with $\epsilon=1$ and any $\phi$ support this conclusion. \(v) From items (i) and (ii) one is led to conclude that $\Omega \neq 0$ is a necessary condition for JP rectification in the right-left symmetric channels but not in the upside-down symmetric ones. We observed that this condition applies, in fact, to a wider class of compartments, which includes compartments (a)-(c) of Fig. \[F8\]. For the sake of argument, we assume first that self-propulsion is switched off, i.e., $v_0=0$. 
As apparent from the Fick-Jacobs reduction technique [@ChemPhysChem; @Fick; @Jacobs; @Zwanzig], diffusion along a smooth directed channel depends on the modulating function $\sigma(x)=w_+(x)-w_-(x)$. If $w_{\pm}(x)$ are sinusoidal functions of period $x_L$, so is their difference, $\sigma(x)$. Accordingly, the reduced longitudinal particle dynamics would be mirror symmetric. As self-propulsion is switched on, the diffusing particle is subject to an additional time-correlated noise, which does break the time symmetry. However, due to the mirror symmetry of the reduced particle dynamics along the channel axis, this does not suffice to ensure rectification of a JP with $\Omega=0$. Put differently, a breach of the right-left symmetry of the channel compartment does not suffice to rectify nonchiral JP’s. Diffusion of active microswimmers {#diffusion} ================================= As a measure of the efficiency of the autonomous rectification mechanism we now analyze the dispersion of a JP along the channel axis [@Machura]. This is an important issue experimentalists address when trying to demonstrate rectification: Indeed, drift currents, no matter how weak, can be detected over an affordable observation time only if the relevant dispersion is sufficiently small. To this purpose we compute the transport diffusivity of a JP in a channel defined as $$\label{diffeff} D_{\rm ch}=\lim_{t\to \infty}[\langle x^2(t)\rangle - \langle x(t) \rangle^2]/(2t).$$ Diffusion of nonchiral microswimmers, $\Omega=0$ {#diffnonchiral} ------------------------------------------------ A full analytical investigation of the model of Eq. (\[LE\]) is out of the question even in the bulk and for nonchiral JP’s, $\Omega=0$. 
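In practice, the limit in Eq. (\[diffeff\]) is evaluated numerically by fitting the late-time linear growth of the variance of $x(t)$ over an ensemble of simulated trajectories. A minimal estimator sketch (the array layout and fit window are our own choices for illustration):

```python
import numpy as np

def channel_diffusivity(x_traj, dt, fit_fraction=0.5):
    """Estimate D_ch = lim_{t->inf} [<x^2> - <x>^2]/(2t) from an ensemble of
    trajectories, x_traj with shape (n_particles, n_steps)."""
    t = np.arange(1, x_traj.shape[1] + 1) * dt
    var = x_traj.var(axis=0)                    # <x^2>(t) - <x>(t)^2
    i0 = int((1.0 - fit_fraction) * len(t))     # keep only the late-time tail
    slope = np.polyfit(t[i0:], var[i0:], 1)[0]  # linear growth: var = 2*D*t
    return slope / 2.0
```

For free Brownian motion the estimator recovers the input $D_0$, which provides a convenient consistency check before adding channel walls.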
However, on noticing that [@MSshort; @Marchetti] $$\langle \cos \theta (t) \cos \theta (0) \rangle=\langle \sin \theta (t) \sin \theta (0) \rangle=(1/2)e^{-|t|D_\theta},$$ it follows immediately that the self-propulsion velocity components $v_{0,x}=v_0 \cos \theta$ and $v_{0,y}=v_0 \sin \theta$ can be regarded as the components of a 2D non-Gaussian noise ${\vec \xi_{s}}(t)$ with zero mean, $\langle \xi_{s,i}(t)\rangle=0$, and finite-time correlation functions, $\langle \xi_{s,i}(t)\xi_{s,j}(0)\rangle=2(D_s/\tau_\theta)\delta_{ij}e^{-2|t|/\tau_\theta}$, where $D_s=v_0^2\tau_\theta/4$ and $\tau_\theta=2/D_\theta$. In the bulk the first two LE of Eq. (\[LE\]) are statistically independent and, therefore, a nonchiral particle diffuses according to Fürth’s law $$\label{furth} \langle \Delta \vec{r}(t)^2\rangle = 4 (D_0+v_0^2\tau_\theta/4)t +(v_0^2\tau_\theta^2/2)(e^{-2t/\tau_\theta}-1),$$ with $\Delta \vec{r}(t) \equiv \vec{r}(t)-\vec{r}(0)$. Accordingly, the approximate equality $\langle \Delta \vec{r}(t)^2\rangle = 4Dt$, holding for $t \gg \tau_\theta$, defines the particle bulk diffusivity, $$\label{diff1} D=D_0+D_s\equiv D_0+{v_0^2\tau_\theta}/{4}.$$ Of course, if the JP diffuses in a non-corrugated channel, say, with $w_+(x)=w_-(x)=y_L/2$, then $D_{\rm ch}=D$, as confirmed by the simulation data of Fig. \[F9\] (dashed curves). When confined to a sinusoidal channel, the particle diffusivity is suppressed by the geometric constrictions represented by the pores, see Figs. \[F9\] for nonchiral and \[F10\] for chiral JP’s. In the absence of self-propulsion, $v_0=0$, the bulk diffusivity of a [*non-chiral*]{} JP, Eq. (\[diff1\]), is $D=D_0$ and the channel diffusivity can be written as $D_{\rm ch}=\kappa_0 D_0$, with $\kappa_0$ a well studied function of $\Delta$ and $D_0$ [@Schmid; @Bosi]. 
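Fürth's law, Eq. (\[furth\]), can be checked against a direct Euler–Maruyama integration of the bulk dynamics; the sketch below assumes the standard nonchiral active-Brownian form of Eq. (\[LE\]), $\dot x = v_0\cos\theta + \xi_x$, $\dot y = v_0\sin\theta + \xi_y$, $\dot\theta = \xi_\theta$:

```python
import numpy as np

def furth_msd(t, v0, D0, D_theta):
    """Eq. (furth), with tau_theta = 2/D_theta."""
    tau = 2.0 / D_theta
    return 4.0 * (D0 + v0**2 * tau / 4.0) * t \
        + (v0**2 * tau**2 / 2.0) * (np.exp(-2.0 * t / tau) - 1.0)

def simulate_bulk_msd(n, steps, dt, v0, D0, D_theta, seed=1):
    """Mean-square displacement of n free, nonchiral JP's (Omega = 0)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.zeros((n, 2))
    msd = np.empty(steps)
    for k in range(steps):
        # Euler-Maruyama step: self-propulsion plus thermal noise, then
        # free rotational diffusion of the heading angle
        r += dt * v0 * np.column_stack((np.cos(theta), np.sin(theta)))
        r += np.sqrt(2.0 * D0 * dt) * rng.standard_normal((n, 2))
        theta += np.sqrt(2.0 * D_theta * dt) * rng.standard_normal(n)
        msd[k] = (r**2).sum(axis=1).mean()
    return msd
```

At times a few $\tau_\theta$ long the simulated mean-square displacement settles onto the linear asymptote $4Dt$ of Eq. (\[diff1\]).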
In the opposite limit, $v_0 \to \infty$, the process is governed by self-diffusion, that is, $D\simeq D_s$ and, accordingly, $D_{\rm ch}=\kappa_s D_s$, with $\kappa_s$ much less sensitive to the pore constriction than $\kappa_0$. Both asymptotic regimes of $D_{\rm ch}$ are illustrated in Fig. \[F9\] for different values of $\Delta$ and $D_\theta$. This picture does not depend much on the actual compartment geometry. This conclusion is corroborated by our simulations for different channel geometries. The channel diffusivity of a nonchiral JP in the triangular channel of Sec. \[leftright\] is plotted in the inset of Fig. \[F11\]. Here, too, in the limit of short self-propulsion times, $\tau_\theta \ll \tau_x$, $D$ approaches $D_0$, while we know that $\eta$ drops to zero, see Fig. \[F3\](a). This regime amounts to an ordinary unbiased Brownian motion occurring in a triangular channel, for which $D_{\rm ch}/D_0 = \kappa_0$, with $\kappa_0$ a function of the compartment geometry (for the compartment of Fig. \[F2\], $\kappa_0 \simeq 0.55$ [@Savelev]). In the opposite limit, $l_\theta \gg x_L$, as $\eta$ approaches its horizontal asymptote of Fig. \[F3\](a), $D$ grows like $D_s$, but the ratio $D_{\rm ch}/D$ decreases toward a ($D_0$-dependent) lower bound, $\kappa_s\simeq 0.16$. ![(Color online) Diffusion of a levogyre JP in a fully symmetric sinusoidal channel with $w_\pm(x)$ given in Eq. (\[wx\]): $D_{\rm ch}$ vs. $l_\theta$ for different $D_\theta$ and $\Delta$. Note that $\Delta=1$ represents the limiting case of a straight channel of width $\Delta=y_L=1$ and here $D_{\rm ch}=D$, see Eq. (\[diff1\]). Other simulation parameters are $x_L=y_L=1$, $\epsilon=1$, and $D_0=0.01$. Inset: $D_{\rm ch}/D$ vs $l_\theta/x_L$ for different $D_\theta$ (see legend) and $D$ defined in Eq. (\[diff1\]). All remaining simulation parameters are as in Fig. \[F10\], with the numerical estimates $\kappa_0=0.48$ and $\kappa_s=0.23$. 
We checked that our data approach the predicted power law, $D_{\rm ch} \propto l_{\theta}^2$, more closely on further increasing $l_\theta$. \[F9\]](MSlongFig9.pdf){width="60.00000%"} We give next a simple phenomenological argument to compare the large-$l_\theta$ behaviors of $\eta$ and $D_{\rm ch}/D_0$ in the triangular channel of Fig. \[F2\]. The extension of this argument to the case of the sinusoidal channel considered in Figs. \[F9\] and \[F10\] is given in Ref. [@thesis]. Consistent with the estimate of the bulk active diffusion, $D_s=v_0^2\tau_\theta/4$, we can assume that a channeled JP propels itself to the right and to the left, alternately, with time constant $\tau_\theta/2$. When confined to a channel compartment, its effective self-propulsion velocities to the right/left are, respectively, $v_{R,L}=\mu_{R,L}v_0$ with mobility constants $\mu_{R,L}$, which depend on the compartment geometry. In terms of the right/left mobility, the rectification power of Eq. (\[RP\]) reads $\eta=(\mu_{R}-\mu_{L})/2$, and the corresponding channel diffusivity [@Borromeo1; @Borromeo2] $D_{\rm ch}/D_0=(\mu_{R}+\mu_{L})^2/2$. In the absence of thermal noise, $D_0=0$, for the triangular compartment of Fig. \[F2\] one can make use of the approximations $\mu_L=0$ and $\mu_R=\cos^2 \alpha/\sqrt{2}$ [@thesis]. The ensuing estimates for $\eta$ and $D_{\rm ch}$ at zero noise, $\eta^{(0)}=0.28$ and $\kappa_s=D_{\rm ch}/D_0=0.16$, reproduce fairly closely the relevant asymptotes of Figs. \[F3\] and \[F11\]. It should be remarked that, as discussed in Sec. \[leftright\], for $D_0>0$ thermal noise tends to suppress $\eta$, $\eta<\eta^{(0)}$, but increase $D_{\rm ch}$, $D_{\rm ch}>D_{\rm ch}^{(0)}$. ![(Color online) Diffusion of a levogyre JP in the sinusoidal channel of Eq. (\[wx\]): $D_{\rm ch}/D_0$ vs. $\tau_\theta$ for different $\Omega$. Here, $\tau_x\equiv x_L/v_0$, $v_0=1$, $D_0=0.05$, $\Delta=0.08$ and $\epsilon=1$. 
The dashed line represents the asymptotic linear power-law of Eq.(\[diff1\]).[]{data-label="F10"}](MSlongFig10.pdf){width="60.00000%"} ![(Color online) Diffusion of a levogyre JP in the triangular channel of Fig. \[F2\]: $D_{\rm ch}/D_0$ vs. $\Omega$ for different $D_\theta$. Here, $\tau_x\equiv x_L/v_0$, $D_0=0.03$ and the remaining simulation parameters are as in Fig. \[F2\]. The diffusion of a nonchiral JP, $\Omega=0$, is shown in the inset for different $D_0$: $D_{\rm ch}/D$ vs. $\tau_\theta$ with $D$ defined in Eq. (\[diff1\]).[]{data-label="F11"}](MSlongFig11.pdf){width="60.00000%"} Diffusion of chiral microswimmers, $\Omega \neq 0$ {#diffchiral} -------------------------------------------------- The transport diffusivity of [*chiral*]{} JP’s is illustrated in Figs. \[F10\] and \[F11\]. The dependence of $D_{\rm ch}$ on $\Omega$ well summarizes the different chiral regimes discussed in Sec. \[rectificaton\]. We pointed out that chiral effects are observable only if the self-propulsion time constant $\tau_\theta$ is long enough, that is $|\Omega|\tau_\theta \gg 1$ or $R_\Omega \ll l_\theta$. On the other hand, when the chiral radius $R_\Omega$ grows smaller than the compartment dimensions, $R_\Omega \ll x_L$, or $|\Omega|\tau_x \gg 1$, chirality suppresses active transport. In general, chiral self-propulsion effects are appreciable for $\tau_\theta \gg \tau_x$ [@MSshort]. In addition, we remark here that the $\Omega$-dependence of the bulk diffusivity of a chiral particle can be obtained from Eq. (\[furth\]) by applying to our model the approach of Refs. [@Taylor; @Kur], namely $$\label{diff2} D(\Omega)=D_0+\frac{v_0^2/\tau_\theta}{(2/\tau_\theta)^2+\Omega^2},$$ where $D(0)$ coincides with $D$ in Eq. (\[diff1\]) [@thesis]. Similarly to the case of the nonchiral JP’s discussed in the previous section, the confining action of the channel corrugations tends to suppress the particle diffusivity in the channel, $D_{\rm ch}(\Omega)$. 
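The structure of Eq. (\[diff2\]) is easily verified: at $\Omega=0$ it reduces to Eq. (\[diff1\]), and at fixed $\Omega$ its active part, regarded as a function of $\tau_\theta$, peaks at $|\Omega|\tau_\theta=2$ with value $D_s/2$. A short numerical check:

```python
import numpy as np

def bulk_diffusivity(omega, v0, D0, tau_theta):
    """Eq. (diff2): bulk diffusivity of a chiral swimmer."""
    return D0 + (v0**2 / tau_theta) / ((2.0 / tau_theta)**2 + omega**2)
```

The peak condition $|\Omega|\tau_\theta=2$ is what underlies item (iii) of the discussion below Eq. (\[diff2\]).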
Here too the symmetry of the walls plays no major role. For this reason we analyze the effects of chirality on active diffusion by discussing simulation data for a sinusoidal (Fig. \[F10\]) and triangular channel (Fig. \[F11\]) on the same footing. With these premises the main features of the curves $D_{\rm ch}$ versus $\tau_\theta$ in a sinusoidal channel at constant $\Omega$, Fig. \[F10\], are readily explained: (i) The curve $\Omega = 0$ reproduces the situation of Fig. \[F9\] with $D_{\rm ch}$ growing linearly with $\tau_\theta$; (ii) For $D_0 \ll D_s$ the channel diffusivity is proportional to $D(\Omega)$, that is $D_{\rm ch}(\Omega) = \kappa_s D(\Omega)$, as for the nonchiral JP’s; (iii) Moreover, the corresponding maxima occur for $|\Omega| \tau_\theta = 2$, see Eq. (\[diff2\]), with $D_{\rm ch}^{\rm max}=D(2/\tau_\theta)\simeq \kappa_s D_s/2$; (iv) For finite $\Omega$ the active diffusivity in the channel is suppressed both for $\tau_\theta \to 0$ and $\tau_\theta \to \infty$. Accordingly, $D_{\rm ch} (\Omega) \to \kappa_0 D_0$, where $\kappa_0$ has been defined in Sec. \[diffnonchiral\]. The $\Omega$ dependence of $D_{\rm ch}$ in the triangular channel of Fig. \[F2\] at constant $\tau_\theta$ is also consistent with the above interpretation of chirality effects on channel diffusivity. In addition, the data sets of Fig. \[F11\] show that: (v) In the regime of weak chirality, $|\Omega|\tau_\theta \ll 1$, the $\Omega$ dependence of $D_{\rm ch}$ is negligible, as it was for $\eta$ in Fig. \[F5\](b). $D_{\rm ch}$ starts decreasing appreciably only for $|\Omega|\tau_\theta \gtrsim 2$, in coincidence with the $\eta$ maxima displayed in the insets of Fig. \[F5\]; (vi) In the regime of strong chirality, $|\Omega|\tau_\theta \gg 1$, $D(\Omega) \to D_0$, so that $D_{\rm ch} \simeq \kappa D_0$ with $\kappa \simeq 0.55$ (see discussion of Fig. \[F9\]); (vii) Finally, small diffusivity peaks emerge also for $|\Omega|\tau_\theta \gg 1$. 
They are centered around $\Omega_M$ and correspond to the sudden drop in the rectification power \[Fig. \[F5\](b), main panel\] that occurs when $R_\Omega$ grows shorter than $x_L$. Conclusions =========== We numerically simulated the transport of artificial active microswimmers diffusing along a narrow periodically corrugated channel. Key transport quantifiers, like rectification power and diffusivity, strongly depend on the particle self-propulsion mechanism and the channel compartment geometry. Applications of such a control technique are within the reach of today’s technology. Specialized microfluidic circuits can be designed, for instance, to guide chiral microswimmers to a designated target. The same technique can be utilized to fabricate monodisperse chiral microswimmers (presently a challenging technological task). By the same token, microswimmers capable of inverting chirality upon binding to a load can operate as chiral shuttles along a suitably corrugated channel even in the absence of gradients of any kind. The model analyzed here should be regarded as a stepping stone for more challenging generalizations and sophisticated comparisons with ongoing experimental work. Among the issues one should address next we mention: (i) [*diffusion gradients.*]{} Either the channel profile or the local inhomogeneities responsible for self-propulsion can be graded so as to generate an $x$-dependent channel transport diffusion coefficient, $D_{\rm ch}(x)$, which adds to the ratchet effect discussed in Sec. \[rectificaton\]; (ii) [*hydrodynamic effects.*]{} We ignored the role of the suspension fluid flowing around the moving microswimmer. An accurate account of microfluidic effects is likely to selectively impact the particle boundary flows along a corrugated channel wall as well as the translocation of finite size JP’s through a narrow pore; (iii) [*wall interactions.*]{} The sliding b.c. 
implemented in our simulation code are known to reproduce rather closely certain experimental conditions, but are by no means guaranteed in all setups under investigation. Particle translocation through narrow constrictions may be extremely sensitive to the particle-wall interactions, which thus affect both active rectification and diffusion in corrugated channels. Acknowledgements {#acknowledgements .unnumbered} ================ X.A. has been supported by the grant Equal Opportunity for Women in Research and Teaching of the Augsburg University. P.H. and G.S. acknowledge support from the cluster of excellence Nanosystems Initiative Munich (NIM). Y.L. was supported by the NSF China under grants No. 11347216 and 11334007, and by Tongji University under grant No. 2013KJ025. F.M. thanks the Alexander von Humboldt Stiftung for a Research Award. All authors thank Riken’s RICC for computational resources. For a review see: P. S. Burada, P. Hänggi, F. Marchesoni, G. Schmid, and P. Talkner, ChemPhysChem [**10**]{}, 45 (2009). P. Hänggi and F. Marchesoni, Rev. Mod. Phys. **81**, 387 (2009). S. Denisov, S. Flach, and P. Hänggi, Phys. Rep. **538**, 77 (2014). P. K. Ghosh, V. R. Misko, F. Marchesoni, and F. Nori, Phys. Rev. Lett. **110**, 268301 (2013). E. M. Purcell, Am. J. Phys. **45**, 3 (1977). F. Schweitzer, *Brownian Agents and Active Particles* (Springer, Berlin, 2003). (a) S. Ramaswamy, Annu. Rev. Condens. Matter Phys. **1**, 323 (2010); (b) T. Vicsek and A. Zafeiris, Phys. Rep. **517**, 71 (2012). P. Romanczuk, M. Bär, W. Ebeling, B. Lindner, and L. Schimansky-Geier, Eur. Phys. J. Special Topics **202**, 1 (2012). S. Jiang and S. Granick (Eds.), [*Janus Particle Synthesis, Self-Assembly and Applications*]{} (RSC Publishing, Cambridge, 2012). A. Walther and A. H. E. Müller, Chem. Rev. **113**, 5194 (2013). W. F. Paxton, S. Sundararajan, T. E. Mallouk, and A. Sen, Angew. Chem. Int. Ed. **45**, 5420 (2006). J. G. Gibbs and Y.-P. Zhao, Appl. Phys. Lett. 
**94**, 163104 (2009); J. R. Howse, R. A. L. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, R. Golestanian, Phys. Rev. Lett. [**99**]{}, 048102 (2007). G. Volpe, I. Buttinoni, D. Vogt, H.-J. Kümmerer, and C. Bechinger, Soft Matter **7**, 8810 (2011). H. R. Jiang, N. Yoshinaga, and M. Sano, Phys. Rev. Lett. **105**, 268302 (2010). L. Baraban, R. Streubel, D. Makarov, L. Han, D. Karnaushenko, O. G. Schmidt, and G. Cuniberti, ACS Nano **7**, 1360 (2013). see, e.g., Y. Hong, D. Velegol, N. Chaturvedi, and A. Sen, Phys. Chem. Chem. Phys. [**12**]{}, 1823 (2010). A. Búzás, L. Kelemen, A. Mathesz, L. Oroszi, G. Vizsnyiczai, T. Vicsek, and P. Ormos, Appl. Phys. Lett. [**101**]{}, 041111 (2012). S. van Teeffelen and H. Löwen, Phys. Rev. E [**78**]{}, 020101 (2008). B. M. Friedrich and F. Jülicher, Phys. Rev. Lett. [**103**]{}, 068102 (2009). C. J. Brokaw, J. Exp. Biol. [**35**]{}, 197 (1958); J. Cell. Comp. Physiol. [**54**]{}, 95 (1959). M. Mijalkov and G. Volpe, Soft Matter [**9**]{}, 6376 (2013). F. Kümmel, B. ten Hagen, R. Wittkowski, I. Buttinoni, R. Eichhorn, G. Volpe, H. Löwen, and C. Bechinger, Phys. Rev. Lett. [**110**]{}, 198302 (2013). A. Boymelgreen, G. Yossifon, S. Park, and T. Miloh, Phys. Rev. E [**89**]{}, 011003(R) (2014). A. Sen, M. Ibele, Y. Hong, and D. Velegol, Faraday Discuss. [**143**]{}, 15 (2009). A. Zöttl and H. Stark, Phys. Rev. Lett. [**108**]{}, 218104 (2012). P. K. Radtke and L. Schimansky-Geier, Phys. Rev. E, [**85**]{}, 051110(R) (2012). T. R. Kline, W. F. Paxton, T. E. Mallouk, and A. Sen, Angew. Chem. Int. Ed. [**44**]{}, 744 (2005). B. ten Hagen, S. van Teeffelen and H. Löwen, J. Phys.: Condens. Matter [**23**]{}, 194119 (2011). S. E. Spagnolie and E. Lauga, J. Fluid Mech. [**700**]{}, 105 (2012). M. Ripoll, P. Holmqvist, R. G. Winkler, G. Gompper, J. K. G. Dhont, and M. P. Lettinga, Phys. Rev. Lett. [**101**]{}, 168302 (2008). I. Buttinoni, J. Bialkè, F. Kümmel, H. Löwen, C. Bechinger, and T. Speck, Phys. Rev. Lett. 
[**110**]{}, 238301 (2013). D. Takagi, J. Palacci, A. B. Braunschweig, M. J. Shelley, and J. Zhang, Soft Matter [**10**]{}, 1784 (2014). P. K. Ghosh, P. Hänggi, F. Marchesoni, F. Nori, and G. Schmid, Europhys. Lett. [**98**]{}, 50002 (2012); Phys. Rev. E [**86**]{}, 021112 (2012). H. Brenner and D. A. Edwards, [*Macrotransport Processes*]{} (Butterworth-Heinemann, New York, 1993). P. K. Ghosh, P. Hänggi, F. Marchesoni, and F. Nori, Phys. Rev. E [**89**]{}, 062115 (2014). Y. Li, P. K. Ghosh, F. Marchesoni and B. Li, submitted (2014). Y. Fily, A. Baskaran, and M. F. Hagan, arXiv:1402.5583 \[cond-mat.soft\]. A. Fick, Ann. Phys. Chem. [**94**]{}, 59 (1855). M. H. Jacobs, [*Diffusion processes*]{} (Springer, New York, 1967). R. Zwanzig, J. Phys. Chem. [**96**]{}, 3926 (1992). L. Machura, M. Kostur, P. Talkner, J. Luczka, F. Marchesoni, and P. Hänggi, Phys. Rev. E [**70**]{}, 061105 (2004). Y. Fily and M. C. Marchetti, Phys. Rev. Lett. [**108**]{}, 235702 (2012). P. S. Burada, G. Schmid, D. Reguera, J. M. Rubi, and P. Hänggi, Phys. Rev. E [**75**]{}, 051111 (2007). L. Bosi, P. K. Ghosh, and F. Marchesoni, J. Chem. Phys. [**137**]{}, 174110 (2012). F. Marchesoni and S. Savel’ev, Phys. Rev. E [**80**]{}, 011120 (2009). Details are given in the PhD thesis of Xue Ao (Augsburg University, in preparation). M. Borromeo and F. Marchesoni, Chem. Phys. [**375**]{}, 536 (2010). M. Borromeo, F. Marchesoni, and P. K. Ghosh, J. Chem. Phys. [**134**]{}, 051101 (2011). J. B. Taylor, Phys. Rev. Lett. [**6**]{}, 262 (1961). B. Kurşunoǧlu, Phys. Rev. [**132**]{}, 21 (1963).
---
abstract: 'The magnetic anisotropy of thin ($\sim 200$ nm) and thick ($\sim 2$ $\mu$m) films and of polycrystalline (diameters $\sim 60$ nm) powders of the Prussian blue analogue Rb$_{0.7}$Ni$_{4.0}$\[Cr(CN)$_6$\]$_{2.9} \cdot n$H$_2$O, a ferromagnetic material with $T_c \sim 70$ K, have been investigated by magnetization, ESR at 50 GHz and 116 GHz, and variable-temperature x-ray diffraction (XRD). The origin of the anisotropic magnetic response cannot be attributed to the direct influence of the solid support, but the film growth protocol that preserves an organized two-dimensional film is important. In addition, the anisotropy does not arise from an anisotropic g-tensor nor from magneto-lattice variations above and below $T_c$. By considering effects due to magnetic domains and demagnetization factors, the analysis provides reasonable descriptions of the low and high field data, thereby identifying the origin of the magnetic anisotropy.'
author:
- 'D. M. Pajerowski'
- 'J. E. Gardner'
- 'M. J. Andrus'
- 'S. Datta'
- 'A. Gomez'
- 'S. W. Kycia'
- 'S. Hill'
- 'D. R. Talham'
- 'M. W. Meisel'
title: Magnetic anisotropy in thin films of Prussian blue analogues
---

Introduction
============

There is an increasing demand for novel architectures that afford the possibility of spin polarized electron transport, a field known as spintronics.[@Wolf] A key element involves control of the magnetic anisotropy in ferromagnetic films and nanostructures. Accordingly, the ability to manipulate the underlying magnetic states of the spin polarizers is desirable. 
In addition to traditional solid-state materials, molecule-based magnetic systems are being investigated.[@Bogani; @Camarero; @Moritomo] The discovery of large and persistent photoinduced changes in the magnetization in some examples of cyanometallate coordination polymers makes them attractive materials to consider.[@Sato; @Paj-jacs-com] Herein, studies of the magnetic anisotropy of thin and thick films along with standard powder-like samples of bimetallic Prussian blue analogues, A$_j$M$^{\prime}_k$\[M(CN)$_6$\]$_{\ell} \cdot n$H$_2$O, where A is an alkali ion and M$^{\prime}$ and M are transition metal ions,[@Dunbar; @Verdaguer] are reported. Previously, the anisotropic response of the persistent photoinduced magnetism of thin films of Rb-Co-Fe (referring to A-M$^{\prime}$-M) Prussian blue analogues was discovered[@Park1] and subsequently studied systematically.[@Park2; @Frye; @Park-thesis; @Frye-thesis; @Gardner-thesis; @Pajerowski-thesis] The motivation to understand the origins of this anisotropic phenomenon is amplified by the ability to control the magnetization of Prussian blue analogues by photo-irradiation[@Sato; @Bleuzen1; @Paj-jacs-com] or pressure.[@Zentkova] However, the magnetic response of the photo-controllable A-Co-Fe system is complicated by the multiple stable oxidation states of the Co and Fe ions and by orbital angular momentum contributions. Consequently, the Rb-Ni-Cr Prussian blue analogue, a ferromagnet system possessing a spectrum of long-range ordering temperatures, $T_c \sim 60 - 90$ K, depending on stoichiometry,[@Verdaguer] was chosen as the centerpiece for the present work because the magnetic and physical properties of this system are robust and the ions have stable oxidation states that possess no first-order angular momentum. Finally, it is important to stress the significance of our findings. 
Although the study of the magnetism of solid-state films is a mature field, the extensions to molecule-based magnetism are just beginning to emerge. For example, with the drive to develop new devices, applications with single crystals are being explored.[@Schmidt] However, the exploitation of molecule-based magnetic films may be more attractive for industrial fabrication, and devices based on metal-phthalocyanines[@Heutz] and metal\[TCNE:tetracyanoethylene\]$_x$[@Yoo] are two examples of work in this direction. In our work, the origins of the magnetic anisotropy in films of Prussian blue analogues will be linked to demagnetization effects after we have systematically eliminated all other plausible explanations, some of which are not issues in traditional solid-state magnetic films. As a result, our findings provide a foundation from which the magnetism in films of Prussian blue analogues may be understood and employed in new devices. ![(Color online) The temperature dependences of the zero-field-cooled (ZFC) and field-cooled (FC) magnetizations, $M(T)$, normalized to the FC values at $T = 2$ K, $M_0$, are shown for (a) low, $B = 10$ mT, and (b) high, $B = 4$ T, applied magnetic fields. For clarity, the data for the thin film are not shown in (b). The anisotropic response for $B$ applied parallel ($\parallel$) or perpendicular ($\perp$) to the films is strikingly similar for both thin and thick films. The field-induced shift of $T_c$ from $\sim 70$ K to $\sim 100$ K is observable. 
For each panel, the solid lines are the results of analysis using demagnetization factors (see text).](Pajerowski-PRB-Fig1.eps){width="3.375in"} Experimental Details ==================== The synthesis of the powder samples followed established protocols,[@Gardner-thesis] while the films were generated using sequential adsorption methods[@Culp] that are detailed elsewhere.[@Gardner-thesis] Briefly stated, the film synthesis consists of using a solid support, such as Melinex 535, and immersing it in an aqueous solution of Ni$^{2+}$ ions and then in another aqueous solution of Cr(CN)$^{3-}_{6}$ containing Rb$^+$ ions. After each immersion step, washing with water is essential to remove the excess ions, and the process can be iterated multiple cycles to yield films of varying thicknesses and morphologies. For this work, two films, one of 40 cycles and the other of 400 cycles, are reported. Whereas the powder samples consisted of small polycrystals with diameters of $\sim 60$ nm, which are magnetically in the “bulk” limit,[@dpaj-nano] the 40 cycles and 400 cycles films had thicknesses of $\sim 200$ nm and $\sim 2$ $\mu$m, respectively. Finally, other Rb-M$^{\prime}$-M Prussian blue analogue films were investigated, including Rb-Co-Cr, Rb-Cu-Cr, Rb-Zn-Cr, Rb-Ni-Fe, Rb-Co-Fe, Rb-Cu-Fe, and Rb-Zn-Fe.[@Gardner-thesis; @Pajerowski-thesis] ![The angular variation of $M$ is shown for the case of the thick film when $B = 4$ T and $T = 10$ K. The discrete steps of 1.5$^{\circ}$ are detectable, and the data were taken continuously at each angle that was held for a period of 5 min. 
The data for the thin film and additional details are available elsewhere.[@Pajerowski-thesis]](Pajerowski-PRB-Fig2.eps){width="3.375in"} The chemical compositions and the physical properties of all samples were established by a suite of techniques, which yielded Rb$_{0.7}$Ni$_{4.0}$\[Cr(CN)$_6$\]$_{2.9} \cdot n$H$_2$O.[@Gardner-thesis; @Pajerowski-thesis] For the magnetization measurements, a commercial (Quantum Design) magnetometer was used in conjunction with a home-made *in situ* rotator.[@dpaj-rotator] The powder samples were mounted in gelcaps, while the film samples were either cut and stacked in a plastic box or measured individually in a straw holder. A single 400 cycles film, a stack of ten 40 cycles films, and $\sim 100$ $\mu$g of powder embedded in eicosane were employed for the cw-ESR measurements performed at either 50 GHz or 116 GHz, using a resonant cavity coupled to a cryostat and superconducting magnet at the NHMFL-Tallahassee.[@Takahashi] Transmission and reflection x-ray diffraction (XRD) studies were performed at 20 K, 110 K, and 300 K by using the instruments at the University of Guelph. Care was taken to avoid long term vacuum pumping of the sample at room temperature, since variations due to reversible dehydration-hydration[@Ohkoshi; @Moritomo2] were observed as the (200) peak shifted to higher $2\theta$ and broadened. Data were collected for nominally 24 h at each temperature, and a blank Melinex film was also measured to assist with the background subtraction arising from the solid support. ![(Color online) The cavity transmission at 116 GHz, as a function of $B$, is shown for various temperatures for the powder and the thick (2 $\mu$m) film for $B \parallel \mathrm{and} \perp$ to the surface of the film. 
The traces are offset for clarity.](Pajerowski-PRB-Fig3.eps){width="3.375in"} Results ======= The anisotropic magnetic response in Prussian blue analogues was initially observed in magnetization measurements,[@Park1] and this behavior is shown for the Rb-Ni-Cr films in Figs. 1 and 2. Differences between the ZFC and FC data are related to a spin-glass-like response,[@Pejakovic; @Mydosh] while the anisotropy of the thin and thick films is strikingly similar as the external magnetic field is applied parallel or perpendicular to the surface of the films, hereafter referred to as $B \parallel$ and $B\perp$, respectively. This behavior is also present, albeit to a somewhat weaker degree, in spin-cast samples[@Gardner-thesis] but is not observed in films that were synthesized in a manner that corrupts their two-dimensional nature by generating discontinuities and roughness.[@Park2] ![(Color online) (a) The temperature dependences of the main ESR absorption lines at 116 GHz (Fig. 3), for the powder specimen and the thick film with $B \parallel \mathrm{and} \perp$ to the surface of the film. The solid lines are the results of analysis using demagnetization factors, see text. (b) Angular dependences of the positions of the main (closed symbols) and weak (open symbols) ESR absorption lines at 116 GHz (round symbols and left scale) and at 50 GHz (square symbols and right scale). The $B \parallel \mathrm{and} \perp$ orientations are $0^{\circ}$ and $\pm 90^{\circ}$, respectively.](Pajerowski-PRB-Fig4.eps){width="3.375in"} To date, ESR investigations of Prussian blue analogues have been limited to the Rb-Mn-Fe system that ferromagnetically orders near 10 K.[@Pregelj; @Antal] In our work, the nature of the anisotropy was explored, and the 116 GHz results for the powder and 2 $\mu$m film are shown in Fig. 
3, while the data for the 200 nm film are consistent with the trends reflected in the thicker film.[@Pajerowski-thesis] One difference is that the lines of the thin film have a Lorentzian shape, while the lines of the thick film have a Gaussian shape, and this observation is consistent with the increase of disorder as the films become thicker. For the powder, one clear absorption line, with an effective $g = 2.05$, is resolved. The response of the 2 $\mu$m film is similar to the powder for $T \gtrsim 100$ K for both orientations of the applied magnetic field. However, for $T < 100$ K, the absorption signals are described by two lines, one main line that is temperature dependent and a weak line that is independent of temperature within experimental resolution. Whereas the main line presumably arises from the well coupled Ni$^{2+}$ and Cr$^{3+}$ ions, the weak line is associated with trace amounts of powder-sized nodules that are observed on the surfaces of the films.[@Gardner-thesis; @Pajerowski-thesis] At 10 K, the main and weak lines have effective g-values of 2.11 and 2.05 for $B \parallel$ and 1.97 and 2.05 for $B \perp$. The temperature dependences of the main line positions are shown in Fig. 4, along with the angular dependences of the main and weak lines at 50 GHz and 116 GHz for the 2$\mu$m film at 10 K. The angular response of the main line is identical to the behavior observed for the magnetization, Fig. 2, and follows a uniaxial $\sin^2(\alpha)$ dependence, where $\alpha$ is the angle between $B$ and the surface of the film. In addition, the angular dependence of the positions of the main lines is the same at both frequencies with a maximum variation of $\Delta B \sim 0.3$ T (Fig. 4). ![(Color online) The XRD pattern collected in reflection and transmission modes at 20 K and 110 K. The results near the (200) and (400) peaks are shown when normalized to the peak values of each data set. The data traces are shifted for clarity. 
No changes in the lattice parameters are detected to within 0.005 Å.](Pajerowski-PRB-Fig5.eps){width="3.375in"} Since deviations from perfect cubic symmetry[@Bleuzen] might arise when the samples cool through $T_c$, variable temperature XRD studies were performed (Fig. 5). The (200) and (400) peaks at 17.16$^{\circ}$ and 34.68$^{\circ}$ were monitored in detail, and the results do not indicate any change in the lattice parameter through $T_c$, as the *Fm*$\overline{3}$*m* (No. 225) cubic symmetry is maintained with a lattice dimension of 10.33 Å. In transmission, the peaks at 24$^{\circ}$ and 30$^{\circ}$ were also assignable due to the absence of contributions from the polymer solid support in this configuration. Discussion ========== After inspecting the comprehensive set of experimental results, several points are immediately obvious. Firstly and simply stated, the films possess magnetic anisotropy that is not manifested in the polycrystalline powder samples that are normally studied. Secondly, the data indicate that the underlying anisotropy prevails for thin and thick films, so the anisotropy does not explicitly arise from influences coming from direct interaction with the solid support[@Gambardella] but does depend upon the two-dimensional organization of the sample generated during the film fabrication process. Furthermore, the values of $T_c$ are independent of the orientation of the magnetic field, meaning the anisotropy does not originate from variations of the superexchange parameter, $J$. Thirdly, the ESR results, namely the line shapes of the powder spectra and the frequency independence of the magnitude of the line splittings, cannot be reconciled with the presence of an anisotropic g-tensor. Finally, magnetostriction or other structural changes are not observed at any temperature; the cubic symmetry is preserved, ruling out structural distortion as a possible explanation of the anisotropy.
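As a cross-check of the cubic indexing quoted above, Bragg's law reproduces the 10.33 Å lattice parameter from the (200) and (400) peak positions. The short sketch below assumes Cu K$\alpha$ radiation ($\lambda = 1.5406$ Å) — an assumption, since the x-ray source is not stated in the text:

```python
import math

# Assumed Cu K-alpha wavelength; the x-ray source is not stated in the text.
WAVELENGTH = 1.5406  # angstrom

def cubic_lattice_parameter(two_theta_deg, hkl):
    """Lattice parameter a (angstrom) of a cubic cell from one reflection,
    via Bragg's law lambda = 2 d sin(theta) and d = a / sqrt(h^2 + k^2 + l^2)."""
    h, k, l = hkl
    d = WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))
    return d * math.sqrt(h * h + k * k + l * l)

# The (200) and (400) peak positions quoted in the text:
a200 = cubic_lattice_parameter(17.16, (2, 0, 0))
a400 = cubic_lattice_parameter(34.68, (4, 0, 0))
print(round(a200, 2), round(a400, 2))  # both within ~0.01 angstrom of 10.33
```

That both reflections return the same lattice parameter supports the single-phase cubic assignment.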
With the elimination of several common mechanisms as the possible sources of the anisotropic response, magnetostatic interactions remain as a plausible explanation. Indeed, the uniaxial nature of the anisotropy is consistent with dipolar interactions. In addition, demagnetizing effects ($H_{\mathrm{effective}} = H_{\mathrm{lab}} \,-\, N M$, where $N$ is the demagnetizing factor) model the data well when the theoretical saturation values, $\langle S_{\mathrm{Ni}_z} \rangle _{\mathrm{max}} = 1$ and $\langle S_{\mathrm{Cr}_z} \rangle _{\mathrm{max}} = 3/2$, are used to normalize the high-field saturation magnetization of $1.47 \times 10^5$ A/m.[@Pajerowski-thesis; @Osborn; @Vonsovskii] The low-field magnetization in the perpendicular orientation can be reproduced quantitatively from the parallel orientation if $N_{\parallel} = 0.07$ and $N_{\perp} = 0.86$, Fig. 1a, where domains are expected to obey $2N_{\parallel} + N_{\perp} = 1$.[@Osborn; @Vonsovskii] The high-field magnetization can also be reproduced but not as directly, since the high-field susceptibility has a significant experimental uncertainty because $\mathrm{d}M/\mathrm{d}H$ is orders of magnitude smaller than $M/H$ in this range. Nevertheless, the uniformly magnetized film limit, *vide infra*, namely $N_{\parallel} = 0$ and $N_{\perp} = 1$, reasonably reproduces the observed trends (Fig. 1b). The ESR data can also be explained by the presence of demagnetization effects.[@Vonsovskii; @Kittel; @Kunii] Specifically, taking the equations of motion for a spin in $B$ along the z-axis, the resonance condition is $$\omega_0^2=g^2\,\mu_B^2\,[B_z+(N_y-N_z)\mu_0 M_z][B_z+(N_x-N_z)\mu_0 M_z]\;.$$ For a perfect sphere, $N_x = N_y = N_z = 1/3$, so the resonance condition should be isotropic and have no magnetization dependence.
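The resonance condition above can be evaluated numerically for the two limiting film geometries introduced below in the text ($N_{\parallel} = 0$, $N_{\perp} = 1$). A minimal sketch, using the saturation magnetization $1.47 \times 10^5$ A/m quoted earlier and an assumed effective $g = 2.05$, reproduces the $\sim 0.3$ T splitting between the two orientations observed at 116 GHz:

```python
import math

MU_B = 9.274e-24        # Bohr magneton (J/T)
H_PLANCK = 6.626e-34    # Planck constant (J s)
MU0_MSAT = 4e-7 * math.pi * 1.47e5  # mu_0 * M_sat in tesla, M_sat from the text

def film_resonance_fields(freq_hz, g=2.05):
    """Kittel resonance fields of a uniformly magnetized film.

    Perpendicular orientation (N_perp = 1): h f = g mu_B (B - mu0 M)
    Parallel orientation      (N_par  = 0): h f = g mu_B [B (B + mu0 M)]^(1/2)
    """
    b0 = H_PLANCK * freq_hz / (g * MU_B)   # resonance field with M = 0
    b_perp = b0 + MU0_MSAT                 # shifted up in the perpendicular case
    # parallel case: solve B^2 + (mu0 M) B - b0^2 = 0 for B > 0
    b_par = 0.5 * (-MU0_MSAT + math.sqrt(MU0_MSAT**2 + 4.0 * b0**2))
    return b_par, b_perp

b_par, b_perp = film_resonance_fields(116e9)
print(round(b_perp - b_par, 2))  # ~0.28 T, consistent with the observed ~0.3 T
```

The same function evaluated at 50 GHz gives a splitting of comparable magnitude, mirroring the frequency independence of the splitting noted earlier.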
In practice, there may be small deviations from spherical symmetry for the powder, and the resonance condition may be written as $$\omega_{0,\mathrm{powder}}\;=\;g\,\mu_B\,[B- \mu_0\delta M_z] \;\;\;,$$ where $\delta$ accounts for deviations from spherical symmetry. For the powder data, Figs. 2 and 3, a shift of $\sim 10$ mT is present in the fully magnetized state compared to the paramagnetic state. This observation is consistent with a value of $\delta \sim 0.05$, and the shift is similar to the one reported for Rb-Mn-Fe,[@Pregelj] where it was attributed to demagnetizing effects. For a uniformly magnetized film oriented perpendicular to $B$, the resonance condition is $$\omega_{0,\perp}\;=\;g\,\mu_B\,[B- \mu_0 M_z] \;\;\;,$$ whereas for the parallel orientation, the resonance condition is $$\omega_{0,\parallel}\;=\;g\,\mu_B\,[B(B+ \mu_0 M_z)]^{1/2} \;\;\;.$$ Ergo, the temperature dependence of the main lines can be predicted with $N_{\parallel} = 0$ and $N_{\perp} = 1$, and the results are in excellent agreement with the data (Fig. 4a). Conclusions =========== In summary, the observed magnetic anisotropy of the Rb-Ni-Cr Prussian blue analogue films is attributable to demagnetization effects arising from their two-dimensional geometry. Additional evidence for the magnetic domain-field interactions is garnered from the extensive data sets collected on the aforementioned Rb-M$^{\prime}$-M Prussian blue analogues.[@Gardner-thesis; @Pajerowski-thesis] Having identified magnetic domains as the origin of the anisotropy, additional studies, such as magnetic imaging of the surfaces, will provide a deeper understanding of the architecture and dynamics of the domains. Finally, a systematic approach for determining the nature of magnetic anisotropy in coordination polymers has been presented and will be important as this class of materials is investigated for physical properties applicable to spintronic applications. We acknowledge conversations with M. F. Dumont, M. W.
Lufaso, and A. Ozarowski. This work was supported, in part, by NSERC, CFI, and NSF through DMR-0804408 (SH), DMR-1005581 (DRT), DMR-0701400 (MWM), and the NHMFL via cooperative agreement under NSF DMR-0654118 and the State of Florida. We thank Ben Pletcher and the Major Analytical Instrumentation Center (MAIC), Department of Materials Science and Engineering, University of Florida, for help with the EDS, SEM, and TEM work. [35]{} S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Science [**294**]{}, 1488 (2001). L. Bogani and W. Wernsdorfer, Nature Mater. [**7**]{}, 179 (2008). J. Camarero and E. Coronado, J. Mater. Chem. [**19**]{}, 1678 (2009). Y. Moritomo and T. Shibata, Appl. Phys. Lett. [**94**]{}, 043502 (2009). O. Sato, T. Iyoda, A. Fujishima, and K. Hashimoto, Science [**272**]{}, 704 (1996). D. M. Pajerowski, M. J. Andrus, J. E. Gardner, E. S. Knowles, M. W. Meisel, and D. R. Talham, J. Am. Chem. Soc. [**132**]{}, 4058 (2010). K. R. Dunbar and R. A. Heitz, Prog. Inorg. Chem. [**45**]{}, 283 (1997). M. Verdaguer and G. S. Girolami, in *Magnetism: Molecules to Materials V*, edited by J. S. Miller and M. Drillon (Wiley-VCH, Weinheim, Germany, 2005) p. 283. J.-H. Park, E. Čižmár, M. W. Meisel, Y. D. Huh, F. Frye, S. Lane, and D. R. Talham, Appl. Phys. Lett. [**85**]{}, 3797 (2004). J.-H. Park, F. Frye, S. Lane, Y. D. Huh, E. Čižmár, D. R. Talham, and M. W. Meisel, Polyhedron [**24**]{}, 2355 (2005). F. A. Frye, D. M. Pajerowski, J.-H. Park, M. W. Meisel, and D. R. Talham, Chem. Mater. [**20**]{}, 5706 (2008). J.-H. Park, Ph. D. thesis, University of Florida, 2006. Full text available at http://purl.fcla.edu/fcla/etd/UFE0021664. F. A. Frye, Ph. D. thesis, University of Florida, 2007. Full text available at http://purl.fcla.edu/fcla/etd/UFE0013792. J. E. Gardner, Ph. D. thesis, University of Florida, 2009. Full text available at http://purl.fcla.edu/fcla/etd/UFE0024355. D. M.
Pajerowski, Ph. D. thesis, University of Florida, 2010. To be posted online as an electronically transmitted dissertation (etd) in late 2011. A. Bleuzen, C. Lomenech, V. Escax, F. Villain, F. Varret, C. Cartier dit Moulin, and M. Verdaguer, J. Am. Chem. Soc. [**122**]{}, 6648 (2000). M. Zentková, Z. Arnold, J. Kamarád, V. Kavečanský, M. Lukáčová, S. Mat$^{\prime}$aš, M. Mihalik, Z. Mitróová, and A. Zentko, J. Phys.: Condens. Matter [**19**]{}, 266217 (2007). R. D. Schmidt, D. A. Shultz, J. D. Martin, and P. D. Boyle, J. Am. Chem. Soc. [**132**]{}, 6261 (2010). S. Heutz, C. Mitra, W. Wu, A. J. Fisher, A. Kerridge, M. Stoneham, A. H. Harker, J. Gardener, H.-H. Tseng, T. S. Jones, C. Renner, and G. Aeppli, Adv. Mater. [**19**]{}, 3618 (2007) J.-W. Yoo, C.-Y. Chen, H. W. Jang, C. W. Bark, V. N. Prigodin, C. B. Eom, and A. J. Epstein, Nature Mater. [**9**]{}, 638 (2010). J. T. Culp, J.-H. Park, I. O. Benitez, Y. D. Huh, M. W. Meisel, and D. R. Talham, Chem. Mater. [**15**]{}, 3431 (2003). D. M. Pajerowski, F. A. Frye, D. R. Talham, and M. W. Meisel, New J. Phys. [**9**]{}, 222 (2007). D. M. Pajerowski and M. W. Meisel, J. Phys.: Conf. Ser. [**150**]{}, 012034 (2009). S. Takahashi and S. Hill, Rev. Sci. Instrum. [**76**]{}, 023114 (2005). S. Ohkoshi, K. Arai, Y. Sato, and K. Hashimoto, Nature Mater. [**3**]{}, 857 (2004). Y. Moritomo, F. Nakada, J. Kim, and M. Takata, Appl. Phys. Expr. [**1**]{}, 111301 (2008). D. A. Pejaković, J. L. Manson, J. S. Miller, and A. J. Epstein, Phys. Rev. Lett. [**85**]{}, 1994 (2000). J. A. Mydosh, *Spin Glasses* (Taylor and Francis, London, 1993). M. Pregelj, A. Zorko, D. Arčon, S. Margadonna, K. Prassides, H. van Tol, L. C. Brunel, and O. Ozarowski, J. Magn. Magn. Mater. [**316**]{}, e680 (2007). Á. Antal, A. Jánossy, L. Forró, E. J. M. Vertelman, P. J. van Koningsbruggen, and P. H. M. van Loosdrecht, Phys. Rev. B [**82**]{}, 014422 (2010). A. Bleuzen, J.-D. Cafun, A. Bachschmidt, M. Verdaguer, P. Münsch, F. Baudelet, and J.-P. Itié, J. 
Phys. Chem. C [**112**]{}, 17709 (2008). P. Gambardella, S. Stepanow, A. Dmitriev, J. Honolka, F. M. F. de Groot, M. Lingenfelder, S. S. Gupta, D. D. Sarma, P. Bencok, S. Stanescu, S. Clair, S. Pons, N. Lin, A. P. Seitsonen, H. Brune, J. V. Barth, and K. Kern, Nature Mater. [**8**]{}, 189 (2009). J. A. Osborn, Phys. Rev. [**67**]{}, 351 (1945). S. V. Vonsovskii, *Ferromagnetic Resonance* (Pergamon Press, Oxford, 1966). C. Kittel, *Introduction to Solid State Physics* (Wiley and Sons, New York, 1976). S. Kunii, J. Phys. Soc. Jpn. [**69**]{}, 3789 (2000).
Introduction ============ One of the all-time beautiful examples of physical theories motivated by logical consistency is Dirac's theory of relativistic quantum electrodynamics: the void in the filled Fermi sea led Dirac to predict a new particle, later called the positron. Its success now influences all branches of modern physics. Another beautiful example is Abrikosov's vortex lattice theory, inferred, in hindsight, as a very natural consequence of the Ginzburg-Landau theory. Beautiful theories can also be obtained at the opposite end, by careful consideration and critical analysis of experimental data. One of the all-time examples of this kind is the Bardeen-Cooper-Schrieffer theory of superconductivity. Again, its influence is now felt in all branches of modern physics. Theoretical physicists' daily activities usually fall between those two extremes. For example, by combining limited experimental data with theoretical insights, difficult aspects of the vortex dynamics puzzle were solved by Nozieres and Vinen [@nv] and by Bardeen and Stephen [@bs] in the 1960s. Those works are such elegant illustrations that all researchers interested in the vortex dynamics field should study them carefully. As we marvel at those achievements, uneasiness has accumulated. While the vortex friction in the Bardeen-Stephen work appears to be largely consistent with experimental data, the transverse force in the Nozieres-Vinen work seems not to be. To be precise, if one views vortex dynamics in real superconductors as that of [**independent**]{} vortices, that is, with no correlation among vortices, the Nozieres-Vinen work would predict a large Hall angle. Experimentally, however, the Hall angle is usually small. More puzzling is that the Hall angle can change sign, sometimes even more than once. In the early 1970s, it became clear that those experimental data were real, not experimental errors.
This pronounced and apparent discrepancy between theory and experiment is hence the famous Hall anomaly in the mixed state of superconductors. Similar discrepancies have been observed in neutral superfluids. In order to resolve this Hall anomaly, even after it had been shown by Ao and Thouless, and their coworkers, [@at; @az] that, based on the microscopic theory and a global topological analysis, the total transverse force on an individual vortex is indeed large, the large transverse force in the Nozieres-Vinen work has been questioned. Subsequently it has been concluded by a large group of eminent theorists that the total transverse force on a single moving vortex must be usually small and occasionally change sign. Various independent vortex dynamics theories have been developed during the past 30 years. Such efforts have been conveniently summarized in two recent reviews [@v1; @v2]. It is clear, nevertheless, that despite those ingenious efforts to fit experimental data, the Hall anomaly still remains an 'anomaly' in the light of those independent vortex dynamics theories. What goes wrong? Four Mathematical Inconsistencies ================================= A careful examination of those theories reveals that there are two types of errors in the theories of Ref's.\[\]. The first type lies in physics: vortex many-body correlations are completely absent in the explanations of the Hall anomaly by independent vortex dynamics theories. The hint to correct this independent-particle-dynamics error is in fact already implied in the work of Dirac and Abrikosov but absent in those independent vortex dynamics theories; I will come back to this later. The second type of error lies in mathematics: in order to fit experimental data on the Hall anomaly, the mathematical consistency of those theories has been severely compromised. This disregard of mathematical consistency also goes against the spirit of Dirac and Abrikosov.
Let me explain the mathematical inconsistencies in those independent vortex dynamics theories first. A further close analysis suggests that the mathematical inconsistencies in those theories may be classified into four different categories. Misuse of the relaxation time approximation ------------------------------------------- The most subtle inconsistency is the relaxation time approximation employed by Kopnin and his co-workers (Kopnin and Kratsov, JETP Lett., 1976; Kopnin and Lopatin, Phys. Rev. B, 1995; van Otterlo, Feigelman, Geshkenbein, Blatter, Phys. Rev. Lett., 1995; documented in Ref. \[6\]). This type of mistake frequently occurs in force-balance-type calculations of transport coefficients; it was noticed at least as early as Green in the 1940s and has been extensively discussed by Kubo. Unfortunately, as Kubo remarked in his famous coauthored book on statistical physics, such errors repeatedly appear in the literature in different disguises. It would be very tempting to use such a simple relaxation time approximation following the typical diagrammatic technique. But it is plainly wrong in the present context of a microscopic derivation of vortex dynamics based on the force balance equation. Fortunately, a careful calculation of the vortex friction and the transverse force with disorder ranging from weak to strong was performed by Ao and Zhu [@az]. No relaxation time approximation is needed. The disorder effect can be directly taken into account. The Nozieres-Vinen and Bardeen-Stephen theories have been mathematically united in this work [@az]. It is also a detailed implementation of the global topological method [@at]. One may now stop using the erroneous relaxation time approximation in the derivation of vortex dynamics. Mixing with another effect -------------------------- The second type of mistake is very interesting.
It has been claimed that there is a contribution from the normal fluid which would cancel (or add to, depending on the interpretation) the large transverse force determined by the superfluid density (Sonin, Soviet Phys. JETP, 1976; Phys. Rev. B, 1997, documented in Ref. \[5\]). From the very beginning questions have been raised about the interpretation of the result and the validity of various approximations. For the clarity of the present discussion let us accept the view that the normal fluid can indeed be represented by phonons and that phonon scattering off a vortex can indeed be exactly mapped onto the Aharonov-Bohm scattering problem. It has been known at least since Aharonov and Bohm that such scattering is periodic in the magnetic flux. Transforming this result back into vortex dynamics, the transverse force from such phonon contributions is a periodic function of the vorticity, completely different from the transverse force represented by the Magnus force [@nv; @at; @az], which is linear in the vorticity. Hence the calculation of such phonon contributions is of a completely different kind. It corresponds to a different experimental condition. In fact, if we conceive of the similar situation of a vortex scattering off a superfluid island, a periodic dependence on the enclosed superfluid particle number has already been obtained by Ao and coworkers [@zta] as well as by many others. In the derivation of vortex dynamics by the geometric phase computation, the phase acquired by the vortex is a [**continuous**]{} function of the vortex trajectory, whereas the phase in the situation conceived by Sonin would be a [**discontinuous**]{} function. This difference explains why in the former case the transverse force is a linear function of the superfluid density while in the latter it is not. Those two situations should not be mixed. This difference is a fine manifestation of symmetry breaking for solutions of a symmetric Hamiltonian.
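The distinction argued above — a transverse response linear in the vorticity versus one periodic in it — can be made concrete with a toy sketch. All prefactors are set to one; this is purely illustrative and is not a computation from either theory:

```python
import math

def magnus_like(n):
    """Magnus-type transverse force: linear in the winding number n."""
    return n  # prefactor rho_s * kappa_0 * v set to 1

def aharonov_bohm_like(alpha):
    """Aharonov-Bohm-type transverse response: periodic in the flux alpha
    (in units of the flux quantum); it vanishes at integer alpha."""
    return math.sin(2.0 * math.pi * alpha)

# For integer winding numbers the periodic contribution vanishes,
# while the Magnus-type force keeps growing linearly.
for n in (1, 2, 3):
    print(n, magnus_like(n), abs(round(aharonov_bohm_like(n), 9)))
```

The point of the toy model is only that the two functional forms cannot describe the same force: one grows without bound with the vorticity, the other never does.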
Double counting the same topological effect ------------------------------------------- The third mistake is the claim of the spectral flow canceling the Berry phase (Volovik, JETP Lett., 1993 and 1997, documented in both Refs. \[5\] and \[6\]). This mistake could easily be committed by anyone who is not careful and critical enough: there are two ways of calculating the topological contribution to the transverse force in fermionic superfluids. One is evaluated far from the vortex core; it is the method employed by Ao and Thouless and further refined by Thouless, Ao, and Niu [@at]. The other is evaluated at the core, via the spectral flow, which is also related to the curvature or connection. They are equivalent according to Stokes' theorem [@ao97]. Hence they should not be used as different forces that cancel each other. As is now well established, the Magnus force is a manifestation of the Josephson-Anderson relation. It is worth mentioning that it has long been known that for fermionic superfluids the phase slippage of the Josephson relation is equivalent to the spectral flow. It is perhaps also useful to point out that in modern physics there are several significant topological phenomena which can be computed in seemingly completely different ways. A fine example is transport in the quantum Hall effect in condensed matter physics. There it is firmly established that the calculation of the Hall conductance from edge states is equivalent to the one from bulk states. In one particular experimental situation one type of calculation may be more straightforward. Nevertheless, no one would want those two calculations to cancel each other in the Hall conductance. The mistake of the spectral flow canceling the Berry phase is a double counting of the same effect.
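For orientation, the total transverse force at issue throughout this section is usually quoted in the Magnus form (standard notation, added here for the reader's convenience; it is not taken verbatim from Refs. \[5\] or \[6\]): $$\mathbf{F}_{\perp}\;=\;\rho_s\,\boldsymbol{\kappa}\times\left(\mathbf{v}_L-\mathbf{v}_s\right)\;,$$ where $\rho_s$ is the superfluid (mass) density, $\boldsymbol{\kappa}$ the circulation vector of the vortex, $\mathbf{v}_L$ the vortex line velocity, and $\mathbf{v}_s$ the superfluid velocity. Its magnitude is linear in both $\rho_s$ and the vorticity, which is precisely the property at stake in the arguments above.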
No extra Berry phase contribution at vortex core ------------------------------------------------ The fourth mistake is rather mysterious, perhaps associated with the difficulty in understanding the Anderson theorem in dirty superconductors. It was claimed (Feigelman, Geshkenbein, Larkin, Vinokur, JETP Lett., 1995, documented in Ref. \[6\]) that there were two contributions to the Berry phase: one far from the core, the same as that of Ao and Thouless, and an extra one from the vortex core, because the corresponding superfluid density would be finite at the point of phase singularity. The starting point to demystify this mistake is quantum mechanics: at a phase singularity the wavefunction must be zero, hence the superfluid density at the core of the corresponding phase singularity must be zero. There is simply no extra Berry phase from the core because the corresponding superfluid density is zero. Let us analyze this mistake at two different levels of description: macroscopic and microscopic. At the level of the phenomenological Ginzburg-Landau type description, which is now believed to be valid for both clean and dirty superconductors, the superfluid density is obviously zero at the vortex core. Therefore, there is no extra contribution to the Berry phase. Now, a person familiar with the Bardeen-Cooper-Schrieffer microscopic theory would say that this macroscopic description is an oversimplification: the superfluid density is finite at the vortex core even for a superclean superconductor. Indeed, the superfluid density is finite at the vortex core according to the Bardeen-Cooper-Schrieffer microscopic theory. But the same microscopic theory also tells us, first, that for this finite superfluid density at the vortex core there is no associated phase singularity, and, second, that if there is a phase singularity associated with a quasiparticle state, its wavefunction is zero at this singularity, as would be expected from quantum mechanics.
Hence, whether or not a finite density exists, there is no extra Berry phase from the vortex core. This is true for both superclean and dirty superconductors. Magnus Force and Josephson-Anderson Relation ============================================ We have now explained that the microscopically derived independent vortex dynamics theories documented in Refs. \[5\] and \[6\] are all mathematically inconsistent. Even if the reader agrees with this strong conclusion, one important question still remains: the experimental evidence. There is actually already a substantial body of experimental support for the works of Nozieres and Vinen and of Bardeen and Stephen, as well as that of Ao and Thouless and their coworkers. I use the word 'substantial' because a big group of eminent physicists, as discussed above, would not accept those experiments. Needless to say, further experiments are needed. Given this consideration, I mention here three important experiments. The first one is the classical experiment establishing the quantization of vorticity, assuming that the total transverse force is indeed proportional to the superfluid density, which can be measured by different means [@vinen]. This experiment was published in 1961, and it is a Josephson-type experiment predating Josephson's predictions. The second type of experiment is a systematic measurement of the effect of a moving vortex, the Magnus force, and its closely related Josephson(-Anderson) relation [@packard]. The third one is a direct measurement of the transverse force on a moving vortex in a superconductor [@zhu]. All those experiments clearly establish the large transverse force on a moving vortex. Hall Anomaly and Vortex Many-Body Effects ========================================= However, one may insist that the Hall anomaly still remains unexplained: how could the large transverse force lead to a small Hall angle, and, sometimes, to a sign change?
The ingredients to explain the Hall anomaly are actually already implied in the works of Dirac and Abrikosov. Let us step back from vortex dynamics and consider the Hall effect in semiconductors. The transverse force there, the Lorentz force on a moving electron, is universal. Yet there exist extremely rich Hall phenomena in semiconductors: small Hall angles, sign changes, the quantum Hall effect, etc. It would be puzzling how a universal Lorentz force could generate such complexity. Fortunately, this puzzle has long been solved: by the competition between electron many-body effects (Coulomb interaction, Fermi-Dirac statistics) and pinning (lattice, impurities, etc.). The key to solving the puzzle is a logical extension of Dirac's idea of a void in a filled Fermi sea: the existence of holes in a filled energy band [@mermin]. The ubiquitous existence of the Abrikosov lattice, the starting point of theoretical considerations in the mixed state of superconductors, already loudly suggests that one must consider vortex many-body effects. Following this suggestion, it has been argued that the competition between vortex many-body interactions and pinning can explain the Hall anomaly in the mixed state [@ao95]. Starting from this idea, it is rather straightforward to work out quantitative predictions on the Hall anomaly, even with the simplest types of collective excitations in an Abrikosov lattice, such as vacancies and interstitials: the scaling laws, the activation energies, the sign change, etc. More complicated many-body effects may clearly play a role as well. This vortex many-body consideration is also consistent with the tremendous progress made in the last 10 years in explaining the longitudinal resistance by vortex many-body effects.
This successful explanation of the Hall anomaly clearly implies that the independent vortex dynamics model is not the only model for the Hall anomaly and that its use to explain the Hall anomaly is physically inconsistent with both the Abrikosov lattice theory and the recent theoretical progress in vortex matter. We may further point out that the model of vortex many-body effects competing with pinning appears consistent with all major experimental observations on the Hall anomaly. But it has not been universally accepted yet. As a good sign, it turns out that two of the strong advocates of independent vortex dynamics theories already believe that vortex many-body effects can indeed lead to the sign change [@kv]. A Lesson ======== Thus, the vortex dynamics theory started by Nozieres and Vinen and by Bardeen and Stephen, and further developed by Ao and Thouless and their coworkers, works well. Four types of technical mistakes in the independent vortex dynamics theories summarized in Refs. \[5\] and \[6\], aimed at fitting the Hall anomaly data, have been discussed above. For a reader not in the immediately related fields, the technical issues above are not particularly helpful. What other lessons can one learn? A generic lesson can indeed be learned. As discussed at the beginning, there are two opposite methods in theoretical physics research: emphasizing logical consistency or emphasizing direct hints from experiments. Both can lead, and have led, to great successes. However, pushing either method to its extreme can also be very problematic. For example, in pushing his unified field theory for elementary particles, Heisenberg claimed to have a final theory for physics, with the only job left being to work out the details. We note that more than 40 years later string theorists are still busy working hard on 'details' of different final theories.
I am afraid that the major mistake made by those eminent researchers in their independent vortex dynamics theories on the Hall anomaly is the other extreme: in pushing those theories to fit the Hall anomaly data, the mathematical consistency of their theories has been greatly compromised. Sixty years later we are still working hard on the foundation of vortex dynamics [@ao03]. Is there a take-home rule on how to successfully use those two opposite methodologies? The fun and challenge of practicing theoretical research, as well as its pitfall, are that there is no ready formula for the right mix of the two extremes. You can only find it out by getting your hands wet and dirty, preferably at the spot where the water is the roughest [@weinberg]. [99]{} P. Nozieres and J. Vinen, The motion of flux lines in type II superconductors, Phil. Mag. [**14**]{}, 667 (1966). J. Bardeen and M.J. Stephen, Theory of the motion of vortices in superconductors, Phys. Rev. [**140**]{}, 1197A (1965). P. Ao and D.J. Thouless, Berry phase and the Magnus force for a vortex line in a superconductor, Phys. Rev. Lett. [**70**]{}, 2158 (1993);\ D.J. Thouless, P. Ao, and Q. Niu, Transverse force on a quantized vortex in a superfluid, Phys. Rev. Lett. [**76**]{}, 3758 (1996). P. Ao and X.-M. Zhu, Microscopic theory of vortex dynamics in homogeneous superconductors, Phys. Rev. [**B60**]{}, 6850 (1999). G. Blatter and V.B. Geshkenbein, Vortex matter, chapter 10, in The Physics of Superconductors. V. I: conventional and high-Tc superconductors, edited by K.H. Bennemann and J.B. Ketterson, Springer, Berlin, 2003. N.B. Kopnin, Vortex dynamics, part IV, in Theory of Nonequilibrium Superconductivity, Clarendon Press, Oxford, 2001. P. Ao and X.-M. Zhu, Quantum interference of a single vortex in a mesoscopic superconductor, Phys. Rev. Lett. [**74**]{}, 4718 (1995);\ X.-M. Zhu, Y. Tan, and P. Ao, Effects of geometric phases in Josephson junction arrays, Phys. Rev. Lett. [**77**]{}, 562 (1996). P.
Ao, Spectral flow, the Magnus force, and the Josephson-Anderson relation, Phys. Lett. [**A216**]{}, 167 (1996). W.F. Vinen, The detection of single quanta of circulation in liquid helium II, Proc. Roy. Soc. London [**A260**]{}, 218 (1961). R.E. Packard, The role of the Josephson-Anderson equation in superfluid helium, Rev. Mod. Phys. [**70**]{}, 641 (1998). X.-M. Zhu, E. Brandstrom, and B. Sundqvist, Observation of the transverse force on moving vortices in YBCO films, Phys. Rev. Lett. [**78**]{}, 122 (1997). It is interesting to note that such a Hall anomaly in solids was actually not a concern, because people have accepted the universal Lorentz force and are happy to see the explanation coming from somewhere else. For a perspective discussion, see, for example, H.A. Bethe and N.D. Mermin, A conversation about solid-state physics, Physics Today [**57**]{}, 53 (June, 2004). P. Ao, A scenario to the anomalous Hall effect in the mixed state of superconductors, J. Supercond. [**8**]{}, 503 (1995);\ Nernst effect, Seebeck effect, and vortex dynamics in the mixed state of superconductors, J. Low Temp. Phys. [**107**]{}, 347 (1997);\ Motion of vacancies in a pinned vortex lattice: origin of the Hall anomaly, J. Phys. Cond. Matt. [**10**]{}, L677 (1998);\ Origin of Hall anomaly in the mixed state, Phys. Rev. Lett. [**82**]{}, 2413 (1999). N.B. Kopnin and V.M. Vinokur, Effects of pinning on the flux flow Hall resistivity, Phys. Rev. Lett. [**83**]{}, 4864 (1999). P. Ao, Yes, 60 years later we are still working hard on vortices (http://babbage.sissa.it/abs/cond-mat/0311495 and http://babbage.sissa.it/pdf/cond-mat/0311495). S. Weinberg, Four golden lessons, Nature [**426**]{}, 389 (2003). If a reader feels a need for a more detailed list of the works discussed in Section II, I apologize for this inconvenience. The main rationale for this omission is to keep the reference list short. It is also because a complete coverage of those works exists in Ref's.\[\].
The rationale for including a modest list of works by Ao and/or Thouless on the transverse force and on the Hall anomaly is that they are not referenced in Refs. \[\]. For a reader interested in their study, Ref. \[\] may be a good starting point.
--- abstract: 'We study the light scattering by localized quasi-planar excitations of a Cholesteric Liquid Crystal known as spherulites. Due to the anisotropic optical properties of the medium and the peculiar shape of the excitations, we quantitatively evaluate the cross section for the rotation of the polarization axis of light. Because of the complexity of the system under consideration, we first give a simplified, but analytical, description of the spherulite and we compare the Born approximation results in this setting with those obtained by resorting to an exact numerical solution. The effects of changing values of the driving external static electric (or magnetic) field are considered. Possible applications of the phenomenon are envisaged.' author: - | G. De Matteis [^1], $\quad$ D. Delle Side [^2], $\quad$ L. Martina [^3], $\quad$ V. Turco [^4]\ $*$ IISS “V. Lilla”, MIUR, Francavilla Fontana (BR) Italy\ Dipartimento di Matematica e Fisica, Università del Salento\ INFN, Sezione di Lecce\ Via per Arnesano, C.P. 193 I-73100 Lecce, Italy\ title: Light Scattering by Cholesteric Skyrmions --- Introduction {#sec:intro} ============ In the last few years great efforts have been made in developing new materials for opto-electronics and photonics applications. A relevant role in this effort has long been played by liquid crystal (LC) physics [@Luckhurst2017; @Chigrinov2010]. In fact, nowadays LCs are widely used in all types of display applications, and their unique nonlinear electro-optical properties make them suitable materials for non-display applications, like optical filters and switches, beam-steering devices, spatial light modulators, optical wave-guiding, lasers [@Coles] and optical nonlinear components [@Beeckman].
On the other hand, wide interest has been attracted by a variety of new 2-dimensional structures like *cholesteric fingers* [@OSWALD200067], and 3-dimensional ones, like nematicons [@Assanto:2016aa] and *cholesteric bubbles* or *spherulites* [@doi:10.1080/02678299208029010; @doi:10.1080/02678299108035502; @POL:POL180180709]; the latter appear in quasi-2D layers of Chiral Liquid Crystals (CLCs) with homeotropic anchoring on the confining surfaces. Those textures have been studied from a theoretical point of view [@Leonov; @noiProc] and we would like to consider them for their potential opto-technological applications. Thus the aim of the present paper is to evaluate the possibility of exploiting spherulites, isolated or in lattice arrangements [@key-2; @carboni], as electrically/magnetically driven switches for light beams propagating in the liquid crystal. Spherulites in CLCs share some properties with the 2D skyrmions in magnetic systems [@Romming636; @PhysRevLett.87.037203]. In fact, these isolated axisymmetric states are stabilized by specific interactions imposed by the underlying molecular handedness; however, they are more sensitive to external fields and may possess slow modulations in a preferred direction. Thus, a continuum model can be derived in the framework of the Frank-Oseen theory [@deGennes; @Stewart], from which one can write the respective equilibrium equations. By applying external fields and imposing anchoring boundary conditions [@1; @2], the free helicoidal equilibrium can be deformed into new structures such as skyrmions [@9; @12], which are stabilized by topological conservation laws. The theory also describes the cholesteric fingers [@3; @8], or helicoids, with disclination-type defects, which can be described, at least in some approximate setting, in terms of integrable nonlinear equations [@key-2; @17], stabilized both by topological and non-topological conservation laws. Carboni et al.
[@carboni] detected a phase transition between the two textures, strongly depending on the thickness of the confining cell. They showed that the texture changes are driven by temperature through a parameter $\zeta$ proportional to the thickness and to a proper chirality parameter. Samples of different thickness displayed the textural changes at different temperatures but for the same value of $\zeta$. However, here we limit ourselves to the spherulite/skyrmion case. The paper is organized as follows. In Sec. \[sec:skyrme\] we introduce the continuum elastic model of the CLC, we obtain the corresponding equilibrium equations and analyse the skyrmion (spherulite) solutions, by either analytical or numerical methods. In Sec. \[sec:diffusion\] we introduce the problem of light diffusion by a spherulite. In Sec. \[sec:Born\] we provide perturbative solutions for the light scattering equations derived in Sec. \[sec:diffusion\]. In particular, in \[sec:Bornout\] we compute the cross section for the conversion of incoming light polarized in the incidence plane into outgoing light polarized in the perpendicular direction. Analogously, in \[sec:Bornin\] we consider the complementary problem of the change of the polarization axis from the direction orthogonal to the liquid crystal layer to the in-plane direction. Finally, in the Conclusions we summarise our results and address some possible experimental realizations. Skyrmions in chiral liquid crystals {#sec:skyrme} =================================== A LC is described by a uni-modular director field $\mathbf{n}\left(\mathbf{r}\right)$ belonging to $\mathbb{RP}^2$[@deGennes; @Stewart], which in polar representation is $$\mathbf{n}(\mathbf{r})=(\sin\theta(\mathbf{r})\cos\psi(\mathbf{r}), \sin\theta(\mathbf{r})\sin\psi(\mathbf{r}), \cos\theta(\mathbf{r})), \qquad - \mathbf{n} \equiv \mathbf{n}.
\label{directorpolar}$$ In the bulk a CLC director field $\mathbf{n}\left(\mathbf{r}\right)$ is governed by the Frank-Oseen free energy density $$\begin{gathered}
\mathcal{F}_{FO}=\frac{K_1}{2}\left(\nabla\cdot\mathbf{n}\right)^2+\frac{K_2}{2}\left(\mathbf{n}\cdot\nabla\times\mathbf{n}-q_0\right)^2+\frac{K_3}{2}\left(\mathbf{n}\times\nabla\times\mathbf{n}\right)^2\\
+\frac{K_4}{2}\,\nabla\cdot\left[\left(\mathbf{n}\cdot\nabla\right)\mathbf{n}-\mathbf{n}\left(\nabla\cdot\mathbf{n}\right)\right]-\frac{\varepsilon}{2}\left(\mathbf{E}\cdot\mathbf{n}\right)^2,
\label{fomega}\end{gathered}$$ where $q_0$ is the chirality parameter of the cholesteric phase, and the positive reals $K_1$, $K_2$, $K_3$, $K_4$ are the Frank elastic constants, which we set to $ K = K_1 = K_2 = K_3, \quad K_4=0 $ for the sake of simplicity. The last term in (\[fomega\]) represents the interaction energy density associated with a spatially uniform external static electric field $\mathbf{E}$, or equivalently a magnetic field $\mathbf{H}$, along the $\mathbf{k}$ direction. Of course, in the presence of the external electric (magnetic) field, the general rotational symmetry is broken and reduced to rotations around the direction of $\mathbf{E}$ ($\mathbf{H}$). In the absence of anchoring conditions, the field $\mathbf{n}\left(\mathbf{r}\right)$ would form a cholesteric helix with axis orthogonal to $\mathbf{E}$ ($\mathbf{H}$). However, supposing the CLC confined within the region $\mathcal{B}=\lbrace (x,y,z)\in\mathbb{R}^3, \mid z\mid\leq \dfrac{L}{2}\rbrace$, the translational symmetry in the direction of $\mathbf{k}$ is broken and the interaction of the CLC with the planar bounding surfaces can be encoded by the Rapini and Papoular[@rapini] additional surface energy contribution $$\mathcal{F}_s=\frac{K_s}{2}\left(1+\alpha\left(\bm{\nu}\times\mathbf{n}\right)^2\right),$$ where $K_s,\hspace{.1cm}\alpha>0$ and $\bm{\nu}$ is the unit outward normal to the boundary surface.
Strong homeotropic anchoring is obtained for $K_s\to\infty$, which corresponds to the Dirichlet boundary conditions $ \mathbf{n}\left(x, y, \pm \frac{L}{2}\right) = \mathbf{k} \equiv - \mathbf{k}.\label{surfcond2}$ So helices are deformed and confined within $\mathcal{B}$, and possibly extended structures called helicoids (or helicons and, sometimes, *fingers*) or spherulites (also *skyrmions*) may form, depending on the existence of a preferred direction of perturbations of $\mathbf{n}$. In order to find equilibrium configurations of the CLC we have to minimise the Frank free energy under the appropriate boundary conditions. We also limit ourselves to axisymmetric isolated solutions. Thus, assuming $\theta=\theta(\rho, z)$ and $\psi=\psi(\phi)$, where $\rho$, $z$ and $\phi$ are the usual cylindrical coordinates around the axis $\mathbf{k}$, the solution of minimal energy is given by $$\psi\left(\phi\right)=\phi+\frac{\pi}{2}, \label{eqpsi2}$$ and all the admissible equilibrium configurations are solutions of the dimensionless Boundary Value Problem (BVP) $$\begin{aligned} \label{theta2Dscal} \frac{\p^2 \theta}{\p z^2}+\frac{\p^2 \theta}{\p \rho^2}+\frac{1}{\rho}\frac{\p \theta}{\p \rho} -\frac{1}{\rho^2}\sin\theta\cos\theta \mp \frac{4\pi}{\rho}\sin^2\theta-\pi^4\left(\frac{E}{E_0}\right)^2\sin\theta\cos\theta=0, \vspace{1cm}\\ \begin{cases} \label{bccases} &\theta(0,z)= \pi,\vspace{.5cm}\hspace{.5cm}\theta(\infty,z)= 0,\\ &\p_z\theta\left(\rho,\pm\frac{\nu}{2}\right)=\mp 2\pi k_s \sin\theta\left(\rho,\pm \frac{\nu}{2}\right)\cos\theta\left(\rho,\pm \frac{\nu}{2}\right), \end{cases} \end{aligned}$$ where the lengths are rescaled with respect to the so-called pitch length $p=\frac{2\pi}{\mid q_0\mid}$. Here, $E_0=\dfrac{\pi \mid q_0\mid}{2} \sqrt{\dfrac{K}{\varepsilon}}$ is the critical unwinding field for the cholesteric-nematic transition in non-confined CLCs[@stewarta], $\nu=L/p$ is the normalized thickness of the layer and $k_s=K_s/(K q_0)$ is the strength of the liquid/boundary-surface interaction.
The $\mp$ sign in equation depends on the sign of $q_0$: in the following we take $q_0<0$, with no loss of generality. Moreover, it is convenient to simplify the notation by setting $\rho_1^2=\pi^4\left( \dfrac{E}{E_0}\right)^2$. System (\[theta2Dscal\]-\[bccases\]) is a 3D perturbed Sine-Gordon type equation: chirality and BCs do not allow one to integrate it in analytical form. The main deformation comes from the fifth term in (\[theta2Dscal\]), associated with the chirality of the system. Thus, the solutions of the BVP (\[theta2Dscal\]-\[bccases\]) can be obtained, at least to our knowledge, only by numerical methods. However, to get information about the shape of a spherulite, one can evaluate the asymptotic behaviours of the solutions near $\rho\leadsto 0$ and $\rho\leadsto\infty$. Moreover, let us consider first the pure cylindrical reduction of , i.e. $\theta_z=0$, which holds when $\nu$ is sufficiently large and modulations in the $z$ variable are discarded. Near $\rho\leadsto 0$ both the chiral and the electric interaction can be neglected with respect to the other terms; thus, setting both $q_0\to 0$ and $E\to 0$, equation reduces to the conformally invariant O(3)-sigma model in polar representation[@manton]. Accordingly, the solutions near $\rho\leadsto 0$ behave like the Belavin-Polyakov ones[@belp], namely $$\theta=2\arctan\left(\frac{\rho_0}{\rho}\right), \qquad \psi=\phi+\frac{\pi}{2}, \label{1BP}$$ where $\rho_0$ is an arbitrary scale factor due to the conformal invariance. The fourth and the fifth term in break the conformal symmetry. Thus, substituting solution in equation we obtain the estimate $$\rho_0=\frac{4}{\pi^3}\left(\frac{E_0}{E}\right)^2=\frac{4\pi}{\rho_1^2}, \label{rho0}$$ which can be interpreted as the typical scale of a spherulite. Then, around $\rho= 0$, at the lowest order the solution of (\[theta2Dscal\]-\[bccases\]) is approximated by $$\theta\left(\rho\right)=\pi-\frac{2\rho}{\rho_0}+O\left(\left(\frac{\rho}{\rho_0}\right)^3\right), \label{ansatzbulk0}$$ with $\rho_0$ fixed by .
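The small-$\rho$ analysis above can be checked directly: the Belavin-Polyakov profile solves the radial O(3) sigma-model equation $\theta''+\theta'/\rho-\sin\theta\cos\theta/\rho^2=0$, and its expansion near the core reproduces the stated linear behaviour. A minimal numerical sketch (the scale $\rho_0=1$ and the finite-difference step are arbitrary choices, not values from the paper):

```python
import numpy as np

rho0 = 1.0  # arbitrary conformal scale factor
theta = lambda r: 2.0 * np.arctan(rho0 / r)  # Belavin-Polyakov profile

r = np.linspace(0.2, 5.0, 500)
h = 1e-4  # finite-difference step
d1 = (theta(r + h) - theta(r - h)) / (2 * h)
d2 = (theta(r + h) - 2 * theta(r) + theta(r - h)) / h**2
# residual of the radial sigma-model equation; should vanish up to FD error
residual = d2 + d1 / r - np.sin(theta(r)) * np.cos(theta(r)) / r**2

# small-rho expansion: theta ~ pi - 2 rho/rho0 + O((rho/rho0)^3)
r_small = 0.01
expansion_error = abs(theta(r_small) - (np.pi - 2 * r_small / rho0))
```

The residual stays at finite-difference noise level over the whole range, confirming that the chiral and field terms are the only ones breaking this solution.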
Furthermore, in order to have information also about the modulation in the $z$ direction, as a rough approximation we deform the solution into $$\theta\left(\rho,z\right)=\begin{cases}\pi\left(1-\dfrac{\rho}{\rho_0\, Z(z)}\right) & \rho/Z(z)< \rho_0\\ 0 & \rho/Z(z)>\rho_0 \end{cases}, \label{linansatz}$$ with $\rho_0$ given by , and replace it into the Frank-Oseen energy. Its minimisation leads to an equation of the form $$Z''(z)-\mu^2\, Z(z)+\mu^2=0, \label{eleqz}$$ for an effective constant $\mu$, which, upon imposing the boundary conditions , yields a solution of the form $$Z(z)= 1-A\,\cosh\left(\mu z\right). \label{zansatz}$$ We note that the sizes of the vortices decrease as $\mid z\mid$ and $k_s$ increase, as can be seen in figure \[fig:approxsfe\]. We will assume hereafter that this is the $z$-modulation of the skyrmion in the entire volume. In the asymptotic limit $\rho\to\infty$ the dominant term comes from the external electric field, which affects the shape of the skyrmion through the reduced equation $$\frac{\p^2\theta}{\p\rho^2}+\frac{1}{\rho}\frac{\p\theta}{\p\rho}-\frac{\rho_1^2}{2}\,\sin 2\theta=0, \label{sgcil}$$ which is known as the cylindrical Sine-Gordon equation [@barone]. The most relevant fact about this equation is its connection with the celebrated Painlevé $\mathrm{III}$ equation[@ablowitz; @McCoy; @noiProc] (see also [@NIST:DLMF], Ch. 32): hence it can be solved analytically. However, in correspondence with the boundary conditions at $\infty$ stated in , this equation always has solutions singular at $\rho\to 0$. Thus the validity of such an approximation is limited to a neighbourhood of $\infty$, where its asymptotics is $$\theta\leadsto c_2\, \frac{e^{-\rho_1 \rho}}{\sqrt{\rho_1 \rho}}. \label{asymptlin}$$ This result is sufficiently similar to the one obtained in the linear approximation, which leads to first-order modified Bessel functions of the second kind with almost analogous asymptotics. The above results give us useful indications about the shape of the spherulite/skyrmion, but many important details are missing. In fact, to have a good account of them and to estimate the quality of the approximations made above, we need to perform numerical calculations on the BVP described by (\[theta2Dscal\]-\[bccases\]).
To this aim, we use the standard central finite difference discretisation and the Newton-Raphson method [@recipes; @leveque], initialized by the shooting method for the planar reduction of the system (i.e. $\theta_z=0$). It turns out that for sufficiently large electric fields, i.e. $\frac{E}{E_0} > 1$, the linear approximation matches the numerical solution quite closely, as represented in fig. \[fig:comparison1.5Ana\]. On the other hand, the approximations become very rough for relatively weak fields, i.e. $\frac{E}{E_0} \approx 1$, as shown in fig. \[fig:comparison1.02Ana\]. For the numerical cases considered in the present work, this behaviour indicates the underestimation of the chiral term in the linear approximation, in particular at the intermediate scales $\rho_1\leq \rho \leq \rho_0$. ![Comparison between the numerical solution of and the analytical linear approximations for $\frac{E}{E_0}=1.02$.](102-eps-converted-to.pdf "fig:"){width="10cm" height="5cm"} \[fig:comparison1.02Ana\] ![Comparison between the numerical solution of and the analytical linear approximations for $\frac{E}{E_0}=1.5$.](150-eps-converted-to.pdf "fig:"){width="10cm" height="5cm"} \[fig:comparison1.5Ana\] The numerical solutions of the BVP , for different values of the pair $\left( \dfrac{E}{E_0}, k_s\right)$, are depicted in figures \[fig:profiles1\] and \[fig:profiles2\]. In each figure the profiles $\theta(\rho)$ for different values of $z\in [-\nu/2,\nu/2]$ are represented. In figure \[fig:profiles1\] we have $\dfrac{E}{E_0}=1.02$ and the strength of the anchoring $k_s=0.1, 6$. In figure \[fig:profiles2\] we have $\dfrac{E}{E_0}=1.5$ with the same values of $k_s$. We note that, when the strength of the anchoring is small, the profiles are almost the same for every value of the coordinate $z$.
This means that, when the interfaces at the boundaries of the cell have a really small homeotropic effect on the director’s configuration, a quasi-perfect cylindrical symmetry holds for axisymmetric solutions. In this case, the planar vortices described by $\theta(\rho)$ have, for every value of $z$, the same, maximum, size. However, if we impose a stronger homeotropic effect at the boundaries, the vortices tend to have a reduced size, which becomes smaller as $\mid z\mid$ approaches the value $\dfrac{\nu}{2}$. In both figures \[fig:profiles1\] and \[fig:profiles2\], the value of the dimensionless thickness of the cell is $\nu=1.8$. A different representation of the spherulite is given by reporting the intersection point with the $\rho$ axis of the tangent at the inflection point of $\theta(\rho)$, for a fixed value of $z$[@Leonov]. The results of this procedure are reported in figures \[fig:size1\].a and \[fig:size1\].b, for the two different values of $\dfrac{E}{E_0}$ taken into consideration. We stress that for greater external fields the size of all vortices narrows. Diffusion of Light on a CLC cylindrical structure {#sec:diffusion} ================================================== In this section we consider the scattering of an e.m. wave, propagating through a confined CLC, which is under suitable conditions for a spherulite to be formed. The geometry, as well as the choice of the Cartesian axes, is the same as in the previous section. We assume that the wave vector is parallel to the $\left(x, y\right)$-plane. The propagation of the wave is described in terms of the oscillating electric field $\vE$, to be distinguished from the static electric field $\mathbf{E}$, and of the associated electric displacement field $\vD$, by the equation [@Jackson] $$\nabla\left(\nabla\cdot\vE\right)-\nabla^2\vE = -\frac{1}{c^2}\,\p_{t t}\vD. \label{Waveeq}$$ This equation has been obtained, as usual, by eliminating the magnetic field from Maxwell’s equations.
The electric anisotropy of the CLC is made explicit by the existence of a permittivity tensor, which locally has an orthogonal component $\epsilon_\bot$ if $\vE \bot \mathbf{n}$ and a parallel component $\epsilon_\|$ if $\vE \| \mathbf{n}$. Then the constitutive relation is given by [@deGennes; @Kleman; @Stewart] $$\vD = \epsilon_\bot\, \vE + \Delta\epsilon\left(\bn\cdot\vE\right)\bn\,, \qquad \Delta\epsilon = \epsilon_\| - \epsilon_\bot.$$ Let us assume that the incident wave is described by the electric field $$\vE \leadsto \left(\cE_y\, \hy\, e^{\imath k x}+\cE_z\, \hz\, e^{\imath \tilde{k} x}\right) e^{-\imath\omega t}\,, \qquad x\to-\infty,$$ where $\tilde{k} = k \sqrt{1+ \frac{\Delta \epsilon}{ \epsilon_\bot}}$. Since we suppose that the spherulite is not perturbed by the wave, we need to assume certain supplementary conditions: 1. The liquid crystal molecules are not deformed/rotated by the wave, which implies $\omega \gg \frac{1}{\tau}$, $\tau$ being any “relaxation time” of the CLC. 2. The diffractive effects in the light scattering on the spherulite are not negligible; thus we assume that its wavelength is $\lambda \lesssim \rho_0 $ (or $k\left(\omega\right) \gtrsim \frac{1}{\rho_0}$), $\rho_0$ being the typical size of the spherulite defined in equation . 3. The horizontal bounding plates are considered as homogeneous dielectric planes, thus restricting the electric field to be periodic along the $z$ axis. 4. A strong supplementary condition we introduce is $\nabla \cdot \vE = 0$, which may imply $\nabla \cdot \vD = \rho_{free} \neq 0$. This should be true in the core of the spherulites, where we expect significant variations of the fields. However, at this stage of our analysis we prefer to adopt such an assumption, because the equations become simpler. Then an a posteriori evaluation of the local free charge density will clarify how good our hypothesis is. 5. A final remark concerns the functional dependency of the shape of the spherulite, which we assume to be simply $\theta = \theta\left(\rho\right)$. Thus, for the sake of simplicity, we neglect the modulation along the $z$ axis described by .
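The anisotropic constitutive law above is the only material input of the scattering problem, and its tensor structure can be illustrated in a few lines; the permittivity values below are arbitrary assumptions, only the decomposition into parallel and orthogonal responses matters:

```python
import numpy as np

# Assumed illustrative permittivities (not values from the paper)
eps_perp, eps_par = 5.2, 7.0
d_eps = eps_par - eps_perp  # Delta epsilon

def displacement(E, n):
    """D = eps_perp E + Delta_eps (n.E) n, for a unit director n."""
    return eps_perp * E + d_eps * np.dot(n, E) * n

n = np.array([0.0, 0.0, 1.0])       # far-field homeotropic director, along z
E_par = np.array([0.0, 0.0, 2.0])   # field parallel to n: sees eps_par
E_perp = np.array([1.0, 0.0, 0.0])  # field orthogonal to n: sees eps_perp
```

For a field parallel to the director the medium responds with $\epsilon_\|$, orthogonal fields see $\epsilon_\bot$, which is exactly why the two incident polarizations above propagate with different wavenumbers $k$ and $\tilde k$.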
Under the conditions above, equation leads to the equation for ${\vec \cE}= {\vec \cE}\left(\vr\right)$ $$\nabla^2 \vec\cE = - k^2\, \hat{\epsilon}\; \vec\cE\,, \label{redWaveeq}$$ where $k = \frac{\omega}{c} \sqrt{\epsilon_\bot}$ and the coupling matrix is $\hat{\epsilon} = \mathbf{1}_3 + \frac{\Delta\epsilon}{\epsilon_\bot}\, \bn\otimes\bn$. Since $\bn = \bn\left(\rho, \phi\right) = \cos\theta\left(\rho\right)\bk + \sin\theta\left(\rho\right) \boldsymbol{\phi}$, we are naturally led to express also the electric field as $$\vec\cE = \cE_\rho\left(\rho, \phi, z\right)\bm{\rho} + \cE_\phi\left(\rho, \phi, z\right)\bm{\phi} + \cE_z\left(\rho, \phi, z\right)\bk.$$ It is now easier to make the off-diagonal contributions to explicit. Indeed, provided that [(]{} = [(]{}\_+ \_z[(]{} + , then reads &[(]{}\^2 + k\^2 [(]{}\_+ \_ + \_z = \[vectorhelmh\]\ & - k\^2 [(]{}\_+ \_z + [(]{}\_+ \_z . However, the Laplacian operator now acts on the cylindrical components of a vector field, so it takes different expressions according to the component index. In particular, by defining $\nabla_0^2 \cdot = \frac{1}{\rho} \p_\rho {\left (}\rho \, \p_\rho \cdot \rg + \frac{1}{\rho^2} \p^2_\phi + \p^2_z $, equation becomes [(]{}\_0\^2 + k\^2 \_& =& [(]{}\_+2 \_ \_, \[WaveRho\]\ [(]{}\_0\^2 + k\^2 \_& =& [(]{}\_- 2 \_ \_- k\^2 [(]{}\^2 \_+ 2\_z, \[WavePhi\]\ [(]{}\_0\^2 + \^2 \_z & = & - k\^2 [(]{} 2\_- \^2 \_z, = k . \[WaveZ\] In order to describe the scattering of the light on the spherulite, the above equations have to be solved with the asymptotic conditions \_[ ]{} \_ e\^[ ]{}, \_[ ]{} \_ e\^[ ]{}, \_[ z]{} \_[z]{} e\^[ ]{} .\[asymptFields\] Of course such asymptotic conditions are exact solutions of the homogeneous system above, i.e. when $\frac{\Delta \epsilon}{\epsilon_\bot} \to 0$. As the problem of finding a complete analytical solution to - is quite hard, let us consider a perturbative setting. The basic idea is to first give a Born approximated solution of equation , keeping an implicit dependence on $\cE_{ \phi}$. Then we can use it in , which becomes a closed, albeit non-local, linear equation in $\cE_{ \phi}$.
Solving it, in the same approximation, one can use these results in for $\cE_{ \rho}$. Perturbative solutions of the light scattering equations by a spherulite {#sec:Born} ======================================================================== The *out plane* conversion {#sec:Bornout} -------------------------- Following the standard method by Lippmann-Schwinger [@LippSchwi], let us rewrite equation as the integral equation $$\cE_z\left(\vr\right)= \cE_{\infty z}\, e^{\imath \tilde{k} x} + \int G\left(\vr,\vr'\right)\, U\left[\cE_z\left(\vr'\right), \cE_\phi\left(\vr'\right),\theta\left(\rho'\right)\right] d\vr'\,, \label{LSintEq}$$ where $U\left[\cE_z(\vr), \cE_\phi(\vr),\theta(\rho)\right] = - k^2 \frac{\Delta \epsilon}{\epsilon_\bot} \left(\frac{1}{2} \sin 2\theta(\rho) \,\cE_\phi(\vr) - \sin^2 \theta(\rho) \,\cE_z(\vr) \right) $ and the Green function $G\left(\vr,\vr'\right)$ is a solution of the PDE $$\left(\nabla_0^2 + \tilde{k}^2\right) G\left(\vr,\vr'\right)= \frac{1}{\rho'}\,\delta\left(\rho-\rho'\right)\delta\left(\phi-\phi'\right)\delta\left(z - z'\right), \label{Geq}$$ provided that it is differentiable in its domain (i.e. the CLC layer) except at the point $\rho = \rho', \; \phi = \phi', \; z = z'$. There the partial first derivatives exist, but they are not continuous, in such a way that the second derivatives admit the singularity defined by the r.h.s. in . The function $G$ can take the form $$G = \sum_{m, n = -\infty}^\infty e^{\imath \frac{2\pi n}{\nu}\left(z - z'\right)}\, e^{\imath m \left(\phi - \phi'\right)}\, h_{m, n}\left(\rho, \rho'\right), \label{GreenSeries}$$ where the functions $h_{m,n}$ satisfy the Bessel type equation with singular inhomogeneity [@NIST:DLMF] $$\left[\frac{1}{\rho}\,\p_\rho\left(\rho\, \p_\rho\right) - \frac{m^2}{\rho^2} + \tilde{k}^2 - \left(\frac{2\pi n}{\nu}\right)^2\right] h_{m, n} = \frac{1}{2\pi\nu\,\rho'}\,\delta\left(\rho - \rho'\right), \label{BesselEq}$$ and the cutoff frequency $\kappa_n = \sqrt{\tilde{k}^2 - \left(\frac{2 \pi n}{\nu}\right)^2} \; \textrm{for} \; \tilde{k} \geq \frac{2 \pi n}{\nu}$ is induced by the finite transverse size of the CLC layer. We require $G$ to be a continuous function with a bounded behaviour at $\rho \to 0$ and, additionally, to be a cylindrical progressive wave as $\rho \to \infty$, i.e. of the form $\propto \frac{e^{\imath \kappa \rho}}{\sqrt{\kappa \rho}}$.
Furthermore, because of the $\delta$-like inhomogeneity, $G$ can have discontinuities only in the first derivatives at $\rho \to \rho'$. In conclusion, by imposing the above conditions, the Green function takes the form $$G\left(\vr,\vr'\right)= -\frac{\imath}{4\nu}\sum_{m, n = -\infty}^\infty e^{\imath \frac{2\pi n}{\nu}\left(z - z'\right)}\, e^{\imath m \left(\phi - \phi'\right)}\, H^{\left(1\right)}_m\left(\kappa_n \rho_>\right) J_m\left(\kappa_n \rho_<\right), \qquad \begin{cases} \rho_> = \max\left(\rho, \rho'\right)\\ \rho_< = \min\left(\rho, \rho'\right) \end{cases}$$ where $J_m$ denotes the Bessel function of first kind with integer order $m$ and $H^{\left(1\right)}_m\left(\zeta\right) = J_m\left(\zeta\right) + \imath\, Y_m\left(\zeta\right)$ the corresponding Hankel function of first kind [@NIST:DLMF]. Without further calculations, dramatic simplifications stem from our assumption 5. in Sec. 3, implying that the only non-vanishing contributions come from the $n = 0$ mode. Moreover, we are actually interested in the behaviour of the wave at radii much larger than the effective size of the spherulite, which decreases very fast, as we noticed in (2.14). Thus the form of the Green function we have to use is $$G_{sempl}\left(\vr,\vr'\right)= -\frac{\imath}{4\nu}\sum_{m = -\infty}^\infty e^{\imath m \left(\phi - \phi'\right)}\, H^{\left(1\right)}_m\left(\tilde{k}\rho\right)\, J_m\left(\tilde{k}\rho'\right).$$ Now, replacing the above formula into and introducing the explicit form of the potential $U$, we see from that the parameter $\frac{\Delta \epsilon}{\epsilon_\bot}$ can be considered as a perturbation parameter, allowing us to express the wave function as a series of powers of it. At zeroth order the solution is given by the asymptotics , which, replaced into , provides at first order (Born approximation) the corrections to the plane wave propagation.
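The Born-order integration over $\phi'$ carried out next relies on the cylindrical-harmonic (Jacobi-Anger) expansion of the incident plane wave, $e^{\imath\zeta\cos p}=\sum_l e^{\imath\pi l/2} e^{\imath l p} J_l(\zeta)$; a quick numerical check of a truncated sum (the values of $\zeta$ and $p$ are arbitrary):

```python
import numpy as np
from scipy.special import jv

zeta, p = 3.7, 0.9          # arbitrary test values
lhs = np.exp(1j * zeta * np.cos(p))

# truncated Jacobi-Anger sum; J_l(3.7) is negligible for |l| > 40
ls = np.arange(-40, 41)
rhs = np.sum(np.exp(1j * np.pi * ls / 2) * np.exp(1j * ls * p) * jv(ls, zeta))
```

The truncation error decays super-exponentially once $|l|$ exceeds the argument $\zeta$, which is what makes the mode sums of the scattering amplitudes rapidly convergent in practice.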
Thus, in the Born limit, by the identity $ e^{\imath \zeta \cos p} = \sum_{l = -\infty}^\infty e^{\frac{\imath \,\pi \,l}{2}} \,e^{\imath \, l \, p}\, J_l\left(\zeta\right)$, one can integrate on $\phi'$ and obtain the approximated expression $ \cE_z^B\left(\vr\right) $ of the $ \cE_z\left(\vr\right) $ component as follows & \_z\^[B]{}[(]{}= \_[z]{} e\^[ ]{} + \_[m = -]{}\^e\^[i m ]{} H\_m\^[(1)]{}( )\ &\_[ ]{} (2 [(]{}’)J\_m( ’)[(]{}J\_[m-1]{}( k ’) - J\_[m+1]{}( k ’) - \_ \^[m+1]{} \^2([(]{}’)J\_m\^2( ’) ’ d’ . \[BornZaxis2\] In order to have a simple estimate of the integrals in the above expression, let us resort to the asymptotic expressions of the spherulite given by and . Actually, the simplest rough choice is (with $Z\left(z\right) = 1$), which we will adopt here, since we are not interested in the exact values of the diffusion amplitudes, but only in their approximate size. Thus, we have to evaluate integrals of the form $$\begin{aligned} {\cal I}^\phi_m &=& - \int_0^{k \rho_0} \sin 2\theta\left(\frac{s}{k}\right) J_m\left(s\right)\left[J_{m-1}\left(s\right) - J_{m+1}\left(s\right)\right] s\, d s = - \int_0^{k \rho_0} \sin 2\theta\left(\frac{s}{k}\right) \left(J_m(s)^2\right)' s\, d s, \label{matrix1}\\ {\cal I}^z_m &=& \int_0^{k \rho_0} \sin^2\theta\left(\frac{s}{k}\right) J_m^2\left(s\right)\, s\, d s, \label{matrix2}\end{aligned}$$ where the substitution $\tilde{k} \to k$ is justified, since the difference is of the order $ \frac{\Delta \epsilon }{ \epsilon _\bot}$ as stated in . At the moment the above matrix elements do not yet have an analytical expression and should be computed numerically. However, a simple estimate can be obtained by evaluating the amplitude of the oscillating Bessel function expression, modulated by the remaining factors of the integrands. Let us consider first . There, the expression involving the Bessel functions is simply $ \left(J_m(s)^2\right)' $, thus we have to look for the zeroes of $ \left(J_m(s)^2\right)'' = \frac{1}{2} \left((J_{m-1}(s)-J_{m+1}(s )){}^2+J_m(s) (J_{m-2}(s)-2 J_m(s)+J_{m+2}(s))\right)$. Then, we may evaluate the integrals accordingly; however, some manipulations may be useful.
First, we recall that the Fourier transform of the Bessel functions is given by $$B_m\left(\omega\right)= \int_{- \infty}^{+ \infty} dt\, e^{-\imath\omega t}\, J_m\left(t\right) = \begin{cases} \dfrac{2\left(-\imath\right)^m\, T_m\left(\omega\right)}{\sqrt{1-\omega^2}} & \mid\omega\mid < 1\\ 0 & \mid\omega\mid > 1 \end{cases},$$ where $T_m\left(\omega\right)$ denotes the Chebyshev polynomial of order $m$. Then, the matrix element $ {\cal I}^\phi_m$ becomes \^\_m = - \_[-]{}\^[-]{} d \_[-]{}\^[-]{} d’ \_0\^[k \_0 ]{}d s B\_m()[(]{}B\_[m-1]{}(’) - J\_[m+1]{}( s) s Examples of the numerical evaluation of a certain number of integrals are given in figure \[fig:integrIm\]. Before proceeding with such calculations, let us show the form of the cross section for the conversion of an *in plane* polarized wave into an *out plane* polarized one. In fact, let us suppose $\cE_{\infty z} = 0$; then the scattered amplitude along the $z$-axis reads \_z\^[B]{}[(]{}= \_ \_[m = -]{}\^\^[m]{} [I]{}\^\_m e\^[i m ]{} H\_m\^[(1)]{}( ) . Recalling that at infinity the asymptotic behaviour of the Hankel functions is $ H_m^{(1)}( \zeta) \leadsto \frac{(1-i) e^{i \zeta-\frac{i \pi m}{2}}}{\sqrt{\pi\, \zeta }} + O\left(\zeta^{-3/2}\right),$ the above expression becomes \_z\^[B]{}[(]{}&=& [(]{}1 - \_ \_[m = -]{}\^ [I]{}\^\_m e\^[i m ]{} =\ & &[(]{}1 - \_ \^\_0 + 2 \_[m = 1 ]{}\^[+ ]{} [I]{}\^\_m , where the identity ${\cal I}^\phi_{- m} = {\cal I}^\phi_m$ has been used, which can be easily proved from and by $J_{- m} = \left(-1\right)^m J_m$. The cross section for the conversion of linearly *in plane* polarized $\hy$ light into the *out plane* $\hz$ one is given by [(]{}, ; , = [(]{} \^2 \^\_0 + 2 \_[m = 1 ]{}\^[+ ]{} [I]{}\^\_m \^2, \[convcrsection\]where we have singled out the dependency on the geometrical size of the spherulites from its relative size with respect to the light wavelength used. The calculations of the conversion cross section in the direction $\hr\left(\phi\right)$ indicate that there is a quite well-defined small angle, around $10^\circ$ in our numerical examples, along which the rotation of the polarization is efficiently performed.
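The far-field reduction above rests on the large-argument asymptotics of the Hankel functions; the quality of that approximation is easy to probe numerically (the test point $\zeta$ and the orders checked are arbitrary choices):

```python
import numpy as np
from scipy.special import hankel1

zeta = 300.0                 # arbitrary "large radius" test point
ms = np.arange(0, 4)         # a few low orders
exact = hankel1(ms, zeta)
# H_m^(1)(z) ~ (1 - i) e^{i z - i pi m / 2} / sqrt(pi z) for large z
approx = (1 - 1j) * np.exp(1j * zeta - 1j * np.pi * ms / 2) / np.sqrt(np.pi * zeta)
err = np.abs(exact - approx)
```

The error is of relative order $(4m^2-1)/(8\zeta)$, so the far-field form is already accurate to a fraction of a percent a few hundred wavelengths from the spherulite, consistent with its use in the cross-section formula.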
The angle of maximum conversion is $\propto \left(k \rho_0\right)^{-1}$; thus it becomes smaller as the wavelength becomes shorter. A further remarkable aspect is the vanishing of the backscattering. The effective values depend basically on the square of the anisotropy ratio $\left(\frac{\Delta \epsilon }{ \epsilon _\bot} \right)^2$. In fact, the total cross section takes the expression \_[conv]{} = [(]{} \^2 [(]{}[I]{}\^\_0\^2 + 2 \_[m = 1 ]{}\^[ ]{} [(]{}[I]{}\^\_m\^2 , which is a decreasing function of $k\, \rho_0$. Let us turn our attention again to equation . By using the exact numerical solutions for the spherulite (figures \[fig:profiles1\] and \[fig:profiles2\]), we can obtain the exact expression of the differential cross section in , for the scattering of an electromagnetic wave by a skyrmion. A direct comparison between the exact differential cross section and the approximated one, in arbitrary units, is reported in figures \[fig:comparison1\] and \[fig:comparison2\]. As can be seen, the exact numerical solution for the spherulites makes the angle of maximum conversion smaller than the one computed through the use of the approximated solution. Furthermore, recalling that as the external electric field increases the size of the spherulite decreases (as described by equation ), we note that the larger the skyrmion, the more efficient the polarization conversion with respect to the approximated one. The *in plane* conversion {#sec:Bornin} ------------------------- Now let us turn our attention to the subsystem -, which can be represented in the form $$\begin{pmatrix} L + k^2 & -M\\ M & L + k^2 \end{pmatrix} \begin{pmatrix} \cE_\rho\\ \cE_\phi \end{pmatrix} = k^2\, \frac{\Delta\epsilon}{\epsilon_\bot} \begin{pmatrix} 0\\ \sin^2\theta\; \cE_\phi + \frac{1}{2}\sin 2\theta\; \cE_z \end{pmatrix}, \label{Erhophisub}$$ where $L = \nabla_0^2 - \frac{1}{\rho^2} $ and $M = \frac{2}{\rho^2}\, \p_\phi$.
As before, it can be set into the integral form &[(]{} [c]{} \_\ \_ ) & = [(]{} [c]{} \_\ \_ ) e\^[ ]{} +\ &k\^2 & ( [cc]{} G[(]{},’& - F[(]{},’\ F[(]{},’& G[(]{},’ ) [(]{} [c]{} 0\ \^2 [(]{}’\_[(]{}’+ 2[(]{}’\_z[(]{}’ d’ . Of course, the inhomogeneous term is a solution of the homogeneous system, and the matrix Green function is ( [cc]{} G[(]{},’& - F[(]{},’\ F[(]{},’& G[(]{},’ ) = \_[m, n = -]{}\^e\^[ [(]{}z - z’]{} e\^[ m [(]{}- ’]{} ( [cc]{} h\_[m, n ]{}[(]{}, ’ & - f\_[m, n ]{}[(]{}, ’\ f\_[m, n ]{}[(]{}, ’ & h\_[m, n ]{}[(]{}, ’ ) , where the unknowns $ h_{m, \, n }, \,f_{m, \, n }$ satisfy the matrix equation \[systemhf\] [(]{} [cc]{} \_[(]{} \_ - +\_n\^2&\ & \_[(]{} \_ - +\_n\^2 ) ( [cc]{} h\_[m, n ]{}[(]{}, ’ & - f\_[m, n ]{}[(]{}, ’\ f\_[m, n ]{}[(]{}, ’ & h\_[m, n ]{}[(]{}, ’ ) = [(]{}- ’, where $\kappa_n^2= k^2 -{\left (}\frac{2\, \pi \, n}{\nu}\rg^2$. As in the previous subsection, we limit ourselves to evaluating the diffusion of light by the spherulite in the Born approximation. Accordingly, the conversion from out-plane to in-plane scattering leads to the following approximated expression [(]{} [c]{} \_\^B\ \_\^B ) = \_[z]{} \_[m, n = -]{}\^2[(]{}’ e\^[ ’ ’]{} e\^[ [(]{}z - z’]{} e\^[ m [(]{}- ’]{} ( [c]{} - f\_[m, n ]{}[(]{}, ’\ h\_[m, n ]{}[(]{}, ’ ) d’ , where $h_{m, \, n }$ and $f_{m, \, n }$ are solutions of the system . Again, using the simplification induced by assumption 5. in Sec. 3 and the expansion of the plane wave factor in terms of Bessel functions, one gets [(]{} [c]{} \_\^B\ \_\^B ) = \_[z]{} \_[m= -]{}\^\^me\^[ m ]{} 2[(]{}’ J\_m[(]{} ’ ( [c]{} - f\_[m ]{}[(]{}, ’\ h\_[m ]{}[(]{}, ’ ) ’ d’ , \[InplaneBorn\] where we dropped the subscript $n$ from both $h_{m, \, n }$ and $f_{m, \, n }$, as the only non-vanishing contributions come from the $n = 0 $ mode. The squared modulus of the above quantity, properly managed, will produce the cross section of the *out plane* - *in plane* scattering process.
From we obtain the equations for $h_{m}$ and $f_{m}$ as follows { [cc]{} h\_m”[(]{}++ (k\^2- ) h\_m[(]{}+ & = ,\ f\_m”[(]{}++(k\^2- ) f\_m[(]{}+ & = 0. . \[syshf\] The general solution of the system above is h\_m\^ &=& c\_1\^ J\_[m-1]{}[(]{}k + c\_2\^ Y\_[m-1]{}[(]{}k + d\_1\^ J\_[m+1]{}[(]{}k + d\_2\^Y\_[m+1]{}[(]{}k ,\ f\_m\^ &= & c\_1\^ J\_[m-1]{}[(]{}k + c\_2\^ Y\_[m-1]{}[(]{}k - d\_1\^ J\_[m+1]{}[(]{}k - d\_2\^ Y\_[m+1]{}[(]{}k , where $c_i^\pm \,$ and $d_i^\pm \,$ are four arbitrary constants in each of the regions $\rho <\rho' $ and $\rho > \rho' $, respectively. Continuity of the solutions and discontinuity of their first derivatives at $\rho'$ imply a functional dependency of those coefficients on this variable. Moreover, as in the previous section, we require regularity at $\rho \to \, 0$ and radiative behaviour at $\rho \to \, \infty$. All the conditions above lead to a linear system, from which one obtains the values of the unknown coefficients, namely c\_1\^[+]{} = - H\^[[(]{}1 ]{}\_[m-1]{}[(]{}k ’ , d\_1\^[+]{} = - H\^[[(]{}1 ]{}\_[m+1]{}[(]{}k ’ , c\_2\^[+]{} = d\_2\^[+]{} = 0 ,\ c\_1\^[-]{} = c\_2\^[-]{} = - J\_[m-1]{}[(]{}k ’ , d\_1\^[-]{} = d\_2\^[-]{} = - J\_[m+1]{}[(]{}k ’ . Now we are in a position to evaluate . Resorting again to the asymptotic behaviour of the Hankel functions, the solution at infinity can be estimated as e\^[- ]{}\_[m= -]{}\^e\^[ m ]{} 2[(]{}’ J\_m[(]{} ’ ( [c]{} - J\_[m-1]{}[(]{}k ’ + J\_[m+1]{}[(]{}k ’\ J\_[m-1]{}[(]{}k ’ - J\_[m+1]{}[(]{}k ’ ) ’ d’ .
\[InplaneBorn2\] Setting $$\begin{aligned} \frac{1}{\tilde{k}^2} I^{(\rho)}_m=&\frac{1}{k^2} \int \sin 2\theta\left(\frac{s}{\tilde{k}}\right) J_m\left(s \right) \left[ J_{m-1}\left(k\, \frac{s}{\tilde{k}} \right) + J_{m+1}\left(k\, \frac{s}{\tilde{k}} \right) \right] s\, d s \label{matrix3}\\ \frac{1}{\tilde{k}^2}I^{(\phi)}_m=&\frac{1}{k^2}\int \sin 2\theta\left(\frac{s}{\tilde{k}}\right) J_m\left(s \right) \left[ J_{m-1}\left(k\,\frac{s}{\tilde{k}} \right) - J_{m+1}\left(k\, \frac{s}{\tilde{k}} \right) \right] s\, d s, \label{matrix4}\end{aligned}$$ equation (\[InplaneBorn2\]) can be rewritten as $$\frac{\pi k^4\, \mathcal{E}_z}{4\nu\, \tilde{k}^4}\, \sqrt{\frac{2}{\pi k \rho}}\; e^{i\left(k\rho - \frac{\pi}{4}\right)} \sum_{m= -\infty}^{\infty} e^{i m \phi} \begin{pmatrix} -I^{(\rho)}_m \\ I^{(\phi)}_m \end{pmatrix} .$$ \[InplaneBorn3\] Recalling the identity $J_{-m}=(-1)^m J_{m}$, it is easy to show that $I^{(\rho)}_0=0$, $I^{(\rho)}_m=-I^{(\rho)}_{-m}$ and $I^{(\phi)}_m=I^{(\phi)}_{-m}$, so that equation (\[InplaneBorn3\]) now reads $$\frac{\pi k^4\, \mathcal{E}_z}{4\nu\, \tilde{k}^4}\, \sqrt{\frac{2}{\pi k \rho}}\; e^{i\left(k\rho - \frac{\pi}{4}\right)} \begin{pmatrix} -2i \sum_{m= 1}^{\infty} I_m^{(\rho)} \sin m\phi \\[1mm] I^{(\phi)}_0 + 2 \sum_{m= 1}^{\infty} I_m^{(\phi)} \cos m\phi \end{pmatrix} .$$ \[InplaneBorn4\] Performing again the substitution $\tilde{k} \to k$, we notice that $I_{m}^{(\phi)}$ is the same as $\mathcal{I}_m^\phi$ obtained in the previous subsection. On the other hand, the values of the first three hundred matrix elements are presented in figure \[fig:matrix3\]. The *in plane-conversion* cross section is then given by $$\frac{d\sigma}{d\phi} = \frac{2}{\pi k} \left( \frac{\pi k^4}{4\nu\, \tilde{k}^4} \right)^2 \left[ 4\left(\sum_{m= 1}^{\infty} I_m^{(\rho)} \sin m\phi\right)^2+\left({\mathcal I}^{(\phi)}_0 + 2 \sum_{m = 1 }^{+\infty} {\mathcal I}^{(\phi)}_m \cos m\phi \right)^2 \right]$$ \[convcrsection2\] and the total cross section reads $$\sigma_{\rm tot} = \frac{4}{k} \left( \frac{\pi k^4}{4\nu\, \tilde{k}^4} \right)^2 \left[ \left({\mathcal I}^{(\phi)}_0\right)^2+ 2\sum_{m= 1}^{\infty} \left(I_m^{(\rho)}\right)^2 + 2 \sum_{m = 1 }^{+\infty} \left({\mathcal I}^{(\phi)}_m \right)^2 \right] .$$ \[convcrsectiontot2\] The numerical results, in arbitrary units, for the computation of the differential cross section are depicted in figures \[fig:sigmaplane\] and \[fig:sigmaplane2\], for two different values of the ratio $E/E_0$.
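As a numerical aside (not part of the original text): the claim that the combinations $J_{m-1}\pm J_{m+1}$ solve the coupled radial system, and the parity identity $J_{-m}=(-1)^m J_m$ used to fold the sum over $m$, can be checked in a few lines of Python. The diagonal term $(m^2+1)/\rho^2$ and the coupling $2m/\rho^2$ are our reading of the (garbled) system, and `bessel_j` is a small helper summing the defining power series.

```python
import math

def bessel_j(m, x, terms=60):
    """Integer-order Bessel J_m(x) summed from its power series (fine for moderate x)."""
    if m < 0:
        return (-1) ** (-m) * bessel_j(-m, x, terms)
    return sum((-1) ** j / (math.factorial(j) * math.factorial(j + m))
               * (x / 2) ** (2 * j + m) for j in range(terms))

k, m = 1.3, 2
h = lambda r: bessel_j(m - 1, k * r) + bessel_j(m + 1, k * r)   # 'h-like' combination
f = lambda r: bessel_j(m - 1, k * r) - bessel_j(m + 1, k * r)   # 'f-like' combination

def residual(u, v, rho, eps=1e-4):
    """u'' + u'/rho + (k^2 - (m^2+1)/rho^2) u + (2m/rho^2) v, via central differences."""
    d1 = (u(rho + eps) - u(rho - eps)) / (2 * eps)
    d2 = (u(rho + eps) - 2 * u(rho) + u(rho - eps)) / eps ** 2
    return d2 + d1 / rho + (k ** 2 - (m ** 2 + 1) / rho ** 2) * u(rho) \
           + (2 * m / rho ** 2) * v(rho)

for rho in (0.7, 1.1, 1.9):
    assert abs(residual(h, f, rho)) < 1e-5   # homogeneous h-equation
    assert abs(residual(f, h, rho)) < 1e-5   # homogeneous f-equation

# parity identity used above to restrict the sum to m >= 1
assert bessel_j(-3, 0.9) == -bessel_j(3, 0.9)
```

The same residual test fails for any other choice of the coupling, which is how the combination structure can be pinned down.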
In contrast to what happens for the *out plane*-conversion cross section, in this case the use of the exact solution for the computation of the differential cross section keeps the angle of maximum conversion substantially unchanged.

Conclusions
===========

In the present work we showed that the spherulites in a CLC can be used to change the polarization axes of incoming light with a certain efficiency. To the best of our knowledge, this phenomenon is new: so far, only the light diffusion from helicoidal CLC structures in the bulk has been studied [@nadina], while here we considered the interaction with localized perturbations, i.e. the spherulites. In detail, we first described the shape of the spherulites for different values of the controlling parameters, in particular the external applied electric (or magnetic) field. From that we were able to compute the cross section of the polarization-axis conversions in the Born approximation. We found that the conversion processes have maximum differential cross section at small non-zero deflection angles. Thus, the effect we described can be detected off the forward direction. Furthermore, we compared the differential cross sections for different values of the external electric field, showing that the scattering is significantly influenced by such a parameter. Thus, we can use it as a tuning controller of the diffusion. In particular, the conversion is more efficient for fields slightly above the threshold of the critical unwinding field of the cholesteric-nematic transition. This is due to the inverse quadratic dependence of the spherulite core size on the external field. In order to obtain these results, we used both a piecewise linear approximation of the spherulite profile and the corresponding exact numerical solution. On the other hand, we showed that the spherulite is badly approximated by a piecewise linear function, especially for weak electric fields.
Thus, to improve our results we need to study further the analytical profile solution of the spherulites. There remain many related questions to be answered. First, it would be important to study the cross sections for all channels, beyond the Born approximation, and to relax the several simplifications we made. In particular, the spherulite is not a cylinder, as we assumed in the present work, but resembles more a sort of barrel. Correspondingly, new diffractive effects may arise from the actual shape, especially close to the confining plates. This is related to the type of anchoring, which is parametrised by a further controlling parameter in the Rapini-Papoular conditions. In fact, we showed that the shape of the spherulite depends significantly on it, even if the ratio $E/E_0$ is kept fixed. Finally, it is well known that for external fields below the critical threshold, lattice configurations of spherulites can appear [@Leonov]. This fact suggests exploring the light diffusion processes in such a regime, in order to enhance the effects we described above, or to gain better control over them.

### Acknowledgments {#aknowledgments .unnumbered}

This work was partially supported by MIUR, by the INFN through the project IS-CSN4 "Mathematical Methods of Nonlinear Physics" and by INDAM-GNFM.

[1]{} G. Luckhurst and D. Dunmur, *Liquid Crystals*, in *Springer Handbook of Electronic and Photonic Materials*, edited by S. Kasap and P. Capper (Springer Handbooks, Springer, Cham, 2017). G. V. Chigrinov, Frontiers of Optoelectronics in China **3**, 103-107 (2010). H. Coles and S. Morris, Nature Photonics **4**, 676-685 (2010). J. Beeckman, K. Neyts and P. J. M. Vanbrabant, Optical Engineering **50**, 081202 (2011). P. Oswald, J. Baudry and S. Pirkl, Phys. Rep. **337(1)**, 67-96 (2000). G. Assanto and N. F. Smyth, IEEE Journal of Selected Topics in Quantum Electronics **22**, 4400306 (2016). H. S. Kitzerow and P. P. Crooker, Liquid Crystals **11(4)**, 561-568 (1992). D. K. Yang and P.
P. Crooker, Liquid Crystals **9(2)**, 245-251 (1991). D. L. Patel and D. B. Dupré, Journal of Polymer Science: Polymer Physics Edition **18(7)**, 1599-1607 (1980). A. O. Leonov, I. E. Dragunov, U. K. Rößler and A. N. Bogdanov, Phys. Rev. E **90**, 042502 (2014). G. De Matteis, L. Martina and V. Turco, Theoretical and Mathematical Physics (to be published). J. Fukuda and S. Zumer, Nature Communications **2**, 246 (2011). C. Carboni, A. K. George and A. Al-Lawati, Molecular Crystals and Liquid Crystals **410(1)**, 1109-1113 (2004). N. Romming, C. Hanneken, M. Menzel, J. E. Bickel, B. Wolter, K. von Bergmann, A. Kubetzka and R. Wiesendanger, Science **341**, 6146 (2013). A. N. Bogdanov and U. K. Rößler, Phys. Rev. Lett. **87**, 037203 (2001). P. G. de Gennes and J. Prost, *The physics of liquid crystals* (Clarendon Press, Oxford, 1993). I. W. Stewart, *The static and dynamic continuum theory of liquid crystals: a mathematical introduction* (Taylor and Francis, London, 2004). P. Oswald, P. Pieranski, G. Gray and J. Goodby, *Nematic and Cholesteric Liquid Crystals* (CRC Press, Boca Raton, 2006). R. D. Kamien and J. V. Selinger, Journal of Physics: Condensed Matter **13(3)**, R1-R22 (2001). T. Akahane and T. Tako, Japan. J. Appl. Phys. **15**, 1559 (1976). B. Kerllenevich and A. Coche, Molecular Crystals and Liquid Crystals **68**, 47-55 (1981). J. Baudry, S. Pirkl and P. Oswald, Phys. Rev. E **57**, 3038 (1998). P. Oswald and A. Dequidt, Phys. Rev. E **77**, 061703 (2008). S. Afghah and J. V. Selinger, Phys. Rev. E **96**, 012708 (2017). A. Rapini and M. Papoular, J. Physique Colloq. **30**, C4 (1969). P. J. Kedney and I. W. Stewart, Letters in Mathematical Physics **31**, 261-269 (1994). N. Manton and P. Sutcliffe, *Topological Solitons*, 1st ed. (Cambridge University Press, 2004). A. A. Belavin and A. M. Polyakov, JETP Lett. **22**, 503-506 (1975). A. Barone, F. Esposito, C. J. Magee and A. C.
Scott, Rivista del Nuovo Cimento **1(2)**, 227-267 (1971). M. J. Ablowitz and P. A. Clarkson, *Solitons, nonlinear evolution equations and inverse scattering* (Cambridge University Press, 1991). B. M. McCoy, C. A. Tracy and T. T. Wu, Painlevé Functions of the Third Kind, J. Math. Phys. **18(5)** (1977). NIST Digital Library of Mathematical Functions, http://dlmf.nist.gov/. W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, *Numerical Recipes*, 3rd ed. (Cambridge University Press, 2007). R. J. LeVeque, *Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems* (Society for Industrial and Applied Mathematics, 2007). J. D. Jackson, *Classical Electrodynamics*, 3rd ed. (Wiley, 2012). M. Kleman, O. D. Lavrentovich and J. Goodby, *Soft Matter Physics: An Introduction* (Springer-Verlag, New York, 2003). B. A. Lippmann and J. Schwinger, Phys. Rev. **79**, 469 (1950). N. Gheorghiu and G. Y. Panasyuk, arXiv:1705.02683 (2017). [^1]: e-mail: giovanni.dematteis@istruzione.it [^2]: e-mail: nico.delleside@unisalento.it [^3]: e-mail: martina@le.infn.it [^4]: e-mail: vito.turco@live.com
--- abstract: 'We study the natural $G_2$ structure on the unit tangent sphere bundle $SM$ of any given orientable Riemannian 4-manifold $M$, as it was discovered in [@AlbSal]. A name is proposed for the space. We work in the context of metric connections, or so-called geometry with torsion, and describe the components of the torsion of the connection which imply certain equations of the $G_2$ structure. This article is devoted to finding the $G_2$-torsion tensors which classify our structure according to the theory in [@FerGray].' author: - | R. Albuquerque[^1]\ rpa@dmat.uevora.pt title: 'On the $G_2$ bundle of a Riemannian 4-manifold' --- [**Key Words:**]{} metric connections, torsion, tangent sphere bundle, Einstein manifold, $G_2$ structure, $G_2$ torsions. [**MSC 2010:**]{} Primary: 53C10, 53C20, 53C25; Secondary: 53C28. The author acknowledges the support of Fundação para a Ciência e a Tecnologia, Portugal, both through CIMA-UÉ, Centro de Investigação em Matemática e Aplicações da Universidade de Évora, and through the grant SFRH/BSAS/895/2009. [10]{} I. Agricola, [*The Srní lectures on non-integrable geometries with torsion*]{}, Archivum Mathematicum (Brno), Tomus 42 (2006), Suppl., 5–84. I. Agricola and C. Thier, [*The geodesics of metric connections with vectorial torsion*]{}, Ann. Global Anal. Geom. 26 (2004), no. 4, 321–332. R. Albuquerque and I. Salavessa, Monatshefte für Mathematik 158, Issue 4 (2009), 335–348. A. L. Besse, Springer-Verlag Berlin Heidelberg 1987. D. Blair, LNM 509 (1976), Springer-Verlag Berlin Heidelberg. R. L. Bryant, Annals of Mathematics (2), vol. 126 no. 3 (1987), 525–576. R. L. Bryant, Proceedings of the 2004 Gokova Conference on Geometry and Topology (May, 2003). R. L. Bryant and S. Salamon, Duke Math. Journ., vol. 58 no. 3 (1989), 829–850. S. Chiossi and S. Salamon, [*The intrinsic torsion of $SU(3)$ and $G_2$ structures*]{}, Differential Geometry, Valencia 2001, World Sci. Publishing, 115–133 (2002). M. Fernández and A. Gray, Ann.
Mat. Pura Appl. (4) 132 (1982), 19–45. N. O’Brian and J. Rawnsley, Ann. of Global Analysis and Geometry, 3(1) (1985), 29–58. Th. Friedrich, I. Kath, A. Moroianu and U. Semmelmann, J. Geom. Phys. 23 (1997), 259–286. Th. Friedrich and S. Ivanov, J. Geom. Phys. 48 (2003), 1–11. Th. Friedrich and S. Ivanov, Asian J. Math., 6 (2002), no. 2, 303–335. R. Harvey and H. B. Lawson, Acta Math. 148 (1982), 47–157. S. Ishihara, J. Math. Soc. Japan, 7 (1955), 345–370. G. Jensen, J. Diff. Geom., 3 (1969), 309–349. D. Joyce, Oxford Mathematical Monographs, Oxford University Press (2000). Y. Tashiro, Tôhoku Math. Jour. 21 (1969), 117–143. [^1]: Departamento de Matemática da Universidade de Évora and Centro de Investigação em Matemática e Aplicações (CIMA-UÉ), Rua Romão Ramalho, 59, 7000-671 Évora, Portugal.
--- abstract: 'We offer an alternative viewpoint on Dyson’s original paper regarding the application of Brownian motion to random matrix theory (RMT). In particular we show how one may use the same approach in order to study the stochastic motion in the space of matrix traces $t_n = \sum_{\nu=1}^{N} \lambda_\nu^n$, rather than the eigenvalues $\lambda_\nu$. In complete analogy with Dyson we obtain a Fokker-Planck equation that exhibits a stationary solution corresponding to the joint probability density function in the space ${\boldsymbol{t}}= (t_1,\ldots,t_n)$, which can in turn be related to the eigenvalues ${\boldsymbol{\lambda}}= (\lambda_1,\ldots,\lambda_N)$. As a consequence two interesting combinatorial identities emerge, which are proved algebraically in the appendix. We also offer a number of comments on this version of Dyson’s theory and discuss its potential advantages.' address: - '$^{1}$Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 7610001, Israel.' - '$^{2}$School of Mathematical Sciences, Queen Mary University of London, London, E1 4NS, UK' - '$^{3}$ Max-Planck Institut für Mathematik, Vivatsgasse 7, D-53111 Bonn, Germany.' author: - 'Christopher H. Joyner$^{1,2}$ and Uzy Smilansky$^{1}$ (with an appendix by Don B. Zagier$^{3}$)' title: 'Dyson’s Brownian-motion model for random matrix theory - revisited' --- Introduction ============ In his seminal 1962 paper, [*A Brownian-Motion Model for the Eigenvalues of a Random Matrix*]{} [@Dyson1], F. Dyson provided a conceptually novel and practical approach to the theory of random matrices, paving the way for many interesting developments (see e.g. [@mehtabook; @forrester; @Anderson; @Guhr; @Akemann; @Haakebook; @Porter] and references cited therein.) In it he explains how to introduce a dynamical approach to the theory of random matrices and the traditional Gaussian ensembles in particular. We briefly recapitulate the results here in this introductory section. 
Consider a self-adjoint matrix $M$ of size $N \times N$, whose entries are of the form $M_{i j}=\sum_{\alpha=0}^{\beta-1}M_{ij;\alpha}e_{\alpha}$. The coefficients $M_{ij;\alpha}$ are real parameters and the $e_{\alpha}$ are the units of the three possible algebras: real ($\beta = 1$), complex ($\beta=2$) and real-quaternion ($\beta=4$), satisfying $e_0^2 =1$ and $e_\alpha^2 = -1 \ \forall\ \alpha>0$. Choosing the real coefficients $M_{ij;\alpha}$ independently from a Gaussian distribution with zero mean and variance ${\mathbf{E}}(M_{ij;\alpha}^2) = (1+\delta_{ij})/(2\beta)$, we obtain the Gaussian orthogonal, unitary and symplectic ensembles (GOE, GUE and GSE) for $\beta=1,2$ and 4 respectively. Thus the probability distribution for the matrix $M$ may be neatly summarised in the following form $$\label{Matrix prob dist} P(M) = \kappa_{\beta}^{(N)} \ {\rm e}^{-\frac{\beta}{2}\tr MM^\dagger},$$ with $\kappa_{\beta}^{(N)}$ a normalization constant. Crucially, Dyson realised that the above distribution can be identified as the stationary distribution of a Brownian particle in $N + \beta N(N-1)/2$ dimensions. More precisely, this means that each independent element $M_{ij;\alpha}, 1 \leq i \leq j\leq N$ undergoes a 1D Ornstein-Uhlenbeck process, so that in the (fictitious) time $s$ the motion of $M_{ij;\alpha}$ is completely determined by the following moments: $$\begin{aligned} \label{Matrix elements general 1} {\mathbf{E}}(\delta M_{ij;\alpha}) & = & - M_{ij;\alpha}\delta s \\ \label{Matrix elements general 2} {\mathbf{E}}(\delta M_{ij;\alpha}^2) & = & \frac{1}{\beta}(1 + \delta_{ij})\delta s . \end{aligned}$$ The latter implies that ${\mathbf{E}}(|\delta M_{ij}|^2) = (1 + (2/\beta - 1)\delta_{ij})\delta s$ (since the diagonal elements $M_{ii}$ are always real).
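As an added illustration (not part of the original text), the moments (\[Matrix elements general 1\]) and (\[Matrix elements general 2\]) can be realised with a simple Euler-Maruyama discretisation and checked against the stationary second moments ${\mathbf{E}}(M_{ij}^2) = (1+\delta_{ij})/2$ for $\beta=1$, which give ${\mathbf{E}}(\tr M^2) = N/\beta + N(N-1)/2 = 3$ for $N=2$. A minimal Python sketch:

```python
import random, math

random.seed(7)
N, beta, ds = 2, 1, 0.01
M = [[0.0] * N for _ in range(N)]

def step():
    # Euler-Maruyama step of dM_ij = -M_ij ds + sqrt((1 + delta_ij)/beta) dW
    for i in range(N):
        for j in range(i, N):
            noise = random.gauss(0.0, math.sqrt((1 + (i == j)) / beta * ds))
            M[i][j] += -M[i][j] * ds + noise
            M[j][i] = M[i][j]          # keep the matrix symmetric

t2_sum = n_samp = 0
for s in range(400_000):
    step()
    if s >= 100_000:                   # discard the transient, then time-average
        t2_sum += sum(M[i][j] ** 2 for i in range(N) for j in range(N))
        n_samp += 1

t2_avg = t2_sum / n_samp
# stationary value: E(tr M^2) = N/beta + N(N-1)/2 = 3 for N = 2, beta = 1
assert abs(t2_avg - 3.0) < 0.3
```

The tolerance is loose because the time-average carries both statistical and $O(\delta s)$ discretisation error.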
Importantly, this stochastic motion is invariant under unitary transformations, meaning the eigenvectors do not play any role in the corresponding motion induced in the $N$ dimensional space of eigenvalues ${\boldsymbol{\lambda}}=(\lambda_1,\cdots, \lambda_N)$. Therefore one may choose a representation in which $M$ is diagonal, leading to a perturbation of the eigenvalue $\lambda_{\mu}$ due to a small change in the matrix $\delta M$ of $$\label{Perturbation formula} \delta \lambda_{\mu} = \delta M_{\mu\mu;0} + \sum_{\nu \neq \mu} \frac{|\delta M_{\mu\nu}|^2}{\lambda_{\mu} - \lambda_{\nu}}.$$ Obtaining the first two moments of the evolution then follows directly from the expressions (\[Matrix elements general 1\]) and (\[Matrix elements general 2\]), given by $$\begin{aligned} \label{Eigenvalue motion 1} {\mathbf{E}}(\delta \lambda_{\mu}) & = & F_\mu({\boldsymbol{\lambda}})\delta s = \left[\sum_{\nu\ne\mu} \frac{1}{\lambda_{\nu}-\lambda_\mu} - \lambda_{\mu}\ \right]\delta s \\ \label{Eigenvalue motion 2} {\mathbf{E}}(\delta \lambda_{\mu}^2) & = & \frac{2}{\beta}\delta s .\end{aligned}$$ Using these two moments, one obtains a Fokker-Planck equation that describes how the joint probability distribution function (JPDF) $P({\boldsymbol{\lambda}};s)$ evolves in time, given some specific initial distribution $P({\boldsymbol{\lambda}};0)$; $$\label{FP-spectral} \frac{\partial P}{\partial s}=\sum_{\mu=1}^{N}\left [- \frac{\partial }{\partial \lambda_{\mu}}(F_{\mu}({\boldsymbol{\lambda}})P({\boldsymbol{\lambda}};s)) +\beta^{-1} \frac{\partial^2P({\boldsymbol{\lambda}};s)}{\partial \lambda_{\mu}^2}\right].$$ The real advantage, and one might add elegance, of this approach is expressed in the above equation. In general it is not known how to obtain $P({\boldsymbol{\lambda}};s)$ for arbitrary initial conditions and times $s$. 
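The perturbation formula (\[Perturbation formula\]) is easy to test directly; the following added sketch (arbitrary small entries, not taken from the paper) compares it with the exact eigenvalues of a perturbed $2\times 2$ real symmetric matrix:

```python
import math

lam = [1.0, 3.0]                      # unperturbed eigenvalues (diagonal representation)
a, b, c = 1e-3, 2e-3, -1e-3           # small symmetric perturbation dM
A = [[lam[0] + a, b], [b, lam[1] + c]]

# exact eigenvalues of the perturbed 2x2 symmetric matrix
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr / 4 - det)
exact = [tr / 2 - disc, tr / 2 + disc]

# second-order prediction: dlam_mu = dM_mumu + sum_{nu != mu} |dM_munu|^2/(lam_mu - lam_nu)
pred = [lam[0] + a + b * b / (lam[0] - lam[1]),
        lam[1] + c + b * b / (lam[1] - lam[0])]

assert all(abs(e - p) < 1e-8 for e, p in zip(exact, pred))
```

The agreement is to third order in the perturbation, as expected.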
However, since we are interested in the stationary distribution, we can reduce the complexity by setting the LHS equal to zero, at which point one solves the equation easily to obtain $$\label{spectrJPD} P({\boldsymbol{\lambda}})= C^{(N)}_{\beta} \prod_{\mu<\nu}|\lambda_{\mu}-\lambda_{\nu}|^{\beta} {\rm exp}\left(-\frac{\beta}{2} \sum_{\mu}\lambda_{\mu}^2\right ),$$ with $C^{(N)}_{\beta}$ a normalisation constant (see e.g. Chapter 3 of [@mehtabook]). Moreover, since we know that the underlying motion (\[Matrix elements general 1\]) and (\[Matrix elements general 2\]) in the space of matrices leads to the probability distribution (\[Matrix prob dist\]), the expression (\[spectrJPD\]) must be the unique stationary distribution for the process (\[Eigenvalue motion 1\]) and (\[Eigenvalue motion 2\]) and is therefore the JPDF of the eigenvalues in the appropriate Gaussian ensembles. The key component of (\[spectrJPD\]) is the Vandermonde determinant $\prod_{\mu<\nu} |\lambda_{\mu}-\lambda_{\nu}|$, which is responsible for the apparent repulsion of neighbouring eigenvalues. This factor emerges as the Jacobian of the transformation from (\[Matrix prob dist\]) to (\[spectrJPD\]). However, as Dyson highlights, the above approach offers a new insight into its appearance - as it is nothing more than the effect coming from the second order term in the perturbation formula (\[Perturbation formula\]). Recently, the authors have adapted the above approach to investigate the spectral statistics of *Bernoulli* matrices [@Joyner-2015] (matrices in which the elements come from the set $\{\pm 1\}$). In this instance higher terms in the perturbation formula had to be accounted for, which meant assumptions regarding the delocalisation of eigenvectors were required. This inevitably led to the following question - can Dyson’s Brownian motion model be used without the requirement of the perturbation formula (\[Perturbation formula\])? 
In this article we demonstrate that the answer is indeed positive. To achieve this we start from a slightly different viewpoint to Dyson: Rather than following the evolution $P({\boldsymbol{\lambda}};s)$ of the eigenvalues directly, we instead follow $Q({\boldsymbol{t}};s)$ - the JPDF of the $N$-dimensional vector of traces ${\boldsymbol{t}}= (t_1,\cdots,t_N)$, where $t_k=\sum_{\nu=1}^N\lambda_\nu^k = \tr M^k$. Performing a transformation of variables then allows us to recover the stationary solution (\[spectrJPD\]) expressed in terms of the ${\boldsymbol{t}}$ variables. To the best of our knowledge, the distribution of the traces (or spectral moments) has not been extensively studied, although there are exceptions for both the Gaussian and circular ensembles (see e.g. [@Sinai; @Essen; @Guionnet; @Diaconis-1994; @Diaconis-2004] and references therein). We therefore find it worthwhile to pursue this direction, not only as it sheds new light on Dyson’s approach, but because it may offer different perspectives on such trace distributions. In addition, our method has led to the discovery of two identities (see Proposition \[Identities\]) that relate the traces $t_n$ with $n > N$ to those with $n \leq N$. We are unaware of the existence of similar identities in the literature and a direct proof of their validity has kindly been supplied by D. Zagier in \[Zagier proof\]. It has also been brought to our attention[^1] that a similar philosophy has also been undertaken by Bakry and Zani [@Bakry-2014]. Rather than looking at the traces ${\boldsymbol{t}}$ they follow the motion of the secular coefficients (given by $c_k$ in Section \[definitions\]). Their motivation comes from wanting to generalise the probability density functions to Gaussian random matrices with Clifford algebras (rather than real, complex or quaternion entries) and they too note that such approaches have not been utilised before. 
The paper is organised as follows: In Section \[definitions\] we introduce the basic concepts to be discussed, provide some useful relations and outline the identities mentioned above. We also provide explicit formulae for the stationary distribution $Q_{\beta}({\boldsymbol{t}})$ for $\beta = 1,2,4$ and arbitrary dimension $N$. In Section \[F-P equations\] we derive the Fokker-Planck equation for $Q({\boldsymbol{t}};s)$ and give an example of its form in two dimensions in Section \[2 Dim example\]. In Section \[Sec: Stationary solution\] we analyse the equation in $N$-dimensions and show how the aforementioned identities arise from considering the stationary solution $Q_{\beta}({\boldsymbol{t}})$. Section \[Sec: Mean values\] is used briefly to explain how the mean spectral moments also arise naturally in this context. Finally in Section \[Sec: Application\] and Section \[Sec: Discussion\] we provide an application of this method to Bernoulli ensembles and discuss the potential advantages of the whole approach. Definitions and useful relations {#definitions} ================================ The first essential feature to outline is the relationship between the spectral and trace distribution functions $P({\boldsymbol{\lambda}};s)$ and $Q({\boldsymbol{t}};s)$. The elements of the Jacobian of the transformation are given by $\frac{\partial t_n}{\partial \lambda_\nu} = n \lambda_\nu^{n-1}$, which means that $$\label{PD transformation} P({\boldsymbol{\lambda}};s) = \left|\frac{\partial {\boldsymbol{t}}}{\partial {\boldsymbol{\lambda}}}\right| Q({\boldsymbol{t}};s) = N! 
\det(V) \; Q({\boldsymbol{t}};s) .$$ Here $V$ is the familiar Vandermonde matrix $$\label{Vandermonde} V= \left(\begin{array}{cccc} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_N \\ \lambda_1^2 & \lambda_2^2 & \cdots & \lambda_N^2 \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{N-1} & \lambda_2^{N-1} & \cdots & \lambda_N^{N-1} \end{array}\right)$$ and so $\det(V) = \prod_{\mu < \nu} | \lambda_{\mu} - \lambda_{\nu}|$, as seen in (\[spectrJPD\]). The mapping ${\boldsymbol{\lambda}}\rightarrow {\boldsymbol{t}}$ is one-to-one as long as the Jacobian does not vanish, hence we must restrict the spectral variables to an ordered sector, e.g. $\lambda_1 < \lambda_2 < \cdots < \lambda_N$. In order to obtain an expression for $Q({\boldsymbol{t}};s)$ we need to write $P({\boldsymbol{\lambda}};s)$, and thus the Vandermonde determinant $\det(V)$, in terms of the traces ${\boldsymbol{t}}$. Fortunately this is relatively straightforward, since $G({\boldsymbol{t}}) = \det(V) = \sqrt{\det(VV^{\intercal})}$, with $$VV^{\intercal} = \left(\begin{array}{cccc} t_0 & t_1 & \cdots & t_{N-1} \\ t_1 & t_2 & \cdots & t_N \\ t_2 & t_3 & \cdots & t_{N+1} \\ \vdots & \vdots & \ddots & \vdots \\ t_{N-1} & t_N & \cdots & t_{2N-2} \end{array}\right)$$ and $t_0=N$ (see [@Dunne-1993; @Vivo-2008] for example for uses of this identity in other contexts). At this point $G({\boldsymbol{t}})$ is expressed entirely in terms of the traces, as desired, however this includes traces of higher degree than $N$, which are themselves functions of the traces $t_n, \ 1\le n\le N$. The expressions for $t_{N+r}$ in terms of the first $N$ $t_n$, whilst complicated, can be written down explicitly. They originate from the characteristic polynomial $\Phi(X) := \det(XI - M) = \sum_{k=0}^N c_k X^{N-k}$, with $c_0=1$. 
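The Gram identity $\det(V)^2 = \det(VV^{\intercal})$ underlying $G({\boldsymbol{t}})$ can be checked exactly on a toy spectrum; the sketch below (an added illustration using rational arithmetic) does so for $N=3$:

```python
from fractions import Fraction
from itertools import permutations

lam = [1, 2, 4]                  # a sample integer spectrum, N = 3
N = len(lam)
t = [sum(x ** k for x in lam) for k in range(2 * N - 1)]   # traces t_0 .. t_{2N-2}

def det(A):
    """Exact Leibniz determinant, adequate for tiny matrices."""
    n = len(A)
    total = Fraction(0)
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = Fraction(1)
        for i in range(n):
            prod *= A[i][perm[i]]
        total += (-1) ** inv * prod
    return total

V = [[Fraction(x) ** i for x in lam] for i in range(N)]          # Vandermonde matrix
T = [[Fraction(t[i + j]) for j in range(N)] for i in range(N)]   # Hankel matrix of traces

assert det(V) == 6               # (2-1)(4-1)(4-2)
assert det(V) ** 2 == det(T)     # det(V)^2 = det(V V^T)
```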
For any eigenvalue $\lambda_\nu$ we have $\Phi(\lambda_\nu)=0$ and thus it follows $$\label{poleq} t_{N+r}= \ -\left [\sum_{k=1}^{N} c_k t_{N+r-k}\right ].$$ Newton’s identities give the coefficients $c_k$ in terms of the $t_n, n\le k$ via the determinant $$\label{newteq} c_k =\frac{(-1)^k}{k!} \left|\begin{array}{ccccc} t_1 & 1 & 0 & \cdots & 0 \\ t_2 & t_1 & 2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots\\ t_{k-1} & t_{k-2} & \cdots & t_1& k-1\\ t_{k} & t_{k-1} & \cdots & t_2 & t_{1} \end{array}\right|.$$ Therefore, using a combination of relations (\[poleq\]) and (\[newteq\]) one may write $G({\boldsymbol{t}})$ explicitly in terms of the first $N$ traces ${\boldsymbol{t}}$. Clearly $\Delta = G({\boldsymbol{t}})^2$ is nothing but the discriminant of $\Phi(X)$ expressed as a function of ${\boldsymbol{t}}$. Using the transformation (\[PD transformation\]) and the stationary distribution for the eigenvalues (\[spectrJPD\]) we can obtain the JPDF for the traces in the three canonical ensembles $$\label{trJPD} Q_{\beta} ({\boldsymbol{t}}) = C^{(N)}_{\beta} G({\boldsymbol{t}})^{\beta -1} \exp\left(-\frac{\beta}{2} t_2\right )\chi_{N}({\boldsymbol{t}}).$$ $\chi_{N}({\boldsymbol{t}})$ is an indicator function for the domain $\mathcal{T}\subset\mathbb{R}^N$ which is the support for $Q_{\beta} ({\boldsymbol{t}})$. In contrast to the spectrum, which is defined over the entire space $\mathbb{R}^N$, the trace parameters are restricted to the domain $\mathcal{T}$. This is because the traces are sums of powers of real variables, which must satisfy certain consistency relations: The inverse mapping ${\boldsymbol{t}}\rightarrow {\boldsymbol{\lambda}}$ should yield real spectra. For example, in 2 dimensions we have $2t_2-t_1^2 = (\lambda_2-\lambda_1)^2\ge 0$. Hence, $ \mathcal{T}=\{(t_1,t_2)\in \mathbb{R}^2: 2t_2-t_1^2 \ge 0 \} $. 
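Relations (\[poleq\]) and (\[newteq\]) can be verified on a small example. The added sketch below uses the recursive form of Newton's identities, $k\,c_k = -(t_k + \sum_{j=1}^{k-1} c_j t_{k-j})$, which is equivalent to the determinant formula (\[newteq\]):

```python
lam = [1, 2, 4]                       # sample spectrum, with characteristic polynomial
N = len(lam)                          # (X-1)(X-2)(X-4) = X^3 - 7X^2 + 14X - 8
t = [sum(x ** k for x in lam) for k in range(12)]   # direct power sums t_0 .. t_11

# Newton's identities in recursive form: k c_k = -(t_k + sum_{j=1}^{k-1} c_j t_{k-j})
c = [1]
for k in range(1, N + 1):
    c.append(-(t[k] + sum(c[j] * t[k - j] for j in range(1, k))) // k)

assert c == [1, -7, 14, -8]

# higher traces follow from the first N ones via t_{N+r} = -sum_{k=1}^N c_k t_{N+r-k}
for r in range(8):
    assert t[N + r] == -sum(c[k] * t[N + r - k] for k in range(1, N + 1))
```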
In higher dimensions it becomes increasingly difficult to write an explicit definition of $\mathcal{T}$, other than stating that it is the image of the mapping ${\boldsymbol{\lambda}}\rightarrow {\boldsymbol{t}}$. It should be emphasized, however, that $\mathcal{T}$ is independent of the ensemble under consideration - one may consider matrices with non-Gaussian elements, or even correlated elements, and $\mathcal{T}$ will remain the same. We would also like to highlight that the GOE distribution takes a very simple form in this space, i.e. $Q_{1}({\boldsymbol{t}})= C^{(N)}_{1}\exp\left(-\frac{1}{2} t_2\right)$. At first sight it might seem strange that the JPDF for all the traces depends only on one parameter $t_2$; however, as alluded to above, one must pay very close attention to the domain of integration $\mathcal{T}$. This is exemplified in Section \[2 Dim example\], in which we calculate expectation values and marginal probabilities. In the following section we shall derive the Fokker-Planck equation for $Q({\boldsymbol{t}};s)$. Its stationary solution is known and given explicitly in (\[trJPD\]). As will be shown below, by substituting this solution into the stationary Fokker-Planck equations we obtain two identities which are summarized in the following proposition. \[Identities\] For $n\ge0$ we have $$\begin{aligned} 2\,\sum_{m=1}^N m\,\frac{{\partial}t_{n+m}}{{\partial}t_m} & = & \sum_{i,\,j\ge0 \atop i+j=n} t_it_j \;+\; (n+1)\,t_n\, \label{Main ID 1} \\ \frac{2}{G}\sum_{m=1}^N m\,t_{n+m}\,\frac{{\partial}G}{{\partial}t_m} & = & \sum_{i,\,j\ge0 \atop i+j=n} t_it_j \;-\; (n+1)\,t_n\,. \label{Main ID 2} \end{aligned}$$ As mentioned in the introduction, we are unaware of such identities arising before in RMT or any other context, and a direct algebraic proof is given by D. Zagier in \[Zagier proof\].
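A quick floating-point check of the first identity is possible (an added sketch, not part of the original argument): treating $t_{n+m}$ as a function of the independent traces $t_1,\ldots,t_N$, its derivative can be computed through the eigenvalue parametrisation, $\partial t_q/\partial t_m = \sum_\nu q\,\lambda_\nu^{q-1}\,(J^{-1})_{\nu m}$ with $J_{m\nu} = m\,\lambda_\nu^{m-1}$:

```python
# Check: 2 sum_m m dt_{n+m}/dt_m = sum_{i+j=n} t_i t_j + (n+1) t_n, for N = 3
lam = [0.5, 1.3, 2.1]                 # a generic non-degenerate spectrum
N = len(lam)
t = [sum(x ** k for x in lam) for k in range(2 * N + 2)]

# Jacobian rows m = 1..N: J[m-1][v] = m * lam_v^(m-1)
J = [[(m + 1) * lam[v] ** m for v in range(N)] for m in range(N)]

def inverse(A):
    """Gauss-Jordan inverse of a small matrix."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                fct = M[r][col]
                M[r] = [x - fct * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

Jinv = inverse(J)                     # Jinv[v][m-1] = dlam_v / dt_m

for n in (1, 2, 3):
    lhs = 2 * sum(m * sum((n + m) * lam[v] ** (n + m - 1) * Jinv[v][m - 1]
                          for v in range(N))
                  for m in range(1, N + 1))
    rhs = sum(t[i] * t[n - i] for i in range(n + 1)) + (n + 1) * t[n]
    assert abs(lhs - rhs) < 1e-7
```

For $q \leq N$ the same formula collapses to $\partial t_q/\partial t_m = \delta_{qm}$, so all terms in the sum are handled uniformly.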
The Fokker-Planck equation {#F-P equations}
==========================

The main reason for studying Dyson’s Brownian motion in the space of traces is that the Fokker-Planck equation for $Q({\boldsymbol{t}};s)$ can be derived directly, avoiding the use of perturbation theory (\[Perturbation formula\]). The expectation values of the components of ${\boldsymbol{t}}$ due to incremental changes in the matrices will be evaluated directly from the statistics of the matrix elements. Once $Q({\boldsymbol{t}};s)$ has been computed, one can then transform back to the spectral representation in order to deduce the eigenvalue statistics through $P({\boldsymbol{\lambda}},s)$. We begin by expressing the change in the $n^{\rm th}$ trace via the change in the matrix $\delta M$, up to second order (since higher terms in $\delta M$ will be of orders $\delta s^2$ or greater after taking the expectation) $$\begin{aligned} \label{deltatn} \delta t_n &=&\left[\tr((M+\delta M)^n) - \tr(M^n)\right]\nonumber \\ &=& n\tr(M^{n-1}\delta M) + \frac{n}{2}\sum_{x=0}^{n-2}\tr(M^x \delta M M^{n-x-2} \delta M) + \ldots \; .\end{aligned}$$ The simplest way to compute ${\mathbf{E}}(\delta t_n)$ and ${\mathbf{E}}(\delta t_n\delta t_m)$ is to invoke the invariance of the stochastic motion under unitary transformations. We are then free to write the initial matrix $M$ in a diagonal representation of eigenvalues, i.e. $M_{ij} = \lambda_i\delta_{ij}$.
Using this and the expressions (\[Matrix elements general 1\]) and (\[Matrix elements general 2\]) we find $$\begin{aligned} \label{First trace moment} {\mathbf{E}}(\delta t_n) &=& -nt_n\delta s + \frac{n}{2}\sum_{x=0}^{n-2}\sum_{ijkl}\lambda_i^x\delta_{ij} \lambda_k^{n-x-2}\delta_{kl} {\mathbf{E}}(\delta M_{jk} \delta M_{li}) \nonumber \\ &=& \left[-nt_n + \frac{n}{2}\sum_{x=0}^{n-2}t_x t_{n-2-x} + \frac{2-\beta}{\beta}\frac{n}{2}(n-1) t_{n-2}\right] \delta s,\end{aligned}$$ where we have used that ${\mathbf{E}}(|\delta M_{ij}|^2) = (1 + (2/\beta-1)\delta_{ij})\delta s$. In particular this means for $n=1$ and $2$ that we have ${\mathbf{E}}(\delta t_1) = -t_1\delta s$ and ${\mathbf{E}}(\delta t_2) = (-2t_2 + t_0^2 + (2/\beta-1)t_0)\delta s$. For the second order moments, since again we need terms proportional to $\delta s$ and no more, we only require the first term in (\[deltatn\]). Therefore, for $n,m =1,\ldots,N$, we get $$\begin{aligned} \label{Second trace moment} \hspace{-20pt} {\mathbf{E}}( \delta t_n \delta t_m) &=& nm\sum_{ijkl} \lambda_i^{n-1}\delta_{ij}\lambda_k^{m-1}\delta_{kl} {\mathbf{E}}(\delta M_{ji}\delta M_{lk})\nonumber \\ &=& nm\sum_{ik}\lambda_i^{n-1}\lambda_k^{m-1}{\mathbf{E}}(\delta M_{ii}\delta M_{kk}) = \frac{2nm}{\beta}t_{n+m-2}\delta s,\end{aligned}$$ where we have used ${\mathbf{E}}(\delta M_{ii}\delta M_{kk}) = \frac{2}{\beta}\delta_{ik}\delta s$. Note that in the above equations, and in the following, one should remember that the independent parameters in the present theory are the components ${\boldsymbol{t}}$ which consist of the first $N$ traces. Whenever there appears $t_x$ with $x>N$, it should be considered as a function of the independent parameters as explained in the previous section. Similarly, one must substitute $t_0 = N$. We are now in a position to obtain our Fokker-Planck equation for determining the probability distribution $Q_{\beta}({\boldsymbol{t}};s)$ of the traces. 
For simplicity we write (\[First trace moment\]) and (\[Second trace moment\]) in the form $R^{(\beta)}_n= {\mathbf{E}}(\delta t_n)/\delta s$ and $R^{(\beta)}_{nm}= {\mathbf{E}}(\delta t_n\delta t_m)/ \delta s$, so that (see for instance [@Wang]) $$\label{FP - traces} \frac{\partial Q_{\beta}}{\partial s} = - \sum_n \frac{\partial (R^{(\beta)}_n Q_{\beta})} {\partial t_n} + \frac{1}{2} \sum_{n,m} \frac{\partial^2 (R^{(\beta)}_{nm} Q_{\beta})} {\partial t_n\partial t_m}.$$ Just as $P_{\beta}({\boldsymbol{\lambda}})$, given in (\[spectrJPD\]), is the stationary solution to the Fokker-Planck equation (\[FP-spectral\]) for the eigenvalues, so we would like to verify $Q_{\beta}({\boldsymbol{t}})$, given in (\[trJPD\]), is the stationary solution of (\[FP - traces\]) above. For this to be the case, $Q_{\beta}({\boldsymbol{t}})$ must therefore satisfy the following $N$ simultaneous equations $$\label{statfokkerplanck} R^{(\beta)}_n Q_{\beta} = \frac{1}{2} \sum_{m} \frac{\partial (R^{(\beta)}_{nm} Q_{\beta})} {\partial t_m}, \ \ \ \ \forall\ 1 \le n \le N.$$ These will be discussed shortly for arbitrary matrix dimension $N$ but prior to this we outline, for illustrative purposes, the scenario for $N=2$. Example: $2\times 2$ Gaussian ensembles {#2 Dim example} --------------------------------------- The $N=2$ case offers the particular advantage that the expressions (\[First trace moment\]) and (\[Second trace moment\]) do not contain traces larger than $t_N$ (i.e. $t_2$ in this case), which is not true for $N >2$. 
In order to satisfy (\[FP - traces\]) $Q \equiv Q_{\beta}(t_1,t_2)$ must be a solution of the simultaneous equations (\[statfokkerplanck\]), which in 2 dimensions are given by $$\begin{aligned} 0 & = & t_1 Q + \frac{1}{\beta}\left[\frac{\partial(t_0 Q)}{\partial t_1} + 2\frac{\partial(t_1Q)}{\partial t_2}\right] \nonumber \\ 0 & = & \left(2t_2 - t_0^2 - \frac{(2-\beta)}{\beta} t_0 \right)Q + \frac{2}{\beta}\left[\frac{\partial (t_1Q)}{\partial t_1} + 2\frac{\partial(t_2Q)}{\partial t_2}\right]. \nonumber\end{aligned}$$ One may verify by substitution that the solution is, including the normalisation constant presented in (\[spectrJPD\]), $$\label{22 Trace distribution} Q_\beta(t_1,t_2) = C^{(2)}_{\beta} \left(2t_2 - t_1^2\right)^{\frac{\beta-1}{2}}e^{-\frac{\beta t_2}{2}}.$$ Written in terms of the eigenvalues, using $G({\boldsymbol{t}})^2 = \left(2t_2 - t_1^2\right) = (\lambda_2 - \lambda_1)^2$, this yields $$P_\beta(\lambda_1,\lambda_2) = |\lambda_2 - \lambda_1|Q(\lambda_1,\lambda_2) = C^{(2)}_{\beta}|\lambda_2 - \lambda_1|^\beta e^{-\frac{\beta(\lambda_1^2 + \lambda_2^2)}{2}},$$ which is the expected result for the JPDF. From (\[22 Trace distribution\]) we can immediately calculate the marginal probability distributions for the traces. Importantly, the limits of integration are defined by the domain $\mathcal{T}$. For 2 dimensions this was outlined in Section \[definitions\] $$\begin{aligned} q_\beta(t_1) & = & \int_{t_1^2/2}^{\infty} d t_2 \; Q_\beta(t_1,t_2) = C^{(2)}_{\beta} r_{\beta}e^{-\beta t_1^2/4} \\ q_\beta(t_2) & = & \int_{-\sqrt{2t_2}}^{\sqrt{2t_2}} d t_1 \; Q_\beta(t_1,t_2) = C^{(2)}_{\beta} s_{\beta}t_2^{\beta/2}e^{-\beta t_2/2},\end{aligned}$$ where $(C^{(2)}_{\beta})^{-1} = 4\sqrt{\pi}, \pi, 3\pi/8$, $r_{\beta} = 2, \sqrt{\pi/2},3\sqrt{\pi}/8$ and $s_{\beta} = 2^{3/2},\pi,3\pi/2$ for $\beta = 1,2,4$ respectively.
The expected value of $t_2$ is therefore $\langle t_2 \rangle = \int_0^{\infty} dt_2 \; t_2q_{\beta}(t_2) = 3,2,3/2$ in the three cases. Stationary solution {#Sec: Stationary solution} ------------------- Finding the stationary solution in $N$ dimensions requires solving the $N$ simultaneous equations given by (\[statfokkerplanck\]). Therefore, substituting in the expressions (\[First trace moment\]) and (\[Second trace moment\]) we get for each $n$ $$\label{identity} \fl \left(-nt_n + \frac{n}{2}\sum_{x=0}^{n-2}t_x t_{n-2-x} + \frac{2-\beta}{\beta}\frac{n}{2}(n-1) t_{n-2}\right) Q_{\beta} = \frac{n}{\beta}\sum_{m=1}^N m\frac{\partial (t_{n+m -2}Q_{\beta})}{\partial t_m}.$$ The derivative on the RHS can be expanded using the chain rule to obtain $$\fl \frac{\partial (t_{n+m -2}Q_{\beta})}{\partial t_m} = \left(\frac{\partial t_{n+m -2}}{\partial t_m} + t_{n+m-2} \frac{(\beta -1)}{G}\frac{\partial G}{\partial t_m} - \frac{\beta}{2}t_{n+m -2}\delta_{2m}\right)Q_{\beta},$$ where we have used $Q_{\beta} \propto G^{\beta-1}e^{-\beta t_2/2}$. Therefore, after some algebra in which we divide through by a factor $nQ_{\beta}/(2\beta)$ and cancel the term proportional to $t_n$ on both sides, we arrive at the following relationship between the traces $$\label{Trace identity 0} \fl \beta \sum_{x=0}^{n-2}t_x t_{n-2-x} + (2-\beta)(n-1) t_{n-2} = 2\sum_{m=1}^N m\left[ \frac{\partial t_{n+m -2}}{\partial t_m} + t_{n+m-2} \frac{(\beta -1)}{G}\frac{\partial G}{\partial t_m}\right].$$ In the particular case $\beta=1$ there is no dependence on the Vandermonde determinant $G({\boldsymbol{t}})$ and we get $$\label{idgoe} 2\sum_{m=1}^N m \frac{\partial t_{n+m -2}}{\partial t_m} = \sum_{x=0}^{n-2}t_x t_{n-2-x} + (n-1) t_{n-2}.$$ Replacing $n-2$ by $n$ thus gives the identity (\[Main ID 1\]).
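The identity just obtained can be spot-checked numerically. Using $m\,\partial\lambda_\alpha/\partial t_m = c_{\alpha,m-1}$, where $(c_{\alpha,n})$ is the inverse of the Vandermonde matrix $(\lambda_\alpha^n)$ (this relation is derived in the appendix), the derivatives $\partial t_{n+m}/\partial t_m$ can be computed for an arbitrary spectrum. The following illustrative snippet (not part of the original argument; it assumes `numpy` is available) checks the shifted form (\[Main ID 1\]):

```python
import numpy as np

# Verify 2 * sum_m m * dt_{n+m}/dt_m = sum_{i+j=n} t_i t_j + (n+1) t_n
# by differentiating through the eigenvalues: m * dlam_a/dt_m = c[a, m-1],
# where c is the inverse of the Vandermonde matrix V[n, a] = lam_a**n.
lam = np.array([-1.7, -0.8, 0.3, 1.1, 1.9])   # distinct sample eigenvalues
N = len(lam)
t = lambda k: np.sum(lam**k)                  # power sums, t_0 = N

V = np.vander(lam, N, increasing=True).T      # V[n, a] = lam[a]**n
c = np.linalg.inv(V)                          # c[a, n], inverse Vandermonde

def dt_dt(k, m):
    """d t_k / d t_m via the chain rule through the eigenvalues."""
    return (k / m) * np.sum(lam**(k - 1) * c[:, m - 1])

max_err = 0.0
for n in range(0, 6):
    lhs = 2 * sum(m * dt_dt(n + m, m) for m in range(1, N + 1))
    rhs = sum(t(i) * t(n - i) for i in range(n + 1)) + (n + 1) * t(n)
    max_err = max(max_err, abs(lhs - rhs) / max(1.0, abs(rhs)))
assert max_err < 1e-8
```

With exact rational arithmetic the check would be an equality; in floating point a small residual remains, at the level of the Vandermonde inversion error.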
If we then rearrange (\[Trace identity 0\]) in terms of $\beta$, we find $$\fl \beta\left (\sum_{x=0}^{n-2}t_x t_{n-2-x}-(n-1) t_{n-2}- 2\sum_{m=1}^N m t_{n+m-2} \frac{1}{G}\frac{\partial G}{\partial t_m}\right)$$ $$\label{idbeta} = 2\left (\sum_{m=1}^N m\left[\frac{\partial t_{n+m -2}}{\partial t_m} - t_{n+m-2} \frac{1}{G}\frac{\partial G}{\partial t_m}\right] - (n-1)t_{n-2}\right ).$$ The above must be fulfilled simultaneously for both $\beta=2,4$, which only occurs if the expressions in large brackets on the two sides of (\[idbeta\]) vanish. Therefore, by using the substitution (\[idgoe\]) we arrive at the second identity (\[Main ID 2\]) $$2\sum_{m=1}^N m t_{n+m-2} \frac{1}{G}\frac{\partial G}{\partial t_m} = \sum_{x=0}^{n-2}t_x t_{n-2-x} - (n-1) t_{n-2} \; ,$$ where again we must replace $n-2$ by $n$. Since we know that the expression (\[trJPD\]) must be our stationary solution, the method above constitutes a proof of the identities (\[Main ID 1\]) and (\[Main ID 2\]). However, a direct proof of these is given by D. Zagier in \[Zagier proof\], which therefore implies that (\[trJPD\]) must be our stationary JPDF, without the need for any transformation of variables. The mean values $\langle t_n \rangle$ {#Sec: Mean values} ------------------------------------- Computations of expected values of any function of ${\boldsymbol{t}}$ involve integrating over the domain $\chi_N({\boldsymbol{t}})$, which is not explicitly defined for any $N>2$. However, one can use simple heuristic reasoning to identify the mean values $\langle t_n \rangle$ as the coordinates of the vector ${\boldsymbol{t}}$ for which the drift force (\[First trace moment\]) vanishes, i.e.
$$\label{semicircle} \langle t_n\rangle = \frac{1}{2}\sum_{x=0}^{n-2}\langle t_x\rangle \langle t_{n-2-x}\rangle + \frac{2-\beta}{2\beta}(n-1)\langle t_{n-2}\rangle .$$ It is natural, and customary, to scale the matrices $M$ by $1/\sqrt{N}$ and the resulting traces by $1/N$, so that we may define $\tau_n = N^{-\frac{n}{2}-1} t_n $. Thus $$\label{Catalan} {\langle \tau_n \rangle} = \frac{1}{2}\sum_{x=0}^{n-2}{\langle \tau_x \rangle}{\langle \tau_{n-2-x} \rangle} + \frac{2 - \beta}{2\beta N}(n-1) {\langle \tau_{n-2} \rangle}.$$ If we take $\beta = 2$, with initial conditions $\tau_0 =1$ and $\tau_1 =0$, then (\[Catalan\]) implies that ${\langle \tau_{2k+1} \rangle} =0$ and ${\langle \tau_{2k} \rangle} =\frac{1}{2^k} C_k$, where $C_k$ are the Catalan numbers. This is the well-known result obtained by computing the moments using the semi-circle spectral distribution function (see e.g. [@Anderson; @Sinai; @wigner]). For other $\beta$ the last term is a factor $\mathcal{O} (\frac{1}{N})$ smaller than the rest, and thus its effect vanishes in the limit of large $N$. This is consistent with the fact that the spectral distributions of the three canonical ensembles converge to the semi-circle distribution for $N\rightarrow \infty$. Moreover, for $N=2$, (\[semicircle\]) returns $\langle t_2\rangle = 3,2, 3/2$ for $\beta = 1,2,4$ respectively, which is exactly the result obtained in Section \[2 Dim example\]. Application to Bernoulli ensembles {#Sec: Application} ================================== Recently, the authors have used a discrete analogue of Dyson’s Brownian motion model to investigate the spectral statistics of Bernoulli ensembles [@Joyner-2015]. Here we provide a brief illustration of how this can be adapted to the traces setting and discuss why this offers certain advantages.
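As an aside, the recursion (\[Catalan\]) of the previous subsection is easily iterated with exact rational arithmetic; for $\beta=2$ it reproduces the scaled Catalan numbers $\langle\tau_{2k}\rangle = C_k/2^k$ quoted there (an illustrative script, not part of the original analysis):

```python
from fractions import Fraction

# beta = 2 (equivalently N -> infinity) limit of the moment recursion:
#   tau_n = (1/2) * sum_{x=0}^{n-2} tau_x * tau_{n-2-x},  tau_0 = 1, tau_1 = 0.
tau = [Fraction(1), Fraction(0)]
for n in range(2, 13):
    tau.append(sum(tau[x] * tau[n - 2 - x] for x in range(n - 1)) / 2)

catalan = [1, 1, 2, 5, 14, 42, 132]
assert all(tau[2 * k + 1] == 0 for k in range(6))
assert all(tau[2 * k] == Fraction(catalan[k], 2**k) for k in range(7))
```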
Our Bernoulli ensemble $\mathfrak{B}_N$ is given by the set of $N\times N$ symmetric matrices with 0 on the diagonal and off-diagonal entries chosen randomly and independently from the set $\{\pm a\}$ with equal probability (in the following we shall choose, without loss of generality, $a = 1/\sqrt{2}$ in order to match the variance of the GOE defined in Section \[introduction\]). The spectral properties of $\mathfrak{B}_N$ were first analysed by E. Wigner in 1955, who showed that the empirical spectral density converges to the semicircle distribution in the limit of large $N$ [@wigner]. Recent works have gone much further, establishing that local eigenvalue correlations do indeed converge to the corresponding Gaussian expressions as $N$ increases [@Tao-2011; @Erdos-2011; @Erdos-2013; @Erdos-2012]. In [@Joyner-2015] the random walk is defined on $\mathfrak{B}_N$ such that at each single time-step, one of the $d_N=\frac{1}{2}N(N-1)$ off-diagonal matrix entries $B_{pq}$ is chosen at random and its sign is flipped (together with $B_{qp}$). This leads to a change in the matrix $B$ of $$\delta B^{pq} = -2 B_{pq}[{| p \rangle}{\langle q |} + {| q \rangle}{\langle p |}],$$ where ${| p \rangle}$ is a vector whose elements are all zero but for $1$ in the position $p$, and ${\langle p |}$ is its transpose. This perturbation in turn induces a change in the eigenvalue $\lambda_{\mu}$ of $$\label{Perturbation formula 2} \delta \lambda_{\mu} ={\langle \mu |} \delta B^{pq}{| \mu \rangle}+\sum_{\nu\ne\mu}\frac{|{\langle \nu |} \delta B^{pq}{| \mu \rangle}|^2}{\lambda_{\mu}-\lambda_{\nu}} \ + \cdots \ ,$$ in a similar manner to (\[Perturbation formula\]). In order to construct the coefficients in the Fokker-Planck equation one has to average $\delta \lambda_{\mu}$ over the entire neighbourhood of matrices that can be reached in a single step.
In particular, ${\mathbf{E}}({\langle \mu |} \delta B{| \mu \rangle}) = -2\lambda_{\mu}/d_N$ and $$\label{EV second moment} \fl {\mathbf{E}}(|{\langle \nu |} \delta B {| \mu \rangle}|^2)=\frac{1}{d_N}\sum_{p<q} |{\langle \nu |} \delta B^{pq}{| \mu \rangle}|^2 = \frac{2}{d_N}\left (1 + \delta_{\nu\mu} -2\sum_{p=1}^N \nu_p^2 \mu_p^2\right )\ .$$ Here, in contrast to [@Joyner-2015], there is an additional term $\sum_{p=1}^N \nu_p^2 \mu_p^2$ that cannot be written purely in terms of the eigenvalues, meaning the motion is not autonomous. Collating the above expressions allows one to derive a Fokker-Planck equation which describes the motion of a suitable observable, up to an error that depends on $N$. This error comes from a combination of factors such as higher moments ${\mathbf{E}}(\delta \lambda_{\mu}^k)$ and higher terms in the perturbation formula (\[Perturbation formula 2\]). This is because, ultimately, our process is discrete and, unlike Dyson’s Brownian motion, one cannot assume that the change of the matrix due to a single step can be made arbitrarily small. These errors, together with the correction to the second moment from the additional term in (\[EV second moment\]), all depend on the eigenvectors and can only be assumed to become negligible in the large $N$ limit if the eigenvectors are sufficiently delocalised. For the present ensemble, it has been proved that this is the case with high probability (see [@Tao-2011; @Erdos-2011; @Erdos-2013; @Erdos-2012] and references therein) but for Bernoulli ensembles with correlated matrix entries there are no rigorous results thus far in this direction. Moreover, perturbation theory only converges when $|{\langle \mu |} \delta B{| \mu \rangle}|$ is small relative to $|\lambda_{\mu}-\lambda_{\mu\pm 1}|$. In ensembles such as random regular graphs, this is not the case, even though the eigenvectors are delocalised, due to the growth rate (or lack thereof) of the mean level spacing.
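Since the single-step average runs over the finite set of $d_N$ possible flips, both moment formulas above are exact statements for any fixed $B\in\mathfrak{B}_N$ and can be confirmed by exhaustive enumeration (an illustrative check, not part of the original text; it assumes `numpy` is available and uses an arbitrary matrix size):

```python
import numpy as np
from itertools import combinations

# Exhaustive average over all d_N single-sign-flip moves for one Bernoulli
# matrix, checking the first and second eigenvalue-moment formulas above.
rng = np.random.default_rng(1)
N, a = 8, 1 / np.sqrt(2)
B = np.triu(rng.choice([-a, a], size=(N, N)), k=1)
B = B + B.T                                  # zero diagonal, +/- a off-diagonal
lam, U = np.linalg.eigh(B)                   # columns of U are eigenvectors
d_N = N * (N - 1) // 2

first = np.zeros(N)
second = np.zeros((N, N))
for p, q in combinations(range(N), 2):       # all d_N possible sign flips
    dB = np.zeros((N, N))
    dB[p, q] = dB[q, p] = -2 * B[p, q]
    W = U.T @ dB @ U                         # matrix elements <nu|dB|mu>
    first += np.diag(W) / d_N
    second += W**2 / d_N

# E(<mu|dB|mu>) = -2 lam_mu / d_N
ok_first = np.allclose(first, -2 * lam / d_N)
# E(|<nu|dB|mu>|^2) = (2/d_N)(1 + delta_{nu mu} - 2 sum_p nu_p^2 mu_p^2)
overlap = (U**2).T @ (U**2)                  # sum_p nu_p^2 mu_p^2
ok_second = np.allclose(second, (2 / d_N) * (1 + np.eye(N) - 2 * overlap))
assert ok_first and ok_second
```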
These observations therefore motivate the search for another approach. In complete analogy to Section \[F-P equations\] we can also study the random walk in the space of traces. In fact we shall find it more convenient to use the rescaled traces $\tau_n = N^{-n/2-1}t_n$, as used in Section \[Sec: Mean values\]. In this basis all the variables are $\mathcal{O}(1)$ in $N$ and thus it becomes transparent which terms can be neglected. To facilitate this transition let us therefore scale the original matrices by $\bar{B} = B/\sqrt{N}$. Applying this to (\[deltatn\]) we have $$\begin{aligned} \label{Bernoulli trace expansion} \delta \tau_n &=& \frac{1}{N}(\Tr(\bar{B} + \delta \bar{B})^n - \Tr(\bar{B}^n)) \nonumber \\ &=& \frac{1}{N}\left[n\Tr(\bar{B}^{n-1}\delta \bar{B}) + \frac{n}{2}\sum_{x = 0}^{n-2} \Tr(\bar{B}^{x}\delta \bar{B} \bar{B}^{n-2-x}\delta \bar{B}) + \ldots\right].\end{aligned}$$ Although we shall eventually seek to neglect those higher terms, as in Section \[F-P equations\], the whole expansion is finite for fixed $n$ and thus exact. This means that the formalism offers a distinct advantage over the perturbation formula (\[Perturbation formula 2\]), which has no such guarantees. Moreover, in this way the change in the variables can be expressed directly in terms of the matrix elements, which is not the case for the eigenvalue representation, since it relies on the appearance of the eigenvectors.
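The exactness of the finite expansion is easy to confirm for small $n$: for a single flip, the terms generated by cyclicity of the trace reproduce the change of $\Tr(\bar{B}^n)$ to machine precision (an illustrative check, not part of the original text; it assumes `numpy` is available):

```python
import numpy as np

rng = np.random.default_rng(2)
N, a = 6, 1 / np.sqrt(2)
B = np.triu(rng.choice([-a, a], size=(N, N)), k=1)
B = (B + B.T) / np.sqrt(N)                   # the rescaled matrix B-bar
D = np.zeros((N, N))
D[0, 3] = D[3, 0] = -2 * B[0, 3]             # one sign flip, also rescaled

tr, mp = np.trace, np.linalg.matrix_power
# n = 3: 3 Tr(B^2 D) + 3 Tr(B D^2) + Tr(D^3), by cyclicity of the trace
exact3 = tr(mp(B + D, 3)) - tr(mp(B, 3))
full3 = 3 * tr(B @ B @ D) + 3 * tr(B @ D @ D) + tr(D @ D @ D)
# n = 4: coefficients 4, 4, 2, 4, 1 count the cyclic word classes
exact4 = tr(mp(B + D, 4)) - tr(mp(B, 4))
full4 = (4 * tr(B @ B @ B @ D) + 4 * tr(B @ B @ D @ D)
         + 2 * tr(B @ D @ B @ D) + 4 * tr(B @ D @ D @ D) + tr(mp(D, 4)))
assert abs(exact3 - full3) < 1e-12 and abs(exact4 - full4) < 1e-12
```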
Proceeding in a similar manner, the expected change of $\tau_n$ in one time step may be calculated as follows[^2] $$\begin{aligned} \label{eqappl1} \fl {\mathbf{E}}(\Tr(\bar{B}^{n-1}\delta \bar{B})) & = & -\frac{2}{d_N}\sum_{p<q}\bar{B}_{pq} \Tr\left (\bar{B}^{n-1}[{| p \rangle}{\langle q |} + {| q \rangle}{\langle p |}]\right) \nonumber \\ &= & -\frac{4}{d_N}\sum_{p<q}\bar{B}_{pq}\bar{B}^{n-1}_{pq} = -\frac{2}{d_N}N^{-n/2}t_n = -\frac{2}{d_N}N\tau_n\end{aligned}$$ and $$\begin{aligned} \label{eqappl2} \fl {\mathbf{E}}\left (\Tr(\bar{B}^{x}\delta \bar{B} \bar{B}^{n-2-x}\delta \bar{B})\right ) &=& \frac{4}{d_N}\sum_{p<q} \bar{B}_{pq}^2 \Tr(\bar{B}^{x}[{| p \rangle}{\langle q |}+{| q \rangle}{\langle p |}]\bar{B}^{n-2-x}[{| p \rangle}{\langle q |} + {| q \rangle}{\langle p |}])\nonumber \\ &=& \frac{2}{d_N}\frac{1}{N}\sum_{p \neq q} (\bar{B}^{x}_{pq} \bar{B}^{n-2-x}_{qp} + \bar{B}^x_{pp} \bar{B}^{n-2-x}_{qq})\nonumber \\ &=& \frac{2}{d_N}\left[\frac{1}{N}\sum_{p,q} (\bar{B}^{x}_{pq} \bar{B}^{n-2-x}_{qp} + \bar{B}^{x}_{pp} \bar{B}^{n-2-x}_{qq}) - \frac{2}{N}\sum_p \bar{B}^{x}_{pp} \bar{B}^{n-2-x}_{pp}\right ] \nonumber \\ &=& \frac{2}{d_N}\left[\tau_{n-2} + N \tau_x \tau_{n-2-x} - 2\zeta(x,n-2-x)\right],\end{aligned}$$ where $$\zeta(r,s) =\frac{1}{N}\sum_p \bar{B}^{r}_{pp} \bar{B}^{s}_{pp}.$$ The most striking difference between (\[eqappl2\]) and the Gaussian equivalent (\[First trace moment\]) is the appearance of the term $\zeta(x,n-2-x)$, which cannot be expressed in terms of the variables ${\boldsymbol{t}}$. Writing $\tau_s\tau_r -\zeta(r,s) = \frac{1}{N}\left(\sum_p \bar{B}^r_{pp}\left[\frac{1}{N}\sum_{q}\bar{B}^s_{qq} - \bar{B}^s_{pp}\right]\right)$ we see that $\zeta(r,s)$ is very close to $\tau_s\tau_r$ if the diagonal elements $\bar{B}^s_{pp}$ are close to their average over the whole diagonal, $\frac{1}{N}\sum_{q}\bar{B}^s_{qq}$. Using Wigner’s combinatorial method of counting Dyck paths (see e.g.
[@Anderson; @wigner]) one can show that by averaging over ${\mathfrak{B}_N}$ we have for fixed $r$ and $s$ that $\langle \tau_r\tau_s - \zeta(r,s)\rangle_{{\mathfrak{B}_N}}$ tends to 0 as $N \to \infty$. Moreover, using the same technique one finds $\operatorname{Var}_{{\mathfrak{B}_N}}(\tau_r\tau_s - \zeta(r,s)) = \mathcal{O}(N^{-2})$. Hence with high probability $\zeta(r,s)$ is $\mathcal{O}(1)$. This shows that (\[eqappl2\]) is dominated by the term $N\tau_x\tau_{n-2-x}$. In addition, we can also estimate those higher terms in the expectation ${\mathbf{E}}(\delta \tau_n)$ coming from the expansion (\[Bernoulli trace expansion\]). For example, since $(\delta B^{pq})^3 = 4B_{pq}^2\,\delta B^{pq} = 2\,\delta B^{pq}$ for a single flip, we have ${\mathbf{E}}(\Tr(\delta \bar{B}^3 \bar{B}^{n-3})) = N^{-n/2}{\mathbf{E}}(\Tr(\delta B^3 B^{n-3})) = 2N^{-n/2}{\mathbf{E}}(\Tr(\delta B B^{n-3})) = -4N^{-n/2}t_{n-2}/d_N = -4\tau_{n-2}/d_N$. This again is an order in $N$ smaller than the dominant term in (\[eqappl2\]). Therefore, in the large $N$ limit we find that ${\mathbf{E}}(\delta \tau_n)/\delta s$ (taking $\delta s = 2/d_N$) tends to the expression (\[First trace moment\]) calculated for the GOE. Similarly, for the second moment we find $$\label{eqappl3} {\mathbf{E}}(\delta \tau_n\delta \tau_m)= \frac{2}{d_N}\frac{2nm}{N^2}\left(\tau_{n+m-2}-\zeta(n-1,m-1)\right ) + \ldots \; .$$ The difference in comparison to the first moment is that, by the arguments above, the additional term $\zeta(n-1,m-1)$ is of the same order in $N$ as the supposed leading term. This is also in contrast to the outcome for the second order term in the eigenvalue representation (\[EV second moment\]), where the effect of removing the matrix diagonal leaves only a $1/N$ correction. Nevertheless we present arguments that allow for it to be neglected.
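Like the eigenvalue moments earlier, (\[eqappl1\]) and (\[eqappl2\]) are exact statements about the average over the $d_N$ flips for a fixed matrix, and can be confirmed by direct enumeration (an illustrative check, not part of the original text; it assumes `numpy` is available, and the values of $n$ and $x$ are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N, a = 7, 1 / np.sqrt(2)
B = np.triu(rng.choice([-a, a], size=(N, N)), k=1)
Bb = (B + B.T) / np.sqrt(N)                  # the rescaled matrix B-bar
d_N = N * (N - 1) // 2
mp = np.linalg.matrix_power
tau = lambda k: np.trace(mp(Bb, k)) / N
zeta = lambda r, s: np.sum(np.diag(mp(Bb, r)) * np.diag(mp(Bb, s))) / N

n, x = 5, 2
lhs1 = lhs2 = 0.0
for p, q in combinations(range(N), 2):       # average over all d_N flips
    dB = np.zeros((N, N))
    dB[p, q] = dB[q, p] = -2 * Bb[p, q]
    lhs1 += np.trace(mp(Bb, n - 1) @ dB) / d_N
    lhs2 += np.trace(mp(Bb, x) @ dB @ mp(Bb, n - 2 - x) @ dB) / d_N

ok1 = np.isclose(lhs1, -(2 / d_N) * N * tau(n))
ok2 = np.isclose(lhs2, (2 / d_N) * (tau(n - 2) + N * tau(x) * tau(n - 2 - x)
                                    - 2 * zeta(x, n - 2 - x)))
assert ok1 and ok2
```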
Let us continue by inserting the expressions (\[eqappl1\]), (\[eqappl2\]) and (\[eqappl3\]) into the appropriately scaled version of the $n$ simultaneous equations (\[statfokkerplanck\]), which determine the stationary solution $Q$ (the method for calculating the error terms in the analogous eigenvalue representation approach is discussed at length in [@Joyner-2015] and thus we refrain from details here). Therefore, for large $N$, the stationary solution $Q$ for $\mathfrak{B}_N$ approximately satisfies $$\fl \left[-\tau_n + \sum_{x=0}^{n-2}\left(\tau_x\tau_{n-2-x} + \frac{\tau_{n-2}}{N}\right)\right]Q = \sum_m \frac{2m}{N^2} \frac{\partial}{\partial \tau_m}\left\{(\tau_{n+m-2} - \zeta(n-1,m-1) )Q\right\}.$$ To estimate the contribution of $\zeta(n-1,m-1)$ we replace the exact value with its mean, i.e. $N^{-1}\sum_p \bar{B}^{n-1}_{pp} \bar{B}^{m-1}_{pp} \approx \tau_{n-1}\tau_{m-1}$. For all matrices $B \in \mathfrak{B}_N$ we have $\tau_1=0$ and $\tau_2 = N(N-1)/(2N^2) = (1 - 1/N)/2$, meaning our space of variables is reduced to $\tau_n$ for $n=3,\ldots N$. Assuming, then, that in all the remaining directions our JPDF $Q$ is constant (as is the case in the GOE expression (\[trJPD\])) we find for $n \geq 3$ $$\sum_m \frac{2m}{N^2} \frac{\partial}{\partial \tau_m}\left(\tau_{n-1}\tau_{m-1}Q\right) = \frac{2}{N^2}(n-1)\tau_{n-2}Q,$$ where we have used that $\partial \tau_{m-1}/\partial \tau_m = 0$ and $\partial \tau_{n-1}/\partial \tau_m = \delta_{m,n-1}$ for all $n,m \leq N$. This results in a term which is a factor of $1/N$ smaller than the corresponding term on the LHS and a full factor of $1/N^2$ below the leading term. Discussion {#Sec: Discussion} ========== The efforts invested in developing the formalism presented above were motivated by our initial observations regarding random regular graphs.
Dyson’s original model could not be transcribed to this matrix ensemble as the perturbation formula is effectively useless in this context, a consequence of the small separation between eigenvalues (see [@Joyner-2015b]). Here we offer a method which does away with the requirement of the perturbation formula and therefore provides a potential means of circumventing such problems. We have demonstrated this method in the standard Gaussian setting and also illustrated how it can be used for Bernoulli matrices. The former case leads immediately to two previously unseen identities regarding symmetric functions, which are proved directly below. Finally, we also note the relation with those studies [@Sinai; @Essen; @Guionnet; @Diaconis-1994; @Diaconis-2004] regarding the distributions of traces. Except for [@Guionnet], these works did not consider any dynamical aspects, and so what we have outlined here may offer alternative ways of studying trace distributions. For instance, one should be able to apply the same techniques to the circular ensembles. Acknowledgements {#acknowledgements .unnumbered} ================ US acknowledges the Institut Henri Poincaré for the hospitality extended when the manuscript was put in its final form. CHJ thanks the Isaac Newton Institute for their hospitality during the writing of this article and acknowledges the financial support of both the Feinberg Graduate School and Leverhulme Trust (grant number ECF-2014-448). US and CHJ would also like to extend their gratitude to D. Zagier for providing a very nice proof of the identities in Section \[definitions\] and writing the following appendix. We also thank P. Forrester for bringing to our attention the reference of Bakry and Zani.
Proof of Proposition \[Identities\] by Don Zagier {#Zagier proof} ================================================= Following the notation of the paper, we let $\l_\a$ ($\a=1,\dots,N$) be independent variables and let $c_i$ ($0\le i\le N$), $t_n$ ($n=0,1,\dots$) and $\D$ (discriminant) be the elements of the algebra $S =\C[\l_1,\dots,\l_N]^{\frak S_N}$ of symmetric polynomials in the $\l_\a$ defined by $$\begin{aligned} &&\Phi(X) \;:=\; \prod_{\a=1}^N(X-\l_\a) \= \sum_{i=0}^N c_iX^i\,, \\ && t_n\=\sum_{\a=1}^N\l_\a^n\,, \qquad \D \= {\rm disc}(\Phi) \;= \prod_{1\le\a<\b\le N}(\l_\a-\l_\b)^2 \;. \end{aligned}$$ For $n<0$ we set $t_n=0$. We have $c_N=1$ and $t_0=N$, while both $(c_1,\dots,c_N)$ and $(t_1,\dots,t_N)$ generate the algebra $S$. In particular, if we take the latter as coordinates on $S$, then we can ask for the values of ${\partial}t_n/{\partial}t_m$ and ${\partial}\D/{\partial}t_m$ for $n\ge0$ and $1\le m\le N$. (Of course the former is $\d_{nm}$ for $0\le n\le N$, so it is only interesting if $n>N$.) The identities (\[Main ID 1\]) and (\[Main ID 2\]) were proved in the body of this paper using an indirect proof coming from random matrix theory. Here we give a purely algebraic verification of both of these identities, and some small generalizations. For the reader’s convenience we repeat these identities here, expressing the second one in terms of the polynomial invariant $\D$ rather than its square-root $G$. For $n\ge0$ we have $$\begin{aligned} &&\quad\;\,\, 2\,\sum_{m=1}^N m\,\frac{{\partial}t_{n+m}}{{\partial}t_m} \;\= \sum_{i,\,j\ge0 \atop i+j=n} t_it_j \;+\; (n+1)\,t_n\;, \label{App: Main ID 1} \\ && \,\frac1\D\,\sum_{m=1}^N m\,t_{n+m}\,\frac{{\partial}\D}{{\partial}t_m} \= \sum_{i,\,j\ge0 \atop i+j=n} t_it_j \;-\; (n+1)\,t_n\;. 
\label{App: Main ID 2} \end{aligned}$$ We use that the logarithmic derivative of $\Phi(X)$ is a generating series for the $t_n$, i.e., $$T(X) \;:= \frac{\Phi'(X)}{\Phi(X)} \= \sum_{\a=1}^N\frac1{X-\l_\a} \= \sum_{n=0}^\infty\frac{t_n}{X^{n+1}}\,,$$ where the last expression can be taken either as a formal power series in $S[[1/X]]$ or as a holomorphic function in the annulus $|X|>\max_\a|\l_\a|$ if the $\l_\a$ are complex numbers. Dividing (\[App: Main ID 1\]) and (\[App: Main ID 2\]) by $X^{n+2}$ and summing over $n\ge-m$ (or equivalently $n\ge0$, since ${\partial}t_{n+m}/{\partial}t_m$ vanishes for $-m\le n<0$), we can rewrite these two identities as $$\label{App: Main ID 1b} \quad 2\,\sum_{m=1}^Nm\,\frac{{\partial}T(X)}{{\partial}t_m}\,X^{m-1} \= T(X)^2 \m T'(X)$$ and $$\label{App: Main ID 2b} \frac{T(X)}\D\,\sum_{m=1}^Nm\,\frac{{\partial}\D}{{\partial}t_m}\,X^{m-1} \= T(X)^2 \+ T'(X)\,.$$ For the proof, we define polynomials $\Phi_\a(X)$ and coefficients $c_{\a,n}$ for $1\le\a\le N$ and $0\le n\le N-1$ by $$\Phi_\a(X)\=\prod_{\b\ne\a}\frac{X-\l_\b}{\l_\a-\l_\b} \= \frac1{\Phi'(\l_\a)}\,\frac{\Phi(X)}{X-\l_\a} \= \sum_{n=0}^{N-1}c_{\a,n}\,X^n\,.$$ Then $\Phi_\a(\l_\b)=\d_{\a\b}$, so $(c_{\a,n})$ is the inverse of the Vandermonde matrix $(\l_\a^n)_{n,\a}$. On the other hand, we have $\frac{1}{m}\,\frac{{\partial}t_m}{{\partial}\l_\a}=\l_\a^{m-1}$, so $c_{\a,m-1}=m\,\frac{{\partial}\l_\a}{{\partial}t_m}$ for $1\le m\le N$. Hence $$\label{App: Relation 1} m\,\frac{{\partial}T(X)}{{\partial}t_m} \= \sum_{\a=1}^Nc_{\a,m-1}\,\frac{{\partial}T(X)}{{\partial}\l_\a} \= \sum_{\a=1}^N\,\frac{c_{\a,m-1}}{(X-\l_\a)^2}\,,$$ so each term ${\partial}T(X)/{\partial}t_m$ is a rational function of the form $P_m(X)/\Phi(X)^2$ where $P_m(X)$ is a polynomial of degree $\le2N-2$.
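Both identities can also be spot-checked numerically before proving them, using $m\,{\partial}\l_\a/{\partial}t_m=c_{\a,m-1}$ together with ${\partial}\log\D/{\partial}\l_\a = 2\sum_{\b\ne\a}(\l_\a-\l_\b)^{-1}$. The following illustrative snippet (not part of the original appendix; it assumes `numpy` is available and uses an arbitrary sample spectrum) checks (\[App: Main ID 2\]):

```python
import numpy as np

lam = np.array([-1.7, -0.8, 0.3, 1.1, 1.9])  # distinct sample eigenvalues
N = len(lam)
t = lambda k: np.sum(lam**k)                 # power sums, t_0 = N

V = np.vander(lam, N, increasing=True).T     # V[n, a] = lam[a]**n
c = np.linalg.inv(V)                         # m * dlam_a/dt_m = c[a, m-1]

# (1/Delta) dDelta/dlam_a = 2 * sum_{b != a} 1/(lam_a - lam_b)
dlogD = np.array([2 * sum(1 / (lam[a] - lam[b]) for b in range(N) if b != a)
                  for a in range(N)])

max_err = 0.0
for n in range(0, 5):
    # (1/Delta) sum_m m t_{n+m} dDelta/dt_m; the factor m cancels against
    # m * dlam_a/dt_m = c[a, m-1]
    lhs = sum(t(n + m) * np.dot(c[:, m - 1], dlogD) for m in range(1, N + 1))
    rhs = sum(t(i) * t(n - i) for i in range(n + 1)) - (n + 1) * t(n)
    max_err = max(max_err, abs(lhs - rhs) / max(1.0, abs(rhs)))
assert max_err < 1e-8
```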
Multiplying (\[App: Relation 1\]) by $X^{m-1}$ and summing over $m=1,\dots,N$ gives $$\begin{aligned} \fl \qquad\quad\sum_{m=1}^N\,m\, \frac{{\partial}T(X)}{{\partial}t_m}\,X^{m-1} &&\= \sum_{\a=1}^N\,\frac{\Phi_\a(X)}{(X-\l_\a)^2} \nonumber \\ &&\= \Phi(X)\,\sum_{\a=1}^N\frac{1}{\Phi'(\l_\a)}\,\frac1{(X-\l_\a)^3} \nonumber \\ &&\= \Phi(X) \,\sum_{\a=1}^N \operatorname{Res}_{z=\l_\a}\biggl(\frac1{\Phi(z)}\,\frac{dz}{(X-z)^3}\biggr) \nonumber \\ &&\= \Phi(X)\;\operatorname{Res}_{z=X}\biggl(\frac{dz}{(z-X)^3\,\Phi(z)}\biggr) \nonumber \\ &&\= \frac{\Phi(X)}2\,\frac{d^2}{dX^2}\frac1{\Phi(X)} \;\,=\;\, -\,\frac12\,\frac{\Phi''(X)}{\Phi(X)}\+\frac{\Phi'(X)^2}{\Phi(X)^2} \nonumber \\ &&\= -\,\frac{T'(X)}2\+\frac{T(X)^2}2\,,\nonumber \end{aligned}$$ where in the fourth line we have used the residue theorem. This proves the first identity [(\[App: Main ID 1b\])]{}. The calculation for [(\[App: Main ID 2b\])]{} is similar. We have $$\fl \frac m{2\D}\,\frac{{\partial}\D}{{\partial}t_m} \= \frac12\,\sum_{\a=1}^Nc_{\a,m-1}\,\frac{{\partial}\log\D}{{\partial}\l_\a} \= \sum_{\a=1}^Nc_{\a,m-1}\sum_{\b\ne\a}\frac1{\l_\a-\l_\b} \= \sum_{\a=1}^Nc_{\a,m-1}\,\frac{\Phi'_\a(\l_\a)}{\Phi_\a(\l_\a)}\;.$$ Substituting into this the identity $$\begin{aligned} \fl \quad\qquad\frac{\Phi'_\a(\l_\a)}{\Phi_\a(\l_\a)} &&\= \biggl(\frac{\Phi'(t)}{\Phi(t)}\m\frac1{t-\l_\a}\biggr)\biggr|_{t=\l_\a} =\; \biggl(\frac{\Phi'(\l_\a+\e)}{\Phi(\l_\a+\e)}\m\frac1\e\biggr)\biggr|_{\e=0} \nonumber \\ && \= \biggl(\frac{\Phi'(\l_\a)+\Phi''(\l_\a)\,\e\+\cdots}{\Phi'(\l_\a)\,\e \+\frac12\,\Phi''(\l_\a)\,\e^2\+\cdots}\m\frac1\e\biggr)\biggr|_{\e=0} \= \frac12\,\frac{\Phi''(\l_\a)}{\Phi'(\l_\a)} \,,\nonumber \end{aligned}$$ multiplying by $X^{m-1}$ and summing over $m$, we obtain the second identity [(\[App: Main ID 2b\])]{}: $$\begin{aligned} \fl \qquad\frac1\D\, \sum_{m=1}^Nm\,\frac{{\partial}\D}{{\partial}t_m}\,X^{m-1} &&\= \sum_{\a=1}^N\frac{\Phi''(\l_\a)}{\Phi'(\l_\a)}\,\Phi_\a(X) =
\sum_{\a=1}^N\frac{\Phi''(\l_\a)}{\Phi'(\l_\a)^2}\,\frac{\Phi(X)}{X-\l_\a} \nonumber \\ &&\= \Phi(X)\, \sum_{\a=1}^N\operatorname{Res}_{z=\l_\a}\biggl(\frac{\Phi''(z)}{\Phi'(z)}\,\frac{dz}{(X-z)\Phi(z)}\biggr) \nonumber \\ &&\= \Phi(X)\, \operatorname{Res}_{z=X}\biggl(\frac{\Phi''(z)\,dz}{(z-X)\Phi(z)\Phi'(z)}\biggr) \nonumber \\ &&\= \frac{\Phi''(X)}{\Phi'(X)} \= T(X) \+ \frac{T'(X)}{T(X)}\;. \qquad\qquad\qquad\square \nonumber \end{aligned}$$ We mention that one can use the same method of calculation to obtain other identities of this type. For instance, $$\begin{aligned} \fl \qquad \sum_{m=1}^N m (m-1) \frac{{\partial}T(X)}{{\partial}t_m}X^{m-2} &&\= \sum_{\a=1}^N\,\frac{\Phi'_\a(X)}{(X-\l_\a)^2} \nonumber \\ &&\= \sum_{a=1}^N\frac1{\Phi_\a'(\l_\a)}\,\biggl(\frac{\Phi'(X)}{(X-\l_\a)^3} \m \frac{\Phi(X)}{(X-\l_\a)^4}\biggr) \nonumber \\ &&\= \operatorname{Res}_{z=X}\biggl[\Bigl(\frac{\Phi'(X)}{(z-X)^3} \+ \frac{\Phi(X)}{(z-X)^4}\Bigr)\frac{dz}{\Phi(z)}\biggr] \nonumber \\ &&\= \frac{\Phi'(X)}2\,\Bigl(\frac1{\Phi(X)}\Bigr)'' \+ \frac{\Phi(X)}6\,\Bigl(\frac1{\Phi(X)}\Bigr)''' \nonumber \\ &&\= \frac13\, T(X)^3 \m \frac16\,T''(X) \nonumber \end{aligned}$$ and hence, in analogy with [(\[App: Main ID 1\])]{}, $$3\,\sum_{m=1}^N m(m-1)\,\frac{{\partial}t_{n+m}}{{\partial}t_m} \= \sum_{i,\,j,\,k\ge0 :\atop i+j+k=n} t_it_jt_k \m \frac{(n+1)(n+2)}2\,t_n\;.$$ Identities with polynomials of higher degree in $m$ on the left could be proved in the same way. References {#references .unnumbered} ========== [9]{} F. J. Dyson, *A Brownian-motion model for the eigenvalues of a random matrix*, J. Math. Phys. [**3**]{} 1191-1198 (1962). M. L. Mehta, *Random Matrices*, Third Edition, 142, Pure and Applied Mathematics (Elsevier/Academic Press, Amsterdam, 2004). P. Forrester, *Log-Gases and Random Matrices*, London Mathematical Society Monographs **34**, (Princeton University Press, 2010). G. W. Anderson, A. Guionnet, and O. 
Zeitouni, *An Introduction to Random Matrices*, Cambridge Studies in Advanced Mathematics [**118**]{} (Cambridge University Press, 2009). T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, *Random matrix theories in quantum physics: Common concepts*, Phys. Rep. 299, 189 (1998). G. Akemann, J. Baik, and P. Di Francesco (Ed.), *The Oxford Handbook of Random Matrix Theory*, (Oxford University Press, 2011). C. E. Porter, *Statistical Theories of Spectra: Fluctuations* (Academic Press, New York, 1965). F. Haake, *Quantum Signatures of Chaos*, Springer Series in Synergetics, Third Edition, (Springer-Verlag, Berlin 2010). Y. Sinai and A. Soshnikov, *Central limit theorem for traces of large random matrices with independent matrix elements*, Bol. Soc. Bras. [**29**]{} 1-24 (1998). F. Haake, M. Kuś, H.-J. Sommers, H. Schomerus and K. Życzkowski, *Secular determinants of random unitary matrices*, J. Phys. A: Math. Gen. 29 (1996) 3641–3658. A. Guionnet, *Uses of free probability in random matrix theory*, XVIth International Congress on Mathematical Physics (2010) pp. 106-122. P. Diaconis and M. Shahshahani, *On the eigenvalues of random matrices*, J. Appl. Probab. 31A (1994), 49–62. P. Diaconis and A. Gamburd, *Random matrices, magic squares and matching polynomials*, Electron. J. Combin. 11 (2004/06), no. 2, Research Paper 2, 26 pp. D. Bakry and M. Zani, *Dyson processes associated with associative algebras: The Clifford case*, in *Geometric Aspects of Functional Analysis* (eds. B. Klartag, E. Milman), Lecture Notes in Mathematics Volume 2116, pp 1-37 (Springer International Publishing Switzerland 2014). G. V. Dunne, *Slater decomposition of Laughlin states*, Int. J. Mod. Phys. B 7 (28), 4783 (1993). P. Vivo and S. N. Majumdar, *On invariant $2 \times 2$ $\beta$-ensembles of random matrices*, Physica A 387 (2008), no. 19-20, 4839-4855. C. H. Joyner and U.
Smilansky, *Spectral statistics of Bernoulli matrix ensembles - a random walk approach (I)*, preprint (2015), <http://arxiv.org/abs/1501.04907>. E. P. Wigner, *Characteristic vectors of bordered matrices with infinite dimensions*, The Annals of Mathematics, 2nd Ser. [**62**]{}, 548-564 (1955). T. Tao and V. Vu, *Random matrices: Universality of the local eigenvalue statistics*, Acta Math. 206 (2011), 127-204. L. Erdős, H.-T. Yau and J. Yin, *Universality for generalized Wigner matrices with Bernoulli distribution*, J. of Combinatorics, 1 (2011), no. 2, 15–85. L. Erdős, A. Knowles, H.-T. Yau and J. Yin, *Spectral statistics of Erdős-Rényi graphs I: Local semicircle law*, Ann. Probab. (2013) 41, no. 3B, 2279-2375. L. Erdős, A. Knowles, H.-T. Yau and J. Yin, *Spectral statistics of Erdős-Rényi graphs II: Eigenvalue spacing and the extreme eigenvalues*, Comm. Math. Phys. 314 (2012), no. 3, 587-640. M. C. Wang and G. E. Uhlenbeck, *On the theory of the Brownian motion II*, Rev. Mod. Phys. [**17**]{} 323–342 (1945). C. H. Joyner and U. Smilansky, *Spectral statistics of Bernoulli matrix ensembles - a random walk approach (II)*, in preparation. [^1]: For which we would like to thank P. Forrester. [^2]: We use the convention that $B^n_{pq}$ denotes the $p,q$-th element of the matrix $B^n$ and $(B_{pq})^n$ is the matrix element $B_{pq}$ raised to the $n$-th power.
--- abstract: 'We present results from strong-lens modelling of 10,000 SDSS clusters, to establish the universal distribution of Einstein radii. Detailed lensing analyses have shown that the inner mass distribution of clusters can be accurately modelled by assuming light traces mass, successfully uncovering large numbers of multiple-images. Approximate critical curves and the effective Einstein radius of each cluster can therefore be readily calculated, from the distribution of member galaxies and scaled by their luminosities. We use a subsample of 10 well-studied clusters covered by both SDSS and HST to calibrate and test this method, and show that an accurate determination of the Einstein radius and mass can be achieved by this approach “blindly”, in an automated way, and without requiring multiple images as input. We present the results of the first 10,000 clusters analysed in the range $0.1<z<0.55$, and compare them to theoretical expectations. We find that for this all-sky representative sample the Einstein radius distribution is log-normal in shape, with $\langle Log(\theta_{e}\arcsec)\rangle=0.73^{+0.02}_{-0.03}$, $\sigma=0.316^{+0.004}_{-0.002}$, and with higher abundance of large $\theta_{e}$ clusters than predicted by $\Lambda$CDM. We visually inspect each of the clusters with $\theta_{e}>40 \arcsec$ ($z_{s}=2$) and find that $\sim20\%$ are boosted by various projection effects detailed here, remaining with $\sim40$ real giant-lens candidates, with a maximum of $\theta_{e}=69\pm12 \arcsec$ ($z_{s}=2$) for the most massive candidate, in agreement with semi-analytic calculations. The results of this work should be verified further when an extended calibration sample is available.' 
author: - 'Adi Zitrin$^{1,4}$[^1], Tom Broadhurst$^{2,3}$, Matthias Bartelmann$^{4}$, Yoel Rephaeli$^{1}$,' - | Masamune Oguri$^{5,6}$, Narciso Benítez$^{7}$, Jiangang Hao$^{8}$, Keiichi Umetsu$^{9}$\ \ \ $^{1}$The School of Physics and Astronomy, the Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University,\ Tel Aviv 69978, Israel\ $^{2}$Department of Theoretical Physics, University of Basque Country UPV/EHU, Leioa, Spain\ $^{3}$IKERBASQUE, Basque Foundation for Science\ $^{4}$Institut für Theoretische Astrophysik, ZAH, Albert-Ueberle-Straße 2, 69120 Heidelberg, Germany\ $^{5}$Institute for the Physics and Mathematics of the Universe, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583, Japan\ $^{6}$Division of Theoretical Astronomy, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan\ $^{7}$Instituto de Astrofísica de Andalucía (CSIC), C/Camino Bajo de Huétor, 24, Granada, 18008, Spain\ $^{8}$Center for Particle Astrophysics, Fermi National Accelerator Laboratory, Batavia, IL 60510\ $^{9}$Institute of Astronomy and Astrophysics, Academia Sinica, P. O. Box 23-141, Taipei 10617, Taiwan title: 'The Universal Einstein Radius Distribution from 10,000 SDSS Clusters' --- \[firstpage\] cosmology: theory, dark matter, galaxies: clusters: general, galaxies: high-redshift, gravitational lensing: strong, mass function Introduction {#intro} ============ Clusters of galaxies play a fundamental role in testing cosmological models, by virtue of their position at the high end of the cosmic mass spectrum. Massive galaxy clusters gravitationally-lens background objects, forming distorted, magnified, and often multiple images of the same source, when the cluster surface density is high enough. These effects are in turn used to map the gravitational potentials and mass of the lensing clusters, hence providing some of the best constraints on the nature and shape of the underlying matter distributions (Broadhurst et al. 
2005a, Bradač et al. 2006, Coe et al. 2010, Zitrin et al. 2010, Merten et al. 2011). Large sky surveys such as the *Sloan Digital Sky Survey* (SDSS; see Abazajian et al. 2003,2009) allow for important scientific work with different astrophysical implications (e.g., Tegmark et al. 2004, 2006, Tremonti et al. 2004, Eisenstein et al. 2005, Seljak et al. 2005, Wojtak, Hansen, & Hjorth 2011). The large amount of data enables extensive studies with a clear statistical advantage. Here we make use of the results of a new cluster-finding algorithm operated on the SDSS DR7 data (Hao et al. 2010; on DR7 data see Abazajian et al. 2009), in order to derive the Einstein radius distribution of a significant, statistical sample. As presented in their work, more than 55,000 clusters were found using this successful and rather conservative algorithm, which we have undertaken to analyse using our improved lensing-analysis tools (e.g., Zitrin et al. 2009b, see more details in §2), presenting here the results of the first 10,000 clusters analysed. The effective Einstein radius plays an important role in various studies. The Einstein radius describes the area in which multiply-lensed images may be seen due to the high mass-density of the cluster. By definition, within this critical area the average mass density is equal to $\Sigma_{crit}$ (for symmetric lenses), the critical density required for strong-lensing, whose value is dependent on the source and lens distances. In general, obtaining the critical curves with great accuracy allows matching up multiple-images, which in turn help to improve and better constrain the model in order to derive the mass distribution and profile more accurately, teaching us about certain properties of both the observed and unseen matter.
The Einstein radius therefore constitutes a measure of the strong-lens size (and efficiency), and directly enables us to estimate the amount of mass enclosed within it; $\theta_{e}=(\frac{4GM(<\theta_{e})}{c^{2}}\frac{d_{ls}}{d_{l}d_{s}})^{1/2}$ for symmetric lenses (e.g., Narayan & Bartelmann 1996, Bartelmann 2010), where $d_{l}$, $d_{s}$, and $d_{ls}$ are the lens, source and lens-to-source (angular-diameter) distances, respectively. Equivalently, the effective Einstein radius used here is simply a measure of the critical area, $A$, so that $\theta_{e}=\sqrt{A/\pi}$. In recent years it has been proposed that the Einstein radius distributions of several small samples of clusters pose a challenge to $\Lambda$CDM (e.g., Broadhurst & Barkana 2008, Zitrin et al. 2009a, 2011a). Other discrepancies such as the arc abundance, several uniquely large Einstein radii, massive high-$z$ clusters, high NFW concentration parameters, and comparison to N-body simulations, contribute further to this tension, though most studies show mainly a moderate discrepancy (e.g., Bartelmann et al. 1995, Wambsganss et al. 1995, Dalal, Holder, & Hennawi 2004, Broadhurst et al. 2005b, 2008, Hennawi et al. 2007a,b, Hilbert et al. 2007, Sadeh & Rephaeli 2008, Oguri & Blandford 2009, Oguri et al. 2009, Puchwein & Hilbert 2009, Meneghetti et al. 2010a,2011, Sereno, Jetzer & Lubini 2010, Gralla et al. 2011, Horesh et al. 2011, Umetsu et al. 2011a, Zitrin et al. 2011a,c). Obtaining a credible empirical distribution of Einstein radii from an unprecedentedly large sample is of clear value, inviting, in addition, complementary mass measurements through similarly automatic weak-lensing analyses (e.g., Hildebrandt et al. 2011) and other observations, such as of X-ray emission or the SZ-effect, when possible. The advances in computational power over the past decades along with higher quality data and our efficient method for analysing strong-lenses (Broadhurst et al. 2005a, Zitrin et al.
2009b) now enable such an extensive study. Based on previous analyses of many clusters, we now securely determine typical physical parameters to which the critical curves are relatively indifferent, so that we extrapolate and test these assumptions to perform our analysis on the sample presented here. In particular, in this work we describe a simple and efficient method to model cluster-lenses based on the light distribution of bright cluster members, which, as we aim to show, allows the Einstein radius to be derived with sufficient accuracy, in an automated mode. Automated surveys for lensing have been presented before, though mostly based on the observed arc properties, or relating either to the galaxy-lensing scale or the weak-lensing regime (e.g., Webster, Hewett & Irwin 1988, Cabanac et al. 2006, Mandelbaum et al. 2006, Johnston et al. 2007b, Corless & King 2009, Marshall et al. 2009, Sheldon et al. 2009, Bayliss et al. 2011a,b, Hildebrandt et al. 2011), and have yet to produce statistically-significant results for the Einstein radius distribution directly from SL modelling. Other available SL methods, though they can be successful, either require the location of many multiple-images as input or currently have too many free parameters, rendering such a “blind” study impossible. The SL modelling method we implement here is based on the reasonable assumption that light approximately traces mass, which we have shown is most efficient for finding new multiple-images as the mass model is initially well constrained with sufficient resolution to derive well-approximated critical curves (see Broadhurst et al. 2005a, Zitrin et al. 2009b, 2011a,b,c, Merten et al. 2011). Recently we have tested the assumptions of this approach in Abell 1703 (Zitrin et al. 2010), by applying the non-parametric technique of Liesenborgs et al.
(2006, 2007, 2009) for comparison, yielding similar results with only minor differences in the overall mass distribution and critical curves, especially where galaxies are seen since they are not included in the non-parametric technique. Independently, it has been found that SL methods based on parametric modelling, i.e., based on physical assumptions or parametrisations (for other parametric methods see, e.g., Keeton 2001, Kneib et al. 1996, Gavazzi et al. 2003, Bradač et al. 2005, Jullo et al. 2007, Halkola et al. 2008), are accurate at the level of a few percent in determining the projected inner mass (Meneghetti et al. 2010b). Clearly, non-parametric techniques and methods that are based directly on arc morphologies are also important: non-parametric techniques (e.g., Diego et al. 2005, Coe et al. 2008, Merten et al. 2009) are novel in the sense that they are assumption-free and highly flexible (e.g., Coe et al. 2010, Ponente & Diego 2011), and methods based directly on arc morphologies yield high resolution results (see also Grillo et al. 2009). The parametric method presented here is simply aimed at producing the critical curves in an automated way based on simple physical considerations (and is thus capable of finding multiple images, as we have consistently shown before), and constitutes another important step towards the ability to deduce the lensing properties of clusters in large sky surveys in an automated way, so that we aim now to present the first observationally-deduced, universal distribution of Einstein radii. The incorporated method involves only four free parameters. Three of them are known sufficiently well a priori and have only negligible effect on the critical curves and resulting Einstein radius, for which we adopt typical values deduced from detailed analyses of a few dozen clusters (more details are given in §\[model\]).
The fourth parameter, which varies from cluster to cluster, is the overall (mass) normalisation, but since the respective distances are known, this can be simply overcome by finding a typical mass-to-light ratio ($M/L$) normalisation. The $M/L$ term is embedded, in practice, in a redshift-dependent normalisation factor, which is iterated for the best fit using 10 clusters which have been accurately analysed in HST images and have parallel SDSS data listed in the Hao et al. (2010) catalog. These include some well-known lensing clusters such as A1689, A1703, MS1358, Z2701, and others (see, e.g., Broadhurst et al. 2005a, Richard et al. 2010, Zitrin et al. 2010, 2011a,b). The results of this comparison are shown in Figure \[TvTspec\] and Table \[systemo\]. ![Calibration sample. Einstein radii (for $z_s=2$) derived by our “blind” automated algorithm in SDSS data and based on the assumption that light traces mass, with a typical $M/L$, versus the Einstein radii derived by detailed analyses of HST images of the same clusters and using the multiple images as constraints (e.g., Broadhurst et al. 2005a, Richard et al. 2010, Zitrin et al. 2010, 2011a,b). As can be seen, the “blind” method, based on the light distribution of bright cluster galaxies and without using any information regarding the location of multiple-images, shows remarkably similar results to those derived by the detailed independent analyses (see also Fig. \[comparison\]). The errors in the Einstein radii are typically $\sim10\%$, and overplotted is also an $x=y$ dashed line. The comparison sample spans the redshift range $0.15<z_{l}<0.55$.
*Blue open circles* are the results from the blind analysis with the best-fitting parameters derived from minimising over all 10 clusters together, while *red filled circles* are the results from a “Jackknife” minimisation described in §\[jack\]: in order to demonstrate how well one could assess the Einstein radius, we perform the minimisation for 9 different clusters at a time and analyse the tenth cluster with the resulting parameters. With this we obtain deviations of up to $\sim17\%$ in our ability to blindly estimate the critical curves by the automated procedure described in this work. In a complementary error-propagation check (§\[jack\]) we obtain similar results of $1\sigma\sim18\%$, which we take hereafter as the errors for the full-sample analysis.[]{data-label="TvTspec"}](TvsT048009v4Rev.jpg){width="90mm"} The paper is organised as follows: In §2 we detail the modelling and the assumptions on which our algorithm is based. In §3 we discuss the results and relevant uncertainties, which are then summarised in §4. Throughout this paper we adopt a concordance $\Lambda$CDM cosmology with ($\Omega_{\rm m0}=0.3$, $\Omega_{\Lambda 0}=0.7$, $h=0.7$). All Einstein radii referred to in this work are for a fiducial source redshift of $z_{s}=2$. We also note that all logarithmic quantities in this work are in base 10, unless stated otherwise, and are denoted conventionally as “*Log*”. Strong-Lens Modelling and Analysis {#model} ================================== The method we apply here is based on the simple assumption that mass traces light. This well-tested approach to lens modelling has previously uncovered large numbers of multiply-lensed galaxies in ACS images of e.g., Abell 1689, Cl0024, 12 high-$z$ MACS clusters, MS1358, “Pandora’s cluster” Abell 2744, and Abell 383 (respectively, Broadhurst et al. 2005a, Zitrin et al. 2009b, 2011a,b, Merten et al. 2011, Zitrin et al. 2011c).
As the basic assumption adopted is that light approximately traces mass, the photometry of the red cluster member galaxies is used as the starting point for the mass model. ![The general starting point of our lens model, where we define the surface mass distribution based on the cluster member galaxies (see §\[model\]) listed in the Hao et al. (2010) cluster catalog. In this figure we show the lumpy (galaxy) component for Abell 1703 as an arbitrary example (see also Zitrin et al. 2010 for an equivalent figure but from HST observations. Axes are in pixels with $0.2 \arcsec /pixel$). We perform the same simple procedure for each of the 10,000 clusters drawn from the Hao et al. catalog.[]{data-label="lumpycomp"}](1703galsSdss.jpg){width="85mm"} ![Smoothed mass distribution. To represent the DM distribution we smooth the lumpy component (Fig. \[lumpycomp\]) of each of the 10,000 clusters drawn from the Hao et al. cluster catalog (see also Zitrin et al. 2010 for an equivalent figure based on HST observations. Axes are in pixels with $0.2 \arcsec /pixel$). This smoothing procedure is most useful in generating, when combined with the lumpy component, a very reliable deflection field and corresponding critical curves, as we have shown for many clusters (e.g., Broadhurst et al. 2005a, Zitrin et al. 2009b, 2011a,b,c, Merten et al. 2011), allowing large numbers of multiple-images to be found by the model.[]{data-label="smoothcomp"}](1703DMSdss.jpg){width="85mm"} Initial Mass Distribution ------------------------- We now wish to calculate the deflection field due to the cluster galaxies, or the initial mass distribution. By assuming that the flux is proportional to the mass, i.e., assigning a certain $M/L$ ratio, the deflection field contributed by each galaxy can now be calculated by assigning a surface-density profile for each galaxy, $\Sigma(r)=Kr^{-q}$, which is integrated to give the interior mass, $M(<\theta)=\frac{2\pi K}{2-q}(d_{l}\theta)^{2-q}$.
This results in a deflection angle of (due to a single galaxy): $$\label{deflection} \alpha(\theta)= \frac{4GM(<\theta)}{c^2\theta}\frac{d_{ls}}{d_{s}d_{l}},$$ or more explicitly by inserting $M(<\theta)$ from above: $$\label{deflectiona} \alpha(\theta)=\frac{4G\frac{2\pi K}{2-q}d_{l}^{~1-q}}{c^2}\frac{d_{ls}}{d_{s}}\theta^{1-q} .$$ We note that all quantities are known, except for $K$, the normalisation factor which is related to the M/L ratio (note that $q$ is maintained constant on a typical and known value, see §\[ss:totaldef\]). Thus, finding the explicit term for $K$ which scales correctly all clusters we analysed to date (taking into account the different lens and source redshifts) allows us - in principle - to perform the automated survey of Einstein radii, following the procedure described below. By defining $K_{q}=\frac{4G}{c^{2}}\frac{2\pi K}{2-q}\frac{d_{ls}}{d_{s}}d_{l}^{~1-q}$ we can reduce the latter formula to get: $$\label{deflection2} \alpha(\theta)= K_{q}\theta^{1-q} ,$$ where $K_{q}$ also depends on the redshifts involved, and on the power-law index, $q$ (which is set to constant throughout, §\[ss:totaldef\]). 
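Since eq. \[deflection2\] is just a rearrangement of eq. \[deflection\] with $M(<\theta)$ inserted, the algebra can be verified numerically. A minimal sketch in Python (our own illustrative check; the cgs values chosen for $K$, $q$ and the distances are arbitrary and not tied to any real cluster):

```python
import math

G, C = 6.674e-8, 2.998e10   # gravitational constant and speed of light (cgs)

def interior_mass(theta, K, q, d_l):
    """M(<theta) = 2*pi*K/(2-q) * (d_l*theta)^(2-q), from Sigma(r) = K r^-q."""
    return 2.0 * math.pi * K / (2.0 - q) * (d_l * theta) ** (2.0 - q)

def alpha_from_mass(theta, K, q, d_l, d_s, d_ls):
    """Eq. [deflection]: alpha = 4 G M(<theta) / (c^2 theta) * d_ls/(d_s d_l)."""
    return 4.0 * G * interior_mass(theta, K, q, d_l) / (C ** 2 * theta) * d_ls / (d_s * d_l)

def K_q(K, q, d_l, d_s, d_ls):
    """The normalisation K_q = (4G/c^2) * 2*pi*K/(2-q) * (d_ls/d_s) * d_l^(1-q),
    so that alpha = K_q * theta^(1-q) (eq. [deflection2])."""
    return (4.0 * G / C ** 2) * (2.0 * math.pi * K / (2.0 - q)) * (d_ls / d_s) * d_l ** (1.0 - q)
```

For any positive choice of the parameters, `alpha_from_mass` and `K_q(...) * theta**(1-q)` agree to machine precision, as the algebra requires.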
The deflection angle at a certain point $\vec{\theta}$ due to the lumpy galaxy components is simply a linear superposition of all galaxy contributions scaled by their luminosities, $L_i$ (in $L_{\odot}$ units): $$\label{deflection3} \vec{\alpha}_{gal}(\vec{\theta})=K_{q}\sum_{i} \frac{L_{i}}{L_{\odot}}\, |\vec{\theta}-\vec{\theta}_i|^{1-q} \frac{\vec{\theta}-\vec{\theta}_i}{|\vec{\theta}-\vec{\theta}_i|}.$$ In practice we use a discretised version of equation \[deflection3\], over a 2D square grid $\vec\theta_m$ of $N\times N$ pixels, given by: $$\label{deflection_x1} \alpha_{gal,x}(\vec\theta_m)=K_{q}\sum_{i} \frac{L_{i}}{L_{\odot}}\, \frac {\Delta x_{mi}}{[(\Delta x_{mi})^2 + (\Delta y_{mi})^2]^{q/2}},$$ $$\label{deflection_y1} \alpha_{gal,y}(\vec\theta_m)=K_{q}\sum_{i} \frac{L_{i}}{L_{\odot}}\, \frac {\Delta y_{mi}}{[(\Delta x_{mi})^2 + (\Delta y_{mi})^2]^{q/2}}, %\end{eqnarray}$$ where $(\Delta x_{mi},\Delta y_{mi})$ is the displacement vector $\vec\theta_m-\vec\theta_i$ of the $m$th pixel point, with respect to the $i$th galaxy position $\vec\theta_i$. Note that to obtain the luminosity $L_i$ of each member, we convert its SDSS $r$-band luminosity to the corresponding (Vega) B-band luminosity by the LRG template given in Benítez et al. (2009). From these expressions a deflection field for the galaxy contribution is easily calculated analytically as above, and the mass distribution is now rapidly calculated locally from the divergence of the deflection field, i.e., the 2D equivalent of Poisson’s equation. An example is given in Figure \[lumpycomp\]. The Dark Matter Distribution {#ss:DM} ---------------------------- The mass contribution of galaxies is anticipated to comprise only a small fraction of the total mass of the cluster, which is expected to be dominated by a smooth distribution of DM. We now simply assume that the galaxies approximately trace the DM. 
As mentioned, this assumption was found to work very well in earlier work on many clusters where large numbers of multiple-images were found accordingly. These multiply-lensed systems are not simply eye-ball candidates, but are reproduced and predicted by the preliminary model, indicating that this model, based on the assumption that light traces mass, is initially well constrained. Since the DM is of course expected to be smoother than the distribution of galaxies, we smooth the initial guess of the galaxy distribution obtained above, choosing for convenience a low-order cubic spline interpolation, as in the many previous analyses mentioned above. The smoothing degree (the polynomial degree, $S$) is also a free parameter of the model, and the deflection field contributed by the DM is then simply the sum of the contribution from each point (or pixel) in this smooth DM component. This smoothing procedure is the key to our method’s success in locating multiple-images, and is in practice more useful than assuming a general DM shape such as NFW or pseudo-isothermal spheres, which are highly symmetric and do not necessarily describe the complex inner DM distribution in detail, and thus often do not allow the multiple images to be found in advance from the initial mass distribution. An example of a smoothed component is shown in Figure \[smoothcomp\]. The deflection field of the DM is then (where each pixel is treated as a point mass) given by: $$\label{deflection_xDM} \alpha_{DM,x}(\vec\theta_m)=K_{q}\sum_{i} P_i\, \frac {\Delta x_{mi}}{[(\Delta x_{mi})^2 + (\Delta y_{mi})^2]},$$ $$\label{deflection_yDM} \alpha_{DM,y}(\vec\theta_m)=K_{q}\sum_{i} P_i\, \frac {\Delta y_{mi}}{[(\Delta x_{mi})^2 + (\Delta y_{mi})^2]}, %\end{eqnarray}$$ where $P_{i}$ represents the (unnormalised) mass value in the $i$th pixel of the smooth component. We therefore now obtain the deflection field due to the DM, hereafter $\vec\alpha_{DM}(\vec\theta)$.
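The galaxy sums (eqs. \[deflection\_x1\]–\[deflection\_y1\]) and the smooth-component sums (eqs. \[deflection\_xDM\]–\[deflection\_yDM\]) share one structure, differing only in the weights ($L_i/L_\odot$ versus $P_i$) and in the exponent ($q$ versus 2). A minimal NumPy sketch; the function names, the small softening term, and the box-average stand-in for the paper's cubic-spline smoothing are our own simplifications:

```python
import numpy as np

def point_sum_deflection(grid_x, grid_y, src_x, src_y, weights, K_q, q, soft=1e-9):
    """alpha(theta_m) = K_q * sum_i w_i * (theta_m - theta_i) / |theta_m - theta_i|^q.
    Use w_i = L_i/L_sun with q ~ 1.2 for the lumpy (galaxy) component,
    and w_i = P_i (smoothed pixel masses) with q = 2 for the DM component."""
    ax = np.zeros(grid_x.shape)
    ay = np.zeros(grid_y.shape)
    for x_i, y_i, w_i in zip(src_x, src_y, weights):
        dx, dy = grid_x - x_i, grid_y - y_i
        r2 = dx * dx + dy * dy + soft   # softened to avoid the central singularity
        ax += w_i * dx / r2 ** (q / 2.0)
        ay += w_i * dy / r2 ** (q / 2.0)
    return K_q * ax, K_q * ay

def smooth_map(sigma, width=3):
    """Crude box-average stand-in for the paper's low-order cubic-spline smoothing
    of the lumpy galaxy map into a DM-like component."""
    pad = width // 2
    padded = np.pad(sigma, pad, mode="edge")
    out = np.zeros_like(sigma, dtype=float)
    for i in range(sigma.shape[0]):
        for j in range(sigma.shape[1]):
            out[i, j] = padded[i:i + width, j:j + width].mean()
    return out
```

A single source of luminosity $L$ at the origin deflects a grid point at unit distance by $K_q L$ (up to the softening), with the expected sign reversal on the opposite side.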
![Joint $\chi^{2}$ minimisation of the relative weight of the galaxies, $K_{gal}$, and the $M/L$-dependent normalisation factor $K_{q}$, obtained by comparing the Einstein radii of the calibration sample as derived from detailed HST-based analyses, with the results of the automated analysis presented here. The *top* panel shows a $\chi^{2}$ map, whereas the *bottom* panel shows the 68.3%, 95.4%, 99% and 99.99% confidence levels (*color contours*), and the point of best fit (*black circle*). When these are jointly fit to the data, the two parameters minimised in the figure are strongly correlated, so that by fixing the relative galaxy weight, $K_{gal}$, to the best-fit constant value, we are able to reduce the number of free parameters in our modelling to one, namely, the $M/L$-dependent normalisation factor $K_{q}$. Explicitly, this is done by fitting a least-squares line to the points within $\Delta\chi^{2}=2.3$ from the minimum $\chi^{2}$, thus obtaining the relation between the two parameters. The residuals around the minimal $\chi^{2}$ values are then used to obtain the $1\sigma$ errors. From this we obtain best fitting values (and $1\sigma$ errors) of $K_{gal}=11.4\pm0.6\%$ and $K_{q}=(51.6\pm1.9)\frac{d_{ls}}{d_{l}d_{s}}$. For more details see §\[model\].[]{data-label="chi2figure"}](qsMinLog.jpg "fig:"){width="90mm"} ![Joint $\chi^{2}$ minimisation of the relative weight of the galaxies, $K_{gal}$, and the $M/L$-dependent normalisation factor $K_{q}$, obtained by comparing the Einstein radii of the calibration sample as derived from detailed HST-based analyses, with the results of the automated analysis presented here. The *top* panel shows a $\chi^{2}$ map, whereas the *bottom* panel shows the 68.3%, 95.4%, 99% and 99.99% confidence levels (*color contours*), and the point of best fit (*black circle*).
When these are jointly fit to the data, the two parameters minimised in the figure are strongly correlated, so that by fixing the relative galaxy weight, $K_{gal}$, to the best-fit constant value, we are able to reduce the number of free parameters in our modelling to one, namely, the $M/L$-dependent normalisation factor $K_{q}$. Explicitly, this is done by fitting a least-squares line to the points within $\Delta\chi^{2}=2.3$ from the minimum $\chi^{2}$, thus obtaining the relation between the two parameters. The residuals around the minimal $\chi^{2}$ values are then used to obtain the $1\sigma$ errors. From this we obtain best fitting values (and $1\sigma$ errors) of $K_{gal}=11.4\pm0.6\%$ and $K_{q}=(51.6\pm1.9)\frac{d_{ls}}{d_{l}d_{s}}$. For more details see §\[model\].[]{data-label="chi2figure"}](chi2ContoursKKREV3.jpg "fig:"){width="90mm"} The Total Deflection Field {#ss:totaldef} -------------------------- Having calculated the two components of the deflection field, we now simply combine them to get a total deflection field as follows: $$\label{defTot} \vec{\alpha}_T(\vec\theta)=K_{gal} \vec\alpha_{gal}(\vec\theta)+(1-K_{gal})\vec\alpha_{DM}(\vec\theta),$$ where $K_{gal}$ is the relative contribution of the galaxy component to the deflection field. Both components of the deflection field are normalised by $K_{q}$, so that knowing its value enables us to approximate very well the overall deflection field. It should be stressed that although the degree of smoothing ($S$) and the index of the power-law ($q$) are the most important free parameters determining the mass profile, their effect on the Einstein radius is negligible. Based on the detailed analysis of $\sim30$ clusters (mentioned above and several more still unpublished), we note that the best-fitting parameters $q$ and $S$ show relatively little scatter among the different lenses.
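A sketch of how eq. \[defTot\] feeds through to the Einstein radius: sample the total field on a grid, form the lensing Jacobian $A = I - \partial\vec\alpha/\partial\vec\theta$, estimate the critical area as the region where $\det A<0$ (exact for a singular isothermal sphere, only a rough proxy in general), and take $\theta_{e}=\sqrt{A/\pi}$; the symmetric-lens mass relation from the Introduction is included for completeness. This is our own simplified illustration, not the paper's code:

```python
import math
import numpy as np

G = 6.674e-8            # cm^3 g^-1 s^-2 (cgs)
C = 2.998e10            # cm s^-1
MPC = 3.0857e24         # cm per Mpc
MSUN = 1.989e33         # g
ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsec

def total_deflection(alpha_gal, alpha_dm, K_gal=0.114):
    """Eq. [defTot]: alpha_T = K_gal*alpha_gal + (1 - K_gal)*alpha_dm."""
    return K_gal * alpha_gal + (1.0 - K_gal) * alpha_dm

def einstein_radius_from_field(ax, ay, pix):
    """Effective Einstein radius theta_e = sqrt(A/pi), with the critical area A
    estimated as the pixels where det(I - d alpha/d theta) < 0."""
    a11 = 1.0 - np.gradient(ax, pix, axis=1)   # d alpha_x / dx
    a12 = -np.gradient(ax, pix, axis=0)        # d alpha_x / dy
    a21 = -np.gradient(ay, pix, axis=1)
    a22 = 1.0 - np.gradient(ay, pix, axis=0)
    det = a11 * a22 - a12 * a21
    area = np.count_nonzero(det < 0.0) * pix ** 2
    return math.sqrt(area / math.pi)

def mass_within_einstein_radius(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Inverts theta_e = sqrt(4GM/c^2 * d_ls/(d_l d_s)) for a symmetric lens;
    returns the enclosed projected mass in solar masses."""
    theta_e = theta_e_arcsec * ARCSEC
    d_l, d_s, d_ls = d_l_mpc * MPC, d_s_mpc * MPC, d_ls_mpc * MPC
    return theta_e ** 2 * C ** 2 / (4.0 * G) * d_l * d_s / d_ls / MSUN
```

Fed a singular-isothermal-sphere deflection field, `einstein_radius_from_field` recovers the input $\theta_E$ to within a few percent at modest grid resolution, and the enclosed mass scales as $\theta_e^2$ as required.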
We can securely determine that the power-law $q$ will be in the range $1\leqslant q\leqslant1.5$, and the smoothing polynomial degree $S$ will be in the range $4\leqslant S\leqslant24$, with a sufficient resolution of $\Delta q=0.05$ and $\Delta S=2$, in order to span the full plausible profile range per cluster. More importantly, the exact choice of $q$ and $S$ does not affect the deduced Einstein radius size, which is determined by the inner mass enclosed within it and not by the mass profile which varies (see Figs. 1 and 2 in Zitrin et al. 2009b). The crucial point to make here is that Einstein radii just constrain the enclosed mass, no matter how the mass is distributed. This is seen very clearly in Figure \[QSmag\] here, where we show that for many different combinations of the $q$ and $S$ parameters, the critical curves form at the same radius, given a reliable constraint. For example, in Figure \[QSmag\] the critical curves were constrained using multiple-images, while our point here, in this work, is to show that the $M/L$ ratio deduced from a calibration sample can be used as an alternative constraint, enabling an automated SL analysis. In addition, it is therefore clear that the choice of $q$ and $S$ parameter values is not fundamentally important here, and any combination, after it is calibrated for, should in principle yield the critical curves at the right location. With this in mind, throughout the analysis here we keep $q$ and $S$ constant at $q$=1.2 and $S$=10, which are typical values according to our many previous analyses. With $q$ and $S$ kept constant at these values, we now constrain the fixed value of the weight of the galaxies relative to the dark matter, $K_{gal}$, and the overall normalisation factor, $K_{q}$.
Having 10 clusters as a reference sample, and 2 parameters to constrain, we can well determine their values by a joint minimisation, and in turn examine how these best-fit values reproduce the reference sample critical curves. Explicitly, we perform a $\chi^2$ minimisation by comparing the Einstein radii of the calibration sample, deduced from detailed analyses based on HST observations and identification of multiple-images (see also §\[intro\]), with the results of the automated procedure presented here, operated on the same clusters in SDSS data: $$\label{chi2} \chi^{2}=\sum_{i}^{N}[(\theta_{e,i}^{HST}-\theta_{e,i}^{SDSS})^{2}/(\sigma_{i}^{2})],$$ where $i$ runs from 1 to $N=10$ over the ten calibration-sample clusters, and $\sigma_{i}$ is taken as $10\%$ of the HST deduced values, which is a typical value for the uncertainties in SL modelling results. The results of this $\chi^2$ minimisation are seen in Figure \[chi2figure\]. As can be seen, there is a strong correlation between the two parameters, $K_{gal}$ and $K_{q}$, which are degenerate so that many combinations of these can yield a good solution. This is a crucial point to make, since this correlation shows that indeed the number of free parameters in our modelling can be effectively reduced to one. By fixing the relative galaxy weight ($K_{gal}$) to its best-fit value, the model can now be constrained with one single parameter, namely, the $M/L$-related parameter $K_{q}$. To do this, we fit, by least-squares minimisation, a line to the minimum $\chi^{2}$ points (defined as lying within a $\Delta\chi^{2}$=2.3 above the minimal $\chi^{2}$), thus obtaining the linear relation between them. The fit is very good, $R^{2}=0.97$, reflecting the strong correlation, and from which the independent $1\sigma$ errors are derived (i.e., by the residuals around the minimal $\chi^2$ values).
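The joint minimisation of eq. \[chi2\] can be sketched as a brute-force grid search. The forward model and "calibration" values below are synthetic placeholders (illustrative only), standing in for the automated SDSS analysis of the ten calibration clusters:

```python
import numpy as np

def chi2(theta_hst, theta_model):
    """Eq. [chi2], with sigma_i = 10% of the HST-deduced Einstein radii."""
    sigma = 0.1 * theta_hst
    return float(np.sum(((theta_hst - theta_model) / sigma) ** 2))

def grid_search(theta_hst, model, kgal_grid, kq_grid):
    """Joint brute-force minimisation over (K_gal, K_q); `model` maps a
    parameter pair to predicted Einstein radii for the calibration clusters."""
    best = (np.inf, None, None)
    for kg in kgal_grid:
        for kq in kq_grid:
            c2 = chi2(theta_hst, model(kg, kq))
            if c2 < best[0]:
                best = (c2, kg, kq)
    return best

# Synthetic demo: a toy linear model with "true" parameters (0.11, 50).
a = np.array([3.0, 2.5, 4.0, 1.5, 2.0, 3.5, 1.0, 2.8, 3.2, 2.2])
b = np.array([0.5, 0.4, 0.6, 0.3, 0.2, 0.5, 0.1, 0.4, 0.5, 0.3])
model = lambda kg, kq: kq * (kg * a + (1.0 - kg) * b)
theta_hst = model(0.11, 50.0)
best_chi2, best_kgal, best_kq = grid_search(
    theta_hst, model, np.linspace(0.05, 0.20, 16), np.linspace(45.0, 55.0, 21))
```

With the true parameter pair included in the grid, the search recovers it exactly; in the real analysis the residuals around the $\chi^2$ minimum additionally give the $1\sigma$ errors, as described above.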
With this we obtain a best-fit (and $1\sigma$ errors) relative galaxy weight of $K_{gal}=11.4\pm0.6\%$, similar to the value expected based on our many previous HST analyses. However, one cannot expect the power-law lumpy component to represent only the galaxies, nor the smooth component to represent solely the DM, so that trying to assess the true physical weight of each component would be unwarranted at present. One can only know for certain that the combination of the two, with $K_{gal}$ as the relative weight, yields a good solution. It should also be mentioned that we make a prior assumption on the range of sensible $K_{gal}$ values, so that the critical curves are neither too smooth nor too lumpy. This is done by inspecting the resulting critical curves by eye, so that roughly, the degree of “complexity” of the critical curves is similar to that seen in the aforementioned HST-based analyses of some of the calibration-sample clusters, and in agreement with the general expectation for the (small) contribution of galaxies relative to the total mass. ![image](histBinsALL3.jpg){width="180mm"} For the normalisation factor, we obtain in the minimisation a best-fit value of $K_{q}=(51.6\pm1.9)\frac{d_{ls}}{d_{l}d_{s}}$ ($1\sigma$ errors). Accordingly, the $M/L$ related coefficient, $K$, equals $2.21 \times10^{28} / d_{l}^{~2-q}$, in units of \[$gr/cm^{2-q}$; with $q$=1.2\], from which we can deduce the explicit typical $M/L$ relation: $$\label{MLdeduced} M_{(<\theta)}/L_{B}=8.7 \pm 0.3 \times10^{-5} L_{i}~\theta^{2-q}~~~ [M_{\odot}/L_{\odot}],$$ where $q=1.2$ here, $\theta$ is in radians, and $L_{i}$ is the galaxy luminosity (in solar units). For example, for a typical BCG as bright as $10^{10}~L_{\odot}$, this yields an $M/L_{B}$ value of $\sim120~[M_{\odot}/L_{\odot}]$ within $3\arcsec$, or e.g., $\sim180~[M_{\odot}/L_{\odot}]$ within $5\arcsec$.
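The quoted numbers follow directly from eq. \[MLdeduced\]; a one-function check (function name ours):

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsec

def m_over_l(L_gal, theta_arcsec, q=1.2, coeff=8.7e-5):
    """Eq. [MLdeduced]: M(<theta)/L_B = coeff * L_i * theta^(2-q),
    with theta in radians and L_i in solar units."""
    return coeff * L_gal * (theta_arcsec * ARCSEC) ** (2.0 - q)
```

For a $10^{10}~L_{\odot}$ BCG this returns $\sim120~[M_{\odot}/L_{\odot}]$ within $3\arcsec$ and $\sim180~[M_{\odot}/L_{\odot}]$ within $5\arcsec$, reproducing the values in the text.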
Note that this is not the typical $M/L$ per galaxy, but overall, a scaling which describes, per $L_{\odot}$, the total projected mass enclosed along the line-of-sight and within a cylinder of radius $\theta$ centred on a galaxy, and thus includes a major contribution from the cluster DM halo along this line. Therefore, this $M/L$ term is coupled to the modelling procedure applied and includes some internal rescalings and compensations for various effects, such as the difference in the depth between the usual SDSS and HST imaging, and the red-sequence membership definition (§\[results\]), and is coupled to the LRG template and its possible minor redshift evolution (Benítez et al. 2009). In fact, here we do not explicitly take into account this evolution of red-sequence galaxies and their host clusters, so that the resulting $M/L$ relation presented here may include a compensation for this effect, which, although expected to be minor (Benítez et al. 2009), would be interesting to probe when a larger calibration sample is available. With these best-fit values we analyse each of the SDSS reference-sample clusters sequentially in an automated way. The $1\sigma$ errors on these parameters mentioned above propagate typically into a minor $\sim5\%$ error on the Einstein radius (of each individual cluster of the calibration sample), which may be too low in light of other uncertainties detailed in §\[results\]; accordingly, a more realistic error (or uncertainty) level is estimated as mentioned therein. The resulting Einstein radii of the SDSS blind analysis are compared to the results of detailed analyses in Figure \[TvTspec\], where a very good correlation with a small scatter is found ($R^{2}=0.97$, and deviations of less than $5\%$). A more explicit example of the analysis results obtained by the different approaches is given in Figure \[comparison\].
| Identifier | RA | DEC | $z_{phot}$ | $z_{err}$ | $z_{spec}$ | $\theta_{e}^{HST}$ ($\arcsec$) | $\theta_{e,auto}^{SDSS}$ ($\arcsec$) | $\theta_{e,Jack}^{SDSS}$ ($\arcsec$) | $N_{gal}$ | $\theta_{e}^{HST}$ ref | other refs |
|------------|----|-----|------------|-----------|------------|--------------------------------|--------------------------------------|--------------------------------------|-----------|------------------------|------------|
| A1689 | 197.87295 | -1.3410050 | 0.2030 | 0.0180 | 0.1832 | 46.0 | 44.1 | 38.2 | 142 | Z | B05, HSP06, L07 |
| A1703 | 198.77197 | 51.817494 | 0.2690 | 0.0180 | 0.2800 | 28.0 | 31.8 | 29.0 | 86 | Z10 | L08, R09 |
| MS1358 | 209.96066 | 62.518110 | 0.3590 | 0.0290 | 0.3273 | 13.0 | 13.4 | 12.0 | 69 | Z11a | |
| MACS1423 | 215.94948 | 24.078460 | 0.4410 | 0.0950 | 0.5430 | 20.0 | 23.2 | 20.7 | 16 | Z11b | L10 |
| A1835 | 210.25886 | 2.8785320 | 0.2100 | 0.0390 | 0.2528 | 30.5 | 30.8 | 27.4 | 65 | R10 | |
| Z2701 | 148.20456 | 51.885143 | 0.1920 | 0.0210 | 0.2151 | 9.0 | 10.7 | 8.8 | 11 | R10 | |
| A611 | 120.23668 | 36.056725 | 0.2900 | 0.0160 | 0.2873 | 21.0 | 21.5 | 23.7 | 59 | R10 | D11 |
| RXJ2129 | 322.41651 | 0.089227 | 0.2280 | 0.0140 | 0.2339 | 21.1 | 24.5 | 23.9 | 25 | Z | R10 |
| A963 | 154.26499 | 39.047228 | 0.2230 | 0.0110 | 0.2056 | 23.0 | 23.9 | 23.3 | 50 | Z | R10 |
| A2261 | 260.61326 | 32.132572 | 0.2250 | 0.0120 | 0.2233 | 35.0 | 36.3 | 35.8 | 74 | Z | U09, C12 |

Results, Discussion, And Uncertainty {#results} ==================================== The sample analysed in this work is drawn from the Hao et al. (2010) SDSS cluster catalog. Hao et al. (2010) have developed an efficient cluster-finding algorithm named the Gaussian Mixture Brightest Cluster Galaxy (GMBCG) method. The algorithm uses the Error Corrected Gaussian Mixture Model (ECGMM) algorithm (Hao et al. 2009) to identify the BCG plus red-sequence feature, and convolves the identified red-sequence galaxies with a spatial smoothing kernel to measure the clustering strength of galaxies around BCGs.
The technique was applied to the Data Release 7 of the Sloan Digital Sky Survey and produced a catalog of over 55,000 rich galaxy clusters in the redshift range $0.1 < z < 0.55$. The catalog is approximately volume limited up to redshift $z\sim0.4$ and shows high purity and completeness when tested against a mock catalog, and when compared to other well-established SDSS cluster catalogs such as MaxBCG (Koester et al. 2007; for more details see Hao et al. 2010). We go over the Hao et al. (2010) catalog and apply the method described above (§\[model\]) to each cluster, deriving its resulting Einstein radius and mass. We present here the results from the first 10,000 clusters analysed. In practice these 10,000 SDSS clusters comprise only a relatively small fraction ($\sim20\%$) of the full catalog coverage, whose analysis results we aim to present in a future work, once a larger calibration sample is available. ![Cumulative Einstein radius distribution from $\simeq$10,000 SDSS clusters ($0.1<z<0.55$). The cumulative distribution, and its upper and lower $1\sigma$ limits, are shown in *blue, red, and green solid lines*, respectively. Also plotted is the distribution predicted by the semi-analytic calculation of Oguri & Blandford (2009), normalised to the effective sky area of our sample (*black asterisks*, including errors). The two distributions disagree in two main aspects: there is a $\sim1-2$ orders-of-magnitude number discrepancy between them, and in addition, the two distributions have different slopes. The origin of the discrepancy is not clear and will be investigated elsewhere, although it may result from a different mass limit, and from the choice of concentration-mass relation and mass function used in the semi-analytic calculation. Correspondingly, we find a higher abundance of large $\theta_{e}$ clusters than predicted by the semi-analytic calculation. 
Our analysis yields $\sim40$ candidates with $\theta_{e}>40 \arcsec$ ($z_{s}=2$), with a maximum of $\theta_{e}\simeq69\pm12\arcsec$ ($z_{s}=2$) for the most massive cluster. Interestingly, this value is in agreement with the estimate by Oguri & Blandford (2009) for the largest Einstein radius. For more details see §\[results\].[]{data-label="cumulate"}](Cumu4.jpg){width="90mm"} ![To assess the difference from the semi-analytic expectation of Oguri & Blandford (2009; see also Figure \[cumulate\]), we compare the widths of the tails for $\theta_{e}>10\arcsec$, which is the lower limit taken in their calculation. The histogram shows the results from 1851 SDSS clusters following our analysis; the filled-circles curve shows the all-sky distribution from Oguri & Blandford (2009), and the open-circles curve shows the same distribution normalised to the same sky area as our distribution. Both distributions are (semi) log-normal, although with two main differences. The Oguri & Blandford (2009) distribution has a width of $\sigma=0.1448$ (in $Log(\theta_{e})$), while our distribution shows a slower (or wider) decrease, with $\sigma=0.2436$. Also, the overall number of clusters in their analysis, for the same sky area, is much lower. For more details see also Figure \[cumulate\] and §\[results\].[]{data-label="diffOguri"}](HistDif10.jpg){width="90mm"}

Einstein Radius Distribution {#R_EinsteinRadius}
----------------------------

The resulting Einstein radius distribution for this sample is seen in Figure \[histAll\] as a function of lens redshift (for constant $z_{s}=2$), along with the average and median Einstein radii for each redshift bin, which evolve in redshift and peak at $z_{l}\sim0.1-0.2$ (for $z_{s}=2$), as may be generally expected given the hierarchical growth history of clusters and the distances involved in lensing (this is further discussed in §\[dependency\]). The Hao et al. 
(2010) cluster catalog lists clusters with at least 8 members within 0.5 Mpc from the BCG. This low limit results in a realistic domination of galaxy-scale lenses ($\theta_{e}$ of the order of a few arcseconds), which are usually not massive enough to form impressive lenses with large Einstein radii and many multiple-images. The more interesting information may be the higher end of the distribution at larger radii. The concept of the largest Einstein radius in the Universe and the expected abundance of large lenses have been discussed thoroughly in the literature, and are especially of high interest as they teach us about the reliability of the standard $\Lambda$CDM model in predicting these extreme cases, as the $\Lambda$CDM model does not favor the formation of giant lenses (e.g., Broadhurst & Barkana 2008, Sadeh & Rephaeli 2008, Zitrin et al. 2009a). Note that clusters with large Einstein radii are found also towards higher redshifts. In addition, though not included in this work, the largest known lens to date, MACS J0717.5+3745, is at a similarly high redshift of $z_{l}\simeq0.55$, with $\theta_{e}\simeq55\arcsec$ for $z_{s}\sim2.5$ (see Zitrin et al. 2009a). Due to a very shallow mass distribution in this cluster (Zitrin et al. 2009a), for $z_{s}=2$ the Einstein radius will be only slightly lower, around $\theta_{e}\sim50\arcsec$ (see also the recent paper by Limousin et al. 2011 for new redshift information for this cluster). The exact number will be derived elsewhere, in the framework of the CLASH program. The abundance of larger lenses at these redshifts is usually caused (e.g., Zitrin et al. 2011a) by a spread-out, unrelaxed matter distribution. 
At these higher redshifts many clusters are not yet relaxed and still undergo mergers, so that the mass distribution is already sufficiently dense for significant lensing, but widely distributed, so that the Einstein radii of the different substructures merge to form extended critical curves (e.g., Torri et al. 2004, Dalal, Holder, & Hennawi 2004). On the other hand, at lower redshift, more concentrated clusters are those yielding larger Einstein radii, as there is more mass in the centre enhancing the critical area (see also §\[dependency\]). The blind analysis performed here yielded initially 69 candidates with $\theta_{e}>40 \arcsec$ ($z_{s}=2$), many coincident with various Abell or MACS clusters. We visually inspect each of these clusters and find that some are boosted by various effects detailed below (we omit these clusters from our further analysis), but infer that at least about half of these are most likely real giant-lens candidates, with a maximum of $\theta_{e}\simeq69\pm12 \arcsec$ ($z_{s}=2$) for the most massive candidate. We direct the reader to the works by Hennawi et al. (2007a) and Oguri & Blandford (2009), which have investigated in detail the Einstein radius abundance on various scales, based on simulations and $\Lambda$CDM expectations, and taking into account triaxialities which induce a prominent lensing bias. Our realistic, observationally-based results, free from lensing bias, are compared to some such expectations explicitly in Figure \[cumulate\], where we plot the cumulative distribution of clusters above each radius with the expected $1\sigma$ errors, propagated from the errors on the best-fitting parameters as described in §\[R\_EinsteinRadius\]. Note that the lower limit shifts the maximal Einstein radius from $\theta_{e}\simeq69 \arcsec$ to $\theta_{e}\sim57 \arcsec$ ($z_{s}=2$), close to that of the largest known lens, MACS J0717.5+3745 (Zitrin et al. 2009a). 
We note that Oguri & Blandford (2009), who examined in detail the Einstein radius distribution based on semi-analytic expectations, have derived maximal Einstein radius values of $\sim60\arcsec$, but these, as shown in their work, are very susceptible to the cosmological parameters in general and to $\sigma_{8}$ in particular, and can reach (within the $3\sigma$ confidence) values that are nearly twice as high. Their expected distribution, scaled to the same sky area as our sample and with WMAP7 parameters (Komatsu et al. 2011), is overplotted in Figure \[cumulate\]. Aside from an agreement between their expected largest Einstein radius and the largest lenses found in our analysis, the two cumulative distributions clearly disagree. Although normalised to the same effective sky area, there is a $\sim1$ order-of-magnitude number difference for small Einstein radii, which reaches a $\sim2$ orders-of-magnitude difference for higher Einstein radii, so that in addition, the two distributions also have different slopes. The origin of the discrepancy is not clear, but part of the difference may be due to a different (lower) mass limit probed by the two methods. In addition, the effect of the concentration-mass ($c-M$) relation and the chosen mass function used in semi-analytic calculations should clearly have a strong influence on the resulting distribution (e.g., Duffy et al. 2008, Macció et al. 2008, Prada et al. 2011; for differences among various $c-M$ relations), as higher concentrations entail higher inner mass and Einstein radius. We leave the examination of how these may influence the cumulative distribution for future work. To assess the difference from the semi-analytic expectation of Oguri & Blandford (2009; see also Figure \[cumulate\]), we compare the widths of the tails for $\theta_{e}>10\arcsec$, which is the lower limit taken in their calculation. 
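The tail comparison can be illustrated with a toy calculation. A minimal sketch, drawing a synthetic sample from the log-normal shape reported for the full sample (Figure \[reHistLog\]), recovering its parameters by a simple Gaussian fit in log-space, and computing the fraction above the $\theta_{e}=10\arcsec$ tail limit:

```python
import math
import random

random.seed(1)
# Log-normal shape reported for the full SDSS sample (Figure [reHistLog]):
# <Log10 theta_e> = 0.73, sigma = 0.316 (in Log10 of arcsec).
mu_true, sigma_true = 0.73, 0.316
log_theta = [random.gauss(mu_true, sigma_true) for _ in range(10000)]

# Gaussian maximum-likelihood estimate in log-space.
n = len(log_theta)
mu_hat = sum(log_theta) / n
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in log_theta) / n)

# Fraction of the sample above theta_e = 10 arcsec (Log10 theta_e = 1),
# the tail limit used in the comparison with Oguri & Blandford (2009).
frac_tail = 0.5 * math.erfc((1.0 - mu_hat) / (sigma_hat * math.sqrt(2.0)))
print(round(mu_hat, 3), round(sigma_hat, 3), round(frac_tail, 3))
```

The tail fraction comes out near 0.19, i.e. roughly 1900 of 10,000 clusters, consistent with the 1851 clusters above $10\arcsec$ entering the comparison above.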
As can be seen in Figure \[diffOguri\], both distributions are (semi) log-normal, although with two main differences. The Oguri & Blandford (2009) distribution has a width of $\sigma=0.1448$ (in $Log(\theta_{e})$), while our distribution shows a slower (or wider) decrease, with $\sigma=0.2436$. This difference is commensurate with the different decline of the cumulative distribution seen in Figure \[cumulate\]. Also, the overall number of clusters in their analysis, for the same sky area, is much lower, although this may be, as mentioned, entailed by the different mass limits probed by each method, which will be checked in future work. As mentioned above, one of the most interesting aspects of our study, namely the largest Einstein radius, is in good agreement with the estimate by Oguri & Blandford (2009).

Uncertainty and Error Estimation {#R_EinsteinRadius}
--------------------------------

Various factors of error should be taken into account when addressing the results reported in this work, though these are mostly statistical and therefore, due to the extensively large sample, are not significant. In addition, these uncertainties arise mainly from the data themselves, so that applying the method presented here to higher-end, dedicated cluster-survey data (e.g., the expected J-PAS survey; Moles et al. 2010) should produce much cleaner results.

### Possible Factors of Error {#factorsoferror}

The first error factor we have investigated is the lens photometric-redshift error. The typical photo-$z$ uncertainty for the sample BCGs (by which we determine the lens distance) is 0.015. A $\sim10\%$ error in the lens redshift can be translated into a noticeable ($>10\%$) difference in the measured Einstein radius, and only about half of the sample has spectroscopic redshifts for the BCG; thus the results for some of the clusters are affected by this error. As can be seen in Hao et al. 
(2010), the photometric redshifts were tested against the spectroscopic redshifts where possible, yielding a very tight relation and strengthening the confidence in them. We have tested the effect of the photometric redshifts on the calibration sample, and regenerated Fig. \[TvTspec\] based on the photometric redshifts (instead of the spectroscopic redshifts; see also Table \[systemo\]). Only slight differences are seen and the overall scatter remains essentially the same. To assess the effect of the photo-$z$ error more quantitatively, we analysed a sample of 500 random clusters (detailed in §\[jack\]) with the catalog photometric redshifts, and then repeated the analysis with photometric redshifts drawn randomly from a normal distribution centred on the catalog photometric redshift for each cluster, with a width of $\sigma=0.015$ (which is the photo-$z$ error quoted in Hao et al. 2010). From this we indeed obtain a low uncertainty of only $1.15\%$ on the cumulative Einstein radius distribution, and for the differential (log-normal) distribution, differences of only $0.38\%$ and $0.92\%$ on $\langle Log(\theta_{e}) \rangle$ and $\sigma$, respectively. ![Einstein radius (log) distribution, from $\simeq$10,000 SDSS clusters ($0.1<z<0.55$). The sample has a log-normal distribution, with $\langle Log(\theta_{e}\arcsec)\rangle=0.73^{+0.02}_{-0.03}$ and $\sigma=0.316^{+0.004}_{-0.002}$.[]{data-label="reHistLog"}](histALLLog.jpg){width="90mm"} ![Luminosity distribution (log). Plotted is the histogram of total B-band solar luminosities for each cluster, i.e., the sum of all cluster member luminosities. We converted the SDSS $r$-band luminosities to B-band (Vega) luminosities by the LRG template characterised in Benítez et al. (2009).[]{data-label="lumHistLog"}](Lum_hist_log.jpg){width="90mm"} Second, the SDSS imaging is shallower than typical HST imaging dedicated to SL analysis. 
Correspondingly, and compounded by the conservative cluster-finding algorithm, some of the cluster members are overlooked and often not associated with the cluster, and only the brighter galaxies are incorporated. Luckily, these are also the more massive galaxies, and thus the effect on the lens model is minor. In addition, the inclusion of (less-massive) cluster members is known to affect locally the shape of the critical curves, but not to change their overall size (e.g., Flores, Maller & Primack 2000, Meneghetti et al. 2000). The constant $K_{q}$ which was iterated for (and which includes the $M/L$ ratio) is probably boosted by the relative loss of galaxy mass-representations in our modelling of the SDSS catalog. This, however, can be very well assumed to be a relatively constant ratio and thus, along with the $M/L$ intrinsic scatter, does not affect substantially the results, as can be seen in the calibration sample comparison, where a clear consistency is found. This may not be surprising at all, since the modelling here is based on simple physical considerations: it has been well established that light approximately traces mass, and clearly, a reasonable $M/L$ relation can be incorporated. Another factor of possible contamination is the lower resolution of SDSS images compared with typical lensing images by HST. This, we find, may result in local overestimation of the BCG mass, if another cluster member is found too close to the BCG core to be resolved, thus boosting the Einstein radius (and mass), especially for higher-redshift lenses. The reason is that the smoothing procedure, or the (2D polynomial) fit, is dominated by the BCG. Therefore, although the lumpy (galaxy) component in such scenarios should not have a substantial effect, the smooth component will be over-boosted in the middle (since the BCG would be too bright), thus pushing the Einstein radius outwards. 
However, this chance alignment is naturally not too common, and in any case will affect mostly the lower end of the distribution, i.e., clusters with a small Einstein radius that is dominated fully by the BCG. Clusters with large Einstein radii will be less susceptible to such contamination, as their critical curves are not fully dominated by the BCG and contain a substantial contribution from other massive cluster members as well. A known factor of systematic error considered in related work on various samples of (often SDSS) clusters is the mis-centring of mass with respect to the BCG (e.g., Becker et al. 2007, Johnston et al. 2007a, Rozo et al. 2009, 2010, Oguri & Takada 2011). In that sense, methods which depend on a predefined centre may be affected by a scatter in the location of the BCG with respect to the true centre-of-mass, if the prior is constantly assumed to lie at the very centre of the cluster. However, in our method, there is no need to predefine the exact centre-of-mass. The smoothing procedure we implement has the advantage of being independent of a predefined centre, and the result is ultimately determined simply and directly by the galaxy (light) distribution. In fact, this has enabled us to find various such shifts between the BCG and the centre of (dark) mass (e.g., Zitrin et al. 2009b, Umetsu et al. 2011a). However, if the GMBCG catalog itself has misidentified a galaxy as the BCG, which we use as the centre-of-frame for our analysis, this may entail a shift in the analysed field, so that in principle some relevant galaxies may lie outside it. Nevertheless, since the Hao et al. (2010) catalog considers galaxies within 0.5 Mpc, even for the highest redshift clusters of the sample ($z=0.55$) this size translates into $\simeq78$ arcseconds. 
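The $\simeq78$ arcsecond figure follows from a short distance calculation. A minimal sketch, assuming an illustrative flat $\Lambda$CDM cosmology ($\Omega_m=0.3$, $H_0=70$ km/s/Mpc; the exact parameters adopted in the paper may differ slightly):

```python
import math

C_KMS, H0, OM = 299792.458, 70.0, 0.3   # illustrative flat-LCDM parameters

def comoving_distance(z, n=1000):
    """Comoving distance in Mpc by trapezoidal integration of c / H(z)."""
    dz = z / n
    f = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + (1 - OM)) for i in range(n + 1)]
    return C_KMS / H0 * dz * (sum(f) - 0.5 * (f[0] + f[-1]))

# Angular size of the 0.5 Mpc GMBCG membership aperture at z = 0.55:
z = 0.55
d_ang = comoving_distance(z) / (1 + z)        # angular-diameter distance, Mpc
theta_arcsec = 0.5 / d_ang * 206265.0         # radians -> arcsec
print(f"{theta_arcsec:.0f} arcsec")           # ~78 arcsec
```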
Since the Einstein radius is determined by the mass enclosed within it, and since only less than a handful of clusters may have such a large Einstein radius (following the $1\sigma$ upper limit, see Figure \[cumulate\]), this may only have a negligible effect over the whole sample. It should be noted that the results for $z_{l}>0.43$ should be addressed more cautiously, as the catalog is officially volume limited only up to this redshift due to the luminosity cuts that require potential member galaxies to be brighter than 0.4L\*, where L\* is the characteristic luminosity in the Schechter luminosity function. Also, for higher redshifts, the different red-sequence criteria ($r-i$ instead of $g-r$, see Hao et al. 2010) may come into play and introduce some additional noise, mostly with respect to the richness level, so that overall one should expect fewer members assigned to $z_{l}>0.43$ clusters relative to clusters below this redshift. In order to test this effect we repeated our analysis including only clusters in the volume limit of $z_{l}<0.43$ and verified that only negligible differences are seen with regard to the Einstein radius distribution (e.g., such an analysis yields a log-normal Einstein radius distribution with $\langle Log(\theta_{e}\arcsec)\rangle=0.75$ and $\sigma=0.31$, similar to the full sample; see Figure \[reHistLog\]). In addition, high-redshift clusters in the calibration sample also show a satisfying result, following the same scaling relation as lower-redshift clusters (with a scatter of up to $\simeq15\%$ with the best-fitting parameters, or up to $\sim5\%$ scatter with the Jackknife minimisation, see §\[jack\]). Also, if we exclude these from the calibration-sample minimisation, the best-fitting parameters differ by less than $3\%$ from those obtained with the full sample. ![Total luminosity versus Einstein radius. As is evident, the total luminosity itself is not an accurate indicator of the Einstein radius. 
Following the more realistic procedure described in this work is necessary in order to obtain a reliable mass model and consequently a reliable Einstein radius distribution.[]{data-label="lum_re"}](Lum_vs_reLine2.jpg){width="90mm"} ![Consistency check for our procedure: the Einstein mass versus the Einstein radius of the $\simeq$10,000 clusters analysed here. As can be expected, the Einstein radius and mass correlate with a square relation, whereas some scatter is seen since naturally the lenses are not strictly symmetric. Spurious detections did not follow the presented relation, aiding us in excluding them from further analysis.[]{data-label="mre"}](MRE.jpg){width="90mm"} We have identified, within the 69 initial candidates with $\theta_{e}>40 \arcsec$ ($z_{s}=2$), several clusters that were misidentified as higher-redshift clusters (according to their observed BCG), though they are most likely substructures of a foreground, more massive (and known) cluster on the same line-of-sight. This significantly boosts the Einstein radius, and such cases, as mentioned, were omitted from our further analysis. Due to the low chances of such alignments and resulting misidentifications, the effect of this on the full sample, and especially on the lower $\theta_{e}$ regime, is expected to be minimal.

### Quantification of Errors and Uncertainty {#jack}

In order to better assess the amount of statistical uncertainty caused by the various factors (e.g., §\[factorsoferror\]), we first examined by eye a sample of 100 random clusters from the catalog and the critical curves generated for them by our automated modelling. We found only 3 clusters whose Einstein radius is boosted due to an unresolved galaxy near the BCG, and $\sim$15 more clusters with some galaxies that by eye do not necessarily seem to have similar colors, which, if misidentified as cluster members, may introduce some additional noise. 
Clearly, this designation is not purely objective, but it still allows us to conclude, in addition to the other consistency checks we performed, that the overall noise level in our analysis is reasonable. As an additional complementary step, and regardless of the calibration sample, we searched for SDSS $z_{s}\sim2$ arcs found in Bayliss et al. (2011b) and examined how well our critical curves could in principle reproduce these (giant) arcs. Although the location of the arcs is not used as input, our blind analysis automatically reproduces critical curves that pass through them as expected, further strengthening our automated approach. An example is given in Figure \[Bayliss\]. To explicitly quantify the errors and level of uncertainty we perform two major procedures. Firstly, we perform a “Jackknife” minimisation: on top of the $\chi^2$ (eq. \[chi2\]) minimisation with the full calibration sample (to find the best parameters for the blind analysis), we perform the minimisation 10 more times, each time omitting one cluster from the fit, and then analysing it with our automated procedure to examine how well its Einstein radius is estimated. We note that, by doing so, the best-fit values for $K_{gal}$ and $K_{q}$ in each such iteration distribute around the best-fitting parameters obtained when minimising with all ten clusters together, with values up to $\simeq3\sigma$ away. With this, we obtain that the Einstein radii of all ten clusters are estimated within $\sim17\%$ of their reference values (according to HST-based detailed analyses with multiple-images as input), while 9 clusters show a scatter of up to $\sim13\%$. These (only) represent how well each of the Einstein radii of the calibration sample can be reproduced individually. 
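The leave-one-out scheme can be sketched as follows. This is a toy stand-in, not the actual minimisation: instead of refitting $K_{gal}$ and $K_{q}$ with full lens models, it fits a single multiplicative calibration factor to the nine retained clusters (using the $\theta_{e}$ values from Table \[systemo\]) and then predicts the omitted one:

```python
# Reference (HST-based) and raw automated SDSS Einstein radii, in arcsec.
theta_hst  = [46.0, 28.0, 13.0, 20.0, 30.5,  9.0, 21.0, 21.1, 23.0, 35.0]
theta_sdss = [44.1, 31.8, 13.4, 23.2, 30.8, 10.7, 21.5, 24.5, 23.9, 36.3]

deviations = []
for i in range(len(theta_hst)):
    # Least-squares fit of a single scale factor s on the nine other clusters.
    num = sum(x * y for j, (x, y) in enumerate(zip(theta_sdss, theta_hst)) if j != i)
    den = sum(x * x for j, x in enumerate(theta_sdss) if j != i)
    s_fit = num / den
    # "Analyse" the omitted cluster and compare to its reference value.
    predicted = s_fit * theta_sdss[i]
    deviations.append(abs(predicted - theta_hst[i]) / theta_hst[i])

print(f"max fractional deviation: {max(deviations):.0%}")   # ~14%
```

Even this one-parameter toy version recovers each left-out Einstein radius to within $\sim14\%$, broadly consistent with the $\sim17\%$ jackknife scatter quoted above for the full minimisation.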
Therefore, secondly, we wish to examine how the ($1\sigma$) errors drawn from the full reference-sample minimisation propagate into the statistical results, since these should depend on other quantities such as, e.g., the number of lenses per (Einstein radius) bin. For that purpose, we analyse 500 random clusters with the best-fitting parameter values, and then repeat the analysis marginalising over the $1\sigma$ errors. For the differential, log-normal Einstein radius distribution, these result in differences of $^{+2.8\%}_{-3.5\%}$ on $\langle Log(\theta_e)\rangle$ and $^{+1.31\%}_{-0.7\%}$ on $\sigma$, and upper and lower limits of $\sim18\%$ on the cumulative Einstein radius distribution (Fig. \[cumulate\]). We take these to represent the level of (statistical) uncertainty in our analysis. It should be mentioned that, although the sample analysed here was not selected based on mass or arc abundance and thus is not biased in terms of lensing, the calibration sample used to determine the model parameters ($K_{q}$ in particular) consists of 10 well-known massive clusters, which might introduce a systematic error boosting the Einstein radii. Though low- and moderate-mass lensing clusters are hard to model for comparison due to the lack of multiple-image constraints, the calibration sample contains clusters with as few as 11 members and as many as 142 members, thus spanning nearly the full richness range of the probed SDSS sample. The possible bias might be further looked into by comparing with galaxy- and group-scale lenses with known prominent arcs, often found in systematic surveys for gravitational arcs (e.g., Sand et al. 2005, Hennawi et al. 2008, Kubo et al. 2010, Bayliss et al. 2011a,b, Wen, Han & Jiang 2011), which should also be useful for extending the calibration sample and examining this effect further. 
We note that due to the approach implemented here, which does not use multiple-images as input, the profiles and magnifications are not well constrained for each cluster, and the only relevant measure which we refer to is the effective Einstein radius (and enclosed mass), as seen in Figure \[QSmag\]. Naively, one could in principle derive the mass profile for each lens by simply assuming different fiducial source redshifts and calculating the enclosed mass via their distance-redshift relation, but this would be premature at this stage: although the parameters kept constant here (at typical values) do not considerably affect the shape and size of the critical curves, the mass profile is susceptible to them, and thus a separate calibration is required for each source redshift based on the full reference sample. This, however, may indeed be plausible, as we intend to probe in future work.

### Consistency checks

To further verify that the data used here are reasonable for our purpose, especially the luminosities of cluster members to which our method is coupled, we perform a few simple self-consistency checks. The overall Einstein radius distribution is plotted in Figure \[reHistLog\], and is clearly log-normal in shape, with $\langle Log(\theta_{e}\arcsec)\rangle=0.73^{+0.02}_{-0.03}$ and $\sigma=0.316^{+0.004}_{-0.002}$. The luminosity distribution is plotted in Figure \[lumHistLog\] for comparison, and for each cluster we explicitly compare in Figure \[lum\_re\] the total luminosity to its resulting Einstein radius, where it is evident that the total luminosity itself is not an accurate enough measure of the Einstein radius. Following a more realistic procedure, as described in this work, is necessary in order to obtain reliable mass and Einstein radius distributions. 
More explicitly, as the mass is more concentrated than the light, one must choose a more concentrated representation for the galaxies (e.g., the power-law used here), which is then simply scaled by the luminosity. Similarly, we stressed that the DM is well represented by smoothing the galaxy mass distribution, which is more efficient in practice for the inner SL region than, e.g., assuming a symmetric mass distribution such as NFW, which often does not allow one to immediately uncover the multiple-images by the model. In Figure \[mre\] we plot the enclosed mass versus the Einstein radius for each cluster. Although a tight relation is expected directly from the lensing equations, this constitutes an important self-consistency check. We simply measure the Einstein radius for each cluster from the area enclosed within the critical curves (exploiting the magnification sign-changes to estimate this automatically), where the mass is measured by summing the surface-density in the pixels which fall within the critical curves. The Einstein masses correlate well with the Einstein radii, with a square relation as expected, and with a reasonable scatter, since the clusters cannot be expected to be strictly symmetric. Explicitly, the $R^{2}$ of the fit is 0.985, and the mean scatter is lower than $20\%$ (although there is an excess in the scatter of up to almost $100\%$ around 100 kpc for some individual clusters, probably due to the different factors of error elaborated in §\[factorsoferror\]). Also, in this consistency check, clusters spuriously assigned large Einstein radii due to the effects detailed above did not follow the expected relation, aiding us in excluding them from our further analysis. 
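The measurement described above (the Einstein radius from the critical-curve area, and the Einstein mass from the surface density summed within it) can be illustrated on a toy lens. A minimal sketch assuming a singular isothermal sphere, $\kappa(\theta)=\theta_E/(2\theta)$, for which the "square relation" $M_E=\Sigma_{cr}\,\pi(d_l\theta_e)^2$ is exact; masses are kept in units of the critical surface density $\Sigma_{cr}$ and angles in arcsec:

```python
import math

theta_E = 30.0   # Einstein radius of the toy lens, arcsec (illustrative)
pix = 0.5        # pixel scale, arcsec
N = 400          # N x N grid centred on the lens

kappa_sum = 0.0  # integral of kappa over the critical area (mass / Sigma_cr)
area = 0.0       # area enclosed by the tangential critical curve
for i in range(N):
    for j in range(N):
        x = (i - N / 2 + 0.5) * pix
        y = (j - N / 2 + 0.5) * pix
        r = math.hypot(x, y)
        if r < theta_E:                    # inside the SIS critical curve
            kappa_sum += theta_E / (2.0 * r) * pix * pix
            area += pix * pix

theta_eff = math.sqrt(area / math.pi)      # effective Einstein radius
mass_ratio = kappa_sum / (math.pi * theta_E ** 2)   # M_E / (pi theta_E^2), ~1
print(round(theta_eff, 2), round(mass_ratio, 3))
```

The pixel sums recover $\theta_{e}\simeq30\arcsec$ and an enclosed mass within a percent of $\pi\theta_E^2$ (in $\Sigma_{cr}$ units), i.e. the square relation checked in Figure \[mre\]; for real, asymmetric clusters the same sums are taken over the actual critical curves.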
Correlations with cluster parameters {#dependency}
------------------------------------

Since the Einstein radius correlates with the mass interior to it, some dependence on the examined cluster parameters which are related to the observed mass, such as redshift, richness, and luminosity, can be expected. For example, in Figure \[histAll\] we showed the Einstein radius distribution in different redshift bins, where it is evident that large Einstein radii are observed more frequently at the lower ($z\sim0.1$) and higher ($z\sim0.5$) redshifts of the sample, whereas at intermediate redshifts the mean Einstein radius is smaller. To further quantify this effect, and since Figure \[reHistLog\] shows that the Einstein-radius distribution is log-normal, we plot the mean (and width) of the log-normal effective Einstein radius distribution in different redshift bins. The result is seen in Figure \[3params\] (*top*), where we also fit first- and second-order polynomials to the data. Although this tendency is only of the order of the log-normal distribution widths, the mean effective Einstein radii steadily decrease from $z\sim0.1$ to $z\sim0.45$, and then increase again (see Figure \[3params\]). This tentative decrease of the mean effective Einstein radius with redshift may be related to cluster evolution. For example, lower-redshift clusters, which have had more time to collapse, relax, and virialise, are expected to have more concentrated mass distributions and thus be stronger lenses (e.g., Giocoli et al. 2011). On the other hand, the tentative increase of the mean effective Einstein radii towards $z\sim0.5$ may be related to more substructured mass distributions, whose critical curves for the several merging subclumps are merged together into a bigger critical curve (e.g., Torri et al. 2004, Dalal, Holder, & Hennawi 2004), although it is unclear at present how prominent this effect is. 
It should be noted, however, that the cluster catalog is volume limited only up to $z\sim0.43$, where the tentative rise with redshift towards larger Einstein radii sets in. No weight should therefore currently be given to the two highest redshift bins, since they may be affected by the different selection criteria applied for cluster detection above $z_{l}\sim0.43$. To test this, we examined plots of the cluster luminosity and richness versus redshift, as larger-$\theta_{e}$ clusters should in principle have higher luminosity and richness. A pronounced step is seen at $z=0.43$, so that the majority of luminous clusters ($\gtrsim2\times10^{12}~L_{\odot}$) are found at $z_{l}>0.43$, which most likely renders the rise in Einstein radii above $z_{l}>0.43$ a result of the different red-sequence criteria applied for higher-$z$ clusters. Given this, we ignore the mean effective Einstein radii above $z=0.43$ and concentrate on the indication of a decrease from low redshift towards $z=0.43$. One should quantify the effect of geometry on the observed evolution of the mean (log) Einstein radius with redshift. The motivation is to check the contribution of pure geometrical effects, versus, say, evolutionary processes of the clusters. However, this can only be done qualitatively, since in order to know the geometrical dependence of the Einstein radius, one has to know the mass profiles in advance. For example, for a point mass $\theta_{e}\propto (d_{ls}/(d_{l}d_{s}))^{0.5}$, while for an isothermal sphere $\theta_{e}\propto (d_{ls}/d_{s})$. Generally, for a power-law projected mass distribution $\propto\theta^{-w}$, the angular Einstein radius scales with the distances as $\theta_{e}\propto (d_{ls}d_{l}/d_{s})^{1/w}d_{l}^{-1}$ (of which the point mass, $w=2$, and the isothermal sphere, $w=1$, are special cases). If we take the power law to be isothermal, $w=1$, the angular Einstein radius (for constant $z_{s}=2$) decreases by $\sim25\%$ from the $z_{l}\sim0.13$ bin to the $z_{l}\sim0.45$ bin. 
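The $\sim25\%$ geometric decline can be verified numerically. A minimal sketch, assuming an illustrative flat $\Lambda$CDM cosmology ($\Omega_m=0.3$, $H_0=70$ km/s/Mpc) and the isothermal scaling $\theta_e \propto d_{ls}/d_s$:

```python
import math

C_KMS, H0, OM = 299792.458, 70.0, 0.3   # illustrative flat-LCDM parameters

def dc(z, n=2000):
    """Comoving distance in Mpc, trapezoidal integration of c / H(z)."""
    dz = z / n
    f = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + (1 - OM)) for i in range(n + 1)]
    return C_KMS / H0 * dz * (sum(f) - 0.5 * (f[0] + f[-1]))

def dls_over_ds(zl, zs):
    # In a flat universe d_ls/d_s = 1 - D_C(z_l)/D_C(z_s), so H0 cancels.
    return 1.0 - dc(zl) / dc(zs)

zs = 2.0
r_low, r_high = dls_over_ds(0.13, zs), dls_over_ds(0.45, zs)
decline = 1.0 - r_high / r_low
print(f"geometric decline for w = 1: {decline:.0%}")   # ~25%
```

Since only the ratio $d_{ls}/d_s$ enters for $w=1$, the result is independent of $H_0$ and only mildly sensitive to $\Omega_m$.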
For mass profiles steeper than isothermal, $\theta_e$ decreases more rapidly with lens redshift, while for flatter profiles it increases (or shows a more complex behavior, such as increasing and then decreasing). The observed monotonic decline between $z_{l}\sim0.13$ and $z_{l}\sim0.45$ seen in Figure \[3params\] is of $\sim40\%$, so that a profile steeper than isothermal ($w\simeq1.5$) is needed to fully explain this decline by geometrical means. Any possible significance of the redshift evolution of $\theta_e$ for cluster evolution can thus only be assessed once the mass profile is known. We also compare the observed decline to semi-analytic calculations, in which standard concentration-mass relations and mass functions are incorporated. Quite independently of their detailed assumptions, such calculations yield *increasing* mean Einstein radii with lens redshift, opposite to our result. One such semi-analytic calculation we use for comparison here (M. Redlich, private communication) is based on the Press & Schechter (1976) mass function. Effective Einstein radii are derived from assumed relaxed (i.e. non-merging), triaxial NFW haloes, adopting the concentration-mass relation from Jing & Suto (2002) and including only halos with $M\ge10^{14}~M_{\odot}$. This calculation yields, similar to the result of Oguri & Blandford (2009), a larger fraction of smaller Einstein radii than our findings, and the mean effective (logarithmic) Einstein radius increases with redshift, contrary to the decline we observe here. This, however, may be partly explained (like the discrepancy in Figure \[cumulate\]) by the lower mass limit (more lower-mass clusters at higher redshifts would alleviate the discrepancy), or by the choice of mass function and concentration-mass relation implemented therein. A more extensive calibration sample needs to be obtained before strict conclusions can be drawn. 
In principle, however, our results may help to pin down the true concentration-mass relation, in order to be compared with the evolution trends obtained in semi-analytic calculations, independent cluster evolution studies (e.g., Maughan et al. 2008 and references therein), or related numerical simulations (e.g. Duffy et al. 2008, Macció et al. 2008, Prada et al. 2011). Finally, we note that the (logarithmic) mean effective physical Einstein radius, i.e. the angular Einstein radius times the angular-diameter distance to the cluster, is constant across the redshift range $0.1<z_{l}<0.43$ with $\langle Log(\theta_{e}~[$kpc$])\rangle=1.25\pm0.03$. Tentatively, near-constant physical Einstein radii with redshift can be achieved with standard, e.g. NFW density profiles if the concentration-mass relation evolves steeply with redshift. Adopting a relation of the form $c\propto M^{-\alpha}(1+z)^{-\beta}$ (e.g. Duffy et al. 2008), we find that $\beta\approx-3$ is required to reproduce the trend seen in Figure \[3params\]. Current well-established concentration-mass relations extracted from numerical simulations find $\beta\sim-1$ (e.g., Bullock et al. 2001, Duffy et al. 2008). We emphasize once more that these conclusions are tentative and preliminary, as the results of this work should be revised once a more significant calibration sample is available. A more profound investigation of the indicated redshift evolution and its comparison to numerical simulations would thus be premature. ![Evolution of the mean effective logarithmic Einstein radii with redshift (*top*), total luminosity (*centre*), and richness (*bottom*). The horizontal error bars mark the bin widths, and the vertical error bars mark the width of the distribution in the corresponding bin, $\sigma$. In each plot we least-square fit a linear curve to the data (solid lines), where for the redshift plot (top) we also show a second-order fit (dashed line). 
The curve fits in the redshift and richness plots include only the first seven bins, due to incompleteness of the catalog at higher redshifts, governed by higher richness and luminosity clusters. In the total luminosity plot, we do not show the full luminosity range, since there are too few clusters to deduce a representative distribution for higher luminosity bins than those shown. See §\[dependency\] for more details.[]{data-label="3params"}](AsfuncOfZ2.jpg "fig:"){width="80mm"} ![Same caption as the top panel.](AsFuncOfL.jpg "fig:"){width="80mm"} ![Same caption as the top panel.](AsfuncOfN.jpg "fig:"){width="80mm"} Apart from the redshift dependence, we repeated the above procedure and examined the mean effective logarithmic Einstein radii also in different total luminosity and richness bins (namely, how many red-sequence galaxies are assigned to each cluster by the Hao et al. 2010 catalog). The evolution of $\langle Log(\theta_{e}\arcsec)\rangle$ and $\sigma$ for these is plotted in Figure \[3params\] (*centre* and *bottom*), respectively. A (mild, given the distribution widths) trend is uncovered as a function of both luminosity and richness, so that on average, higher luminosity and richness clusters show larger Einstein radii. These trends are quite expected, since richer clusters are naturally more luminous and more massive, correspondingly (for established mass-richness relations see, e.g., Rozo et al. 2009, Bauer et al. 2012). In addition, although not specifically shown here, for completeness we also examined both the richness and luminosity in the different redshift bins. We note that the richness is $\sim$constant throughout the volume-limited redshift range, while $\langle Log(L_{tot}~[L_{\odot}])\rangle$ monotonically increases by $\simeq0.4$ from $z\sim0.1$ to $z\sim0.45$, as can be generally expected from passive evolution of the cluster galaxies (although also here the trend is insignificant given the widths of the distribution in each bin, which is of the same size as the increase throughout the range, $\sigma\simeq0.4$). 
We note, in addition, that with respect to the *widths* of the logarithmic distributions in different redshift, luminosity, or richness bins, we do not uncover any prominent trend (which could hold information on, say, the level of different population mix in the different bins).

Summary
=======

In this paper we presented an automated strong-lens modelling tool, which is used to efficiently derive the Einstein radius and mass distributions of 10,000 SDSS clusters, found by the Hao et al. (2010) cluster-finding algorithm in DR7 data. We adopt the well-tested approach that light overall traces mass (with a more realistic representation of the galaxies and DM, see §\[model\]), and normalise according to a typical average mass-to-light relation established here, to obtain a reliable deflection field based on the distribution and luminosity of bright cluster members. This procedure, as we have shown in many previous SL analyses, is sufficient to derive the critical curves with enough accuracy to immediately identify many multiple images across the lensing field, as the primary mass distribution is initially well-constrained. Here we used a subsample of 10 well-studied clusters covered by both SDSS and HST to calibrate and test our analysis method, and showed that a remarkably accurate determination of the Einstein radius can be made in an automated way, based on the light distribution of bright galaxies, and scaled by their luminosity. A tight correlation is seen between the Einstein radii derived in detailed analyses of HST data and using multiple images as input, and those from the “blind” survey tool presented here, operated on the same clusters but in SDSS data and without using any multiple-image information as input. This efficient modelling method enables us to present the first observationally-based representative Einstein radius distribution, based on a coherent unbiased sample of the first 10,000 clusters in the Hao et al. 
(2010) catalog, larger by a few orders of magnitude than the number of SL clusters analysed to date. For this all-sky representative sample the Einstein radius distribution is log-normal in shape, with $\langle Log(\theta_{e}\arcsec)\rangle=0.73^{+0.02}_{-0.03}$, $\sigma=0.316^{+0.004}_{-0.002}$, and with higher abundance of large $\theta_{e}$ clusters than predicted by $\Lambda$CDM, and with a maximum of $\theta_{e}\simeq69\pm12 \arcsec$ ($z_{s}=2$) for the most massive candidate, in agreement with semi-analytic calculations. In addition to characterising the overall Einstein distribution, we also uncover various relations with cluster properties listed in the probed catalog. For example, as may be expected, a clear relation is seen between the logarithmic Einstein radius distribution mean (for $z_{s}=2$), and the luminosity or richness, so that richer and more luminous clusters exhibit, on average, larger Einstein radii. An especially intriguing trend is found with the cluster redshift. On average, the mean effective Einstein radii steadily increase from $z\sim0.45$ to $z\sim0.1$. If real, and not fully accounted for by geometry (this requires knowledge of the mass profile, see §\[dependency\]), this may be a result of cluster evolution and relaxation processes, which make lower-$z$ clusters more concentrated, thus boosting the mass in the centre and thus the Einstein radius. Reexamining the log-normal Einstein distribution in physical scales rather than angular scales, we obtain best-fitting values of $\langle Log(\theta_{e}~[$kpc$])\rangle=1.418\pm0.006$ and $\sigma=0.30\pm0.01$, for the full sample. Subdividing this distribution, the mean effective Einstein radii are constant throughout the volume-limited range of the catalog ($0.1<z_{l}<0.43$), $\langle Log(\theta_{e}~[$kpc$])\rangle=1.25\pm0.03$. 
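The physical Einstein radius used above is simply the angular radius scaled by the angular-diameter distance to the lens. A minimal sketch of that conversion, again under an assumed flat $\Lambda$CDM cosmology ($\Omega_m=0.3$, $H_0=70$ km/s/Mpc; illustrative values, not taken from the text):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def angular_diameter_distance(z, omega_m=0.3, h0=70.0, n=2000):
    """d_A = D_C/(1+z) [Mpc] for a flat LCDM cosmology (trapezoidal rule)."""
    dz = z / n
    f = [1.0 / math.sqrt(omega_m * (1.0 + i * dz) ** 3 + (1.0 - omega_m))
         for i in range(n + 1)]
    d_c = (C_KM_S / h0) * dz * (sum(f) - 0.5 * (f[0] + f[-1]))
    return d_c / (1.0 + z)

def theta_arcsec_to_kpc(theta_arcsec, z_lens):
    """Convert an angular Einstein radius [arcsec] into a physical one [kpc]."""
    rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return rad * angular_diameter_distance(z_lens) * 1.0e3

# e.g. the sample mean <Log theta_e["]> = 0.73 (i.e. ~5.4"), placed at z_l = 0.3:
print(round(theta_arcsec_to_kpc(10 ** 0.73, 0.3), 1), "kpc")
```

The result (a few tens of kpc) is consistent with the full-sample $\langle Log(\theta_{e}~[$kpc$])\rangle\approx1.4$ quoted above.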
The redshift trend tentatively seen in our results could for instance be explained by a concentration-mass relation that evolves more steeply in redshift than found in numerical simulations. If confirmed, the redshift evolution indicated here could help derive an observational concentration-mass relation once a broader calibration sample is available. The results presented here are affected to some extent by statistical noise and uncertainty as detailed above (typically $\leq18\%$; §\[results\]), but it should be stressed that these uncertainties arise mostly from the data themselves, and not from the modelling method, which in light of higher-end data will produce much cleaner results. In fact, our SL algorithm could independently verify, or at least probe, the cluster catalog and its cluster-finding algorithm, by marking possible misidentified cluster candidates which do not follow the relations we obtained throughout this work. Such an efficient modelling method can also aid in actually finding massive large lenses and many multiple images, which in turn could be used to fine-tune the mass model and profile, especially when redshift information and preferably high-resolution deep space-imaging data are available. Combined with complementary data such as weak lensing, this will allow for an extensive examination of many other cluster properties, such as the mass-concentration relation, or a “universal” mass profile shape (e.g., the CLASH program, Postman et al. 2011; see also Umetsu et al. 2011a,b). Further analysis results of the SDSS cluster catalog of Hao et al. (2010) will be presented in upcoming work, where we will aim to include a larger reference calibration sample to validate further the results and uncertainties presented here.

acknowledgments {#acknowledgments .unnumbered}
===============

We thank the anonymous referee for significant and very constructive comments. 
AZ is grateful for the John Bahcall excellence prize which further encouraged this work, and to Sharon Sadeh, Dan Maoz, Irene Sendra, Gregor Seidel, Elisabeta Lusso, Björn M. Schäfer, Matthias Redlich, Joseph Hennawi and Andrea Macció, for useful discussions. This work was supported by contract research “Internationale Spitzenforschung II/2” of the Baden-Württemberg Stiftung, and by the transregional collaborative research centre TR 33 of the Deutsche Forschungsgemeinschaft. Part of this work is based on observations made with the NASA/ESA Hubble Space Telescope. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. 
This work was supported in part by the FIRST program “Subaru Measurements of Images and Redshifts (SuMIRe)”, World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and Grant-in-Aid for Scientific Research from the JSPS (23740161). ![image](comparisonNEW.jpg){width="140mm"} Abazajian, K., et al., 2003, AJ, 126, 2081 Abazajian, K.N., et al., 2009, ApJS, 182, 543 Bartelmann, M., Steinmetz, M., Weiss, A., 1995, A&A, 297, 1 Bartelmann, M., 2010, arXiv:1010.3829 Bauer, A.H., Baltay, C.,; Ellman, N., Jerke, J., Rabinowitz, D., Scalzo, R., 2012, arXiv:1202.1371 Bayliss, M.B., Gladders, M.D., Oguri, M., Hennawi, J.F., Sharon, K., Koester, B.P., Dahle, H., 2011a, ApJ, 727L, 26 Bayliss, M.B., Hennawi, J.F., Gladders, M.D., Koester, B.P., Sharon, K., Dahle, H., Oguri, M., 2011b, ApJS, 193, 8 Becker, M.R., et al., 2007, ApJ, 669, 905 Benítez, N., et al., 2009, ApJ, 691, 241 Bradač, M., et al., 2005, A&A, 437, 49 Bradač, M., et al., 2006, ApJ, 652, 937 Broadhurst, T., et al. 2005a, ApJ, 621, 53 Broadhurst, T., Takada, M., Umetsu, K., Kong, X., Arimoto, N., Chiba, M., Futamase, T., 2005b, ApJ, 619, 143 Broadhurst, T. & Barkana, R., 2008, MNRAS, 390, 1647 Broadhurst, T, Umetsu, K, Medezinski, E., Oguri,M., Rephaeli, Y., 2008, ApJ 685, L9 Bullock J.S., Kolatt T.S., Sigad Y., Somerville R.S., Kravtsov A.V., Klypin A.A., Primack J.R., Dekel A., 2001, MNRAS, 321, 559 Cabanac, R.A., et al., 2007, A&A, 461, 813 Coe, D., Benítez, N., Broadhurst, T., Moustakas, L.A., 2010, ApJ, 723, 1678 Coe, D., Fuselier, E., Benítez, N., Broadhurst, T., Frye, B., Ford, H., 2008, ApJ, 681, 814 Coe, D., et al., 2012, arXiv:1201.1616 Corless, V.L. & King, L.J., 2009, MNRAS, 396, 315 Dalal, N., Holder, G., Hennawi, J.F., 2004, ApJ, 609, 50 Diego J. M., Sandvik H. B., Protopapas P., Tegmark M., Benítez N., Broadhurst T., 2005, MNRAS, 362, 1247 Donnarumma, A. 
et al., 2011, A&A, 528A, 73 Duffy, A.R., Schaye, J., Kay, Scott T.; Dalla Vecchia, C., 2008, MNRAS, 390L, 64 Eisenstein, D.J., 2005, ApJ, 633, 560 Flores, R.A., Maller, A.H., Primack, J.R., 2000, ApJ, 535, 555 Gavazzi, R., Fort, B., Mellier, Y., Pelló, R., Dantel-Fort, M., 2003, A&A, 403, 11 Giocoli, C., Meneghetti, M., Bartelmann, M., Moscardini, L., Boldrin, M., 2011, arXiv:1109.0285 Gralla, M.B., et al., 2011, ApJ, 737, 74 Grillo, C., Eichner, T., Seitz, S., Bender, R., Lombardi, M., Gobat, R., Bauer, A., 2010, ApJ, 710, 372 Halkola A., Hildebrandt H., Schrabback T., Lombardi M., Bradač M., Erben T., Schneider P., Wuttke D., 2008, A&A, 481, 65 Halkola, A., Seitz, S., Pannella, M., 2006, MNRAS, 372, 1425 Hao, J., et al., 2009, ApJ, 702, 745 Hao, J., et al., 2010, ApJS, 191, 254 Hennawi J.F., Dalal N., Bode P., Ostriker J.P., 2007a, ApJ, 654, 714 Hennawi J.F., Dalal N., Bode P., 2007b, ApJ, 654, 93 Hennawi J.F., et al., 2008, AJ, 135, 664 Hilbert, S., White, S.D.M., Hartlap, J., Schneider, P., 2007, MNRAS, 382, 121 Hildebrandt, H., et al., 2011, arXiv:1103.4407 Horesh, A., Maoz, D., Hilbert, S., Bartelmann, M., arXiv:1101.4653 Jing, Y. P. & Suto, Y., 2002, ApJ, 574, 538 Johnston, D.E., et al., 2007a, arXiv:0709.1159 Johnston, D.E., Sheldon, E.S., Tasitsiomi, A., Frieman, J.A., Wechsler, R.H., McKay, T.A., 2007b, ApJ, 656, 27 Jullo, E., Kneib, J.-P., Limousin, M., Elíasdóttir, A., Marshall, P. J., Verdugo, T., 2007, NJPh, 9, 447 Keeton, C. R. 2001, preprint \[astro-ph/0102340\] Kneib, J.-P., Ellis, R. S., Smail, I., Couch, W. J., Sharples, R. 
M , 1996, ApJ, 471, 643 Koester, B.P., et al., 2007, ApJ, 660, 239 Komatsu, E., et al., 2011, ApJS, 192, 18 Kubo, J.M., et al., 2010, ApJ, 724L, 137 Liesenborgs, J., De Rijcke, S., Dejonghe, H., 2006, MNRAS, 367, 1209 Liesenborgs, J., De Rijcke, S., Dejonghe, H., Bekaert, P., 2007, MNRAS, 380, 1729 Liesenborgs, J., De Rijcke, S., Dejonghe, H., Bekaert, P., 2009, MNRAS, 397, 341 Limousin, M., et al., 2007, ApJ, 668, 643 Limousin, M., et al., 2008, A&A, 489, 23 Limousin, M., et al., 2010, MNRAS, 405, 777 Limousin, M., et al., 2011, arXiv:1109.3301 Macció, Andrea V., Dutton, Aaron A., van den Bosch, Frank C., 2008, MNRAS, 391, 1940 Mandelbaum, R., Seljak, U., Cool, R.J., Blanton, M., Hirata, C.M., Brinkmann, J., 2006, MNRAS, 372, 758 Marshall, P.J., Hogg, D.W., Moustakas, L.A., Fassnacht, C.D., Bradač, M., Schrabback, T., Blandford, R.D., 2009, ApJ, 694, 924 Maughan, B.J., Jones, C., Forman, W., Van Speybroeck, L., 2008, ApJS, 174, 117 Meneghetti, M., Bolzonella, M., Bartelmann, M., Moscardini, L., Tormen, G., 2000, MNRAS, 314, 338 Meneghetti, M., Fedeli, C., Pace, F., Gottlöber, S., Yepes, G., 2010a, A&A, 519A, 90 Meneghetti, M., Rasia, E., Merten, J., Bellagamba, F., Ettori, S., Mazzotta, P., Dolag, K., Marri, S., 2010b, A&A, 514A, 93 Meneghetti, M., Fedeli, C., Zitrin, A., Bartelmann, M., Broadhurst, T., Gottlöeber, S., Moscardini, L., Yepes, G., 2011, arXiv:1103.0044, A&A in press Merten, J., Cacciato, M., Meneghetti, M., Mignone, C., Bartelmann, M., 2009, A&A, 500, 681 Merten, J., et al., 2011, MNRAS, 417, 333 Moles, M., Sánchez, S.F., Lamadrid, J.L., Cenarro, A.J., Cristóbal-Hornillos, D., Maicas, N., Aceituno, J., 2010, PASP, 122, 363 Narayan, R. & Bartelmann, M., Lectures on Gravitational Lensing, 1996, arXiv:astro-ph/9606001v2 Oguri, M. & Blandford, R.D., 2009, MNRAS, 392, 930 Oguri, M., et al., 2009, ApJ, 699, 1038 Oguri, M. & Takada, M. 2011, PhRvD, 83b, 023008 Ponente, P.P. 
& Diego, J.M., 2011, A&A, 535A, 119 Postman, et al., 2011, arXiv:1106.3328 Prada, F., Klypin, A.A., Cuesta, A.J., Betancort-Rijo, J.E., Primack, J., 2011, arXiv:1104.5130 Press, W.H. & Schechter P., 1974, ApJ, 187, 425 Puchwein, E. & Hilbert, S., 2009, MNRAS, 398, 1298 Richard, J., et al., 2010, MNRAS, 404, 325 Richard, J., Pei, L., Limousin, M., Jullo, E., Kneib, J. P., 2009, A&A, 498, 37 Rozo, E., et al., 2009, ApJ, 699, 768 Rozo, E., et al., 2010, ApJ, 708, 645 Sadeh, S. & Rephaeli, Y., 2008, MNRAS, 388, 1759 Sand, D.J., Treu, T., Ellis, R.S., Smith, G.P., 2005, ApJ, 627, 32 Seljak, U., et al., 2005, PhRvD, 71j3515 Sereno, M.; Jetzer, Ph.; Lubini, M, 2010, MNRAS, 403, 2077 Sheldon, E.S., et al., 2009, ApJ, 703, 2217 Tegmark, M., et al., 2004, ApJ, 606, 702 Tegmark, M., et al., 2006, PhRvD, 74l3507 Torri, E., Meneghetti, M., Bartelmann, M., Moscardini, L., Rasia, E., Tormen, G., 2004, MNRAS, 349, 476 Tremonti, C.A., et al., 2004, ApJ, 613, 898 Umetsu, K., et al., 2009, ApJ, 694, 1643 Umetsu, K., Broadhurst, T., Zitrin, A., Medezinski, E., Hsu, L.-Y., 2011a, ApJ, 729, 127 Umetsu, K., Broadhurst, T., Zitrin, A., Medezinski, E., Coe, D., Postman, M., 2011b, arXiv:1105.0444 Wambsganss, J., Cen, R., Ostriker, J.P., Turner, E.L., 1995, Sci, 268, 274 Webster, R.L., Hewett, P.C., Irwin, M.J., 1988, AJ, 95, 19 Wen, Z-L., Han, J-L., Jiang, Y.-Y., 2011, RAA, 11.1185 Wojtak, R., Hansen, S.H., Hjorth, J., 2011, Nature, 477, 567 Zitrin, A., Broadhurst, T., Rephaeli, Y., Sadeh, S., 2009a, ApJ, 707L, 102 Zitrin, A., et al., 2009b, MNRAS, 396, 1985 Zitrin, A., et al., 2010, MNRAS, 408, 1916 Zitrin, A., Broadhurst, T., Barkana, R., Rephaeli, Y., Benítez, N., 2011a, MNRAS, 410, 1939 Zitrin, A., Broadhurst, T., Coe, D., Liesenborgs, J., Benítez, N., Rephaeli, Y., Ford, H., Umetsu, K., 2011b, MNRAS, 413, 1753 Zitrin, A., et al., 2011c, ApJ, 742, 117 \[lastpage\] [^1]: E-mail:adiz@wise.tau.ac.il
---
abstract: 'In this note, by making use of a hypergeometric series identity derived by Guillera, I prove a Ramanujan-type series for Catalan’s constant. The convergence rate of this central binomial series representation surpasses those of all known similar series, including a classical formula by Ramanujan and a recent formula by Lupas. Interestingly, this suggests that an Apéry-like irrationality proof could be found for this constant.'
address: 'Institute of Physics, University of Brasília, P.O. Box 04455, 70919-970, Brasília-DF, Brazil'
author:
- 'F. M. S. Lima'
title: 'A rapidly converging Ramanujan-type series for Catalan’s constant'
---

Keywords: Catalan’s constant, Hypergeometric series, Central binomial sums, Convergence acceleration. MSC: 11Y60, 11M06, 30B50, 33F05, 33C20.

Introduction
============

Catalan’s constant, so named in honor of Eugène C. Catalan (1814-1894), who first developed series and definite-integral representations for it, is a classical mathematical constant which may be defined as [@Finch] $$G := \sum_{n=0}^\infty{\frac{(-1)^n}{(2n+1)^2}} = 0.9159655941\ldots \label{eq:def}$$ This constant is a special value of some important functions such as the Dirichlet beta function $\,\beta{(s)} := \sum_{n=0}^\infty{(-1)^n/(2n+1)^s}$, namely $\,\beta{(2)} = G$, and the Clausen function $\,\mathrm{Cl}_2(\theta) := \Im{\left(\mathrm{Li}_2(e^{i\theta})\right)}$, namely[^1] $$\mathrm{Cl}_2\left(\frac{\pi}{2}\right) = G \label{eq:Clausen}$$ and $\,\mathrm{Cl}_2(3\pi/2) = -\,G$. For positive integer values of $\,n$, we can trace an analogy between $\beta{(n)}$ and $\zeta{(n)} := \sum_{k=1}^\infty{1/k^n}$, the Riemann zeta function, since both $\zeta(2n)$ and $\beta(2n-1)$ are rational multiples of $\,\pi^{2n}$ and $\pi^{2n-1}$, respectively, whereas finite closed-form expressions for both $\zeta(2n+1)$ and $\beta(2n)$ in terms of other basic constants are unknown [@Lima2011]. 
However, the proof by Apéry (1978) that $\zeta{(3)}$ is irrational [@Apery] has created an ‘asymmetry’ in that analogy because the irrationality of $\,\beta(2)$, though strongly suspected, remains unproven.[^2] From the point of view of numerical computation, Catalan himself (1865) computed $G$ to $14$ decimal places [@Catalan1865]. By making use of a technique from Kummer, Bresse (1867) computed it to $24$ decimals, a result that was improved to $32$ decimals by Glaisher (1913) [@Wolfram]. With the advent of computers, $G$ has been computed to a large number of digits. For instance, Yee and Chan (2009) computed it to $31$ billion decimals [@Yee]. Their computation employs two formulas, one of which is a central binomial formula due to Ramanujan (1915) [@Ramanujan]:[^3] $$G = \frac{\pi}{8} \, \ln{(2+\sqrt{3})} + \frac38 \, \sum_{n=0}^{\infty}{\frac{1}{\,(2n+1)^2 \, \binom{2n}{n}}} \, . \label{eq:Rama}$$ On searching for similar rapidly converging series, Lupas (2000) found the following alternating series [@Lupas] $$G = -\,\frac{1}{64} \, \sum_{n=1}^{\infty}{(-1)^n \, \frac{\, 2^{8n} \, (40n^2-24n+3)}{\,n^3 \, (2n-1) \, \binom{2n}{n} \, \binom{4n}{2n}^2 \, }} \, . \label{eq:Lupas}$$ This series converges so fast that it has been implemented in *Mathematica*$^\mathrm{TM}$ (version $6$) for computing $G$. On searching for new congruences modulo primes, Z.-W. Sun (2011) has pointed out that the following central binomial series should converge to $G$ (see Conjecture A7 of Ref. [@Sun]): $$G \overset{?} = -\,\frac12 \, \sum_{n=1}^{\infty}{(-1)^n \, \frac{\,(3n-1) \, 8^n}{n^3\, {\binom{2n}{n}}^3 }} \, . \label{eq:Sun}$$ He comments that this formula could be derived from a hypergeometric identity proved by Guillera in a recent work [@Guillera], but a complete proof is not provided in Ref. [@Sun]. Here in this note, starting from Guillera’s hypergeometric identity, I prove a Ramanujan-type series representation for Catalan’s constant similar to that in Eq.  
whose convergence rate surpasses that of all known central binomial series representations for this constant. A new Ramanujan-type series for $\,G$ ===================================== Let us adopt the usual notation for the generalized hypergeometric series: $$_{p} F_{q} \! \left( \! \begin{array}{r} a_1 , \ldots , a_p \\ b_1 , \ldots, b_q \end{array} ; z \right) = \sum_{n=0}^\infty{\frac{\left(a_1\right)_n \, \ldots \, \left(a_p\right)_n}{\left(b_1\right)_n \, \ldots \, \left(b_q\right)_n} \, \frac{z^n}{n!}} \, ,$$ where $\,\left(a\right)_n = \Gamma{(a+n)} / \Gamma{(a)}\,$ is the Pochhammer symbol. Our main result makes use of the lemma below, which determines a special value for $\, _{3} F_{2} \! \left( \! \begin{array}{r} a_1 , a_2 , a_3 \\ b_1 , b_2 \end{array} ; z \right)$, a function that converges at $\,z=1\,$ whenever $\,\Re{\left\{(b_1+b_2)-(a_1+a_2+a_3)\right\}} > 0\,$ (see, e.g., Eq. (2.2.1) of Ref. [@Slater]). \[lem:3F2\] $$_{3} F_{2} \! \left( \! \begin{array}{r} \frac12 , 1 , 1 \\ \frac32 , \frac32 \end{array} ; 1 \right) = 2\,G . \label{eq:3F2}$$ We start from a well-known integral representation for generalized hypergeometric functions (see, e.g., Eq. (1.2) of Ref [@Kratt]), namely $$_{p+1} F_{p} \! \left( \! \begin{array}{l} \alpha , \alpha_1 , \ldots , \alpha_p \\ \gamma , \beta_1 , \ldots , \beta_{p-1} \end{array} \! ; \, t \right) = \frac{\Gamma{(\gamma)}}{\Gamma{(\alpha)} \, \Gamma{(\gamma-\alpha)}} \int_0^1{z^{\alpha-1}(1-z)^{\gamma-\alpha-1} \, _{p} F_{p-1} \! \left( \! \begin{array}{l} \alpha_1 , \ldots , \alpha_p \\ \beta_1 , \ldots , \beta_{p-1} \end{array} \! ; t z \right) dz} ,$$ valid whenever $\,\Re{(\alpha)}>0\,$ and $\,\Re{(\gamma-\alpha)}>0$. It then follows that $$\begin{aligned} _{3} F_{2} \! \left( \! \begin{array}{r} \frac12 , 1 , 1 \\ \frac32 , \frac32 \end{array} ; 1 \right) &=& \frac{\Gamma{(\frac32)}}{\Gamma{(\frac12)} \, \Gamma{(1)}} \, \int_0^1{z^{-\frac12}\,(1-z)^0 \, _{2} F_{1} \! \left( \! 
\begin{array}{r} 1 , 1 \\ \frac32 \end{array} ; 1\,z \right) \, dz} \nonumber \\ &=& \frac12 \, \int_0^1{\frac{\,_{2} F_{1} \! \left( \! \begin{array}{r} 1 , 1 \\ \frac32 \end{array} ; z \right)}{\sqrt{z}} \: dz} \nonumber \\ &=& \int_0^1{\,_{2} F_{1} \! \left( \! \begin{array}{r} 1 , 1 \\ \frac32 \end{array} ; x^2 \right) \, dx} \, . \label{eq:8}\end{aligned}$$ Now, let us show that, for all $\,x \in (0,1)$, $$_{2} F_{1} \! \left( \! \begin{array}{r} 1 , 1 \\ \frac32 \end{array} ; x^2 \right) = \frac{\,\arcsin{x}}{\,x\,\sqrt{1-x^2}\:} \, . \label{eq:pau}$$ It is well-known that $$_{2} F_{1} \! \left( \! \begin{array}{r} \frac12 , \frac12 \\ \frac32 \end{array} ; x^2 \right) = \frac{\,\arcsin{x}}{x} \label{eq:Slater}$$ for all non-null values of $\,x\,$ for which the hypergeometric series on the left-hand side converges (see Eq. (1.5.10) of Ref. [@Slater]). Two successive applications of the Euler transformation formula $$_{2} F_{1} \! \left( \! \begin{array}{r} a , b \\ c \end{array} ; z \right) = (1-z)^{-a} ~ _{2} F_{1} \! \left( \! \begin{array}{r} a , c-b \\ c \end{array} ; \frac{z}{z-1} \right)$$ on Eq.  lead us to $$\frac{\arcsin{x}}{x} = \frac{1}{\sqrt{1-x^2}} ~ \, _{2} F_{1} \! \left( \! \begin{array}{r} \frac12 , 1 \\ \frac32 \end{array} ; \frac{-\,x^2}{1-x^2} \right)$$ and, after some algebra, $$\frac{\arcsin{x}}{x} = \frac{1-x^2}{\sqrt{1-x^2}} ~ \, _{2} F_{1} \! \left( \! \begin{array}{r} 1 , 1 \\ \frac32 \end{array} ; x^2 \right) .$$ This completes the proof of Eq. . From Eqs.  and , one has $$_{3} F_{2} \! \left( \! \begin{array}{r} \frac12 , 1 , 1 \\ \frac32 , \frac32 \end{array} ; 1 \right) = \int_0^1{\frac{\arcsin{x}}{x\,\sqrt{1-x^2}\:} \: d x} \, .$$ The trigonometric substitution $\,x = \sin{\theta}\,$ reduces this integral to $$_{3} F_{2} \! \left( \! 
\begin{array}{r} \frac12 , 1 , 1 \\ \frac32 , \frac32 \end{array} ; 1 \right) = \int_0^{\,\pi/2}{\!\frac{\theta}{\sin{\theta}} \: d\theta} \, , \label{eq:trigsub}$$ which can be solved in terms of the dilogarithm function $\,\mathrm{Li}_2{(z)}\,$ as follows. First, note that $$\int{\frac{\theta}{\,\sin{\theta}} \: d\theta} = \theta \, \ln{\!\left[ \frac{1-\exp{(i \theta)}}{1+\exp{(i \theta)}} \right]} +i \left[ \mathrm{Li}_2{\left(-e^{i \theta}\right)} - \mathrm{Li}_2{\left(e^{i \theta}\right)}\right] , \label{eq:primitiva}$$ as can be easily checked by differentiating the right-hand side. Then $$\begin{aligned} & & \int_0^{\,\pi/2}{\frac{\theta}{\,\sin{\theta}} \: d\theta} \nonumber \\ &=& \frac{\pi}{2} \, \ln{\!\left( \frac{1-i}{1+i} \right)} +i\left[ \mathrm{Li}_2{(-i)} - \mathrm{Li}_2{(i)}\right] -\left\{ \lim_{a \rightarrow 0^{+}}{a \, \ln{\!\left[ \frac{1-e^{(i a)}}{1+e^{(i a)}} \right]}} +i \left[ \mathrm{Li}_2{(-1)} - \mathrm{Li}_2{(1)}\right]\right\} \nonumber \\ &=& \frac{\pi}{2} \, \ln{\!\left[ \frac{(1-i)^2}{2} \right]} +i \left[-2\,i \, \mathrm{Cl}_2{\left(\frac{\pi}{2}\right)}\right] -\left\{0 +i \left( -\frac{\pi^2}{12} -\frac{\pi^2}{6}\right)\right\} \nonumber \\ &=& \frac{\pi}{2} \, \ln{\!\left({-\,i}\right)} +2 \, \mathrm{Cl}_2{\left(\frac{\pi}{2}\right)} +i \, \frac{\pi^2}{4} \: = \: \frac{\pi}{2} \, \left(\ln{1} -i\,\frac{\pi}{2}\right) +2\, G +i \, \frac{\pi^2}{4} \nonumber \\ &=& 2 \, G \, , \label{eq:2G}\end{aligned}$$ where the special value of the Clausen function in Eq.  and the principal value of the logarithm function, with $\,\mathrm{Arg}(z) \in (-\pi,\pi]$, were taken into account.[^4] The substitution of this result in Eq.  completes the proof. $\Box$ We are now in a position to prove a rapidly converging central binomial formula for Catalan’s constant, which is our main result. 
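As a purely numerical sanity check of the lemma (a sketch, independent of the analytic proof above), one can evaluate the integral $\int_0^{\pi/2}\theta/\sin\theta\,d\theta$ by Simpson's rule, and also sum the $_{3}F_{2}$ series directly: since $(1)_n = n!$ and $(3/2)_n = (1/2)_n\,(2n+1)$, its $n$-th term reduces to $4^n/\big((2n+1)^2\binom{2n}{n}\big)$, which gives the term ratio used below.

```python
import math

def integral_check(n=20000):
    """Simpson's rule for the integral of theta/sin(theta) over [0, pi/2];
    the integrand has a removable singularity at 0 (its limit there is 1)."""
    f = lambda t: 1.0 if t == 0.0 else t / math.sin(t)
    h = (math.pi / 2) / n
    s = f(0.0) + f(math.pi / 2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

def series_check(terms=200000):
    """Partial sum of 3F2(1/2,1,1;3/2,3/2;1) = sum_n 4^n/((2n+1)^2 C(2n,n)),
    accumulated via the term ratio t_{n+1}/t_n = 2(2n+1)(n+1)/(2n+3)^2."""
    t, s = 1.0, 0.0
    for n in range(terms):
        s += t
        t *= 2 * (2 * n + 1) * (n + 1) / (2 * n + 3) ** 2
    return s

print(integral_check())  # 1.8319311883... = 2G
print(series_check())    # approaches the same value, though only slowly
```

Both agree with $2G = 1.8319311883\ldots$, the series converging only algebraically (its terms decay like $n^{-3/2}$), in contrast with the geometric convergence of the theorem below.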
\[teo:main\] $$G = \frac12 \, \sum_{n=0}^\infty{(-1)^n \, \frac{(3n+2) \: 8^n}{(2n+1)^3 \, {\binom{2n}{n}}^3}} \, .$$ Let $$f(x) := \sum_{n=0}^\infty{(-1)^n \, \frac{\left(x+\frac12\right)_n^{\,3}}{8^n \, (x+1)_n^{\,3}} \, [6(x+n)+1]}$$ be a function of a real variable $\,x\,$ in the open interval $\,(0,1)$. Then $$\begin{aligned} f\!\left(\frac12\right) = \sum_{n=0}^\infty{(-1)^n \, \frac{(1)_n^{\,3} \: (6n+4)}{8^n \, \left(\frac32\right)_n^{\,3} }} = \sum_{n=0}^\infty{(-1)^n \, \frac{{n!}^3 \, (6n+4)}{8^n \, (2n+1)^3 \, \frac{{(2n)!}^3}{{n!}^3 \, 4^{3n}} } } \nonumber \\ = 2 \, \sum_{n=0}^\infty{(-1)^n \, \frac{{n!}^6 \, (3n+2) \, 64^n}{8^n \, (2n+1)^3 \, {(2n)!}^3 } } \nonumber \\ = 2 \, \sum_{n=0}^\infty{(-1)^n \, \frac{(3n+2) \, 8^n}{(2n+1)^3 \, {\binom{2n}{n}}^3 } } \, . \label{eq:fmeio}\end{aligned}$$ On the other hand, from Guillera’s third identity in Ref. [@Guillera], we know that $$f(x) = 4 \,x \: \sum_{n=0}^\infty{ \frac{\left(x/2+\frac14\right)_n \, \left(x/2+\frac34\right)_n}{ {(x+1)_n}^2}} \,$$ for all values of $\,x\,$ for which this series converges. Therefore $$f(x) = 4 \, x \, ~ _{3} F_{2} \! \left( \! \begin{array}{r} \frac{2x+1}{4} , \frac{2x+3}{4} , 1 \\ x+1 , x+1 \end{array} ; 1 \right) ,$$ which implies that $$f\!\left(\frac12\right) = 2 \; _{3} F_{2} \! \left( \! \begin{array}{r} \frac12 , 1 , 1 \\ \frac32 , \frac32 \end{array} ; 1 \right) .$$ On substituting the result obtained in Lemma \[lem:3F2\] for $\:_{3} F_{2} \! \left( \! \begin{array}{r} \frac12 , 1 , 1 \\ \frac32 , \frac32 \end{array} ; 1 \right)$, we find $\,f(\frac12) = 4\,G$. From Eq. , one then has $$2 \, \sum_{n=0}^\infty{(-1)^n \, \frac{(3n+2) \: 8^n}{(2n+1)^3 \, {\binom{2n}{n}}^3}} = 4 \, G \, ,$$ which completes the proof. $\Box$ The convergence rate of the central binomial series representation for $\,G\,$ just proved is to be compared to that of other known similar series, including those in Eqs.  and . This is done in detail in the next section. 
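The theorem can be checked numerically. The sketch below sums the new series and, for comparison, Ramanujan's formula (the series-plus-logarithm representation quoted in the Introduction); double precision limits both to roughly 15 significant digits:

```python
import math

G_REF = 0.9159655941772190  # Catalan's constant, reference value

def g_new(terms=25):
    """G = (1/2) sum_{n>=0} (-1)^n (3n+2) 8^n / ((2n+1)^3 C(2n,n)^3)."""
    s = 0.0
    for n in range(terms):
        c = math.comb(2 * n, n)
        s += (-1) ** n * (3 * n + 2) * 8 ** n / ((2 * n + 1) ** 3 * c ** 3)
    return s / 2

def g_ramanujan(terms=40):
    """G = (pi/8) ln(2 + sqrt(3)) + (3/8) sum_{n>=0} 1/((2n+1)^2 C(2n,n))."""
    s = sum(1.0 / ((2 * n + 1) ** 2 * math.comb(2 * n, n)) for n in range(terms))
    return (math.pi / 8) * math.log(2 + math.sqrt(3)) + (3 / 8) * s

print(g_new())        # 0.9159655941772...
print(g_ramanujan())  # 0.9159655941772...
```

The new series gains roughly $\log_{10}8\approx0.9$ correct digits per term, versus $\log_{10}4\approx0.6$ for Ramanujan's series, consistent with the order estimates of the next section.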
Convergence rates ================= Let us now check the convergence rate of each Ramanujan-type series for $G$ mentioned in this work. By applying Stirling’s improved formula, namely $$n! \sim \left(\frac{n}{e}\right)^n \sqrt{2 \pi \left(n+\frac16\right)} \, ,$$ which implies that $$\binom{2n}{n} \sim \frac{\,2^{2n}\sqrt{2 n+\frac16}}{\,\sqrt{2 \pi} \, \left( n+\frac16 \right)} \, ,$$ we shall develop an order estimate of the $n$-th term for each series. We begin with Ramanujan’s series, given in Eq. . Its $n$-th term is $$\begin{aligned} & & \frac{1}{(2n+1)^2 \, \binom{\,2n}{n}} \: \sim \: \frac{\sqrt{2 \pi \left(n+\frac16\right)}}{\, 2^{2n} \, \sqrt{2n+\frac16} \: (2n+1)^2} \nonumber \\ &=& \sqrt{\frac{\pi}{3}} \, \frac{6n+1}{\,2^{2n} \, \sqrt{12n+1} \: (2n+1)^2} %\nonumber \\ \: \sim \: \frac{6n+1}{\, 2^{2n} \, \sqrt{12n+1} \: (2n+1)^2} \nonumber \\ & \sim & \frac{3}{\, 2^{2n} \, (2n+1) \, \sqrt{12n+1}} \, .\end{aligned}$$ The more complex central binomial series by Lupas, see Eq. , has an $n$-th term whose absolute value can be estimated as follows: $$\begin{aligned} \frac{2^{8n} \, |40 n^2-24n +3|}{n^3 \, (2n-1) \, \binom{2n}{n} \, {\binom{4n}{2n}}^2} \, \sim \, \frac{2^{8n} \, |40 n^2-24n +3|}{n^3 \, (2n-1) \, \frac{2^{2n}\,\sqrt{2n+\frac16}}{\sqrt{2 \pi} \, (n+\frac16)} \, 2^{8n} \, \frac{3}{\pi} \, \frac{24n+1}{(12n+1)^2}} \nonumber \\ \sim \, \frac{\,|40 n^2-24n +3| \, (12n+1)^2 \, \sqrt{2 \pi} \, (n+\frac16)}{n^3 \, (2n-1) \, 2^{2n} \, \sqrt{2n+\frac16} \, (24n+1)} \, .
%\nonumber \\\end{aligned}$$ For sufficiently large values of $\,n$, this simplifies to $$\begin{aligned} \frac{(40n-24) \, (12n+1)^2 \sqrt{2 \pi} \: (n+\frac16)}{\, 2^{2n} \, n^2 \, (2n-1) \, \sqrt{2n+\frac16} \: (24n+1)} %\nonumber \\ \sim \frac{(40n-24) \, (12n+1)^2 \, (6n+1)}{\, 2^{2n} \, n^2 \, (2n-1) \, \sqrt{12n+1} \: (24n+1)} \nonumber \\ = \frac{4 \, (5n-3) \, (12n+1)^2 \, (6n+1)}{\, 2^{2n} \, n^2 \, (2n-1) \, \sqrt{12n+1} \: (12n+\frac12)} %\nonumber \\ \sim \frac{4 \, (5n-3) \, (12n+1) \, (6n+1) \, \frac52}{\, 2^{2n} \, n^2 \, (5n-\frac52) \, \sqrt{12n+1}} \nonumber \\ \sim \frac{10 \, (12n+1) \, (6n+1)}{\, 2^{2n} \, n^2 \, \sqrt{12n+1}} %\nonumber \\ \: = \: \frac{10 \, \sqrt{12n+1} \, (6n+1)}{\, 2^{2n} \, n^2} \nonumber \\ = \frac{5 \, \sqrt{12n+1} \, (6n+1)}{\, 2^{2n-1} \, n^2} \, .\end{aligned}$$ This convergence rate is clearly slower than that of Ramanujan’s series, not to mention the number of basic arithmetic operations needed to compute the $n$-th term, which is considerably larger for Lupas’ series.[^5] The convergence rate of the series we proved in our Theorem \[teo:main\] can be estimated as follows.
The $n$-th term of our series is $$\begin{aligned} \frac{(3n+2) \: 2^{3n}}{(2n+1)^3 \, {\binom{2n}{n}}^3} \, \sim \, \frac{(3n+2) \: 2^{3n}}{(2n+1)^3 \, \frac{2^{6n+3}}{(12n+2)^{\frac32}}} \nonumber \\ = \frac{3 (n+\frac23) \: 2^{3n} \, (12n+2)^{\frac32}}{8 \, (n+\frac12)^3 \, 2^{6n+3}} \, .\end{aligned}$$ For sufficiently large values of $\,n$, this can be approximated by $$\begin{aligned} \frac38 \, \frac{(12n+2)^{\frac32}}{(n+\frac12)^2 \, 2^{3n+3}} \: = \: 3 \, \sqrt{2} \, \frac{(6n+1)^{\frac32}}{(2n+1)^2 \, 2^{3n+3}} \nonumber \\ \sim \: 9 \, \sqrt{2} \, \frac{\sqrt{6n+1}}{(2n+1) \, 2^{3n+3}} = 27 \, \sqrt{2} \, \frac{\sqrt{6n+1}}{\left(\sqrt{6n+3}\right)^2 \, 2^{3n+3}} \nonumber \\ \sim \: 27 \, \sqrt{2} \, \frac{1}{\sqrt{6n+3} \; 2^{3n+3}} \: \sim \: \frac{\,5 / {\sqrt{3}}}{\,2^{3n} \, \sqrt{2n+1}} \: .\end{aligned}$$ The factor $2^{3n}$ makes our series converge faster than both the Ramanujan and Lupas’ series. In Table \[tabela\], below, we compare the error committed in approximating $\,G\,$ by the partial sum of the first $\,N\,$ terms corresponding to each central binomial series mentioned in this work. It is clear from this table that our series is the one that yields the smallest absolute error. The only competitive series is that conjectured by Sun, see our Eq. , but a direct comparison of its $n$-th term with that of the series in our Theorem \[teo:main\] shows that it converges more slowly. Therefore, even an eventual proof of Sun’s conjecture will not furnish a central binomial series faster than the one presented here. Acknowledgments {#acknowledgments .unnumbered} =============== Thanks are due to Mrs. Marcia R. Souza and Mr. Bruno S. S. Lima for checking computationally all hypergeometric series discussed in this work.
Tables {#tables .unnumbered} ====== $N$ Lupas Ramanujan Sun Theorem \[teo:main\] ------ ------------------------- ------------------------- ------------------------- ------------------------- 5 $+2.9 \times 10^{-4}$ $-3.0 \times 10^{-6}$ $+1.1 \times 10^{-5}$ $-1.3 \times 10^{-6}$ 10 $-2.0 \times 10^{-7}$ $-1.3 \times 10^{-9}$ $-2.6 \times 10^{-10}$ $+3.1 \times 10^{-11}$ 50 $-7.7 \times 10^{-32}$ $-1.1 \times 10^{-34}$ $-9.1 \times 10^{-47}$ $+1.1 \times 10^{-47}$ 100 $-4.3 \times 10^{-62}$ $-3.3 \times 10^{-65}$ $-4.5 \times 10^{-92}$ $+5.6 \times 10^{-93}$ 500 $-2.9 \times 10^{-303}$ $-4.6 \times 10^{-307}$ $-1.2 \times 10^{-453}$ $+1.5 \times 10^{-454}$ 1000 $-1.9 \times 10^{-604}$ $-1.5 \times 10^{-608}$ $-2.4 \times 10^{-905}$ $+2.9 \times 10^{-906}$ : Deviations from $\,G\,$ of the partial sums obtained by adding the first $\,N\,$ terms of each central binomial series mentioned in the text.[]{data-label="tabela"} [^1]: As usual, $\mathrm{Li}_2(z)\,$ denotes the dilogarithm function, defined as $\,\sum_{n=1}^\infty{z^n/n^2}$ for real values of $z$, $z<1$, and extended to $\mathbb{C}$, except for the cut $[1,\infty)$, by analytic continuation. [^2]: Presently, the only known irrationality results for even beta values are the recent proofs by Rivoal and Zudilin (2003) that there exist infinitely many positive integers $n$ for which $\beta{(2n)}$ is irrational, and that at least one of the seven numbers $\beta(2), \, \ldots , \beta(14)$ is irrational [@RZ]. [^3]: This can be proved from the fact that $\:G = -\int_0^{\pi/4}{\ln{(\tan{\theta})} \: d\theta} = -\frac32 \, \int_0^{\pi/12}{\ln{(\tan{\theta})} \: d\theta}$, as nicely described in Ref. [@Bradley]. [^4]: On Entry 9 of Adamchik’s webpage [@Adamchik], where several representations for $\,G\,$ are proved computationally with *Mathematica*$^\mathrm{TM}$, one finds the integral formula $\, \frac12 \, \int_0^{\,\pi/2}{\theta/\sin{\theta} \, d\theta} = G$. Our Eqs. 
and can then be viewed as a formal proof of this formula. [^5]: Despite these disadvantages, Lupas’ series has been implemented in *Mathematica*$^\mathrm{TM}$ (version $6$) for computing $\,G$ [@Wolfram].
--- abstract: 'We define the superclasses for a classical finite unipotent group $U$ of type $B_{n}(q)$, $C_{n}(q)$, or $D_{n}(q)$, and show that, together with the supercharacters defined in [@AN2], they form a supercharacter theory in the sense of [@DI]. In particular, we prove that the supercharacters take a constant value on each superclass, and evaluate this value. As a consequence, we obtain a factorization of any superclass as a product of elementary superclasses. In addition, we also define the space of superclass functions, and prove that it is spanned by the supercharacters. As a consequence, we (re)obtain the decomposition of the regular character as an orthogonal linear combination of supercharacters. Finally, we define the supercharacter table of $U$, and prove various orthogonality relations for supercharacters (similar to the well-known orthogonality relations for irreducible characters).' address: - | Departamento de Matemática\ Faculdade de Ciências da Universidade de Lisboa\ Campo Grande\ Edifício C6\ Piso 2\ 1749-016 Lisboa\ Portugal - | Instituto Superior de Economia e Gestão\ Universidade Técnica de Lisboa\ Rua do Quelhas 6\ 1200-781 Lisboa\ Portugal - | Centro de Estruturas Lineares e Combinatórias\ Complexo Interdisciplinar da Universidade de Lisboa\ Av. Prof. Gama Pinto 2\ 1649-003 Lisboa\ Portugal author: - 'Carlos A. M. André & Ana Margarida Neto' title: 'A Supercharacter Theory for the Sylow $p$-subgroups of the finite symplectic and orthogonal groups' --- [^1] Introduction {#sec:intro} ============ This paper is a continuation of the authors’ papers [@AN1; @AN2], and develops a supercharacter theory for the Sylow $p$-subgroup of the symplectic or orthogonal groups defined over the finite field ${\mathbb{F}_{q}}$ with $q$ elements; throughout the paper, $p$ will stand for an arbitrary odd prime number, and $q = p^{e}$, $e \geq 1$, will be a fixed power of $p$.
The concept of a “supercharacter theory” for an arbitrary finite group was developed by P. Diaconis and I. M. Isaacs in the paper [@DI]. Roughly, a supercharacter theory replaces irreducible characters by “supercharacters”, and conjugacy classes by “superclasses”, in such a way that a “supercharacter table” can be constructed as an “almost unitary” matrix with similar properties as the usual character table (namely, orthogonality of rows and columns). More precisely, given any finite group $G$, a [*supercharacter theory*]{} for $G$ consists of a partition ${\mathcal{K}}$ of $G$ and a set ${\mathcal{X}}$ of (complex) characters of $G$ satisfying the following three axioms: 1. $|{\mathcal{K}}| = |{\mathcal{X}}|$; 2. every irreducible character of $G$ is a constituent of a unique $\xi \in {\mathcal{X}}$; 3. the characters in ${\mathcal{X}}$ are constant on the members of ${\mathcal{K}}$. The elements of ${\mathcal{K}}$ will be referred to as [*superclasses*]{}, and the elements of ${\mathcal{X}}$ as [*supercharacters*]{} of $G$. (We observe that, by [@DI Lemma 2.1], axiom (S2) is equivalent to requiring that $\{1\} \in {\mathcal{K}}$.) Every finite group $G$ has two “trivial” supercharacter theories: the full character theory (where ${\mathcal{X}}$ consists of all irreducible characters of $G$, and ${\mathcal{K}}$ of all the conjugacy classes of $G$), and the one where ${\mathcal{X}}= \{1_{G}, \rho_{G}-1_{G}\}$ and ${\mathcal{K}}$ consists of the sets $\{1\}$ and $G - \{1\}$; as usual, we denote by $1_{G}$ the trivial character and by $\rho_{G}$ the regular character of $G$. Although for some groups these are the only possibilities, there are many groups for which nontrivial supercharacter theories exist, and in many cases it may be possible to obtain useful information using some particular supercharacter theory. An illustrative example can be found in the paper [@ADS] where E. Arias-Castro, P. Diaconis and R.
Stanley showed that a special supercharacter theory can be applied to study a random walk on upper triangular matrices over finite fields using techniques that traditionally required the knowledge of the full character theory. Supercharacter theories were initially developed for the upper unitriangular group $U_{n}(q)$ consisting of all unipotent upper-triangular $n \times n$ matrices over the finite field ${\mathbb{F}_{q}}$ with $q$ elements (where $q$ is a power of some prime number $p$). In his PhD thesis [@A0], the first author began the study of the “basic characters” of $U_{n}(q)$ (under the assumption that $p \geq n$), and was able to show that by “clumping” together some of the conjugacy classes and some of the irreducible characters one attains a workable “approximation” to the representation theory of $U_{n}(q)$. His results were published in a series of papers in the Journal of Algebra, and showed in particular that the “basic characters” determine uniquely the superclasses of a supercharacter theory for $U_{n}(q)$. The original theory relies on a construction due to D. Kazhdan (see [@K]) and is based on Kirillov’s method of coadjoint orbits (see [@Ki] for a description of Kirillov’s method for the unitriangular group). Later, in his PhD thesis [@Y], N. Yan showed how the “basic characters” can be obtained using more elementary methods which avoid Kazhdan’s construction and the algebraic geometry involved in it. Yan’s approach is valid for an arbitrary prime, and was generalized later by P. Diaconis and M. Isaacs in the paper [@DI] in order to extend the theory to an arbitrary finite algebra group defined over ${\mathbb{F}_{q}}$.
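As a toy illustration of axioms (S1)–(S3) (our own example here, not taken from the papers just cited), the coarse theory ${\mathcal{X}}= \{1_{G}, \rho_{G}-1_{G}\}$, ${\mathcal{K}}= \{\{1\}, G - \{1\}\}$ can be verified numerically for a cyclic group, whose irreducible characters are explicit exponentials:

```python
import cmath

n = 6  # the cyclic group Z/6Z; its irreducible characters are chi_k(j) = exp(2*pi*i*k*j/n)
chi = lambda k, j: cmath.exp(2j * cmath.pi * k * j / n)

# xi = rho_G - 1_G is the sum of all nontrivial irreducible characters
xi = [sum(chi(k, j) for k in range(1, n)) for j in range(n)]

# (S3): xi is constant (with value -1) on the superclass G - {1}
assert all(abs(v + 1) < 1e-12 for v in xi[1:])
# (S1) is clear (two superclasses, two supercharacters); for (S2), each
# nontrivial chi_k occurs in xi with multiplicity <xi, chi_k> = 1
for k in range(1, n):
    mult = sum(xi[j] * chi(k, j).conjugate() for j in range(n)) / n
    assert abs(mult - 1) < 1e-12
print(xi[0].real)  # xi(1) = n - 1 = 5, the degree of rho_G - 1_G
```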
In the papers [@AN1] (for sufficiently large primes) and [@AN2] (for arbitrary odd primes), the authors started to develop a supercharacter theory for a Sylow $p$-subgroup $U$ of one of the (non-twisted) Chevalley groups $C_{n}(q)$, $B_{n}(q)$, and $D_{n}(q)$, by defining the supercharacters of $U$ and proving some of their main properties. As in the case of the unitriangular group, the supercharacters of $U$ are parametrized by certain “minimal” subsets of (positive) roots. In fact, it is known that the supercharacters of $U_{n}(q)$ can be obtained as certain “reduced” products of “elementary characters” which are irreducible characters corresponding to the “matrix entries” $(i,j)$, for $1 \leq i < j \leq n$, labelled by nonzero elements of ${\mathbb{F}_{q}}$; in Yan’s thesis, the “elementary characters” were called “primary characters”, and the supercharacters were called “transition characters”. Following Yan’s method, one can show that the supercharacters of $U_{n}(q)$ are parametrized by certain combinatorial data consisting of a “basic set” $D$ of matrix entries such that no two elements of $D$ agree in either the first or the second coordinate, and of a map $\phi$ from $D$ to the nonzero elements of ${\mathbb{F}_{q}}$. In the papers [@AN1; @AN2], the authors defined the supercharacters also as certain “reduced” products of “elementary characters” (which in general are not necessarily irreducible characters) of the given Sylow $p$-subgroup $U$. These “reduced” products are parametrized by pairs consisting of a conveniently chosen “basic subset of roots” and of a map to the nonzero elements of ${\mathbb{F}_{q}}$. (We note that the roots in the unitriangular case are in one-to-one correspondence with the matrix entries.) In fact, the group $U$ can be naturally identified with a subgroup of a unitriangular group, and the supercharacters of $U$ can be obtained as constituents of the restriction of a supercharacter of that unitriangular group.
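For the unitriangular case this parametrization is easy to make concrete when $n = 3$: the matrix entries are $(1,2)$, $(1,3)$, $(2,3)$, and the basic sets are $\emptyset$, the three singletons, and $\{(1,2),(2,3)\}$, giving $1 + 3(q-1) + (q-1)^{2}$ pairs $(D,\phi)$. The Python sketch below (our own illustration, assuming numpy) enumerates the two-sided orbits $U e U$ on the niltriangular algebra for $q = 2$ and checks that their number agrees with this count:

```python
import itertools
import numpy as np

q = 2
# the niltriangular algebra n: strictly upper-triangular 3x3 matrices over F_q
algebra = [np.array([[0, a, b], [0, 0, c], [0, 0, 0]])
           for a, b, c in itertools.product(range(q), repeat=3)]
# the unitriangular group U_3(q) = 1 + n
group = [np.eye(3, dtype=int) + m for m in algebra]

key = lambda m: tuple((m % q).flatten())
seen, orbits = set(), 0
for e in algebra:
    if key(e) in seen:
        continue
    orbits += 1
    # the two-sided orbit U e U, which (shifted by 1) is a superclass of U_3(q)
    seen |= {key((x @ e @ y) % q) for x in group for y in group}

assert len(seen) == q**3                 # the orbits partition n
assert orbits == 1 + 3*(q-1) + (q-1)**2  # = number of basic pairs (D, phi)
print(orbits)  # 5 superclasses for q = 2
```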
On the other hand, as shown in Yan’s thesis, the same combinatorial data parametrize the superclasses of $U_{n}(q)$ (see also [@A2; @A3] where the superclasses were called “basic subvarieties”). In fact, a superclass of $U_{n}(q)$ can be obtained as a “basic” product of “elementary superclasses” which are conjugacy classes corresponding to the “matrix entries”, and labelled by nonzero elements of ${\mathbb{F}_{q}}$. As for supercharacters, we define the superclasses of $U$ by “restricting” the superclasses of the larger unitriangular group, and show that they are indexed by the same combinatorial data consisting of a conveniently chosen “basic set of roots” where each root is labeled by a nonzero element of ${\mathbb{F}_{q}}$. The paper is organized as follows. In [Section \[sec:supchar\]]{}, we introduce the necessary notation, and recall the definition and main property of the supercharacters of the group $U$. In [Section \[sec:supclass\]]{}, we define the superclasses of $U$ by intersecting with the superclasses of the unitriangular group which contains $U$, and show that the set ${\mathcal{K}}$ of all superclasses gives a partition of $U$. Then, in [Section \[sec:scf\]]{}, we define a superclass function on $U$ as a function taking a constant (complex) value on each superclass, and prove that the supercharacters are superclass functions, and form an orthogonal basis for the complex vector space ${\operatorname{scf}}(U)$ consisting of all superclass functions. As a consequence, we obtain an explicit decomposition of the regular character as a linear combination of all the supercharacters with positive integer coefficients (see [@AN2 Theorem 5.2] for a different proof), and (re)prove the main theorem on supercharacters which states that every irreducible character is a constituent of a (unique) supercharacter.
In [Section \[sec:value\]]{}, we determine the constant value of a supercharacter on a superclass, and conclude that the superclasses and supercharacters satisfy the axioms of a supercharacter theory for $U$ in the sense of Diaconis and Isaacs. As a consequence, we show that every superclass factorizes uniquely as a product of elementary superclasses. Finally, in [Section \[sec:table\]]{}, we define the supercharacter table of $U$ as the square matrix with entries given by the supercharacter values, and prove the main orthogonality relations for supercharacters. As a consequence, we also deduce that the space ${\operatorname{scf}}(U)$ of superclass functions is a commutative semisimple algebra with respect to the convolution product. Supercharacters {#sec:supchar} =============== Let $p \geq 3$ be a prime number, $q = p^{e}$ ($e \geq 1$) a power of $p$, and ${\mathbb{F}_{q}}$ the finite field with $q$ elements. For a fixed positive integer $n$, let $G$ denote one of the following classical finite groups: the symplectic group ${Sp_{2n}(q)}$, the even orthogonal group ${O_{2n}(q)}$, or the odd orthogonal group ${O_{2n+1}(q)}$ (in alternative notation, these are the (non-twisted) Chevalley groups $C_{n}(q)$, $D_{n}(q)$, and $B_{n}(q)$, respectively). Throughout the paper, we set $U = G \cap U_{m}(q)$ where $$m = {\begin{cases}}2n, & \text{if $G = {Sp_{2n}(q)}$, or $G = {O_{2n}(q)}$,} \\ 2n+1, & \text{if $G = {O_{2n+1}(q)}$,} {\end{cases}}$$ and $U_{m}(q)$ denotes the upper unitriangular group consisting of all unipotent upper-triangular $m \times m$ matrices over ${\mathbb{F}_{q}}$. Then, $U$ is a Sylow $p$-subgroup of $G$, and it is described as follows. Let $J = J_{n}$ be the $n \times n$ matrix with 1’s along the anti-diagonal and 0’s elsewhere.
Then, $U$ consists of all (block) matrices of the form $$\label{eq:e1} \begin{pmatrix} x & xu & xz \\ 0 & I_{r} & -u^{T}J \\ 0 & 0 & Jx^{-T}J \end{pmatrix}$$ where $x \in U_{n}(q)$, $u$ is an $n {\times}r$ matrix over ${\mathbb{F}_{q}}$, and 1. $r = 0$, and $Jz^{T} - zJ = 0$, if $U \leq {Sp_{2n}(q)}$; 2. $r = 0$, and $Jz^{T} + zJ = 0$, if $U \leq {O_{2n}(q)}$; 3. $r = 1$, and $Jz^{T} + zJ = -uu^{T}$, if $U \leq {O_{2n+1}(q)}$. As mentioned in the Introduction, both supercharacters and superclasses of $U$ will be parametrized by certain subsets of (positive) roots. Thus, we introduce some notation and recall some elementary facts concerning roots; for the details, we refer to the books [@C1; @C2] by R. Carter (see also [@CR Chapter 8]). Let $T$ be the maximal torus of $G$ consisting of all diagonal matrices, and $\Sigma$ the root system defined by $T$. The elements of $\Sigma$ are described as follows. For each $1 \leq i \leq n$, let ${{\varepsilon}_{i} \colon T \to {{\mathbb{F}_{q}}^{\;\times}}}$ be the map defined by ${\varepsilon}_{i}(t) = t_{i}$ for all $t \in T$; here, we denote by $t_{i} \in {{\mathbb{F}_{q}}^{\;\times}}$ the $(i,i)$th entry of the matrix $t \in T$. Then, $\Sigma = \Phi \cup (-\Phi)$ where $$\Phi = {\{ {\varepsilon}_{i} \pm {\varepsilon}_{j} \colon 1 \leq i < j \leq n \}} \cup \Phi'$$ and $$\Phi' = {\begin{cases}}{\{ 2{\varepsilon}_{i} \colon 1 \leq i \leq n \}}, & \text{if $G = {Sp_{2n}(q)}$,} \\ \emptyset, & \text{if $G = {O_{2n}(q)}$,} \\ {\{ {\varepsilon}_{i} \colon 1 \leq i \leq n \}}, & \text{if $G = {O_{2n+1}(q)}$.} {\end{cases}}$$ The roots in $\Phi$ are said to be [*positive*]{}, and the roots in $-\Phi$ are said to be [*negative*]{}. Throughout the paper, the word “root” will always stand for “positive root”. With $\Phi$ we associate the subset of “matrix entries” ${\mathcal{E}}{\subseteq}{\{ (i,j) \colon -n \leq i, j \leq n \}}$ as follows. 
For any ${\alpha}\in \Phi$, we set $${\mathcal{E}}({\alpha}) = {\begin{cases}}\{(i,j), (-j,-i)\}, & \text{if ${\alpha}= {\varepsilon}_{i} - {\varepsilon}_{j}$ for $1 \leq i < j \leq n$}, \\ \{(i,-j), (j,-i)\}, & \text{if ${\alpha}= {\varepsilon}_{i} + {\varepsilon}_{j}$ for $1 \leq i < j \leq n$}, \\ \{(i,-i)\}, & \text{if $G = {Sp_{2n}(q)}$ and ${\alpha}= 2{\varepsilon}_{i}$ for $1 \leq i \leq n$,} \\ \{(i,0), (0,-i)\}, & \text{if $G = {O_{2n+1}(q)}$ and ${\alpha}= {\varepsilon}_{i}$ for $1 \leq i \leq n$,} {\end{cases}}$$ and we define $${\mathcal{E}}= {\bigcup}_{{\alpha}\in \Phi} {\mathcal{E}}({\alpha}).$$ More generally, for each subset $\Psi {\subseteq}\Phi$, we set $${\mathcal{E}}(\Psi) = {\bigcup}_{{\alpha}\in \Psi} {\mathcal{E}}({\alpha});$$ hence, ${\mathcal{E}}= {\mathcal{E}}(\Phi)$. On the other hand, we consider the mirror order $\prec$ on the set $\{0, \pm 1, \ldots, \pm (n+1)\}$ which is defined as $$1 \prec 2 \prec \cdots \prec n+1 \prec 0 \prec -(n+1) \prec \cdots \prec -2 \prec -1,$$ and we shall index the rows (from left to right) and columns (from top to bottom) of any $m {\times}m$ matrix according to this ordering. Hence, the entries of any matrix $x \in U_{m}(q)$ are indexed by all the pairs $(i,j) \in {\mathcal{E}}$: for each $(i,j) \in {\mathcal{E}}$, we shall write $x_{i,j}$ to denote the $(i,j)$th entry of $x$ (which occurs in the $i$th row and in the $j$th column). For our purposes, it is convenient to consider the set $${\mathcal{E}}^{+} = {\{ (i,j) \in {\mathcal{E}}\colon 1 \leq i \leq n,\ i \prec j \preceq -i \}},$$ and extend this notation to any subset $\Psi {\subseteq}\Phi$ by setting $${\mathcal{E}}^{+}(\Psi) = {\mathcal{E}}(\Psi) \cap {\mathcal{E}}^{+}.$$ We observe that there exists a one-to-one correspondence between $\Phi$ and ${\mathcal{E}}^{+}$. For any ${\alpha}\in \Phi$, we define the subgroup $U_{{\alpha}}$ of $U$ as follows: 1. 
if ${\alpha}= {\varepsilon}_{i}-{\varepsilon}_{j}$ for $1 \leq i < j \leq n$, then $$U_{{\alpha}} = {\{ x \in U \colon x_{i,k} = 0,\ i < k < j \}};$$ 2. if ${\alpha}= {\varepsilon}_{i}+{\varepsilon}_{j}$ for $1 \leq i < j \leq n$, then $$U_{{\alpha}} = {\{ x \in U \colon x_{i,k} = x_{j,l} = 0,\ i < k \leq n,\ j \prec l \preceq 0 \}};$$ 3. if, either ${\alpha}= 2{\varepsilon}_{i}$ for $1 \leq i \leq n$ (in the case where $U \leq {Sp_{2n}(q)}$), or ${\alpha}= {\varepsilon}_{i}$ for $1 \leq i \leq n$ (in the case where $U \leq {O_{2n+1}(q)}$), then $$U_{{\alpha}} = {\{ x \in U \colon x_{i,k} = 0,\ i < k \leq n \}}.$$ Let ${{\vartheta}\colon {\mathbb{F}_{q}}\to {{\mathbb{C}}^{{\times}}}}$ be a non-trivial linear character of the additive group ${\mathbb{F}_{q}}^{\;+}$ of ${\mathbb{F}_{q}}$ (this character will be kept fixed throughout the paper; moreover, all characters will be taken over the complex field). For any $r \in {{\mathbb{F}_{q}}^{\;\times}}$, the mapping $x \mapsto {\vartheta}(rx_{i,j})$ defines a linear character ${{\lambda_{\alpha,r}}\colon U_{\alpha} \to {{\mathbb{C}}^{{\times}}}}$ of $U_{\alpha}$, and we define the [*elementary character*]{} ${\xi_{\alpha,r}}$ to be the induced character $${\xi_{\alpha,r}}= ({\lambda_{\alpha,r}})^{U}$$ (see [@A1] for the corresponding definition in the case of the unitriangular group; see also [@DI Corollary 5.11] and the discussion thereon). We next define the notion of a “basic subset of roots”. To start with, we recall that a subset ${\mathcal{D}}{\subseteq}{\mathcal{E}}$ is said to be [*basic*]{} if it contains at most one entry from each row and at most one entry from each column; in other words, ${\mathcal{D}}{\subseteq}{\mathcal{E}}$ is basic if $$|{\{ j \colon i \prec j \preceq -1,\ (i,j) \in {\mathcal{D}}\}}| \leq 1 \quad \text{and} \quad |{\{ i \colon 1 \preceq i \prec j,\ (i,j) \in {\mathcal{D}}\}}| \leq 1$$ for all $-n \leq i, j \leq n$.
Then, we say that $D {\subseteq}\Phi$ is a [*basic subset*]{} if ${\mathcal{D}}= {\mathcal{E}}(D)$ is a basic subset of ${\mathcal{E}}$. (We will always use script letters to denote basic subsets of ${\mathcal{E}}$, in contrast to basic subsets of $\Phi$ which will be mostly denoted by italic letters.) Given any non-empty basic subset $D {\subseteq}\Phi$ and any map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$, we define the supercharacter ${\xi_{D,\phi}}$ to be the product $${\xi_{D,\phi}}= \prod_{{\alpha}\in D} \xi_{{\alpha},\phi({\alpha})}.$$ For convenience, if $D$ is the empty subset of $\Phi$, we consider the empty map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$, and define ${\xi_{D,\phi}}$ to be the unit character $1_{U}$ of $U$. Let $$U_{D} = {\bigcap}_{{\alpha}\in D} U_{{\alpha}} \quad \text{and} \quad {\lambda_{D,\phi}}= \prod_{{\alpha}\in D} ({\lambda}_{{\alpha},\phi({\alpha})})_{U_{D}}.$$ Then, ${\lambda_{D,\phi}}$ is clearly a linear character of $U_{D}$ and, by [@AN1 Proposition 2.2], the supercharacter ${\xi_{D,\phi}}$ can be obtained as the induced character $$\label{eq:e2} {\xi_{D,\phi}}= ({\lambda_{D,\phi}})^{U}.$$ Throughout the paper, we will refer to the pair $(D,\phi)$ as a [*basic pair*]{} for $U$; hence, $D {\subseteq}\Phi$ is a basic subset, and ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$ is a map. The main result of [@AN2] is the following theorem. (Given any finite group $G$, we denote by ${\operatorname{Irr}}(G)$ the set of all irreducible characters of $G$, and by ${\langle \cdot , \cdot \rangle}$ (or by ${\langle \cdot , \cdot \rangle}_{G}$ if necessary) the Frobenius’ scalar product on the complex vector space of all class functions defined on $G$.) \[thm:t1\] Let $\chi$ be an arbitrary irreducible character of $U$. 
Then, $\chi$ is a constituent of a unique supercharacter of $U$; in other words, there exists a unique basic subset $D {\subseteq}\Phi$ and a unique map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$ such that ${\langle \chi , {\xi_{D,\phi}}\rangle} \neq 0$. We note that this theorem establishes axiom (S2) of the definition of a supercharacter theory. Superclasses {#sec:supclass} ============ In this section, we define the superclasses of $U$. This notion depends strongly on certain “basic subvarieties” defined by polynomial equations on the Lie algebra ${{\mathfrak{u}}}$ of $U$. Let ${{\mathfrak{g}}}$ denote one of the following classical Lie algebras defined over ${\mathbb{F}_{q}}$: the symplectic Lie algebra ${\mathfrak{sp}_{2n}(q)}$, the even orthogonal Lie algebra ${\mathfrak{o}_{2n}(q)}$, or the odd orthogonal Lie algebra ${\mathfrak{o}_{2n+1}(q)}$. Then, ${{\mathfrak{u}}}= {{\mathfrak{g}}}\cap {{\mathfrak{u}}}_{m}(q)$ where ${{\mathfrak{u}}}_{m}(q)$ denotes the upper niltriangular Lie algebra consisting of all nilpotent upper-triangular $m \times m$ matrices over ${\mathbb{F}_{q}}$. Thus, ${{\mathfrak{u}}}$ consists of all (block) matrices of the form $$\label{eq:e3} \begin{pmatrix} a & u & w \\ 0 & 0_{r} & -u^{T}J \\ 0 & 0 & -Ja^{T}J \end{pmatrix}$$ where $a \in {{\mathfrak{u}}}_{n}(q)$, $u$ is an $n {\times}r$ matrix over ${\mathbb{F}_{q}}$, and 1. $r = 0$, and $Jw^{T} - wJ = 0$, if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$; 2. $r = 0$, and $Jw^{T} + wJ = 0$, if ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n}(q)}$; 3. $r = 1$, and $Jw^{T} + wJ = -uu^{T}$, if ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n+1}(q)}$. 
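The symplectic case of this block description can be checked numerically. The sketch below (our own illustration, assuming numpy, and assuming the Gram matrix $\Omega$ written in the code — a normalization consistent with this block shape, though not fixed explicitly above) builds a random matrix of the stated form with $r = 0$ and $Jw^{T} - wJ = 0$, and verifies the defining condition $X^{T}\Omega + \Omega X = 0$ of a symplectic Lie algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 7  # small arbitrary choices: block size n and odd prime p
J = np.fliplr(np.eye(n, dtype=int))  # 1's along the anti-diagonal

# random strictly upper-triangular a in u_n(F_p)
a = np.triu(rng.integers(0, p, (n, n)), k=1)
# random w with J w^T - w J = 0: for odd p, every such w has the form s + J s^T J
s = rng.integers(0, p, (n, n))
w = (s + J @ s.T @ J) % p

# the block matrix of the symplectic case (r = 0)
zero = np.zeros((n, n), dtype=int)
X = np.block([[a, w], [zero, -J @ a.T @ J]])

# assumed Gram matrix of the symplectic form
Om = np.block([[zero, J], [-J, zero]])
assert ((X.T @ Om + Om @ X) % p == 0).all()  # X lies in sp_{2n}(F_p)
```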
For any ${\alpha}\in \Phi$, we will denote by $e_{{\alpha}}$ the matrix in ${{\mathfrak{u}}}$ defined as follows (as usual, $1 \leq i < j \leq n$): $$e_{{\alpha}} = {\begin{cases}}e_{i,j} - e_{-j,-i}, & \text{if ${\alpha}= {\varepsilon}_{i}-{\varepsilon}_{j}$,} \\ e_{i,-j} + e_{j,-i}, & \text{if ${\alpha}= {\varepsilon}_{i}+{\varepsilon}_{j}$ and ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$,} \\ e_{i,-j} - e_{j,-i}, & \text{if ${\alpha}= {\varepsilon}_{i}+{\varepsilon}_{j}$ and ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n}(q)}$ or ${{\mathfrak{u}}}= {\mathfrak{o}_{2n+1}(q)}$,} \\ e_{i,-i}, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and ${\alpha}= 2{\varepsilon}_{i}$,} \\ e_{i,0} - e_{0,-i}, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n+1}(q)}$ and ${\alpha}= {\varepsilon}_{i}$.} {\end{cases}}$$ It is clear that ${\{ e_{{\alpha}} \colon {\alpha}\in \Phi \}}$ is an ${\mathbb{F}_{q}}$-basis of ${{\mathfrak{u}}}$. Given an arbitrary basic pair $(D,\phi)$ for $U$ with $D$ a non-empty basic subset, we define the element $${e_{D,\phi}}= \sum_{{\alpha}\in D} \phi(\alpha) e_\alpha \in {{\mathfrak{u}}}.$$ Since ${{\mathfrak{u}}}{\subseteq}{{\mathfrak{u}}}_{m}(q)$, we may consider the orbit $${V_{D,\phi}}= U_{m}(q) {e_{D,\phi}}U_{m}(q) {\subseteq}{{\mathfrak{u}}}_{m}(q)$$ for the natural action of $U_{m}(q) {\times}U_{m}(q)$ on ${{\mathfrak{u}}}_{m}(q)$ given by $(x,y) \cdot u = xuy{^{-1}}$ for all $x,y \in U_{m}(q)$ and $u \in {{\mathfrak{u}}}_{m}(q)$.
By definition (see, for example, [@A3], [@ADS], [@DI], or [@Y]), the superclass of $U_{m}(q)$ which contains the element $1+{e_{D,\phi}}\in U_{m}(q)$ is the subset $$1+{V_{D,\phi}}= {\{ 1+a \colon a \in {V_{D,\phi}}\}} {\subseteq}U_{m}(q),$$ and we define the [*superclass*]{} ${K_{D,\phi}}$ of $U$ to be the intersection $${K_{D,\phi}}= U \cap (1+{V_{D,\phi}}) = {\{ x \in U \colon x-1 \in {V_{D,\phi}}\}}.$$ We will also consider the intersection $${O_{D,\phi}}= {V_{D,\phi}}\cap {{\mathfrak{u}}},$$ but observe that it is not necessarily true that ${K_{D,\phi}}= 1+{O_{D,\phi}}$; however, there exists a bijection between ${K_{D,\phi}}$ and ${O_{D,\phi}}$ (see [Lemma \[lem:l2\]]{} below). The following result is a clear consequence of the definition; in the case where $D$ is the empty subset of $\Phi$, we consider the empty map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$, and define ${e_{D,\phi}}= 0$, so that ${V_{D,\phi}}= \{0\}$ and ${K_{D,\phi}}= \{1\}$. \[lem:l1\] For every basic pair $(D,\phi)$ for $U$, the superclass ${K_{D,\phi}}$ is invariant under conjugation. In particular, ${K_{D,\phi}}$ is a union of conjugacy classes. On the other hand, by [@A3 Theorem 2.1], for any basic pairs $(D,\phi)$ and $(D',\phi')$, we have ${V_{D,\phi}}\cap {V_{D',\phi'}}\neq \emptyset$ if and only if $(D,\phi) = (D',\phi')$. In fact, by the definition, ${\mathcal{E}}(D)$ and ${\mathcal{E}}(D')$ are basic subsets of ${\mathcal{E}}= {\mathcal{E}}(\Phi)$ satisfying ${\mathcal{E}}(D) = {\mathcal{E}}(D')$ if and only if $D = D'$, and the elements ${e_{D,\phi}}, e_{D',\phi'} \in {{\mathfrak{u}}}_{m}(q)$ are equal if and only if $D = D'$ and $\phi = \phi'$. Therefore, we obtain a disjoint union ${\bigcup}_{D,\phi} {K_{D,\phi}}$ indexed by all basic pairs $(D,\phi)$ for $U$. We next show that $U$ equals this union, thus obtaining a partition ${\mathcal{K}}$ of $U$ which satisfies axiom (S1) for the required supercharacter theory. 
The proof of the following auxiliary result can be found in [@AN2 Lemma 2.3]. \[lem:l2\] Let $(D,\phi)$ be a basic pair for $U$. Let $$z = \begin{pmatrix} x & xv & xw \\ 0 & I_{r} & -v^{T}J \\ 0 & 0 & J x^{-T} J \end{pmatrix} \in U \qquad \text{and} \qquad a_{z} = \begin{pmatrix} u & v & w \\ 0 & 0_{r} & -v^{T}J \\ 0 & 0 & -J u^{T}J \end{pmatrix} \in {{\mathfrak{u}}}$$ where $x = 1+u$. Then, $z \in {K_{D,\phi}}$ if and only if $a_{z} \in {V_{D,\phi}}$. Moreover, the mapping $z \mapsto a_{z}$ defines a bijection from $U$ to ${{\mathfrak{u}}}$. Now, let $z \in U$ be arbitrary, and $a_{z} \in {{\mathfrak{u}}}$ be as in the lemma. Then, by [@A3 Theorem 2.1] (see also [@DI Appendix A]), there exists a unique basic subset ${\mathcal{D}}{\subseteq}{\mathcal{E}}$ and a unique map ${{{\varphi}\colon {\mathcal{D}}\to {{\mathbb{F}_{q}}^{\;\times}}}}$ such that $u \in {V_{{\mathcal{D}},{\varphi}}}$ where ${V_{{\mathcal{D}},{\varphi}}}= U_{m}(q) {e_{{\mathcal{D}},{\varphi}}}U_{m}(q)$ and $${e_{{\mathcal{D}},{\varphi}}}= \sum_{(i,j) \in {\mathcal{D}}} {\varphi}(i,j) e_{i,j} \in {{\mathfrak{u}}}_{m}(q).$$ We claim that ${\mathcal{D}}= {\mathcal{E}}(D)$ for a uniquely determined basic subset $D {\subseteq}\Phi$, and ${e_{{\mathcal{D}},{\varphi}}}= {e_{D,\phi}}\in {{\mathfrak{u}}}$ for a uniquely determined map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$; thus, we must have $\phi({\alpha}) = {\varphi}(i,j)$ for all ${\alpha}\in D$ with $(i,j) \in {\mathcal{E}}^{+}({\alpha})$. To see this, we recall that ${V_{{\mathcal{D}},{\varphi}}}$ is the zero set of a finite family of polynomial equations which are defined as follows. 
For each entry $(i,j) \in {\mathcal{E}}$, let ${\mathcal{D}}(i,j)$ denote the subset $${\mathcal{D}}(i,j)={\{ (k,l) \in {\mathcal{D}}\colon i \prec k \preceq -1,\ 1\preceq l \prec j \}}.$$ Let ${\mathcal{D}}(i,j) = \{(i_{1},j_{1}), \ldots, (i_{t},j_{t})\}$ where ${j_{1}}\prec {j_2} \prec \ldots \prec {j_{t}}$, and let ${\sigma}\in S_{t}$ be the permutation such that ${i_{{\sigma}(1)}} \prec {i_{{\sigma}(2)}} \prec \ldots \prec {i_{{\sigma}(t)}}$; as usual, we denote by $S_{t}$ the symmetric group of degree $t$. Then, for any $u \in {{\mathfrak{u}}}_{m}(q)$, we define ${\Delta}_{i,j}^{{\mathcal{D}}}(u)$ to be the determinant $${\Delta}_{i,j}^{{\mathcal{D}}}(u) = \begin{vmatrix} u_{i,j_{1}} & \cdots & u_{i,j_{t}} & u_{i,j} \\ u_{i_{{\sigma}(1)},j_{1}} & \cdots & u_{i_{{\sigma}(1)},j_{t}} & u_{i_{{\sigma}(1)},j} \\ \vdots & & \vdots & \vdots \\ u_{i_{{\sigma}(t)},j_{1}} & \cdots & u_{i_{{\sigma}(t)},j_{t}} & u_{i_{{\sigma}(t)},j} \end{vmatrix}.$$ We note that ${\Delta}_{i,j}^{{\mathcal{D}}}(u) = u_{i,j}$ whenever ${\mathcal{D}}(i,j)=\emptyset$; in particular, if ${\mathcal{D}}$ is empty, then ${\Delta}_{i,j}^{{\mathcal{D}}}(u) = u_{i,j}$ for all $u \in {{\mathfrak{u}}}_{m}(q)$. By [@A3 Proposition 2.3], we know that $$\label{eq:e4} {V_{{\mathcal{D}},{\varphi}}}= {\{ u \in {{\mathfrak{u}}}_{m}(q) \colon {\Delta}_{i,j}^{{\mathcal{D}}}(u) = {\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) {\text{ for all }}(i,j) \in R({\mathcal{D}}) \}}$$ where $R({\mathcal{D}}) = {\mathcal{E}}- S({\mathcal{D}})$ and $$S({\mathcal{D}}) = {\bigcup}_{(i,j) \in {\mathcal{D}}} {\left(}{\{ (i,s) \colon j \prec s \preceq -1 \}} \cup {\{ (r,j) \colon 1 \preceq r \prec i \}} {\right)};$$ we refer to the entries in $S({\mathcal{D}})$ as the [*${\mathcal{D}}$-singular entries*]{}, and to those in $R({\mathcal{D}})$ as the [*${\mathcal{D}}$-regular entries*]{}. 
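For example, if ${\mathcal{D}}(i,j) = \{(i_{1},j_{1})\}$ consists of a single entry, then $t = 1$, ${\sigma}$ is the identity permutation, and $${\Delta}_{i,j}^{{\mathcal{D}}}(u) = \begin{vmatrix} u_{i,j_{1}} & u_{i,j} \\ u_{i_{1},j_{1}} & u_{i_{1},j} \end{vmatrix} = u_{i,j_{1}} u_{i_{1},j} - u_{i,j} u_{i_{1},j_{1}}.$$ In particular, if $u = {e_{{\mathcal{D}},{\varphi}}}$ and $(i,j) \in {\mathcal{D}}$, then $u_{i,j_{1}} = u_{i_{1},j} = 0$ (recall that a basic subset contains at most one entry in each row and in each column), and so ${\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = -{\varphi}(i,j)\, {\varphi}(i_{1},j_{1})$, in agreement with \eqref{eq:e5} below.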
It is easy to show that, for an arbitrary ${\mathcal{D}}$-regular entry $(i,j) \in R({\mathcal{D}})$, we have $$\label{eq:e5} {\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = {\begin{cases}}(-1)^{t} {\operatorname{sgn}}({\sigma}) {\varphi}(i,j) \prod_{s=1}^{t} {\varphi}(i_{s},j_{s}), & \text{if $(i,j) \in {\mathcal{D}}$,} \\ 0, & \text{if $(i,j) \notin {\mathcal{D}}$,} {\end{cases}}$$ where ${\mathcal{D}}(i,j) = \{(i_{1},j_{1}), \ldots, (i_{t},j_{t})\}$, ${j_{1}}\prec {j_2} \prec \ldots \prec {j_{t}}$, and ${\sigma}\in S_{t}$ is such that ${i_{{\sigma}(1)}} \prec {i_{{\sigma}(2)}} \prec \ldots \prec {i_{{\sigma}(t)}}$. We now prove the following auxiliary result; for simplicity of writing, given any basic subset $D {\subseteq}\Phi$ and any entry $(i,j) \in {\mathcal{E}}$, we set $D(i,j) = {\mathcal{D}}(i,j)$ for ${\mathcal{D}}= {\mathcal{E}}(D)$. \[lem:l3\] Let $D {\subseteq}\Phi$ be a basic subset, let $u \in {{\mathfrak{u}}}$, and let $(i,j) \in {\mathcal{E}}^{+}$. Then, $${\Delta}_{i,j}^{{\mathcal{E}}(D)}(u) = (-1)^{r+1} {\Delta}_{-j,-i}^{{\mathcal{E}}(D)}(u)$$ where $$r = {\begin{cases}}|D(i,j)|, & \text{if, either ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n}(q)}$, or ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n+1}(q)}$,} \\ |D(i,j)|, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $j \leq n$,} \\ |D'(i,j)| - 1, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $-j \leq n$,} {\end{cases}}$$ and $D'(i,j) = D(i,j) \cap {\{ (k,l) \in {\mathcal{E}}^{+} \colon l \leq n \}}$. Let $D(i,j) = \{(i_{1},j_{1}), \ldots, (i_{t},j_{t})\}$ where $j_{1}\prec \ldots\prec j_{t} \prec j$, and let ${\sigma}\in S_{t}$ be such that $i \prec i_{{\sigma}(1)}\prec \ldots \prec i_{{\sigma}(t)}$. By the definition of ${\mathcal{E}}(D)$, we clearly have $$D(-j,-i) = \{(-j_{1},-i_{1}), \ldots, (-j_{t},-i_{t})\}$$ where $-j \prec -j_{t}\prec \ldots\prec -j_{1}$ and $-i_{{\sigma}(t)}\prec \ldots\prec -i_{{\sigma}(1)} \prec -i$.
Thus, $${\Delta}_{-j,-i}^{{\mathcal{E}}(D)}(u) = \begin{vmatrix} u_{-j,-i_{{\sigma}(t)}} & \cdots & u_{-j,-i_{{\sigma}(1)}} & u_{-j,-i} \\ u_{-j_{t},-i_{{\sigma}(t)}} & \cdots & u_{-j_{t},-i_{{\sigma}(1)}} & u_{-j_{t},-i} \\ \vdots & & \vdots & \vdots \\ u_{-j_{1},-i_{{\sigma}(t)}} & \cdots & u_{-j_{1},-i_{{\sigma}(1)}} & u_{-j_{1},-i} \end{vmatrix}.$$ Firstly, we assume that, either ${{\mathfrak{u}}}\not\leq {\mathfrak{sp}_{2n}(q)}$, or ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $j \leq n$. Let $C_{j_{1}}$, $\ldots$, $C_{j_{t}}$, $C_{j}$ denote the column vectors of ${\Delta}^{{\mathcal{E}}(D)}_{i,j}(u)$, and $L_{-j}$, $L_{-j_{t}}$, $\ldots$, $L_{-j_{1}}$ the row vectors of ${\Delta}^{{\mathcal{E}}(D)}_{-j,-i}(u)$. For any $k \in \{{ j_{1}, \ldots, j_{t}}, j\}$, we have $L_{-k} = - C_{k}^{\;T} J$ where $J = J_{t+1}$ is the $(t+1) {\times}(t+1)$ matrix with $1$’s along the anti-diagonal and $0$’s elsewhere, and thus we deduce that $$\begin{aligned} {\Delta}_{-j,-i}^{{\mathcal{E}}(D)}(u) &= (-1)^{t+1} \begin{vmatrix} u_{i_{{\sigma}(t)}, j} & \cdots & u_{i_{{\sigma}(1)}, j} & u_{i, j} \\ u_{i_{{\sigma}(t)}, j_{t}} & \cdots & u_{i_{{\sigma}(1)}, j_{t}} & u_{i, j_{t}} \\ \vdots & & \vdots & \vdots \\ u_{i_{{\sigma}(t)}, j_{1}} & \cdots & u_{i_{{\sigma}(1)}, j_{1}} & u_{i, j_{1}} \end{vmatrix} = (-1)^{t+1} \begin{vmatrix} u_{i_{{\sigma}(t)}, j} & u_{i_{{\sigma}(t)}, j_{t}} & \cdots & u_{i_{{\sigma}(t)}, j_{1}} \\ \vdots & \vdots & & \vdots \\ u_{i_{{\sigma}(1)}, j} & u_{i_{{\sigma}(1)}, j_{t}} & \cdots & u_{i_{{\sigma}(1)}, j_{1}} \\ u_{i, j} & u_{i, j_{t}} & \cdots & u_{i, j_{1}} \end{vmatrix} \\ &= (-1)^{t+1} \det(J)^{2} \begin{vmatrix} u_{i,j_{1}} & \cdots & u_{i,j_{t}} & u_{i,j} \\ u_{i_{{\sigma}(1)},j_{1}} & \cdots & u_{i_{{\sigma}(1)},j_{t}} & u_{i_{{\sigma}(1)},j} \\ \vdots & & \vdots & \vdots \\ u_{i_{{\sigma}(t)},j_{1}} & \cdots & u_{i_{{\sigma}(t)},j_{t}} & u_{i_{{\sigma}(t)},j} \end{vmatrix} = (-1)^{t+1} 
{\Delta}^{{\mathcal{E}}(D)}_{i,j}(u),\end{aligned}$$ as required. On the other hand, suppose that ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $j = -k$ for some $1 \leq k \leq n$. By setting $i_{{\sigma}(0)} = i$ and $j_{t+1} = j$, let $0 \leq s \leq t$ and $1 \leq s' \leq t$ be such that $i_{{\sigma}(s)} \preceq n \prec i_{{\sigma}(s+1)}$ and $j_{s'} \preceq n \prec j_{s'+1}$. Since $j_{1}\prec j_{2} \prec \ldots \prec j_{s'} \preceq n\prec i_{{\sigma}(s+1)} \prec \ldots \prec i_{{\sigma}(t)}$, we have $u_{-j_{b},-i_{{\sigma}(a)}} = u_{i_{{\sigma}(a)},j_{b}} = 0$ for all $s < a \leq t$ and all $1 \leq b \leq s'$, and thus $${\Delta}_{-j,-i}^{{\mathcal{E}}(D)}(u) = \begin{vmatrix} u_{-j,-i_{{\sigma}(t)}} & \cdots & u_{-j,-i_{{\sigma}(s+1)}}& u_{-j,-i_{{\sigma}(s)}} & \cdots & u_{-j,-i_{{\sigma}(1)}} & u_{-j,-i} \\ u_{-j_{t},-i_{{\sigma}(t)}} & \cdots & u_{-j_{t},-i_{{\sigma}(s+1)}} & u_{-j_{t},-i_{{\sigma}(s)}} & \cdots & u_{-j_{t},-i_{{\sigma}(1)}} & u_{-j_{t},-i} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots \\ u_{-j_{s'+1},-i_{{\sigma}(t)}} & \cdots & u_{-j_{s'+1},-i_{{\sigma}(s+1)}} & u_{-j_{s'+1},-i_{{\sigma}(s)}} & \cdots & u_{-j_{s'+1},-i_{{\sigma}(1)}} & u_{-j_{s'+1},-i} \\ 0 & \cdots & 0 & u_{-j_{s'},-i_{{\sigma}(s)}} & \cdots & u_{-j_{s'},-i_{{\sigma}(1)}} & u_{-j_{s'},-i} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & u_{-j_{1},-i_{{\sigma}(s)}} & \cdots & u_{-j_{1},-i_{{\sigma}(1)}} & u_{-j_{1},-i} \end{vmatrix}.$$ Since ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$, we deduce that $$\begin{aligned} {\Delta}_{-j,-i}^{{\mathcal{E}}(D)}(u) &= \begin{vmatrix} -u_{i_{{\sigma}(t)},j} & \cdots & -u_{i_{{\sigma}(s+1)},j} & u_{i_{{\sigma}(s)},j} & \cdots & u_{i_{{\sigma}(1)},j} & u_{i,j} \\ -u_{i_{{\sigma}(t)},j_{t}} & \cdots & -u_{i_{{\sigma}(s+1)},j_{t}} & u_{i_{{\sigma}(s)},j_{t}} & \cdots & u_{i_{{\sigma}(1)},j_{t}} & u_{i,j_{t}} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots \\ -u_{i_{{\sigma}(t)},j_{s'+1}} & \cdots &
-u_{i_{{\sigma}(s+1)},j_{s'+1}} & u_{i_{{\sigma}(s)},j_{s'+1}} & \cdots & u_{i_{{\sigma}(1)},j_{s'+1}} & u_{i,j_{s'+1}} \\ 0 & \cdots & 0 & -u_{i_{{\sigma}(s)},j_{s'}} & \cdots & -u_{i_{{\sigma}(1)},j_{s'}} & -u_{i,j_{s'}} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & -u_{i_{{\sigma}(s)},j_{1}} & \cdots & -u_{i_{{\sigma}(1)},j_{1}} & -u_{i,j_{1}} \end{vmatrix} \\ &= (-1)^{t-s+s'} \begin{vmatrix} u_{i_{{\sigma}(t)},j} & \cdots & u_{i_{{\sigma}(s+1)},j} & u_{i_{{\sigma}(s)},j} & \cdots & u_{i_{{\sigma}(1)},j} & u_{i,j} \\ u_{i_{{\sigma}(t)},j_{t}} & \cdots & u_{i_{{\sigma}(s+1)},j_{t}} & u_{i_{{\sigma}(s)},j_{t}} & \cdots & u_{i_{{\sigma}(1)},j_{t}} & u_{i,j_{t}} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots \\ u_{i_{{\sigma}(t)},j_{s'+1}} & \cdots & u_{i_{{\sigma}(s+1)},j_{s'+1}} & u_{i_{{\sigma}(s)},j_{s'+1}} & \cdots & u_{i_{{\sigma}(1)},j_{s'+1}} & u_{i,j_{s'+1}} \\ 0 & \cdots & 0 & u_{i_{{\sigma}(s)},j_{s'}} & \cdots & u_{i_{{\sigma}(1)},j_{s'}} & u_{i,j_{s'}} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & u_{i_{{\sigma}(s)},j_{1}} & \cdots & u_{i_{{\sigma}(1)},j_{1}} & u_{i,j_{1}} \end{vmatrix}.\end{aligned}$$ Arguing as above (transposing and conjugating by the matrix $J = J_{t+1}$), we conclude that $${\Delta}_{-j,-i}^{{\mathcal{E}}(D)}(u) = (-1)^{t-s+s'} {\Delta}_{i,j}^{{\mathcal{E}}(D)}(u),$$ and the result follows because $t-s+s' = |D'(i,j)|$. We are now able to prove the following result. \[prop:p1\] Let $u \in {{\mathfrak{u}}}$ be arbitrary, and $({\mathcal{D}},{\varphi})$ be the (unique) basic pair for $U_{m}(q)$ such that $u \in {V_{{\mathcal{D}},{\varphi}}}$. Then, ${e_{{\mathcal{D}},{\varphi}}}\in {{\mathfrak{u}}}$; in particular, there exists a unique basic pair $(D,\phi)$ for $U$ such that ${\mathcal{D}}= {\mathcal{E}}(D)$ and ${e_{{\mathcal{D}},{\varphi}}}= {e_{D,\phi}}$.
It is enough to show that, for all $(i,j) \in {\mathcal{D}}\cap {\mathcal{E}}^{+}$, we have $${\varphi}(i,j) = (-1)^{{\varepsilon}_{j}} {\varphi}(-j,-i)$$ where $${\varepsilon}_{j} = {\begin{cases}}1, & \text{if, either ${{\mathfrak{u}}}\not\leq {\mathfrak{sp}_{2n}(q)}$, or ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $j \leq n$,} \\ 0, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $-j \leq n$.} {\end{cases}}$$ To prove this, we proceed by induction on $|{\mathcal{D}}|$. We consider the total order $\preceq$ on the set of entries ${\mathcal{E}}$ defined as follows: for all $(i,j),(k,l) \in {\mathcal{E}}$, we set $$\label{eq:ord} (i,j)\prec (k,l) \iff \text{either $j\prec l$, or $j=l$ and $k\prec i$.}$$ Let $(i,j) \in {\mathcal{D}}\cap {\mathcal{E}}^{+}$ be the smallest entry satisfying ${\varphi}(i,j) \neq (-1)^{{\varepsilon}_{j}} {\varphi}(-j,-i)$ (if it exists), let $${\mathcal{D}}(i,j)=\{(i_{1},j_{1}),\ldots,(i_{t},j_{t})\}$$ with $j_{1}\prec\ldots\prec j_{t}$, and let ${\sigma}\in S_{t}$ be such that $i_{{\sigma}(1)} \prec \ldots \prec i_{{\sigma}(t)}$. Since $(r,s) \prec (i,j)$ for all $(r,s) \in {\mathcal{D}}(i,j) \cap {\mathcal{E}}^{+}$, we have ${\varphi}(r,s) = (-1)^{{\varepsilon}_{s}} {\varphi}(-s,-r)$ for all such $(r,s)$. From this, it is easy to conclude that $${\mathcal{D}}(-j,-i) = \{(-j_{1},-i_{1}),\ldots,(-j_{t},-i_{t})\}.$$ Let ${\mathcal{D}}_{0} = {\mathcal{D}}(i,j) \cup {\mathcal{D}}(-j,-i)$, and ${{\varphi}_{0} \colon {\mathcal{D}}_{0} \to {{\mathbb{F}_{q}}^{\;\times}}}$ be the restriction of ${\varphi}$ to ${\mathcal{D}}_{0}$. Then, by induction, the element $$e_{{\mathcal{D}}_{0},{\varphi}_{0}} = \sum_{(r,s) \in {\mathcal{D}}_{0}} {\varphi}(r,s) e_{r,s}$$ lies in ${{\mathfrak{u}}}$.
On the other hand, let $c_{-j,-i} \in {\mathbb{F}_{q}}$ denote the $(-j,-i)$th coefficient of ${e_{{\mathcal{D}},{\varphi}}}$; hence, $$c_{-j,-i} = {\begin{cases}}{\varphi}(-j,-i), & \text{if $(-j,-i) \in {\mathcal{D}}$,} \\ 0, & \text{if $(-j,-i) \notin {\mathcal{D}}$.} {\end{cases}}$$ By \eqref{eq:e5}, we have $${\Delta}_{-j,-i}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = (-1)^{t} {\operatorname{sgn}}({\sigma}) c_{-j,-i} \prod_{s=1}^{t} {\varphi}(-j_{s},-i_{s}).$$ It is easy to show that $$\prod_{s=1}^{t} {\varphi}(-j_{s},-i_{s})=(-1)^{r'}\prod_{s=1}^{t} {\varphi}(i_{s},j_{s})$$ where $$r' = {\begin{cases}}|{\mathcal{D}}(i,j)|, & \text{if, either ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n}(q)}$, or ${{\mathfrak{u}}}\leq {\mathfrak{o}_{2n+1}(q)}$,} \\ |{\mathcal{D}}'(i,j)|, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$,} {\end{cases}}$$ and ${\mathcal{D}}'(i,j) = {\mathcal{D}}(i,j) \cap {\{ (k,l) \in {\mathcal{E}}^{+} \colon l \leq n \}}$. Thus, we deduce that $${\Delta}_{-j,-i}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = (-1)^{r'} c_{-j,-i}\, {\varphi}(i,j){^{-1}}{\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}).$$ Since $u \in {V_{{\mathcal{D}},{\varphi}}}$, we have $${\Delta}_{i,j}^{{\mathcal{D}}}(u) = {\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = {\Delta}_{i,j}^{{\mathcal{D}}_{0}}({e_{{\mathcal{D}},{\varphi}}})$$ and $${\Delta}_{-j,-i}^{{\mathcal{D}}}(u) = {\Delta}_{-j,-i}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = {\Delta}_{-j,-i}^{{\mathcal{D}}_{0}}({e_{{\mathcal{D}},{\varphi}}});$$ we note that $(-j,-i) \in R({\mathcal{D}})$ (by induction and by the choice of $(i,j)$).
By the previous lemma, we conclude that $${\Delta}_{i,j}^{{\mathcal{D}}}(u) = (-1)^{r+1} {\Delta}_{-j,-i}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}})$$ where $$r = {\begin{cases}}r'-1, & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $-j \leq n$,} \\ r', & \text{otherwise.} {\end{cases}}$$ It follows that $$(-1)^{r'} c_{-j,-i} {\varphi}(i,j){^{-1}}{\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}) = (-1)^{r+1} {\Delta}_{i,j}^{{\mathcal{D}}}({e_{{\mathcal{D}},{\varphi}}}).$$ Since ${\Delta}_{i,j}^{{\mathcal{D}}}(u) \neq 0$ (because $(i,j)\in {\mathcal{D}}$), we obtain $$c_{-j,-i} = {\begin{cases}}{\varphi}(i,j), & \text{if ${{\mathfrak{u}}}\leq {\mathfrak{sp}_{2n}(q)}$ and $-j \leq n$,} \\ -{\varphi}(i,j), & \text{otherwise.} {\end{cases}}$$ It follows that $(-j,-i) \in {\mathcal{D}}$ and that $c_{-j,-i} = {\varphi}(-j,-i)$. Moreover, we note that, in the orthogonal case, if $j = -i$, we obtain ${\varphi}(i,-i) = -{\varphi}(i,-i)$, hence $(i,-i) \notin {\mathcal{D}}$. The above contradicts the minimal choice of $(i,j)$, and thus we conclude that $${\varphi}(i,j) = (-1)^{{\varepsilon}_{j}} {\varphi}(-j,-i)$$ for all $(i,j) \in {\mathcal{D}}\cap {\mathcal{E}}^{+}$. The proof is complete. As observed above, this concludes the proof of the following result; we recall that ${O_{D,\phi}}= {{\mathfrak{u}}}\cap {V_{D,\phi}}$ (by the definition). \[thm:t2\] Let $u \in {{\mathfrak{u}}}$ be arbitrary. Then, there exists a unique basic subset $D {\subseteq}\Phi$ and a unique map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$ such that $u \in {O_{D,\phi}}$. Thus, ${{\mathfrak{u}}}$ is the disjoint union $${{\mathfrak{u}}}= {\bigcup}_{D,\phi} {O_{D,\phi}}$$ where the union runs over all basic pairs $(D,\phi)$ for $U$. Moreover, we have $${O_{D,\phi}}= {\{ a \in {{\mathfrak{u}}}\colon {\Delta}_{i,j}^{D}(a) = {\Delta}_{i,j}^{D}({e_{D,\phi}}) {\text{ for all }}(i,j) \in R(D) \}}$$ for every basic pair $(D,\phi)$ for $U$.
As a consequence of [Theorem \[thm:t2\]]{} and [Lemma \[lem:l2\]]{}, we obtain the following main theorem. \[thm:t3\] Let $z \in U$ be arbitrary. Then, there exists a unique basic subset $D {\subseteq}\Phi$ and a unique map ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$ such that $z \in {K_{D,\phi}}$. Thus, $U$ is the disjoint union of all its superclasses; that is, $$U = {\bigcup}_{D,\phi} {K_{D,\phi}}$$ where the union runs over all basic pairs $(D,\phi)$ for $U$. Superclass functions {#sec:scf} ==================== In this section, we prove that every supercharacter is a “superclass function” of $U$; by definition, a function ${\eta \colon U \to {\mathbb{C}}}$ is said to be a [*superclass function*]{} if it takes a constant value on each superclass of $U$. In fact, we shall prove the following result (cf. [@AN2 Theorem 3.1]). \[thm:t4\] Every supercharacter of $U$ is a superclass function. Moreover, every superclass function on $U$ is a linear combination of supercharacters; hence, the supercharacters of $U$ form a basis for the complex vector space ${\operatorname{scf}}(U)$ consisting of all superclass functions on $U$. Since every supercharacter is a product of elementary characters (by definition), it is enough to show that every elementary character of $U$ takes a constant value on each superclass of $U$. In fact, the theorem above will follow from the following result. (Henceforth, for each $z \in U$, we denote by $a_{z}$ the element of ${{\mathfrak{u}}}$ given by [Lemma \[lem:l2\]]{}.) \[prop:p2\] Let ${\alpha}\in \Phi$ and $r \in {{\mathbb{F}_{q}}^{\;\times}}$ be arbitrary. Let $(D,\phi)$ be a basic pair for $U$, and denote by ${z_{D,\phi}}$ the unique element $z \in U$ with $a_{z} = {e_{D,\phi}}$. Then, $${\xi_{\alpha,r}}(z) = {\xi_{\alpha,r}}({z_{D,\phi}})$$ for all $z \in {K_{D,\phi}}$. Let $(i,j) \in {\mathcal{E}}^{+}({\alpha})$; hence, $1 \leq i \leq n$ and $i \prec j \preceq -i$.
In the first part of the proof, we shall assume that $j \neq -i$ (in the case where $U \leq {Sp_{2n}(q)}$). Let ${{\zeta}_{i,j,r}}$ be the elementary character of $U_{m}(q)$ associated with $(i,j)$ and $r$. We recall its definition (see [@A1 Lemma 3]). We consider the subgroup $U_{i,j} = {\{ x \in U_{m}(q) \colon x_{i,k} = 0 {\text{ for all }}i \prec k \prec j \}}$ of $U_{m}(q)$, and the linear character ${{\mu_{i,j,r}}\colon U_{i,j} \to {{\mathbb{C}}^{{\times}}}}$ defined by ${\mu_{i,j,r}}(x) = {\vartheta}(rx_{i,j})$ for all $x \in U_{i,j}$. Then, ${{\zeta}_{i,j,r}}$ is defined to be the induced character ${{\zeta}_{i,j,r}}= ({\mu_{i,j,r}})^{U_{m}(q)}$. By [@AN2 Proposition 3.2], we have ${\xi_{\alpha,r}}= ({{\zeta}_{i,j,r}})_{U}$, and so ${\xi_{\alpha,r}}(z) = {{\zeta}_{i,j,r}}(z)$ for all $z \in U$. By [@A3 Proposition 5.1] (see also [@DI Theorem 5.8], or [@ADS Theorem 2.2]), we have ${{\zeta}_{i,j,r}}(x) = {{\zeta}_{i,j,r}}(1+{e_{D,\phi}})$ for all $x \in 1+{V_{{\mathcal{D}},{\varphi}}}$. In particular, we deduce that $${\xi_{\alpha,r}}(z) = {{\zeta}_{i,j,r}}(z) = {{\zeta}_{i,j,r}}(1+{e_{D,\phi}}) = {\xi_{\alpha,r}}({z_{D,\phi}})$$ for all $z \in {K_{D,\phi}}$; we recall that $z \in {K_{D,\phi}}$ if and only if $a_{z} \in {V_{D,\phi}}$ (by [Lemma \[lem:l2\]]{}). In order to complete the proof of [Proposition \[prop:p2\]]{}, it remains to consider the case where $U \leq {Sp_{2n}(q)}$ and ${\alpha}= 2{\varepsilon}_{i}$ for some $1 \leq i \leq n$. In what follows, we will always assume that this is the case; moreover, the basic pair $(D,\phi)$ will be kept fixed. We prove some elementary auxiliary lemmas; we mention that similar results are valid in the general case (a proof can be found in the second author’s PhD thesis [@N]). The proof of the first lemma is straightforward.
\[lem:l4\] Let $u \in {O_{D,\phi}}$ be arbitrary, and $(k,l) \in {\mathcal{E}}(D)$ be the smallest entry of ${\mathcal{E}}(D)$ (with respect to the total order $\preceq$ on ${\mathcal{E}}$ as defined in \eqref{eq:ord}); hence, $1 \leq k \leq n$ and $k \prec l \preceq -k$. Then, there exists $x \in U$ such that $v = xux{^{-1}}\in {O_{D,\phi}}$ satisfies $v_{k',l} = v_{k,l'} = 0$ for all $1 \preceq k', l' \preceq -1$ with $k' \neq k$ and $l' \neq l$. As a consequence, we obtain the following result. \[cor:c1\] Let ${\beta}\in \Phi$ and $s \in {{\mathbb{F}_{q}}^{\;\times}}$ be arbitrary. Then, $O_{{\beta},s} = {\{ x(se_{{\beta}})x{^{-1}}\colon x \in U \}}$ is the adjoint $U$-orbit which contains $se_{{\beta}} \in {{\mathfrak{u}}}$. Moreover, $K_{{\beta},s} = {\{ xz_{{\beta},s}x{^{-1}}\colon x \in U \}}$ is the conjugacy class which contains the element $z_{{\beta},s} = 1 + se_{{\beta}} \in U$. (In particular, we have ${\xi_{\alpha,r}}(z) = {\xi_{\alpha,r}}(z_{{\beta},s})$ for all $z \in K_{{\beta},s}$.) The first assertion is an immediate consequence of the previous lemma. For the second, we note that $xz_{{\beta},s}x{^{-1}}= 1+x(se_{{\beta}})x{^{-1}}\in U \cap (1+V_{{\beta},s}) = K_{{\beta},s}$. On the other hand, if $z \in K_{{\beta},s}$, then $a_{z} \in O_{{\beta},s}$ (by [Lemma \[lem:l2\]]{}), and thus the mapping $z \mapsto a_{z}$ defines a bijection from $K_{{\beta},s}$ to $O_{{\beta},s}$. Therefore, $|K_{{\beta},s}| = |O_{{\beta},s}| = |{\{ 1+x(se_{{\beta}})x{^{-1}}\colon x \in U \}}|$, and the result follows. We observe that, in the notation of the corollary, we have $z_{{\beta},s} = {z_{D,\phi}}$ for $D = \{{\beta}\}$ and ${{\phi \colon D \to {{\mathbb{F}_{q}}^{\;\times}}}}$ defined by $\phi({\beta}) = s$; hence, [Proposition \[prop:p2\]]{} is true whenever the basic subset $D$ has a unique element.
Therefore, we will assume that $|D| > 1$, and that ${\xi_{\alpha,r}}(z) = {\xi_{\alpha,r}}(z_{D',\phi'})$ for all $z \in K_{D',\phi'}$ and every basic pair $(D',\phi')$ with $|D'| < |D|$. [Proposition \[prop:p2\]]{} will then follow by induction. However, we need a concrete formula for the values of an elementary character on any superclass. As usual, we denote by ${{\mathfrak{u}}}^{\ast}$ the dual vector space of ${{\mathfrak{u}}}$, and let ${\{ e^{\ast}_{{\alpha}} \colon {\alpha}\in \Phi \}}$ be the ${\mathbb{F}_{q}}$-basis of ${{\mathfrak{u}}}^{\ast}$ dual to the basis ${\{ e_{{\alpha}} \colon {\alpha}\in \Phi \}}$ of ${{\mathfrak{u}}}$; hence, $e^{\ast}_{{\alpha}}(e_{{\beta}}) = {\delta}_{{\alpha},{\beta}}$ for all ${\alpha}, {\beta}\in \Phi$. For each $f \in {{\mathfrak{u}}}^{\ast}$, we define $$u(f) = \sum_{{\beta}\in \Phi} u_{{\beta}} e_{{\beta}} \in {{\mathfrak{u}}}_{2n}(q)$$ where $$u_{{\beta}} = \begin{cases} \frac{1}{2}\, f(e_{{\beta}}), & \text{if ${\beta}= {\varepsilon}_{k} \pm {\varepsilon}_{l}$ for $1 \leq k < l \leq n$,} \\ f(e_{{\beta}}), & \text{if ${\beta}= 2{\varepsilon}_{k}$ for $1 \leq k \leq n$.} \end{cases}$$ It is easy to see that $f(v) = {\operatorname{Tr}}(u(f)^{T} v)$ for all $v \in {{\mathfrak{u}}}$, and that the mapping $f \mapsto u(f)$ defines a vector space isomorphism from ${{\mathfrak{u}}}^{\ast}$ to ${{\mathfrak{u}}}$. Finally, we define the linear function ${\hat{f}}\in {{\mathfrak{u}}}_{2n}(q)^{\ast}$ by $${\hat{f}}(v) = {\operatorname{Tr}}(u(f)^{T} v)$$ for all $v \in {{\mathfrak{u}}}_{2n}(q)$, and set $${O_{\alpha,r}}^{\ast} = {\{ f \in {{\mathfrak{u}}}^{\ast} \colon {\hat{f}}\in U_{2n}(q)(r e^{\ast}_{i,-i}) U_{2n}(q) \}}$$ where $(xgy)(a) = g(x{^{-1}}a y{^{-1}})$ for all $x,y \in U_{2n}(q)$, $g \in {{\mathfrak{u}}}_{2n}(q)^{\ast}$ and $a \in {{\mathfrak{u}}}_{2n}(q)$.
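The normalising factor $\frac{1}{2}$ in the definition of $u(f)$ can be checked directly on basis elements (a direct verification, using the shape of the root vectors in the present conventions): for ${\beta}= 2{\varepsilon}_{k}$, the element $e_{{\beta}}$ has a single nonzero matrix entry, at the position $(k,-k)$, so that $${\operatorname{Tr}}(u(f)^{T} e_{{\beta}}) = u(f)_{k,-k} = u_{{\beta}} = f(e_{{\beta}}),$$ while for ${\beta}= {\varepsilon}_{k} \pm {\varepsilon}_{l}$ with $1 \leq k < l \leq n$, the element $e_{{\beta}}$ has two nonzero matrix entries, one at each position of ${\mathcal{E}}({\beta})$, and each of them contributes $u_{{\beta}} = \frac{1}{2}\, f(e_{{\beta}})$ to the trace (the signs of the two entries occur squared), so that again ${\operatorname{Tr}}(u(f)^{T} e_{{\beta}}) = f(e_{{\beta}})$. By linearity, $f(v) = {\operatorname{Tr}}(u(f)^{T} v)$ for all $v \in {{\mathfrak{u}}}$.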
By [@AN2 Proposition 5.2], we know that $${\xi_{\alpha,r}}(z) = \frac{{\xi_{\alpha,r}}(1)}{|{O_{\alpha,r}}^{\ast}|} \sum_{f \in {O_{\alpha,r}}^{\ast}} {\vartheta}_{f}(a_{z})$$ for all $z \in U$; here, given any ${\mathbb{F}_{q}}$-vector space $V$ and any linear map $f \in V^{\ast}$, we denote by ${\vartheta}_{f}$ the composite map ${{\vartheta}\circ f \colon V \to {{\mathbb{C}}^{{\times}}}}$. It is straightforward to check that ${\vartheta}_{f}$ is a linear character of the additive group $V^{+}$ and that ${\operatorname{Irr}}(V^{+}) = {\{ {\vartheta}_{f} \colon f \in V^{\ast} \}}$. For our purposes, it is convenient to describe the subset ${O_{\alpha,r}}^{\ast} {\subseteq}{{\mathfrak{u}}}^{\ast}$ (and the elementary character ${\xi_{\alpha,r}}$) as follows. By [@AN1 Corollary 5.3], we have $f \in {O_{\alpha,r}}^{\ast}$ if and only if the following holds: 1. ${\hat{f}}(e_{a,-b}) = 0$ for all $(a,-b) \in {\mathcal{E}}$ with $1 \leq a < i$ or $1 \leq b < i$; 2. ${\hat{f}}(e_{i,-i}) = r$; 3. $\begin{vmatrix} {\hat{f}}(e_{i,b}) & {\hat{f}}(e_{i,-i}) \\ {\hat{f}}(e_{a,b}) & {\hat{f}}(e_{a,-i}) \end{vmatrix} = 0$ for all $(a,b) \in {\mathcal{E}}$ with $i \prec a \prec b \prec -i$. On the other hand, for $u = u(f)$, we have $${\hat{f}}(e_{a,b}) = {\operatorname{Tr}}(u^{T} e_{a,b}) = {\operatorname{Tr}}(e_{a,b}^{\;T} u) = e_{a,b}^{\ast}(u) = u_{a,b}$$ for all $(a,b) \in {\mathcal{E}}$.
Therefore, for any $(a,b) \in {\mathcal{E}}^{+}$, we deduce that $${\hat{f}}(e_{-b,-a}) = {\begin{cases}}{\hat{f}}(e_{a,b}), & \text{if $(a,b) \in {\mathcal{E}}^{+}({\varepsilon}_{a}-{\varepsilon}_{b})$,} \\ -{\hat{f}}(e_{a,b}), & \text{if $(a,b) \in {\mathcal{E}}^{+}({\varepsilon}_{a}+{\varepsilon}_{-b})$.} {\end{cases}}$$ In particular, for $i \prec a \prec -i$, we get $${\hat{f}}(e_{a,-i}) = {\begin{cases}}{\hat{f}}(e_{i,-a}), & \text{if $-n \preceq a \prec -i$,} \\ -{\hat{f}}(e_{i,-a}), & \text{if $i \prec a \preceq n$,} {\end{cases}}$$ and so the elements of ${O_{\alpha,r}}^{\ast}$ can be parametrized by the set ${\mathcal{C}}$ consisting of all functions ${c \colon {\{ a \colon i \prec a \prec -i \}} \to {\mathbb{F}_{q}}}$. In fact, for each $c \in {\mathcal{C}}$, there exists a unique linear function $f_{c} \in {{\mathfrak{u}}}^{\ast}$ such that ${\hat{f}}_{c} \in {{\mathfrak{u}}}_{2n}(q)^{\ast}$ satisfies ${\hat{f}}_{c}(e_{i,a}) = c_{a}$ for all $i \prec a \prec -i$; here, we write $c_{a} = c(a)$ for all $i \prec a \prec -i$. This concludes the proof of the following result. \[lem:l5\] Let ${\alpha}= 2{\varepsilon}_{i} \in \Phi$ for some $1 \leq i \leq n$, and $r \in {{\mathbb{F}_{q}}^{\;\times}}$. Then, in the notation as above, we have $${O_{\alpha,r}}^{\ast} = {\{ f_{c} \colon c \in {\mathcal{C}}\}}.$$ Moreover, the mapping $c \mapsto f_{c}$ defines a bijection from ${\mathcal{C}}$ to ${O_{\alpha,r}}^{\ast}$, and $${\xi_{\alpha,r}}(z) = \frac{1}{q^{n-i}} \sum_{c \in {\mathcal{C}}} {\vartheta}_{f_{c}}(a_{z})$$ for all $z \in U$. Henceforth, we set ${\vartheta}_{c} = {\vartheta}_{f_{c}}$ for all $c \in {\mathcal{C}}$, and consider the function ${{\vartheta}\colon {{\mathfrak{u}}}\to {\mathbb{C}}}$ defined by $${\vartheta}(u) = \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(u)$$ for all $u \in {{\mathfrak{u}}}$.
Since ${O_{\alpha,r}}^{\ast} {\subseteq}{{\mathfrak{u}}}^{\ast}$ is $U$-invariant (for the natural action given by conjugation), we clearly have ${\vartheta}(xux{^{-1}}) = {\vartheta}(u)$ for all $u \in {{\mathfrak{u}}}$. Let $u \in {O_{D,\phi}}$ be arbitrary, and $(k,l) \in {\mathcal{E}}(D)$ be the smallest entry of ${\mathcal{E}}(D)$ (with respect to the order $\preceq$ defined in \eqref{eq:ord}); hence, we must have $1 \leq k \leq n$ and $k \prec l \preceq -k$. Let ${\beta}\in \Phi$ be such that $(k,l) \in {\mathcal{E}}({\beta})$ (hence, ${\beta}\in D$), and $s = \phi({\beta})$. By [Lemma \[lem:l4\]]{}, there exists $x \in U$ such that $v = xux{^{-1}}$ satisfies $v_{a,l} = v_{k,b} = 0$ for all $1 \preceq a, b \preceq -1$ with $a \neq k$ and $b \neq l$, and thus $${\vartheta}(u) = {\vartheta}(v) = \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(se_{{\beta}} + w) = \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(s e_{{\beta}}) {\vartheta}_{c}(w)$$ where $w = v - se_{{\beta}} \in {{\mathfrak{u}}}$. Next, we consider the relative positions of the entries $(k,l)$ and $(i,-i)$. There are four distinct cases.
[*Case 1*]{}: $k = i$ and $l \prec -i$. In this case, $f_{c}(e_{{\beta}}) = {\hat{f}}_{c}(e_{i,l} \pm e_{-l,-i}) = 2{\hat{f}}_{c}(e_{i,l}) = 2c_{l}$ for all $c \in {\mathcal{C}}$, and so $${\vartheta}(u) = \sum_{c \in {\mathcal{C}}} {\vartheta}(2c_{l}s) {\vartheta}_{c}(w).$$ On the other hand, we have $w_{{\gamma}} = w_{a,b}$ whenever ${\gamma}\in \Phi$ and $(a,b) \in {\mathcal{E}}^{+}({\gamma})$, and thus $$\label{eq:e6} f_{c}(w) = \sum_{{\gamma}\in \Phi} w_{{\gamma}} f_{c}(e_{{\gamma}}) = rw_{i,-i} + \sum_{i \prec b \prec -i} c_{b} w_{i,b} + \sum_{i \prec b \prec -i} \sum_{i \prec a \preceq -b} r{^{-1}}c_{-a} c_{b} w_{a,b}\,.$$ In fact, for all $c \in {\mathcal{C}}$ and all ${\beta}\in \Phi$, we have $$f_{c}(e_{{\beta}}) = {\hat{f}}_{c}(e_{{\beta}}) = {\begin{cases}}r, & \text{if ${\beta}= 2{\varepsilon}_{i}$,} \\ 2c_{b}, & \text{if $(i,b) \in {\mathcal{E}}^{+}({\beta})$ and $b \neq -i$,} \\ 2 r{^{-1}}c_{-a} c_{b}, & \text{if $(a,b) \in {\mathcal{E}}^{+}({\beta})$ for $i < a$,} \\ 0, & \text{otherwise.} {\end{cases}}$$ Since $w_{a,l} = v_{a,l} = 0$ for all $i \prec a \preceq -i$, the coordinate $c_{l}$ of any $c \in {\mathcal{C}}$ does not occur in the expression of $f_{c}(w)$ given by \eqref{eq:e6}, and so $${\vartheta}(u) = {\left(}\sum_{t \in {\mathbb{F}_{q}}} {\vartheta}(2ts) {\right)}{\left(}\sum_{c \in {\mathcal{C}}_{1}} {\vartheta}_{c}(w) {\right)}= {\left(}\sum_{t \in {\mathbb{F}_{q}}} {\vartheta}_{t}(2s) {\right)}{\left(}\sum_{c \in {\mathcal{C}}_{1}} {\vartheta}_{c}(w) {\right)}$$ where ${\mathcal{C}}_{1} = {\{ c \in {\mathcal{C}}\colon c_{l} = 0 \}}$, and ${\vartheta}_{t}$, for $t \in {\mathbb{F}_{q}}$, denotes the character of ${\mathbb{F}_{q}}^{\;+}$ defined by ${\vartheta}_{t}(a) = {\vartheta}(ta)$ for all $a \in {\mathbb{F}_{q}}$. Since $\sum_{t \in {\mathbb{F}_{q}}} {\vartheta}_{t}$ is the regular character of ${\mathbb{F}_{q}}^{\;+}$, we conclude that ${\vartheta}(u) = 0$.
Furthermore, we note that $$\sum_{c \in {\mathcal{C}}_{1}} {\vartheta}_{c}(w) = q{^{-1}}\sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w)$$ (for the same reason as above). [*Case 2*]{}: $k = i$ and $l = -i$. We have $f_{c}(e_{{\beta}}) = f_{c}(e_{i,-i}) = r$ for all $c \in {\mathcal{C}}$, and so $${\vartheta}(u) = {\vartheta}(rs) \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w).$$ [*Case 3*]{}: $k \prec i$. We have $f_{c}(e_{{\beta}}) = 0$ for all $c \in {\mathcal{C}}$, and so $${\vartheta}(u) = \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w).$$ [*Case 4*]{}: $i \prec k$. In this case, we have $$f_{c}(e_{{\beta}}) = {\begin{cases}}r{^{-1}}c_{-k} c_{l}, & \text{if $l \neq -k$,} \\ r{^{-1}}c_{-k}^{\;2}, & \text{if $l = -k$,} {\end{cases}}$$ for all $c \in {\mathcal{C}}$. On the one hand, suppose that $l \neq -k$. Then, the entries $c_{-k}$ and $c_{l}$ of any $c \in {\mathcal{C}}$ do not occur in $f_{c}(w)$ (see the argument in case 1), hence we get $${\vartheta}(u) = {\left(}\sum_{t,t' \in {\mathbb{F}_{q}}} {\vartheta}(r{^{-1}}s tt') {\right)}{\left(}q^{-2} \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w) {\right)}.$$ Since $\sum_{t \in {\mathbb{F}_{q}}} {\vartheta}_{t}$ is the regular character of ${\mathbb{F}_{q}}^{\;+}$, we conclude that $$\sum_{t,t' \in {\mathbb{F}_{q}}} {\vartheta}(r{^{-1}}s tt') = \sum_{t,t' \in {\mathbb{F}_{q}}} {\vartheta}_{t}(r{^{-1}}s t') = q,$$ and thus $${\vartheta}(u) = q{^{-1}}\sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w).$$ On the other hand, suppose that $l = -k$.
Then, the entry $c_{-k}$ of any $c \in {\mathcal{C}}$ does not occur in $f_{c}(w)$, hence we get $${\vartheta}(u) = {\left(}\sum_{t \in {\mathbb{F}_{q}}} {\vartheta}(r{^{-1}}s t^{2}) {\right)}{\left(}q{^{-1}}\sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w) {\right)}.$$ Now, we recall that the [*quadratic character*]{} of ${\mathbb{F}_{q}}$ is, by definition, the linear character $\eta$ of the multiplicative group ${{\mathbb{F}_{q}}^{\;\times}}$ defined by $$\eta(c) = {\begin{cases}}1, & \text{if $c \in ({{\mathbb{F}_{q}}^{\;\times}})^{2}$}, \\ -1, & \text{otherwise,} {\end{cases}}$$ for all $c \in {{\mathbb{F}_{q}}^{\;\times}}$. Moreover, given any linear character $\nu$ of ${{\mathbb{F}_{q}}^{\;\times}}$ and any linear character ${\vartheta}$ of ${\mathbb{F}_{q}}^{\;+}$, the [*Gauss sum*]{} of $\nu$ and ${\vartheta}$ is defined by $$G(\nu,{\vartheta}) = \sum_{c \in {{\mathbb{F}_{q}}^{\;\times}}} \nu(c) {\vartheta}(c).$$ The following result is Theorem 5.3.3 of the book [@LN]. \[thm:t5\] Let ${\vartheta}$ be a non-trivial linear character of ${\mathbb{F}_{q}}^{\;+}$, and $$h(T) = a_{2}T^{2} + a_{1}T + a_{0} \in {\mathbb{F}_{q}}[T]$$ be a polynomial over ${\mathbb{F}_{q}}$ with $a_{2} \neq 0$. Suppose that $q$ is odd. Then, $$\sum_{c \in {\mathbb{F}_{q}}} {\vartheta}(h(c)) = {\vartheta}(a_{0} - a_{1}^{\,2} (4a_{2}){^{-1}}) \eta(a_{2}) G(\eta,{\vartheta})$$ where $\eta$ is the quadratic character of ${\mathbb{F}_{q}}$.
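For example, take $q = 3$ and $h(T) = T^{2}$, so that $a_{2} = 1$ and $a_{1} = a_{0} = 0$. On the one hand, $$\sum_{c \in {\mathbb{F}_{3}}} {\vartheta}(c^{2}) = {\vartheta}(0) + 2{\vartheta}(1) = 1 + 2{\vartheta}(1);$$ on the other hand, $\eta(1) = 1$ and $\eta(2) = -1$, so that ${\vartheta}(0)\, \eta(1)\, G(\eta,{\vartheta}) = G(\eta,{\vartheta}) = {\vartheta}(1) - {\vartheta}(2)$. The two values agree because $1 + {\vartheta}(1) + {\vartheta}(2) = 0$ for every non-trivial linear character ${\vartheta}$ of ${\mathbb{F}_{3}}^{\;+}$.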
Applying this result to our situation (with $h(T) = r{^{-1}}s T^{2}$), we obtain $${\vartheta}(u) = q{^{-1}}\eta(r{^{-1}}s) G(\eta,{\vartheta}) \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w).$$ It follows that, in any case, we have $${\vartheta}(u) = c_{{\beta},s} \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w)$$ for some constant $c_{{\beta},s} \in {\mathbb{C}}$ depending only on the root ${\beta}\in D$ and on the value $s = \phi({\beta})$; in fact, $$\label{eq:e7} c_{{\beta},s} = {\begin{cases}}0, & \text{if $k = i$ and $l \prec -i$,} \\ {\vartheta}(rs), & \text{if $k = i$ and $l = -i$,} \\ 1, & \text{if $k < i$,} \\ q{^{-1}}, & \text{if $k > i$ and $l \neq -k$,} \\ q{^{-1}}\eta(r{^{-1}}s) G(\eta,{\vartheta}), & \text{if $k > i$ and $l = -k$.} {\end{cases}}$$ We are now able to conclude the proof of [Proposition \[prop:p2\]]{}. Let the notation be as above, and let $z' \in U$ be such that $a_{z'} = w$. Let $D' = D - \{{\beta}\}$, and ${\phi' \colon D' \to {{\mathbb{F}_{q}}^{\;\times}}}$ be the restriction of $\phi$ to $D'$. Then, it is easy to check that $w = v - se_{{\beta}} \in {O_{D',\phi'}}$, hence $z' \in {K_{D',\phi'}}$ (by [Lemma \[lem:l2\]]{}). By induction, we have ${\xi_{\alpha,r}}(z') = {\xi_{\alpha,r}}({z_{D',\phi'}})$. Since $${\xi_{\alpha,r}}(z') = \frac{1}{q^{n-i}} \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w)$$ (by [Lemma \[lem:l5\]]{}), we conclude that $${\xi_{\alpha,r}}(z) = \frac{c_{{\beta},s}}{q^{n-i}} \sum_{c \in {\mathcal{C}}} {\vartheta}_{c}(w) = c_{{\beta},s} {\xi_{\alpha,r}}(z') = c_{{\beta},s} {\xi_{\alpha,r}}({z_{D',\phi'}}).$$ Therefore, the value ${\xi_{\alpha,r}}(z)$ does not depend on $z \in {K_{D,\phi}}$, hence ${\xi_{\alpha,r}}(z) = {\xi_{\alpha,r}}({z_{D,\phi}})$ for all $z \in {K_{D,\phi}}$. We next proceed with the proof of [Theorem \[thm:t4\]]{}; a slightly different proof will be given later without reference to the results of [@AN2].
Since every supercharacter is a product of elementary characters, [Proposition \[prop:p2\]]{} implies that every supercharacter is a superclass function. By [@AN2 Theorem 4.2], the supercharacters are orthogonal, hence they are linearly independent functions of ${\operatorname{scf}}(U)$. Since the dimension of the vector space ${\operatorname{scf}}(U)$ equals the number of basic pairs $(D,\phi)$ for $U$, we conclude that the supercharacters form a basis of ${\operatorname{scf}}(U)$, and this completes the proof. We now observe that, since the regular character of $U$ is clearly a superclass function, [Theorem \[thm:t4\]]{} implies that it is a linear combination of supercharacters. In particular, we obtain the following result (and also an alternative proof of [@AN2 Theorem 3.2]). \[thm:t8\] Every irreducible character is a constituent of a (unique) supercharacter. It is enough to observe that every irreducible character of $U$ is a constituent of the regular character. (The unicity follows by the orthogonality of supercharacters; see [@AN2 Theorem 4.2].) Finally, an easy calculation proves the following result (and gives an alternative proof of [@AN2 Theorem 5.2]). \[thm:t9\] Let $\rho_{U}$ be the regular character of $U$. Then, $$\rho_{U} = \sum_{D,\phi} \frac{{\xi_{D,\phi}}(1)}{{\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}\; {\xi_{D,\phi}}$$ where the sum is over all basic pairs $(D,\phi)$. Let $\rho_{U} = \sum_{D,\phi} m_{D,\phi} {\xi_{D,\phi}}$ where $m_{D,\phi} \in {\mathbb{C}}$ for all basic pairs $(D,\phi)$. Since supercharacters are orthogonal, we obtain ${\langle \rho_{U} , {\xi_{D,\phi}}\rangle} = m_{D,\phi} {\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}$. 
On the other hand, let ${\operatorname{Irr}}_{D,\phi}(U)$ denote the subset of ${\operatorname{Irr}}(U)$ consisting of all irreducible constituents of the supercharacter ${\xi_{D,\phi}}$; hence, we have a disjoint union ${\operatorname{Irr}}(U) = {\bigcup}_{D,\phi} {\operatorname{Irr}}_{D,\phi}(U)$, and $${\xi_{D,\phi}}= \sum_{\chi \in {\operatorname{Irr}}_{D,\phi}(U)} {\langle \chi , {\xi_{D,\phi}}\rangle} \chi.$$ Since $\rho_{U} = \sum_{\chi \in {\operatorname{Irr}}(U)} \chi(1) \chi$, we deduce that $$m_{D,\phi} {\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle} = {\langle \rho_{U} , {\xi_{D,\phi}}\rangle} = \sum_{\chi \in {\operatorname{Irr}}_{D,\phi}(U)} \chi(1) {\langle \chi , {\xi_{D,\phi}}\rangle} = {\xi_{D,\phi}}(1),$$ and the result follows. Supercharacter values {#sec:value} ===================== In this section, we obtain explicit formulae that allow us to determine the constant value ${\xi_{D,\phi}}({z_{D',\phi'}})$ of the supercharacter ${\xi_{D,\phi}}$ on the superclass ${K_{D',\phi'}}$. Since ${\xi_{D,\phi}}$ is a product of elementary characters, it is enough to determine the value of an arbitrary elementary character ${\xi_{\alpha,r}}$, for ${\alpha}\in \Phi$ and $r \in {{\mathbb{F}_{q}}^{\;\times}}$, on any superclass. Let $(i,j) \in {\mathcal{E}}$, and consider the elementary character ${{\zeta}_{i,j,r}}$ of $U_{m}(q)$. By [@A3 Proposition 5.1] (see also [@ADS Theorem 2.2]), ${{\zeta}_{i,j,r}}$ is constant on the superclasses of $U_{m}(q)$, and its value on the superclass associated with a basic pair $({\mathcal{D}},{\varphi})$ equals $$\label{eq:e8} {{\zeta}_{i,j,r}}(1+{e_{{\mathcal{D}},{\varphi}}}) = {\begin{cases}}q^{-t} {{\zeta}_{i,j,r}}(1) {\vartheta}(r {\varphi}(i,j)), & \text{if $(i,j) \in {\mathcal{D}}$,} \\ q^{-t} {{\zeta}_{i,j,r}}(1) , & \text{if $(i,j) \in R({\mathcal{D}})-{\mathcal{D}}$,} \\ 0, & \text{otherwise,} {\end{cases}}$$ where $t = |{\{ (k,l) \in {\mathcal{D}}\colon i \prec k \prec l \prec j \}}|$. 
(We note that $q^{-t} {{\zeta}_{i,j,r}}(1) = q^{t'}$ where $t'$ is the number of ${\mathcal{D}}$-regular entries which are directly below the entry $(i,j)$.) Using [@AN2 Proposition 3.2], we easily deduce the following result. For simplicity of writing, for any basic subset $D {\subseteq}\Phi$, we define $$R(D) = {\{ {\beta}\in \Phi \colon {\mathcal{E}}({\beta}) {\subseteq}R({\mathcal{E}}(D)) \}},$$ and observe that, for any root ${\beta}\in \Phi$, we have $${\mathcal{E}}({\beta}) {\subseteq}R({\mathcal{E}}(D)) \iff {\mathcal{E}}({\beta}) \cap R({\mathcal{E}}(D)) \neq \emptyset;$$ in fact, an entry $(k,l) \in {\mathcal{E}}$ is $D$-regular if and only if $(-l,-k)$ is also $D$-regular. Further, for any root ${\alpha}\in \Phi$, we set $$D({\alpha}) = {\{ (k,l) \in {\mathcal{E}}(D) \colon i \prec k \prec l \prec j \}}$$ where $(i,j) \in {\mathcal{E}}^{+}({\alpha})$. \[prop:p3\] Let ${\alpha}\in \Phi$, and suppose that ${\alpha}\neq 2{\varepsilon}_{i}$ for $1 \leq i \leq n$ (in the case where $U \leq {Sp_{2n}(q)}$). Let $r \in {{\mathbb{F}_{q}}^{\;\times}}$, and $(D',\phi')$ be a basic pair for $U$. Then, $${\xi_{\alpha,r}}({z_{D',\phi'}}) = {\begin{cases}}q^{-t({\alpha},D')} {\xi_{\alpha,r}}(1) {\vartheta}(r \phi'({\alpha})), & \text{if ${\alpha}\in D'$,} \\ q^{-t({\alpha},D')} {\xi_{\alpha,r}}(1), & \text{if ${\alpha}\in R(D')-D'$,} \\ 0, & \text{otherwise,} {\end{cases}}$$ where $t({\alpha},D') = |D'({\alpha})|$. It is enough to observe that ${\xi_{\alpha,r}}= ({{\zeta}_{i,j,r}})_{U}$ for $(i,j) \in {\mathcal{E}}({\alpha})$ (by [@AN2 Proposition 3.2]), and thus $${\xi_{\alpha,r}}({z_{D,\phi}}) = {{\zeta}_{i,j,r}}({z_{D,\phi}}) = {{\zeta}_{i,j,r}}(1+{e_{D,\phi}}).$$ The result follows by because ${e_{D,\phi}}= e_{{\mathcal{E}}(D),{\varphi}}$ for a (uniquely determined) map ${{{\varphi}\colon {\mathcal{E}}(D) \to {{\mathbb{F}_{q}}^{\;\times}}}}$ (by [Proposition \[prop:p1\]]{}). 
Next, we consider the case where $U \leq {Sp_{2n}(q)}$ and ${\alpha}= 2{\varepsilon}_{i}$ for some $1 \leq i \leq n$. Let $(D,\phi)$ be a basic pair for $U$. Let $(k,l) \in {\mathcal{E}}$ be the smallest entry of ${\mathcal{E}}(D)$ (with respect to the order $\preceq$ defined in ), and ${\beta}\in D$ be such that $(k,l) \in {\mathcal{E}}({\beta})$. Let $s = \phi({\beta})$, $D' = D - \{{\beta}\}$, and ${{\phi' \colon D' \to {{\mathbb{F}_{q}}^{\;\times}}}}$ be the restriction of $\phi$ to $D'$. We recall from the proof of [Proposition \[prop:p2\]]{} that ${\xi_{\alpha,r}}({z_{D,\phi}}) = c_{{\beta},s} {\xi_{\alpha,r}}({z_{D',\phi'}})$ where $c_{{\beta},s} \in {\mathbb{C}}$ is given by . In particular, for $D = \{{\beta}\}$, we have ${z_{D',\phi'}}= 1$, hence ${\xi_{\alpha,r}}(z_{{\beta},s}) = c_{{\beta},s} {\xi_{\alpha,r}}(1) = q^{n-i} c_{{\beta},s}$. In the general situation, we get $${\xi_{\alpha,r}}({z_{D,\phi}}) = {\xi_{\alpha,r}}(1) \prod_{{\beta}\in D} c_{{\beta},\phi({\beta})} = q^{n-i} \prod_{{\beta}\in D} c_{{\beta},\phi({\beta})},$$ and so we obtain the following formulae. \[prop:p4\] Suppose that $U \leq {Sp_{2n}(q)}$, let ${\alpha}= 2{\varepsilon}_{i}$ for some $1 \leq i \leq n$, and let $(D',\phi')$ be a basic pair for $U$. Moreover, let $\eta$ be the quadratic character of ${\mathbb{F}_{q}}$, let $G(\eta,{\vartheta})$ be the Gauss sum of $\eta$ and ${\vartheta}$. 
Then, $${\xi_{\alpha,r}}({z_{D',\phi'}}) = {\begin{cases}}q^{-t({\alpha},D')} {\xi_{\alpha,r}}(1) c_{{\alpha},r}^{D',\phi'} {\vartheta}(r \phi'({\alpha})), & \text{if ${\alpha}\in D'$,} \\ q^{-t({\alpha},D')} {\xi_{\alpha,r}}(1) c_{{\alpha},r}^{D',\phi'}, & \text{if ${\alpha}\in R(D') - D'$,} \\ 0, & \text{otherwise,} {\end{cases}}$$ where $t({\alpha},D') = |D'({\alpha})|$, and $$c_{{\alpha},r}^{D',\phi'} = q^{\frac{1}{2}(t({\alpha},D')-t_{0}({\alpha},D'))} G(\eta,{\vartheta})^{t_{0}({\alpha},D')} \prod_{{\beta}\in D'_{0}({\alpha})} \eta(r{^{-1}}\phi'({\beta}))$$ for $D'_{0}({\alpha}) = D' \cap {\{ 2{\varepsilon}_{k} \colon i < k \leq n \}}$ and $t_{0}({\alpha},D') = |D'_{0}({\alpha})|$. As an immediate consequence of [Propositions \[prop:p3\] and \[prop:p4\]]{}, we obtain the following general formula for the constant value ${\xi_{D,\phi}}^{D',\phi'} = {\xi_{D,\phi}}({z_{D',\phi'}})$ of the supercharacter ${\xi_{D,\phi}}$ on the superclass ${K_{D',\phi'}}$ (see [@ADS Theorem 2.2] for the corresponding result in the case of the unitriangular group). As in the previous proposition, given any basic subset $D {\subseteq}\Phi$, we define $$D_{0} = D \cap {\{ 2{\varepsilon}_{i} \colon 1 \leq i \leq n \}},\quad \text{and}\quad D_{0}({\alpha}) = D \cap {\{ 2{\varepsilon}_{k} \colon i < k \leq n \}}$$ whenever ${\alpha}= 2{\varepsilon}_{i} \in \Phi$ for $1 \leq i \leq n$. \[thm:t6\] Let $(D,\phi)$ and $(D',\phi')$ be basic pairs for $U$. 
Then, $${\xi_{D,\phi}}({z_{D',\phi'}}) = {\begin{cases}}q^{-t(D,D')} {\xi_{D,\phi}}(1) c_{D,\phi}^{D',\phi'} \prod_{{\alpha}\in D \cap D'} {\vartheta}(\phi({\alpha}) \phi'({\alpha})), & \text{if $D {\subseteq}R(D')$,} \\ 0, & \text{otherwise,} {\end{cases}}$$ where $t(D,D') = \sum_{{\alpha}\in D} |D'({\alpha})|$, and $$c_{D,\phi}^{D',\phi'} = q^{\frac{1}{2}(t(D,D')-t_{0}(D,D'))} G(\eta,{\vartheta})^{t_{0}(D,D')} \prod_{{\alpha}\in D_{0}} \prod_{{\beta}\in D'_{0}({\alpha})} \eta(\phi({\alpha}){^{-1}}\phi'({\beta}))$$ for $t_{0}(D,D') = \sum_{{\alpha}\in D_{0}} |D'_{0}({\alpha})|$. We observe that, in the case where, either $U \leq {O_{2n}(q)}$, or $U \leq {O_{2n+1}(q)}$, the set $D_{0}$ is empty, and thus $c_{D,\phi}^{D',\phi'} = 1$. Moreover, in any case, we have $q^{-t(D,D')} {\xi_{D,\phi}}(1) = q^{t'(D,D')}$ where $t'(D,D')$ is the number of $D'$-regular entries which are directly below the entries in ${\mathcal{E}}^{+}(D)$. We conclude this section with a consequence of [Theorem \[thm:t6\]]{}, proving that every superclass factorizes uniquely as a product (in any order) of “elementary” superclasses; we recall that a similar factorization is valid for supercharacters. \[thm:t7\] Let $(D,\phi)$ be a basic pair for $U$. Then, $${K_{D,\phi}}= \prod_{{\alpha}\in D} K_{{\alpha},\phi({\alpha})}$$ where the product can be taken in any order. We order the roots according to the total order $\preceq$ defined as follows: given ${\alpha}, {\beta}\in \Phi$, let $(i,j) \in {\mathcal{E}}^{+}({\alpha})$ and $(k,l) \in {\mathcal{E}}^{+}({\beta})$, and define ${\alpha}\preceq {\beta}$ if and only if $(i,j) \preceq (k,l)$ (with respect to the order on ${\mathcal{E}}$ as defined in ). On the one hand, let $z \in \prod_{{\alpha}\in D} K_{{\alpha},\phi({\alpha})}$ be arbitrary, and $(D',\phi')$ be the unique basic pair with $z \in {K_{D',\phi'}}$. 
Suppose that $(D',\phi') \neq (D,\phi)$, and let ${\alpha}\in \Phi$ be the smallest root in $D \cup D'$ such that, either ${\alpha}\notin D \cap D'$, or $\phi({\alpha}) \neq \phi'({\alpha})$. We consider the elementary character ${\xi_{\alpha,r}}$ for any $r \in {{\mathbb{F}_{q}}^{\;\times}}$. By [Propositions \[prop:p3\] and \[prop:p4\]]{}, we have $${\xi_{\alpha,r}}(z) = {\begin{cases}}c_{D,\phi} {\vartheta}(r \phi({\alpha})), & \text{if ${\alpha}\in D$,} \\ c_{D',\phi'} {\vartheta}(r \phi'({\alpha})), & \text{if ${\alpha}\in D'$,} {\end{cases}}$$ where $c_{D,\phi}, c_{D',\phi'} \in {\mathbb{C}}$ are non-zero constants depending only on roots ${\gamma}\in D \cup D'$ with ${\gamma}\prec {\alpha}$; moreover, by the choice of ${\alpha}$, we have $c_{D,\phi} = c_{D',\phi'}$. Since ${\vartheta}(s) \neq 0$ for all $s \in {\mathbb{F}_{q}}$, we conclude that ${\xi_{\alpha,r}}(z) \neq 0$, and thus ${\alpha}\in R(D) \cap R(D')$ (again by [Propositions \[prop:p3\] and \[prop:p4\]]{}). Now, suppose that ${\alpha}\in D - D'$. Then, since ${\alpha}\in R(D')$, we have ${\xi_{\alpha,r}}(z) = c_{D',\phi'} = c_{D,\phi}$, and thus ${\vartheta}(r \phi({\alpha})) = 1$. Since $r \in {{\mathbb{F}_{q}}^{\;\times}}$ is arbitrary, we conclude that $$q = \sum_{r \in {\mathbb{F}_{q}}} {\vartheta}(r\phi({\alpha})) = \rho(\phi({\alpha}))$$ where $\rho = \sum_{r \in {\mathbb{F}_{q}}} {\vartheta}_{r}$ is the regular character of ${\mathbb{F}_{q}}^{\;+}$. It follows that $\phi({\alpha}) = 0$, a contradiction. Similarly, we obtain a contradiction assuming that ${\alpha}\in D' - D$, and thus ${\alpha}\in D \cap D'$. Thus, we get ${\vartheta}(r \phi({\alpha})) = {\vartheta}(r \phi'({\alpha}))$ for all $r \in {{\mathbb{F}_{q}}^{\;\times}}$, and the argument used above shows that $\phi({\alpha}) = \phi'({\alpha})$. 
This final contradiction implies that $(D',\phi') = (D,\phi)$, and thus $$\prod_{{\alpha}\in D} K_{{\alpha},\phi({\alpha})} {\subseteq}{K_{D,\phi}}.$$ On the other hand, for the reverse inclusion, we consider the complex group algebra ${\mathbb{C}}U$ of $U$, and the superclass sum $${\hat{K}}= \sum_{z \in K} z \in {\mathbb{C}}U$$ associated with a superclass $K {\subseteq}U$. By [@DI Corollary 2.3], the product $\prod_{{\alpha}\in D} {\hat{K}}_{{\alpha},\phi({\alpha})}$ is a linear combination with nonnegative integer coefficients of the superclass sums of $U$. By the above, we easily conclude that $\prod_{{\alpha}\in D} {\hat{K}}_{{\alpha},\phi({\alpha})}$ is an integer multiple of ${\hat{K}}_{D,\phi}$, and so $${K_{D,\phi}}{\subseteq}\prod_{{\alpha}\in D} K_{{\alpha},\phi({\alpha})},$$ as required. The supercharacter table {#sec:table} ======================== [Theorem \[thm:t4\]]{} allows the definition of the [*supercharacter table*]{} of $U$ as the square (complex) matrix ${\mathcal{T}}$ having rows and columns indexed by all the basic pairs for $U$, and where the coefficient corresponding to the basic pairs $(D,\phi)$ and $(D',\phi')$ is the constant value ${\xi_{D,\phi}}({z_{D',\phi'}})$ of the supercharacter ${\xi_{D,\phi}}$ on the superclass ${K_{D',\phi'}}$. Since supercharacters are orthogonal (by [@AN2 Theorem 5.4]), the rows of ${\mathcal{T}}$ are orthogonal (for the usual inner product). In fact, in what follows, we prove various orthogonality relations (which are similar to the well-known relations for irreducible characters and conjugacy classes). We start by considering the convolution product of supercharacters; we recall that, for any functions $\xi, {\zeta}\colon U \to {\mathbb{C}}$, the convolution product of $\xi$ and ${\zeta}$ is the function ${\xi \star {\zeta}\colon U \to {\mathbb{C}}}$ defined by $$(\xi \star {\zeta})(z) = \sum_{x \in U} \xi(x) {\zeta}(zx{^{-1}})$$ for all $z \in U$. 
As an example, it is well-known that $$\label{eq:e9} \chi \star \chi' = {\delta}_{\chi,\chi'} \frac{|U|}{\chi(1)} \, \chi$$ for all $\chi, \chi' \in {\operatorname{Irr}}(U)$ (see, for example, [@I Theorem 2.13]); we observe that this corresponds to the generalized orthogonality relations for irreducible characters: $$\frac{1}{|U|}\, \sum_{x \in U} \chi(x) \chi'(zx{^{-1}}) = {\delta}_{\chi,\chi'} \frac{\chi(z)}{\chi(1)}$$ for all $\chi, \chi' \in {\operatorname{Irr}}(U)$ and all $z \in U$. For supercharacters, we obtain the following similar result. \[thm:t10\] Let $(D,\phi)$ and $(D',\phi')$ be basic pairs for $U$. Then $${\xi_{D,\phi}}\star {\xi_{D',\phi'}}= {\delta}_{D,D'} {\delta}_{\phi,\phi'} \frac{|U|\, {\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}{{\xi_{D,\phi}}(1)}\, {\xi_{D,\phi}}.$$ In other words, we have $$\frac{1}{|U|}\, \sum_{x \in U} {\xi_{D,\phi}}(x) {\xi_{D',\phi'}}(zx{^{-1}}) = {\delta}_{D,D'} {\delta}_{\phi,\phi'} \frac{{\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle} {\xi_{D,\phi}}(z)}{{\xi_{D,\phi}}(1)}$$ for all $z \in U$. By [Theorem \[thm:t9\]]{}, we deduce that $$\chi(1) = {\langle \chi , \rho_{U} \rangle} = \frac{{\xi_{D,\phi}}(1)}{{\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}\; {\langle \chi , {\xi_{D,\phi}}\rangle},$$ and thus $${\xi_{D,\phi}}= \frac{{\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}{{\xi_{D,\phi}}(1)} \sum_{\chi \in {\operatorname{Irr}}_{D,\phi}(U)} \chi(1) \chi$$ for all basic pairs $(D,\phi)$. Now, since supercharacters are orthogonal, implies that ${\xi_{D,\phi}}\star {\xi_{D',\phi'}}= 0$ for all basic pairs $(D,\phi)$ and $(D',\phi')$ with $(D,\phi) \neq (D',\phi')$. On the other hand, we obtain $${\xi_{D,\phi}}\star {\xi_{D,\phi}}= \frac{|U|\, {\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}^{2}}{{\xi_{D,\phi}}(1)^{2}} \sum_{\chi \in {\operatorname{Irr}}_{D,\phi}} \chi(1) \chi = \frac{|U|\, {\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}{{\xi_{D,\phi}}(1)}\, {\xi_{D,\phi}},$$ as required. 
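As an elementary sanity check of the classical relation quoted above — not of the supercharacter theory itself — one can verify $\chi \star \chi' = {\delta}_{\chi,\chi'}\,(|U|/\chi(1))\,\chi$ numerically for the cyclic group $U = \mathbb{Z}/n\mathbb{Z}$ (written additively, with identity $0$), whose irreducible characters are the linear characters $\chi_{k}(x) = e^{2\pi i kx/n}$. The sketch and its variable names are ours.

```python
import cmath

n = 6  # the cyclic group U = Z/nZ, written additively; identity is 0

def chi(k, x):
    """Irreducible (linear) character chi_k of Z/nZ."""
    return cmath.exp(2j * cmath.pi * k * x / n)

def conv(j, k, z):
    """Convolution (chi_j * chi_k)(z) = sum_x chi_j(x) chi_k(z - x)."""
    return sum(chi(j, x) * chi(k, z - x) for x in range(n))

# chi_j * chi_k = delta_{jk} * (|U| / chi_k(identity)) * chi_k, and chi_k(0) = 1
for j in range(n):
    for k in range(n):
        for z in range(n):
            expected = n * chi(k, z) if j == k else 0.0
            assert abs(conv(j, k, z) - expected) < 1e-9
```

Since every $\chi_k$ here is one-dimensional, $|U|/\chi_k(1) = n$, matching the loop's expected value.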
In particular, we deduce that the rows of the supercharacter table ${\mathcal{T}}$ are orthogonal (an alternative proof can be easily obtained by evaluating the Frobenius scalar product). \[thm:t11\] Let $(D',\phi')$ and $(D'',\phi'')$ be basic pairs for $U$. Then, $$\sum_{D,\phi} \frac{|{K_{D,\phi}}|}{|U|}\, {\xi_{D',\phi'}}({z_{D,\phi}})\,{\overline}{\xi_{D'',\phi''}({z_{D,\phi}})} = {\delta}_{D',D''} {\delta}_{\phi',\phi''} {\langle {\xi_{D',\phi'}}, {\xi_{D',\phi'}}\rangle}$$ where the sum is over all basic pairs $(D,\phi)$ for $U$. Since $\xi_{D'',\phi''}(x{^{-1}}) = {\overline}{\xi_{D'',\phi''}(x)}$, the previous theorem gives $$\sum_{x \in U} {\xi_{D',\phi'}}(x)\,{\overline}{\xi_{D'',\phi''}(x)} = {\delta}_{D',D''} {\delta}_{\phi',\phi''} |U| {\langle {\xi_{D',\phi'}}, {\xi_{D',\phi'}}\rangle},$$ and the result now follows by [Theorem \[thm:t4\]]{}. Another consequence of the generalized orthogonality relations is the following result. \[thm:t12\] The space ${\operatorname{scf}}(U)$ of superclass functions is a commutative semisimple algebra with respect to the convolution product. By [Theorem \[thm:t8\]]{}, the convolution product of supercharacters is a superclass function (in fact, it is a multiple of a supercharacter), hence ${\operatorname{scf}}(U)$ is a commutative algebra. Furthermore, ${\operatorname{scf}}(U)$ has a basis of orthogonal idempotents, namely the functions $${\zeta}_{D,\phi} = \frac{{\xi_{D,\phi}}(1)}{|U|\, {\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}\; {\xi_{D,\phi}}$$ for the basic pairs $(D,\phi)$, and thus it is semisimple. Finally, we deduce that the columns of the supercharacter table ${\mathcal{T}}$ are also orthogonal. \[thm:t13\] Let $(D',\phi')$ and $(D'',\phi'')$ be basic pairs for $U$. 
Then, $$\sum_{D,\phi} \frac{1}{{\langle {\xi_{D,\phi}}, {\xi_{D,\phi}}\rangle}}\, {\xi_{D,\phi}}({z_{D',\phi'}})\,{\overline}{{\xi_{D,\phi}}(z_{D'',\phi''})} = {\delta}_{D',D''} {\delta}_{\phi',\phi''} \frac{|U|}{|{K_{D',\phi'}}|}$$ where the sum is over all basic pairs $(D,\phi)$ for $U$. For any basic pairs $(D,\phi)$ and $(D',\phi')$, let $$h^{D,\phi}_{D',\phi'} = \sqrt{\frac{|{K_{D,\phi}}|}{|U|\,{\langle {\xi_{D',\phi'}}, {\xi_{D',\phi'}}\rangle}}}\; {\xi_{D',\phi'}}({z_{D,\phi}}),$$ and consider the square matrix $H = (h^{D,\phi}_{D',\phi'})$ with rows and columns indexed by the basic pairs. By the previous theorem, we have $H \bar{H}^{T} = I$, and thus $\bar{H}^{T} H = I$. It follows that $$\sum_{D,\phi} h^{D',\phi'}_{D,\phi} {\overline}{h^{D'',\phi''}_{D,\phi}} = {\delta}_{D',D''} {\delta}_{\phi',\phi''},$$ as required. [10]{} . PhD thesis, University of Warwick, 1992. Basic characters of the unitriangular group. , 1 (1995), 287–319. Basic sums of coadjoint orbits of the unitriangular group. , 3 (1995), 959–1000. The basic character table of the unitriangular group. , 1 (2001), 437–471. Super-characters of finite unipotent groups of types [$B\sb n$]{}, [$C\sb n$]{} and [$D\sb n$]{}. , 1 (2006), 394–429. Supercharacters of the [S]{}ylow [$p$]{}-subgroups of the finite symplectic and orthogonal groups. Preprint, 2008. Available at [http://arxiv.org/abs/0804.4285]{}. A super-class walk on upper-triangular matrices. , 2 (2004), 739–765. . Pure and Applied Mathematics. Wiley, London, 1972. . Pure and Applied Mathematics. Wiley, New York, 1985. . Pure and Applied Mathematics. Wiley, New York, 1987. Supercharacters and superclasses for algebra groups. , 5 (2008), 2359–2392. . Dover, New York, 1994. Proof of [S]{}pringer’s hypothesis. (1977), 272–286. Variations on the triangular theme. In [*Lie groups and [L]{}ie algebras: [E]{}. [B]{}. [D]{}ynkin’s seminar*]{}, vol. 169 of [*Amer. Math. Soc. Transl. Ser. 2*]{}. Amer. Math. Soc., Providence RI, 1995, pp. 43–73. 
, 2nd ed., vol. 20 of [*Encyclopedia of Mathematics and its Applications*]{}. Cambridge University Press, Cambridge, 1997. . PhD thesis, University of Lisbon, 2006. . PhD thesis, University of Pennsylvania, 2001. [^1]: This research was made within the activities of the Centro de Estruturas Lineares e Combinatórias (University of Lisbon, Portugal) and was partially supported by the Fundação para a Ciência e Tecnologia (Lisbon, Portugal) through the project POCTI-ISFL-1-1431. A large part of the research of the first author was made and concluded while he was visiting the University of Stanford (USA) and participating in the special program on “Combinatorial Representation Theory” at the MSRI (Berkeley, USA) whose hospitality is gratefully acknowledged, and was partially supported by the sabbatical research grant 4/2008 of the Fundação Luso-Americana para o Desenvolvimento (Lisbon, Portugal). The first author also expresses his sincere gratitude to Persi Diaconis for his invitation to visit the University of Stanford, and for many enlightening discussions regarding supercharacters and their applications.
--- abstract: 'We study the low temperature thermal conductivity of single-layer transition metal dichalcogenides. In the low temperature regime where heat is carried primarily through transport of electrons, thermal conductivity is linked to electrical conductivity through the Wiedemann-Franz law. Using a *k.p* Hamiltonian that describes the $ K $ and $ K^{'} $ valley edges, we compute the zero-frequency electric (Drude) conductivity using the Kubo formula to obtain a numerical estimate for the thermal conductivity. The impurity-scattering-determined transit time of electrons, which enters the Drude expression, is evaluated within the self-consistent Born approximation. The analytic expressions derived show that the low temperature thermal conductivity 1) is determined by the band gap at the valley edges in monolayer TMDCs and 2) shows a distinct reduction in the presence of disorder that can give rise to the variable range hopping regime. Additionally, we compute the Mott thermopower and demonstrate that under a high frequency light beam that sets up a Floquet Hamiltonian, a valley-resolved thermopower can be obtained. A closing summary reviews the implications of the results, followed by a brief discussion on the applicability of the Wiedemann-Franz law and its breakdown in the context of the presented calculations.' author: - Parijat Sengupta title: 'Low-temperature thermal transport and thermopower of monolayer transition metal dichalcogenide semiconductors' --- Introduction ============ Transition metal dichalcogenides (TMDCs) which have the representative formula MX$_{2}$ where *M* is a transition metal element from group IV-VI and *X* belongs to the group of elements S, Se, and Te (collectively identified as chalcogens) are layered materials of covalently bonded atoms held together by weak van der Waals forces [@wilson1969transition; @wang2012electronics]. The underlying arrangement (see Fig. 
\[crys\]) consists of layers of the transition metal atom surrounded by a chalcogen in a trigonal prismatic arrangement[@ramakrishna2010mos2] giving the overall crystal a hexagonal or rhombohedral structure. TMDCs are known to exhibit a wide range of behaviour spanning the whole gamut from metals to semiconductors; however, attention has been directed at recent progress in exfoliating the layers of semiconducting, indirect-gap bulk TMDCs, which yields a layered two-dimensional (2D) configuration. 2D monolayer TMDCs, which are direct band gap semiconductors with remarkably different microscopic attributes [@chhowalla2013chemistry; @huang2013metal] compared to their bulk counterparts, are currently being pursued for a diverse set of applications [@chhowalla2016two; @radisavljevic2011single] that includes harnessing their optoelectronic and thermoelectric behaviour. Thin films of TMDCs are considered promising thermal materials [@fan2014mos2; @huang2013thermoelectric] with the possibility of a large figure of merit, $ ZT = S^{2}\sigma T/\kappa $, where $ S $ is the Seebeck coefficient while $ \sigma $ and $ \kappa $ denote the electrical and thermal conductivity, respectively. Classical models describe the temperature $\left(T\right)$ dependence of the heat capacity by the Debye theory which predicts a $ T^{3} $ relation when $ T \ll \Theta_{D} $, the Debye temperature. Deviations from this law, however, have been found [@gopal2012specific] and attributed to electronic excitations close to the Fermi surface. In this regard, we note that the study of thermoelectric behaviour and the attendant transport processes, particularly at low temperatures, offers insight into elementary electronic processes that are usually swamped by the interaction of the lattice with the electron cloud in the presence of active phonon modes, primarily through electron-phonon coupling. 
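As a purely illustrative aside, the figure of merit defined above is straightforward to evaluate once $S$, $\sigma$, $\kappa$, and $T$ are available in SI units; the numbers below are generic placeholders of the size typical of good thermoelectrics, not measured TMDC data.

```python
def figure_of_merit(S, sigma, kappa, T):
    """ZT = S^2 * sigma * T / kappa, with S in V/K, sigma in S/m,
    kappa in W/(m K), and T in K; the combination is dimensionless."""
    return S * S * sigma * T / kappa

# Placeholder values: S = 200 uV/K, sigma = 1e5 S/m, kappa = 2 W/(m K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, 2.0, 300.0)   # ≈ 0.6
```

The quadratic dependence on $S$ is why a large thermopower matters disproportionately: halving $S$ (to $100\,\mu$V/K here) quarters $ZT$.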
Further, elucidating the underlying behaviour for reliable information on the low-temperature thermal conductivity, a key measure of thermoelectrics, is crucial in driving the design of efficient devices in this regime, such as hot-electron bolometers, self-integrated Peltier cooling engines, and thermopower-assisted fuel cells. It is useful to recall that typically the total thermal conductivity $\left(\kappa\right)$ has an electronic $\left(\kappa_{e}\right) $ and lattice contribution $\left(\kappa_{ph}\right) $ with the former dominant at low-temperatures while a large number of phonons at elevated temperatures carry the heat current and also impede the electronic transport through multiple scattering mechanisms. [@ziman1960electrons] However, in the low temperature limit, in the absence of substantial phonon distribution, heat carrying electrons are scattered primarily by impurities and defects. In what follows, we will ignore any phonon contribution and the possibility of coupling between the vibrational and electronic modes in our analysis to establish the electronic contribution to the low-temperature thermal conductivity of carriers located at the bottom of the conduction band in the vicinity of the high symmetry valley edges, $ K $ and $ K^{'} $, of monolayer TMDCs. We employ the Wiedemann-Franz law (WFL) in deriving $\left(\kappa_{e}\right) $ by connecting to the Drude (zero-frequency intra-band) conductivity $\left( \sigma_{D}\right) $ which is determined by a direct application of the standard Kubo formalism. The eigen states (and corresponding eigen functions) for the Kubo calculation are obtained from a *k.p* description of energy states in a monolayer TMDC around the valley edges of the Brillouin zone. The electron scattering time in the Drude conductivity (and specific heat expression) is acquired from the imaginary part of the retarded self-energy of surface disorder and imperfections. 
The imaginary part is extracted by setting up the retarded Green function in a self-consistent Born approximation (SCBA) framework. Notice that in the low-temperature regime, phonons are suppressed and do not complement the electron scattering; the retarded self-energy therefore involves solely the contribution of impurities and disorder. We primarily find that close to the conduction band edge the thermal conductivity is enhanced for a higher Fermi level and monolayer TMDCs with a smaller band gap. A notable example of an intrinsically shrunken band gap because of a stronger spin-orbit splitting is the monolayer TMDC WSe$_{2}$, a consequence of which is diminished Drude conductivity reflected in its low-temperature thermal counterpart. As a useful ancillary result, a straightforward computation of the specific heat is possible by a simple insertion of the thermal conductivity (and electron transit time) in the kinetic theory of electrons [@liboff2003kinetic]. While measurements of $ \kappa_{e} $ serve as a useful probe of electronic behaviour and thermal management, an allied complementary quantity, the thermopower, also allows an examination of related transport characteristics. We use the Mott expression for thermopower which is valid in the low temperature regime, $ T \ll T_{F} $, where $ T_{F} $ is the Fermi temperature. The thermopower, in agreement with experimental observation, displays an inverse relationship to thermal conductivity; while the latter reports a reduction with a higher band gap, a drop is noticed in the former. The calculation of thermopower, in particular, is of significant interest as a higher value translates into better thermoelectric devices. Graphene, for instance, has high thermal conductivity but is marked by low thermopower (Seebeck coefficient [@seol2010two]), which suggests its non-viability in the design of thermoelectrics; however, Buscema [*et al*. 
]{}were able to demonstrate a high thermopower for monolayer MoS$_{2}$ and further showed their tunability with an external gate field. [@buscema2013large] In this paper, unlike Ref. , an external gate field is not impressed to alter the Fermi position; rather, we irradiate the sample with a high-energy circularly-polarized beam that rearranges the electronic dispersion and the fundamental band gap. A circularly polarized beam gives rise to Floquet states [@tannor2007introduction] which in the *off-resonant* approximation [@kitagawa2011transport] generates frequency- and valley-dependent band gaps. Such tunable band gaps in a laser-driven setup substantially modulates the thermopower. The calculations presented here are in the low-temperature regime where the Wiedemann-Franz law holds; however possible sources of violation of the WFL exist and we briefly note instances of those in a closing summary. Additionally, the summary also points out the possibility of other methods such as mechanical strain for improved thermoelectric performance. Theory {#theo} ====== The basis for all calculations in this paper is the low-energy Hamiltonian shown in Eq. \[mos2ham\] for monolayer TMDCs (see upper panel of Fig. \[mldp\]). The material constants that appear in the Hamiltonian (Eq. \[mos2ham\]) for representative semiconducting TMDCs (see Fig. \[crys\]) are listed in Ref.[@xiao2012coupled] ![The bulk unit cell (left panel) of MoS$_{2}$, a typical semiconducting transition metal dichalcogenide (space group P6$_{3}$/mmc). The two red atoms denote molybdenum (Mo) while four sulfur (S) atoms are shown as blue spheres. Each Mo atom is coordinated to six sulfur atoms in a prismatic fashion. The vertical separation between intra- and inter-layer sulfur atoms is (0.5 - 2z)c and 2zc respectively. For MoS$_{2}$, $ z = 0.12 $ and $ c = 12.58\,\AA $[@wang2013mos2]. The right panel is the corresponding top view. 
The plots for arrangement of atoms were done using the VESTA software[@2008vesta].[]{data-label="crys"}](MoS2.eps){width=".4\linewidth"} ![The bulk unit cell (left panel) of MoS$_{2}$, a typical semiconducting transition metal dichalcogenide (space group P6$_{3}$/mmc). The two red atoms denote molybdenum (Mo) while four sulfur (S) atoms are shown as blue spheres. Each Mo atom is coordinated to six sulfur atoms in a prismatic fashion. The vertical separation between intra- and inter-layer sulfur atoms is (0.5 - 2z)c and 2zc respectively. For MoS$_{2}$, $ z = 0.12 $ and $ c = 12.58\,\AA $[@wang2013mos2]. The right panel is the corresponding top view. The plots for arrangement of atoms were done using the VESTA software[@2008vesta].[]{data-label="crys"}](MoS2_h.eps){width=".4\linewidth"} $$H_{\tau} = a\,t\left(\tau k_{x}\hat{\sigma}_{x} + k_{y}\hat{\sigma}_{y}\right)\otimes \mathbb{I} + \dfrac{\Delta}{2}\hat{\sigma}_{z}\otimes \mathbb{I} - \dfrac{\lambda\,\tau}{2}\left(\hat{\sigma}_{z}-1\right)\otimes\,\hat{s}_{z}. \label{mos2ham}$$ The Hamiltonian in Eq. \[mos2ham\] can be split in to a conduction and valence part by expanding the matrices. The 2 $\times $ 2 upper and lower blocks in Eq. \[mos2ham\] denote the two sets of spin conduction and valence bands. In this representation, the spin conduction bands are degenerate at the edges while the corresponding valence bands are separated by $ \lambda $, the spin-orbit splitting. For all calculations we use the $ K $ edge and therefore drop the subscript $ \tau $ and set it to unity everywhere. Note that we could have equally worked with the $ K^{'} $ edge $\left(\tau = -1\right)$ which is degenerate with $ K $ and is related to it through time reversal symmetry. To see this explicitly (calculations done with VASP [@kresse1996software]), notice the colour of spin-resolved bands (Fig. 
\[mldp\]) at the $ K $ and $ K^{'} $ edge; the spin-up and spin-down bands interchange order though the fundamental band gaps remain unchanged. ![The upper panel shows the monolayer of a TMDC (WS$_{2}$ in this case). The metal atom (red) is sandwiched between the sulphur atoms (blue). The tri-layered structure in principle constitutes a single layer for the TMDC. The dispersion of the monolayer was obtained from an ab-initio calculation. The choice of WS$_{2}$ as a representative TMDC is dictated by the fact that it has a significantly large spin-orbit coupling allowing the spin-split bands to be clearly distinguishable. Note the time reversal symmetry mandated flipping of the order of spin bands at the $ K $ and $ K^{'} = - K $ valley edges.[]{data-label="mldp"}](tmdc_ml.eps "fig:"){width=".7\linewidth"}\ ![The upper panel shows the monolayer of a TMDC (WS$_{2}$ in this case). The metal atom (red) is sandwiched between the sulphur atoms (blue). The tri-layered structure in principle constitutes a single layer for the TMDC. The dispersion of the monolayer was obtained from an ab-initio calculation. The choice of WS$_{2}$ as a representative TMDC is dictated by the fact that it has a significantly large spin-orbit coupling allowing the spin-split bands to be clearly distinguishable. Note the time reversal symmetry mandated flipping of the order of spin bands at the $ K $ and $ K^{'} = - K $ valley edges.[]{data-label="mldp"}](bs_tmdc_ml.eps){width=".9\linewidth"} For later use, we also note the eigenstates and eigenvalues of the Hamiltonian in Eq. \[mos2ham\].
The wave functions at the $ K $ valley edge for the spin-up conduction $\left(+\right)$ and valence $\left(-\right)$ states have the form $\left(\theta = -\tan^{-1}k_{y}/k_{x}\right)$ $$\Psi^{up}_{\pm} = \dfrac{1}{\sqrt{2}}\begin{pmatrix} \eta_{\pm}e^{i\theta} \\ \pm\,\eta_{\mp} \end{pmatrix}, \label{wfs1}$$ where $\eta_{\pm}^{up} = \sqrt{1 \pm \left( \Delta - \lambda\right)/\left(\sqrt{\left(\Delta - \lambda\right)^{2} + \left(2atk\right)^{2}}\right)}$. Note that we can derive an identical set of wave functions for the spin-down components by choosing the lower $ 2 \times 2 $ block. We only need to replace the $ \Delta - \lambda $ in the expression for $ \eta^{up} $ with $ \Delta + \lambda $ to yield the spin-down wave functions. The accompanying eigenvalues for the spin-up branch can also be easily written as $$\varepsilon_{\pm} = \dfrac{1}{2}\biggl[\lambda \pm \sqrt{\left(\Delta - \lambda\right)^{2} + 4a^{2}t^{2}k^{2}}\biggr]. \label{eigval}$$ The $ +\left(-\right) $ in the eigenvalue expressions corresponds to the conduction (valence) band. Note that the finite spin-orbit coupling, $ 2\lambda $, splits the valence bands at $ K $ while the conduction states remain spin degenerate. Drude Conductivity ------------------ The main purpose of this letter is the determination of the low-temperature thermal conductivity of TMDCs via the law of Wiedemann and Franz (WFL). The WFL states that if $ \kappa $ is the thermal conductivity ignoring lattice contributions and $ \sigma $ the corresponding electrical conductivity, the ratio $ \kappa/\sigma $ is $$\kappa/\sigma = \mathcal{L}T, \label{wfl}$$ where $ \mathcal{L} $ is the Lorentz ratio given as $ \pi^{2}k_{B}^{2}/3e^{2} $ and $ k_{B} $ is the Boltzmann constant. The absolute temperature is $ T $. The electrical conductivity in the WFL is the zero-frequency intra-band (Drude) conductivity. We evaluate the Drude conductivity using the Kubo expression[@bruus2004many] from linear response theory.
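As a quick numerical check of Eq. \[eigval\], the sketch below (our own Python snippet, using the MoS$_{2}$ constants listed in Table \[table1\]; the function name is ours) evaluates the spin-up bands and confirms that the gap at the valley edge is $\Delta - \lambda$:

```python
import math

# MoS2 constants from Table [table1]: a (angstrom), t (eV), Delta (eV), 2*lambda = 0.15 eV
a, t, Delta, lam = 3.193, 1.10, 1.66, 0.15 / 2

def bands_spin_up(k):
    """Spin-up conduction/valence energies of Eq. (eigval); k in 1/angstrom."""
    root = math.sqrt((Delta - lam) ** 2 + 4.0 * (a * t * k) ** 2)
    return 0.5 * (lam + root), 0.5 * (lam - root)

ec0, ev0 = bands_spin_up(0.0)  # at the valley edge the spin-up gap is Delta - lambda = 1.585 eV
```

At $k = 0$ this gives the conduction band bottom at $\Delta/2 = 0.83\,$eV and the spin-up valence top at $\lambda - \Delta/2$, consistent with the discussion below Eq. \[eigval\].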
For a non-interacting sample in 2D space, it is $$\sigma^{\alpha\beta} = -\dfrac{i\hbar e^{2}}{\mathcal{A}}\sum_{n,n^{'}}\dfrac{f\left(\varepsilon_{n}\right)- f\left(\varepsilon_{n^{'}}\right)}{\varepsilon_{n} - \varepsilon_{n^{'}}}\dfrac{\langle\,n\vert\,\hat{v}_{\alpha}\vert\,n^{'}\rangle\langle\,n^{'}\vert\,\hat{v}_{\beta}\vert\,n\rangle}{\varepsilon_{n} - \varepsilon_{n^{'}} + i\eta}, \label{kubof}$$ where $ \vert\,n\rangle $ and $ \vert\,n^{'}\rangle $ are eigenstates (Eq. \[wfs1\]) of the Hamiltonian in Eq. \[mos2ham\] and $ \eta $ represents a finite broadening (lifetime of the electron on the Fermi surface) of the eigen-states resulting from surface imperfections and impurity scattering. To clarify choice of notation, the superscripts on $ \sigma $ mean that upon application of an electric field along $ \hat{e}_{\beta} $, the electric conductivity tensor gives the current response along $ \hat{e}_{\alpha} $. We have also tacitly assumed that the wave functions retain their pristine form, the presence of impurities and defects notwithstanding. By a direct insertion of the wave functions and the appropriate velocity components in Eq. \[kubof\], we can now determine the longitudinal intra-band conductivity of a monolayer TMDC with sample area $ \mathcal{A} = L^{2} $. The velocity operator along the *x*-axis is $ \hat{v}_{x} = \left(at/\hbar\right)\hat{\sigma}_{x} $. Note that $ \hat{v}_{y} $ is identical since the Hamiltonian is isotropic in the plane. An explicit evaluation of the Drude conductivity begins by setting for the intra-band case, $ \vert\,n\rangle = \vert\,n^{'}\rangle$ in Eq. \[kubof\]; the matrix element, $ M = \langle\,n\vert\,\hat{v}_{x}\vert\,n\rangle $, is therefore $ -at\cos\theta\left[2atk/\hbar\left(\sqrt{\Delta_{m}^{2} + \left(2atk\right)^{2}}\right)\right] $. As a shorthand notation, $ \Delta_{m} = \Delta - \lambda $. In obtaining the above expression, we have chosen the conduction band states to evaluate the matrix product; this choice is made by setting the Fermi level to the bottom of the conduction band. Inserting the matrix element in Eq.
\[kubof\], the Drude conductivity is $$\sigma_{D} = \dfrac{\left(eat\right)^{2}}{4\pi^{2}\hbar\eta}\int_{0}^{2\pi}\cos^{2}\theta d\theta\int kdk\dfrac{\left(2atk\right)^{2}}{\Delta_{m}^{2} + \left(2atk\right)^{2}}\dfrac{\partial f}{\partial \varepsilon}. \label{dr1}$$ In Eq. \[dr1\], we have replaced the summation over momentum states by the integral; additionally the term $ \left[f\left(\varepsilon_{n}\right)- f\left(\varepsilon_{n^{'}}\right)\right]/\left(\varepsilon_{n} - \varepsilon_{n^{'}}\right) $ is approximated as $ \partial f/\partial \varepsilon = -\delta\left(\varepsilon_{f} - \varepsilon\right) $ by Taylor expanding the Fermi distribution, $ f\left(\varepsilon_{n^{'}}\right) = f\left(\varepsilon_{n} \right) + \left(\varepsilon_{n^{'}} - \varepsilon_{n}\right)\partial f/\partial \varepsilon $. Note that the relation $ \partial f/\partial \varepsilon = -\delta\left(\varepsilon_{f} - \varepsilon\right) $ holds exactly at $ T = 0 $. Converting the $ k $-space integral to energy space using Eq. \[eigval\] and integrating out the angular variable $ \left(\int_{0}^{2\pi}\cos^{2}\theta\,d\theta = \pi\right)$, we rewrite Eq. \[dr1\] (normalized to $ e^{2}/h $ ) as $$\sigma_{D} = \dfrac{1}{2\eta}\int d\varepsilon\dfrac{\left(2\varepsilon - \lambda\right)^{2}-\Delta_{m}^{2}}{\left(2\varepsilon - \lambda\right)}\delta\left(\varepsilon_{f} - \varepsilon\right) = \dfrac{\Omega}{2\eta}, \label{dr2}$$ where $ \Omega = \left[\left(2\varepsilon_{f} - \lambda\right)^{2}-\Delta_{m}^{2}\right]/\left(2\varepsilon_{f} - \lambda\right)$. Writing the broadening parameter, $ \eta = \hbar/\tau_{tr} $, reproduces the expression in the form of the Drude conductivity. We are now left with the determination of $ \eta $ in Eq. \[dr2\]; this is obtained from a self-consistent Born approximation (SCBA) outlined in Section. \[scbo\].
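Eq. \[dr2\] is simple enough to evaluate directly; the sketch below (a hedged illustration of ours, not part of the paper's code) computes $\sigma_{D}$ in units of $e^{2}/h$ and verifies that it vanishes when the Fermi level sits exactly at the conduction band bottom $\Delta/2$:

```python
def drude_sigma(eps_f, Delta, lam, eta):
    """Eq. (dr2): intra-band Drude conductivity in units of e^2/h.
    eps_f: Fermi energy; Delta: gap parameter; lam: half the spin splitting;
    eta: broadening. All quantities in eV."""
    delta_m = Delta - lam
    x = 2.0 * eps_f - lam
    omega = (x ** 2 - delta_m ** 2) / x  # the factor called Omega in the text
    return omega / (2.0 * eta)

# MoS2-like numbers: Delta = 1.66 eV, lambda = 0.075 eV, eta = 8 meV
s0 = drude_sigma(0.83, 1.66, 0.075, 0.008)  # Fermi level at the band bottom Delta/2
s1 = drude_sigma(0.90, 1.66, 0.075, 0.008)  # Fermi level pushed into the band
```

Since $2\varepsilon_{f} - \lambda = \Delta_{m}$ at $\varepsilon_{f} = \Delta/2$, the numerator $\Omega$ vanishes there and grows as the Fermi level moves into the conduction band.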
Self-consistent Born approximation {#scbo} ---------------------------------- The broadening is considered as arising out of disorder on the surface and is modeled as an effective retarded self-energy within the SCBA. [@bruus2004many] The pair of SCBA equations is: $$\begin{aligned} \label{scba1} G_{ks}\left(\epsilon\right) = \dfrac{1}{\epsilon - \epsilon_{ks} - \Sigma\left(\epsilon\right)}; \Sigma\left(\epsilon\right) = n_{i}v_{i}^{2}\int\,\dfrac{d^{2}k}{4\pi^{2}}G_{ks}\left(\epsilon\right), \end{aligned}$$ where $ n_{i} $ and $ v_{i} $ denote the density and strength of impurities, respectively and $ G_{ks}\left(\epsilon\right) $ is the retarded Green’s function diagonal with respect to the band index *s* ($ \langle\,s\vert\,G_{k}\left(\epsilon\right)\vert\,s\rangle = \delta_{ss^{'}}G_{ks}\left(\epsilon\right) $). The self-energy $ \Sigma $ which is also diagonal with respect to the band index *s* and independent of ***k*** in the SCBA is averaged over impurity distributions (see Fig. \[feyn1\]). The unperturbed retarded Green’s function for the $ 2 \times 2 $ upper block of the Hamiltonian in Eq. \[mos2ham\] is $ G_{0, R} = \left(E - H^{2 \times 2} + i\delta\right)^{-1} $. Inserting $ G_{0,R} $ in the self-energy expression (Eq. \[scba1\]) and recasting to the form $ \dfrac{1}{x \pm i0^{+}} $ to separate the real and imaginary parts using the standard expression $ \dfrac{1}{x \pm i0^{+}} = \mathbb{P}\dfrac{1}{x} \mp i\pi\delta\left(x\right) $, we approximately arrive at: $$\begin{aligned} Im\Sigma\left(\epsilon\right) &= \pi\,n_{i}v_{i}^{2}\int\dfrac{d^{2}k}{4\pi^{2}}\left[\delta\left(\epsilon - \epsilon_{k,+}\right) + \delta\left(\epsilon - \epsilon_{k,-}\right)\right]\nonumber\\ &\approx \pi\,n_{i}v_{i}^{2}\int\dfrac{d^{2}k}{4\pi^{2}}\left[\delta\left(\epsilon - \Delta/2\right) + \delta\left(\epsilon - \lambda + \Delta/2\right)\right]. \label{imself2}\end{aligned}$$ The imaginary retarded self-energy term is linked to the scattering time, $\tau_{tr} $, by the relation $ \hbar/\tau_{tr} = 2Im\Sigma $. The real part of the self-energy simply renormalizes the Fermi energy and is absorbed in the chemical potential. We have neglected the $ a^{2}t^{2}k^{2} $ terms in Eq. \[imself2\] since close to the valley edge $ k $ is a small number and the product $ atk $ can be ignored.
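The self-consistency in Eq. \[scba1\] amounts to a fixed-point iteration: guess $\Sigma$, evaluate the momentum integral of $G$, and repeat until convergence. The toy sketch below (our own illustration, not the gapped-Dirac Green's function of the text) does this for a flat band of constant density of states $D_{0}$ on $[-W, W]$, for which the integral reduces to an analytic logarithm:

```python
import cmath

def scba_self_energy(eps, ni_vi2, D0, W, n_iter=200):
    """Fixed-point iteration of Sigma = n_i v_i^2 * Int de' D0 / (eps - e' - Sigma)
    for a toy band of constant density of states D0 on [-W, W]."""
    sigma = -1e-3j  # small retarded seed (negative imaginary part)
    for _ in range(n_iter):
        # analytic integral of D0 / (eps - e' - Sigma) over e' in [-W, W]
        sigma = ni_vi2 * D0 * cmath.log((eps + W - sigma) / (eps - W - sigma))
    return sigma

s = scba_self_energy(0.0, ni_vi2=0.01, D0=1.0, W=1.0)
# mid-band: Re(Sigma) ~ 0 by symmetry, and Im(Sigma) < 0 supplies the broadening
```

The converged $\mathrm{Im}\,\Sigma$ is negative for a retarded self-energy, and its magnitude plays the role of the broadening $\eta$ used in the text.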
Notice that the energy arguments of the two $ \delta\left(\cdot\right) $ functions in Eq. \[imself2\], $ \lambda - \Delta/2 $ and $ \Delta/2 $, happen to be aligned to the top and bottom of the valence and conduction band, respectively. Since we carry out calculations around the conduction band minimum, the argument $ \lambda - \Delta/2 $ is discarded. ![The self energy $ \left(\Sigma_{scba}\right) $ in the Born approximation averaged over impurity distributions. The Matsubara frequency is unchanged since collisions are assumed to be elastic. The dashed line is the average of the two impurity locations marked as $ x $ and $ x^{'} $ while the $ \times $ represents a scattering event.[]{data-label="feyn1"}](feynman.eps) Thermal conductivity and thermopower ==================================== From the general expression for the Drude conductivity, the low temperature thermal conductivity $\left(\kappa_{e}\right)$ ignoring the phonon contribution can be established by a simple application of the Wiedemann-Franz law (WFL) as briefly noted (Eq. \[wfl\]) in the preceding section. A correct application of the WFL is contingent on weak elastic scattering of electrons and negligible electron-electron correlation, [*i*.*e*., ]{}the electrons move independent of one another. Assuming that the ensemble of electrons for a monolayer TMDC located in the vicinity of the conduction band minimum fulfills the criteria set forth by the WFL, we simply substitute the Drude conductivity expression from Eq. \[dr2\] in Eq. \[wfl\] to obtain $ \kappa_{e} $. The expression takes the form $$\kappa_{e} = \dfrac{\pi^{2}k_{B}^{2}T}{3e^{2}}\dfrac{\Omega}{2\eta}. \label{kappawfl}$$ The thermal conductivity from Eq. \[kappawfl\] evidently depends on the broadening parameter $\left(\eta\right)$ since it directly controls the electric conductivity. We have ignored, however, any correction to the conductivity arising from any weak localization present on the surface due to the assumed impurity concentration.
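The WFL step in Eq. \[kappawfl\] is a one-line conversion; the sketch below (our own, with SI constants) builds the Lorenz number and checks it against a familiar benchmark rather than against the TMDC numbers of the text:

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19  # elementary charge, C
L0 = (math.pi ** 2 / 3.0) * (kB / e) ** 2  # Lorenz number, ~2.44e-8 W Ohm / K^2

def kappa_wfl(sigma, T):
    """Eq. (wfl): electronic thermal conductivity from an electrical conductivity."""
    return L0 * sigma * T

# sanity check on a textbook metal: copper at 300 K with sigma ~ 5.96e7 S/m
k_cu = kappa_wfl(5.96e7, 300.0)  # roughly 4.4e2 W/m K, near copper's measured value
```

Any conductivity from Eq. \[dr2\] (converted to S/m for a given sample thickness) can be passed through the same function.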
The WFL has been verified for numerous cases and is generally regarded as a defining proof of the Fermi liquid theory of electrons. Violations of the WFL exist (we discuss them in the summary section) but for our purpose, where we apply it to a non-interacting body of electrons in monolayer TMDCs, it should suffice. Since the thermal conductivity obtained with the WFL is directly proportional to the electric conductivity for a given temperature, we may easily infer that $ \kappa_{e} $ in monolayer TMDCs will exhibit the same trend as $ \sigma $, the Drude conductivity. Here we make note of a useful result on specific heat, a quantity that can be directly measured and is easily determined from the preceding thermal conductivity calculation. The kinetic theory of electron transport relates the thermal conductivity and specific heat as: $$\kappa_{e} = \dfrac{1}{3}C_{e}v_{f}\Lambda_{e}. \label{wflc}$$ In Eq. \[wflc\], the Fermi velocity is $ v_{f} $, the mean free path is $ \Lambda_{e} $, and $ C_{e} $ is the specific heat of electrons. Note that an analogous relation for the phonon contribution to the overall thermal conductivity at elevated temperatures (when the phonon population is significant) exists but we ignore it here. The mean free path $ \Lambda_{e} = v_{f}\tau $, where $ \tau $ is the relevant scattering time. Mott’s expression for TMDC and laser-driven thermopower ------------------------------------------------------- Analogous to the thermal conductivity calculations, we can also determine the thermopower $ \left(\mathcal{Q}\right) $ of a monolayer TMDC via the Mott formula, which is $$\mathcal{Q} = -\dfrac{\pi^{2}}{3e}\dfrac{k_{B}^{2}T}{\sigma}\dfrac{\partial \sigma}{\partial \varepsilon}. \label{mott}$$ Inserting Eq. \[dr2\] in Eq.
\[mott\] and the expression for the derivative, the thermopower expression simplifies to $$\begin{aligned} \mathcal{Q} &= -\dfrac{\pi^{2}}{3e}\dfrac{2\eta\,k_{B}^{2}T\left(2\varepsilon_{f} - \lambda\right)}{\left(2\varepsilon_{f} - \lambda\right)^{2}-\Delta_{m}^{2}}\dfrac{\partial \sigma}{\partial \varepsilon}, \\ & = -\dfrac{2\pi^{2}}{3e}k_{B}^{2}T\dfrac{1+t}{1-t}\dfrac{\sqrt{t}}{\Delta_{m}}. \label{motttm} \end{aligned}$$ In Eq. \[motttm\], $ \sqrt{t} = \Delta_{m}/\left(2\varepsilon_{f} - \lambda\right) $. It is worthwhile to mention that the Mott thermopower expression holds good only insofar as the Fermi distribution can be approximated by a step function. For cases where considerable smearing of the bands is present, a large deviation between the result contained in Eq. \[motttm\] and experimental data must be expected. It is apparent from Eq. \[motttm\] that the overall band gap $ \left(\Delta_{m}\right) $ influences the Mott thermopower. In connection to the applicability of this result to the field of thermoelectrics at the nanoscale, it would be prudent to consider an approach that allows a measure of external control by virtue of alteration to the band energy description. In light of this, we examine the possibility of a laser-driven periodic perturbation that engineers the energy dispersion of a monolayer TMDC. A periodic perturbation in quantum mechanics is dealt with by invoking the Floquet theory that allows the construction of an effective time independent Hamiltonian. The theory is summarized in several published works. [@kitagawa2011transport; @cayssol2013floquet; @lopez2015photoinduced] We simply quote the result here that shows the change to the band gaps at the $ K $ and $ K^{'} $ edges when placed under a high-frequency light source, commonly known in literature as the *off-resonant* condition.
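Eq. \[motttm\] can be evaluated directly; the sketch below (our own, with energies in eV and $e = 1$ so that $\mathcal{Q}$ comes out in V/K) confirms the expected behaviour that $|\mathcal{Q}|$ is largest when the Fermi level sits just above the band bottom, where $t \to 1$:

```python
import math

kB = 8.617333e-5  # Boltzmann constant in eV/K

def mott_thermopower(eps_f, Delta, lam, T):
    """Eq. (motttm): thermopower in V/K (energies in eV, charge e = 1)."""
    delta_m = Delta - lam
    sqrt_t = delta_m / (2.0 * eps_f - lam)
    tt = sqrt_t ** 2
    return -(2.0 * math.pi ** 2 / 3.0) * kB ** 2 * T * (1.0 + tt) / (1.0 - tt) * sqrt_t / delta_m

# MoS2-like parameters at T = 10 K
q_near = mott_thermopower(0.90, 1.66, 0.075, 10.0)  # Fermi level near the band bottom
q_far = mott_thermopower(1.20, 1.66, 0.075, 10.0)   # Fermi level deeper in the band
```

Both values are negative (electron-like carriers), and the magnitude decays as the Fermi level moves away from the gap edge.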
The influence of the periodic *off-resonant* light on the TMDC monolayer is to the lowest order approximated by an effective Hamiltonian averaged over a complete cycle through the evolution operator $ U = \mathcal{T}\exp\left(-\dfrac{i}{\hbar}\int_{0}^{T}H\left(t\right)dt\right)$. [@kitagawa2011transport] Here $ \mathcal{T} $ is the time-ordering operator and $ T = 2\pi/\omega $. This approximate Hamiltonian, which in principle describes the behaviour of a system with time scales much longer than $ T $, rearranges the electron occupation number without modifying the bands. In the *off-resonant* state, the approximate Floquet Hamiltonian following Ref.  is $$H_{\mathcal{F}} = H_{\tau} + \dfrac{1}{\hbar\,\omega}\left[H_{-1}, H_{1}\right], \label{flham}$$ and $ H_{m} = \dfrac{1}{T}\int_{0}^{T}H\left(t\right)\exp\left(-im\omega t\right)dt $. Note that $ H\left(t\right) $ is the time-dependent part obtained using the standard Peierls substitution $ \hbar\,k\rightarrow \hbar\,k - e\textbf{A}\left(t\right)$ in the TMDC monolayer Hamiltonian (Eq. \[mos2ham\]); this substitution gives $ H\left(t\right) = \dfrac{a\,t\,e}{\hbar}\,A\left(\sigma_{x}\cos\,\omega t + \sigma_{y}\sin\,\omega t\right) $, where the *off-resonant* light is right-circularly polarized and represented through the vector potential $ \textbf{A}\left(t\right) = A\left(\cos\,\omega t\,\hat{e}_{x}, \sin\,\omega t\,\hat{e}_{y}\right) $. The amplitude and frequency are denoted by $ A $ and $ \omega $, respectively. The desired Floquet Hamiltonian, $ H_{\mathcal{F}} $, by a direct evaluation of the respective Fourier components and using $ \left[\sigma_{x}, \sigma_{y}\right] = 2i\sigma_{z} $ therefore reads similar to Eq. \[mos2ham\] but with a different band gap. The change in band gap by evaluating the commutator in Eq. \[flham\] and inserting in Eq.
\[mos2ham\] is expressed as $ \Delta_{m}\sigma_{z}\otimes\mathbb{I}\rightarrow \left(\Delta_{m} + \tau\,\Delta_{F}/2\right)\sigma_{z}\otimes\mathbb{I} $, where the Floquet-induced band-gap modification is $$\Delta_{F} = 2e^{2}A^{2}a^{2}t^{2}/\hbar^{3}\omega. \label{flbg}$$ In Eq. \[flbg\], $ A = E_{0}/\omega $ where $ E_{0} $ is the amplitude of the electric field. A more convenient representation utilizing the relation $ at = \hbar\,v_{f} $ allows us to write this as $ 2\left(eAv_{f}\right)^{2}/\hbar\omega $. This light-induced band gap under *off-resonant* conditions is alterable through the intensity and frequency parameters by expressing the intensity of incident light as $ I = \left(eA\omega\right)^{2}/\left(8\pi\alpha\right)$, $ \alpha = 1/137 $ being the fine structure constant. [@cayssol2013floquet] The Floquet modulated band gap is therefore $ 16\pi\alpha\,Iv_{f}^{2}/\hbar\omega^{3} $. The dispersion diagram when right-circularly polarized light (under *off-resonant* conditions) shines on a monolayer of MoS$_{2}$ with altered band gaps is shown in Fig. \[disp\_altered\]. Notice that the band gap at $ K $ is increased to $ 3.11\, eV $ from the pristine $ 1.66\,eV $ while its time-reversed counterpart at $ K^{'} $ sees a reduction to $ 0.074\,eV $ for right-circularly polarized light. The enhancement and reduction at the valley edges is reversed for a left-circularly polarized beam. The new band gap $ \left(\Delta_{m} + \Delta_{F}/2\right) $ can be substituted in Eq. \[motttm\] to obtain a driving frequency-reliant thermopower. ![The dispersion of monolayer MoS$_{2}$ under the *off-resonant* light condition. The sub-figure on the left (right) plots the band dispersion around the $ K (K') $ point. The energy of the light beam was assumed to be $ eAv_{f} = 2.9\, eV $. This result is in qualitative agreement with Ref.
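The valley-resolved gap shift of Eq. \[flbg\] is easy to sketch; in the snippet below (ours) the drive energy $\hbar\omega$ is an assumed illustrative value, not one quoted in the text, so only the qualitative opposite shift of the two valleys should be read from it:

```python
def floquet_gaps(delta_m, eAvf, hw):
    """Valley gaps under a right-circular off-resonant drive, Eq. (flbg).
    delta_m: pristine gap (eV); eAvf = e*A*v_f (eV); hw = hbar*omega (eV, an input assumption)."""
    dF = 2.0 * eAvf ** 2 / hw  # Delta_F = 2 (e A v_f)^2 / (hbar omega)
    # tau = +1 (K) is enlarged, tau = -1 (K') is shrunk, by Delta_F / 2 each
    return delta_m + dF / 2.0, delta_m - dF / 2.0

# delta_m = 1.585 eV and eAvf = 2.9 eV follow the text; hw = 11 eV is our assumption
gK, gKp = floquet_gaps(1.585, eAvf=2.9, hw=11.0)
```

By construction the two valley gaps shift symmetrically about the pristine value, which is the time-reversal structure exploited later for the valley-resolved thermopower.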
.[]{data-label="disp_altered"}](tmdc_mod_bs.eps) Thermal power in the variable range hopping regime -------------------------------------------------- In an earlier section, the use of the SCBA in the presence of impurity disorder supplied us with a finite broadening of the density of states; however, the material constants were left unchanged, a tacit assumption that is not necessarily true. Disorder-induced localization, in addition to serving as an agent for tangible changes to electron transport, also reduces the electrostatic screening to enhance the long-range Coulombic interaction and rearranges the distribution of energy states, a clear expression of which is mirrored in a changed set of material parameters. While the intrinsic spin-orbit coupling, a key material parameter in monolayer semiconducting TMDCs, is normally invariant and unlikely to be influenced through external perturbations, numerical calculations do show that $ `t' $, the hopping parameter (see Eq. \[mos2ham\]) can indeed be altered. As a matter of fact, strain, embedded impurities, positional disorder etc., all of which have been shown to be present on the surface of a monolayer TMDC, can contribute to the probability of altered hopping. [@qiu2013hopping] A quantitative assessment of their influence can be gauged from the empirical relationship for the probability of electron hopping in a disordered 2D system. [@tessler2009charge] The model uses the electron wave function localization for a specific disorder strength and the hopping radius to predict the following expression $$P \sim \exp\left(-\dfrac{2R}{\xi} - \dfrac{1}{\pi R^{2}D\left(E\right)kT}\right), \label{prob}$$ where $ \xi $ is the localization length, $ R $ is the hopping radius, and $ D\left(E\right) $ is the density of states. The genesis of Eq.
\[prob\] lies in Mott’s variable range hopping (VRH) model; this model advanced by Mott contends that at low temperatures an electron does not always hop to the nearest neighbour but to a state with the lowest activation energy and the shortest hopping distance. For an optimum hopping distance $ R $, the maximum hopping probability is expressed by Eq. \[prob\]. Since the electric conductivity is linked to the strength of the hopping parameter $ `t' $ which undergoes an adjustment in the Mott model, the thermal conductivity in the WFL regime must therefore manifestly exhibit an identical behaviour. It can be shown [@mott1993conduction; @park2015hopping] that the functional dependence of the electrical conductivity within the Mott-VRH framework can be expressed as $$\sigma_{\alpha\beta} = \sigma_{\alpha\beta}^{0}\exp\left[-\left(\dfrac{\Lambda}{T}\right)^{\nu}\right], \label{condt}$$ where $ \nu = 1/3 $ for 2D systems, $ \sigma_{\alpha\beta}^{0} $ is a conductivity prefactor, and $ \Lambda $ is an experimentally determined constant, dependent on the radius of hopping/localization length and the density of states close to the Fermi level. Numerical results for the thermal conductivity and thermopower centred around the expressions derived here are presented in Section. \[s3\]. As a useful addendum to the thermal conductivity calculations, it is also possible - bearing in mind the preceding discussion on the variability of the electric conductivity in the presence of disorder and other imperfections - to express the thermal power density of a system at low temperatures as a function of material constants. To carry out this task, we write down the standard heat equation $$Q_{h} = \kappa_{e}A\dfrac{dT}{dx}, \label{hq1}$$ where $ Q_{h} $ is the heat flowing through the system, $ A $ is the area of cross-section, and $ dT/dx $ is the temperature gradient. Substituting for $ \kappa_{e} $ using the WFL (Eq. \[wfl\]) in Eq. \[hq1\] and the modified conductivity (Eq.
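A minimal sketch (ours) of the Mott-VRH conductivity, written in the standard 2D form with the exponent $\nu = 1/3$ applied to $\Lambda/T$ inside the exponential, shows the suppression of transport at low temperature:

```python
import math

def sigma_vrh(T, sigma0, Lam, nu=1.0 / 3.0):
    """2D Mott variable-range-hopping conductivity: sigma0 * exp(-(Lam/T)**nu).
    Lam is the fitted VRH constant (in K); sigma0 is the conductivity prefactor."""
    return sigma0 * math.exp(-((Lam / T) ** nu))

# Lam = 17.4 K is the fitted value quoted later in the text
s10 = sigma_vrh(10.0, 1.0, 17.4)
s20 = sigma_vrh(20.0, 1.0, 17.4)
# conductivity rises monotonically with T and approaches sigma0 from below
```

Through the WFL, the thermal conductivity inherits the same monotonic suppression at low temperature.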
\[condt\]) gives $$\int_{0}^{x}\mathcal{P}dx = \mathcal{L}\sigma_{\alpha\beta}^{0}\int_{T_{c}}^{T}\exp\left[-\left(\dfrac{\Lambda}{T}\right)^{1/3}\right]T dT. \label{hq2}$$ The power density is denoted as $ \mathcal{P}\left(x\right) = Q_{h}/A $ and $ T_{c} $ is the constant temperature maintained at one end of the channel. For no spatial dependence of the power density $ \mathcal{P} $, Eq. \[hq2\] is $$\mathcal{P} = \dfrac{\mathcal{L}\sigma_{\alpha\beta}^{0}}{l}\int_{T_{c}}^{T}\exp\left[-\left(\dfrac{\Lambda}{T}\right)^{1/3}\right]T dT, \label{hq3}$$ where $ l $ is the length of the one-dimensional channel of heat flow. The integral on the R.H.S can be recast as $ \mathcal{P} = -3\Lambda^{2}\dfrac{\mathcal{L}\sigma_{\alpha\beta}^{0}}{l}\int z^{-7}\exp\left(z\right)dz $, where $ z = -\left(\Lambda/T\right)^{1/3} $. This integral can be either numerically evaluated or, using the relation $ \int \dfrac{\exp\left(z\right)}{z^{n}}dz = \dfrac{1}{n-1}\biggl(-\dfrac{\exp\left(z\right)}{z^{n-1}} + \int \dfrac{\exp\left(z\right)}{z^{n-1}}dz\biggr) $, analytically determined through successive integration by parts to furnish the power density. Numerical results {#s3} ================= We have now gathered all the information for a quantitative determination of the low-temperature thermal conductivity $\left(\kappa\right)$. As a first step, we use Eq. \[imself2\] to obtain the energy broadening of the states; setting the impurity concentration to $ 2.5 \times 10^{10}\,cm^{-2} $ and the attendant impurity potential [@adam2009theory] as $ 0.1\, keV\,\AA^{2} $, the imaginary contribution of the self-energy is approximately equal to $ 4.6\, meV $ and $ 8.0\, meV $ for WSe$_{2}$ and MoS$_{2}$, respectively. A plot of $ \kappa_{e} $ for these two TMDCs as a function of temperature for electrons in the vicinity of the bottom of the conduction band (note that the conduction band minimum in each case is $ \Delta/2 $, where $ \Delta $ is the fundamental gap at the valley edges) is displayed in Fig. \[therm\].
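The temperature integral in Eq. \[hq3\] (reading the exponent as the standard Mott form, applied to $\Lambda/T$) is also straightforward to evaluate numerically; the sketch below (our own, with a unit prefactor standing in for $\mathcal{L}\sigma_{\alpha\beta}^{0}/l$) uses a composite Simpson rule:

```python
import math

def power_density(T, Tc, Lam, pref, n=1000):
    """Numerically evaluate Eq. (hq3): pref * Int_{Tc}^{T} exp(-(Lam/T')**(1/3)) * T' dT'
    with a composite Simpson rule; pref plays the role of L*sigma0/l, n must be even."""
    h = (T - Tc) / n
    f = lambda x: math.exp(-((Lam / x) ** (1.0 / 3.0))) * x
    acc = f(Tc) + f(T)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(Tc + i * h)
    return pref * acc * h / 3.0

# cold end held at Tc = 5 K, hot end at 20 K, Lam = 17.4 K as fitted in the text
p = power_density(20.0, 5.0, 17.4, 1.0)
```

This gives the same result as the integration-by-parts recurrence quoted in the text, and is the simpler route when only a number is needed.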
We wish to point out that of the two semiconducting monolayer TMDCs chosen, MoS$_{2}$ and WSe$_{2}$, the latter has the greater thermal conductivity at low temperatures. This is in agreement with their intrinsic Drude conductivities; observe from Eq. \[dr2\] that a lower band gap translates into a higher Drude conductivity, which is the case with WSe$_{2}$. To understand this better, the material parameters in Table. \[table1\] reveal a nearly three-fold larger spin-orbit splitting $\left(\lambda\right)$ in WSe$_{2}$ in comparison to MoS$_{2}$ while the other parameters are nearly identical. This large spin splitting (on account of the heavier metal, tungsten) effectively contracts the band gap $\left(\Delta - \lambda\right) $ from which the pattern displayed by the Drude and thermal conductivity (in Fig. \[therm\]) follows. There are other physical situations, for instance, the strength of inter-band (valence to conduction state jumps) transition rates, where the lower band gap of WSe$_{2}$ would again appear as a determining factor; we do not consider such cases here, for a clear example of this, see Ref. . When variable range hopping dominates, with electrons close to the Fermi level hopping from one localized site to another, the adjusted conductivity (Eq. \[condt\]) is pared, an illustration of which is the degrading of the attendant thermal conductivity in Fig. \[therm\]. For a numerical calculation, the constant $ \Lambda $ was set to $ 17.4\, K $ obtained from a fitting analysis presented in Ref.  for temperatures under $ 20\, K $. A lower thermal conductivity in the absence of pristine crystalline order such as in a nanocrystal may be of value in applications that target thermopower generation. A more detailed note on this point appears in Section. \[summ\]. The quantitative determination of the thermal conductivity, by virtue of Eq. \[wflc\], also permits an estimation of the specific heat.
For a numerical answer, we assign values to the following quantities: the thermal conductivity for a 1.0 cm$^{2}$ sample of MoS$_{2}$ monolayer (which is $ 6.0\, \AA $ thick[@splendiani2010emerging]) is set to $ \kappa_{e} = 1\, W/m\,K $, the transit time using the imaginary part of the self-energy computed above (the imaginary retarded self-energy term is linked to the scattering time, $\tau_{sc} $, by the relation $ \hbar/\tau_{sc} = 2Im\Sigma $) is roughly $ 2.0\, ps $, and the Fermi velocity $ \left(at/\hbar\right) $ is given a value of $ 5.33 \times 10^{5}\, m/s $. Inserting all of them in Eq. \[wflc\], the specific heat for the MoS$_{2} $ slab is $ 3.2 \times 10^{5}\, eV/K $. The temperature for this calculation was set to $ 10 \, K $. Notice that this is the specific heat at low temperatures ($ T \ll \Theta_{D} $) where electrons primarily carry the heat and the lattice contribution via dominant phonon modes is negligible. \[table1\]

  Parameters           MoS$_{2}$   WSe$_{2}$
  -------------------- ----------- -----------
  a (Å)                3.193       3.310
  $ \Delta\,(eV) $     1.66        1.60
  $ t\,(eV) $          1.10        1.19
  $ 2\lambda\,(eV) $   0.15        0.46

We next turn our attention to the low-temperature thermopower result derived (Eq. \[motttm\]) using the Mott formula. First of all, note that the thermopower (or the Seebeck coefficient) exhibits a dependence on the intrinsic band gap $ \left(\Delta - \lambda\right) $ and is independent of the broadening parameter $ \left(\eta\right)$. Indeed, a comparison of $ \mathcal{Q} $ in MoS$_{2}$ and WSe$_{2}$ shows it to be higher for a range of energies in the vicinity of the top of the valence band (Fig. \[therm\]). The calculation was done at $ T = 10\,K $. While we show the variation of $ \mathcal{Q} $ for two pristine semiconducting TMDCs here, for an enhanced low-temperature thermopower, methods that could possibly adjust (and lower) the band gap are therefore of interest.
In this regard, it will be useful to mention that it is now also possible to synthetically fabricate (apart from exfoliation) single layer alloys of TMDCs. J. Mann [*et al*. ]{}report in Ref. [@mann20142] the fabrication of single layer Mo$_{1-x}$W$_{x}$S$_{2}$ and MoSe$_{2(1-x)}$S$_{2x}$ allowing for a continuous tuning of the band gap and optical properties by varying the alloy composition. The direct band gap of the alloy, MoSe$_{2(1-x)}$S$_{2x}$, for example, can lie between 1.66 $\mathrm{eV} $ (MoS$_{2}$) and 1.47 $\mathrm{eV} $ (MoSe$_{2}$), assuming the rule of virtual crystal approximation is reasonably valid. ![The low temperature thermal conductivity for two cases is shown. The curves marked as ‘WFL’ are obtained by a straightforward application of the Wiedemann-Franz law; the other group denoted by ‘VRH’ pertains to the state when variable range hopping is active and modifies the result of WFL as explained in the text (see Eq. \[condt\]). The two semiconducting TMDCs are MoS$_{2}$ and WSe$_{2}$ (dashed line).[]{data-label="therm"}](thermopower.eps) In passing we note that the expression for the thermopower $ \left(\mathcal{Q}\right) $ in Eq. \[motttm\] shows a functional independence of the broadening parameter. It is reasonable to expect, however, that surface impurities and dopants will influence the thermopower generated; this apparent non-dependence can be explained by noting that $ \eta $ is an energy-independent quantity that we obtained from a self-consistent Born approximation by assigning an impurity concentration and potential in the dilute limit. In a real experimental setup, the broadening parameter $ \eta = \hbar/\tau $ ($ \tau $ is the transit time) is not a fixed quantity and must change as a function of carrier energy.
An alternative approach to incorporate the energy dependence would be to use an expression for conductivity in the diffusive limit; in equation form, it should read as $$\sigma = \Phi D\left(\varepsilon\right)\tau\left(\varepsilon\right)/2. \label{condiff}$$ This conductivity expression ($\Phi $ is material dependent and D($\varepsilon$) is the density of states) can now be inserted in Eq. \[motttm\] for an evaluation of the thermopower. In general, as was shown by Hwang [*et al*. ]{}in Ref. , the energy dependence can be of the form $ \tau \propto \varepsilon^{m} $ with varying values of $ m $ corresponding to different scattering mechanisms. We have only considered an energy-independent impurity scattering here. In the last section, we quantitatively show the influence of the *off-resonant* circularly polarized light that introduces a photo-induced energy band gap through the Floquet dressed states. In the brief discussion presented in Section. \[theo\], the band gaps at the time reversed $ K $ and $ K^{'} $ valleys were enlarged and shrunken by shining right-circularly polarized light (see Fig. \[disp\_altered\]) which in principle could regulate the thermopower, a gap-dependent quantity. Plugging in the altered band gaps in Eq. \[motttm\], we plot (Fig. \[floq\]) the photo-controlled thermopower for a range of frequencies. The thermopower follows the well-defined trend and exhibits an upward tick when the band gap is increased. For our case, under a right-circularly-polarized light beam, the band gap at $ K $ is higher than its intrinsic value and therefore furnishes a higher thermopower while the reduced band gaps at $ K^{'} $ for both WSe$_{2}$ and MoS$_{2}$ display a correspondingly lower value. At the $ K^{'} $ edge, the band gap reduction is smaller for MoS$_{2}$, which indicates a higher thermopower over WSe$_{2}$. 
Notice that the frequencies must satisfy the condition $ \hbar\omega \gg H $, that is, the energy contained in the incident beam is far greater than the energy scales of the static problem (typified in the Hamiltonian, $ H $). In passing we note that, as a matter of fact, in graphene, the earliest 2D material, such *off-resonant* conditions have been fulfilled by using photon energies that lie in the soft X-ray regime. By simulating identical conditions in a monolayer TMDC, which can be likened to spinful gapped graphene, the thermopower plot (Fig. \[floq\]) shows a clear enhancement in case of the $ K^{'} $ valley which has a reduced band gap in contrast to the $ K $ valley edge, with the change tailing off as the frequency of the incident light increases. While we have demonstrated a valley-resolved thermopower with right-circularly polarized light, note that the results do not qualitatively change under a left-circularly polarized light; as opposed to an enhancement at $ K^{'} $, the $ K $ valley edge now exhibits the same trend. This is simply a consequence of the time reversal symmetry that exists in the system. In any case, regardless of the chirality of the irradiating beam, the valley-resolved thermopower is maintained. ![The numerically computed valley-resolved low temperature thermopower $ \left(\mathcal{Q}\right)$ of monolayer TMDCs MoS$_{2}$ and WSe$_{2}$ under a high frequency right-circularly polarized light beam is shown. The temperature was set to $ T = 10\, K $. Under *off-resonant* conditions, the enlargement of the band gap at $ K $ provides a higher $ \mathcal{Q} $ compared to its time reversed counterpart at $ K^{'} $. In general, the intrinsically lower band gap for WSe$_{2}$ is expected to provide a smaller $\mathcal{Q} $.
The inset shows the progression of the band gaps at the $ K $ and $ K^{'} $ valley edges with incident right-circularly polarized light.[]{data-label="floq"}](floquet_thermopower.eps)

Summary {#summ}
=======

In this work we have carried out an evaluation of the low-temperature thermal conductivity of monolayer semiconducting transition metal dichalcogenides (TMDCs). We calculated the Drude conductivity and related it to the low-temperature thermal conductivity using the Wiedemann-Franz law. Specifically, we established the dependence of the thermal conductivity and thermopower (Seebeck coefficient) on the dispersion of the monolayer TMDC. TMDCs with higher band gaps have a larger Seebeck coefficient (and a lower thermal conductivity), which is further tunable under a high-energy circularly polarized light beam. An important remark about the derived results is in order: we have tacitly assumed a free-standing monolayer of TMDC, while experimental setups may utilize a substrate. The presence of a substrate can modify the results, at least quantitatively, an instance of which can be found in the theoretical results reported in Ref. , where substrate-grown monolayer MoS$_{2} $ sheets revealed a poorer thermoelectric power factor than their freely suspended counterparts. A chief purpose of this work was to describe conditions under which the thermal conductivity and thermopower can be easily adjusted for a wide spectrum of applications. [@wang2012electronics; @jariwala2014emerging] In this regard, we note that for applications that desire a faster transport of heat to lower ambient temperatures, such as nano-sized devices that suffer from self-heating, Peltier cooling aided by a higher thermal conductivity is a necessary condition; on the contrary, thermoelectric power generation needs a larger thermopower/Seebeck coefficient. Notice that the two quantities exhibit opposite trends with respect to the intrinsic band gap.
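The Wiedemann-Franz step used in the summary is simple enough to state in code; a minimal sketch, using the Sommerfeld value of the Lorenz number and an arbitrary example conductivity (both numbers are illustrative, not values from the paper):

```python
L0 = 2.44e-8  # Sommerfeld (free-electron) Lorenz number, W Ohm K^-2

def thermal_conductivity(sigma, temp):
    """Electronic thermal conductivity kappa = L0 * sigma * T from the
    Wiedemann-Franz law; sigma in S/m, temp in K, kappa in W/(m K)."""
    return L0 * sigma * temp

# Example: a Drude conductivity of 1e5 S/m at T = 10 K (toy numbers)
kappa = thermal_conductivity(1e5, 10.0)
```

As the closing paragraphs caution, this conversion is only as good as the WFL itself, which fails for strongly interacting electron systems.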
Lastly, it is useful to remark that while we have suggested a laser-driven tuning of the band gap and the thermoelectric behaviour, optimally straining the monolayer TMDC can also yield the sought characteristics. A promising thermoelectric figure of merit $ \left( ZT \right) $ has already been achieved for a strained ZrS$_{2} $ monolayer. [@lv2016strain] TMDC films also carry defects, vacancies, clusters, and dislocations from the growth process, which affect the electronic and chemical behaviour, as testified by the large body of data available from the optoelectronic characterization of TMDC thin films. These measurements clearly show the presence of defect-induced traps that give rise to additional photoemission peaks and distinct photoluminescence intensity.[@tao2014strain; @mouri2013tunable] These imperfections, rather than being severely detrimental to device prospects, can be turned into efficient ‘knobs’ by leveraging their influence on the overall thermal attributes of TMDCs; the thermal conductivity, in fact, has been shown to be modulated through defects in silicene[@li2012vacancy; @liu2014thermal]. We utilized Mott’s variable-range-hopping model to describe the change in the Drude conductivity and how a modulation of its thermal equivalent could be accomplished. It is pertinent to state that the analysis presented here involves a Hamiltonian (Eq. \[mos2ham\]) that describes massive Dirac fermions around the valley edges; in principle this study could also be extended [@rostami2014intrinsic] to gapped topological insulators. Such topological insulators host massive Dirac fermions, and the gap opening of the surface states could be a result of inter-surface hybridization in thin films or the presence of an out-of-plane magnetic field. Before closing, we wish to remark on the validity of using the Wiedemann-Franz law (WFL) to calculate the thermal conductivity.
WFL tacitly assumes that the ensemble of electrons does not undergo inelastic electron-phonon scattering and that electron-electron interaction is negligible. However, violations of WFL appear in strongly interacting systems such as heavy fermion metals,[@tanatar2007anisotropic] Luttinger liquids, and ferromagnets, and such a violation is normally considered the hallmark of non-Fermi-liquid behaviour. A recently reported work[@crossno2016observation] also predicts the violation of WFL in two-dimensional graphene in the vicinity of the charge neutral point, which hosts a quasi-relativistic electron-hole plasma known as the Dirac fluid.
--- abstract: 'The post-enrolment course timetabling (PE-CTT) is one of the most studied timetabling problems, for which many instances and results are available. In this work we design a metaheuristic approach based on Simulated Annealing to solve the PE-CTT. We consider all the different variants of the problem that have been proposed in the literature and we perform a comprehensive experimental analysis on all the public instances available. The outcome is that our solver, properly engineered and tuned, performs very well on all cases, providing the new best known results on many instances and state-of-the-art values for the others.' address: | DIEGM, University of Udine\ via delle Scienze 208, I-33100, Udine, Italy author: - 'Sara Ceschia, Luca Di Gaspero, Andrea Schaerf' bibliography: - 'strings.bib' - 'timetabling.bib' - 'statistics.bib' title: 'Design, Engineering, and Experimental Analysis of a Simulated Annealing Approach to the Post-Enrolment Course Timetabling Problem' ---

Course Timetabling, Simulated Annealing, Metaheuristics

Introduction
============

Timetabling problems are widespread in many human activities and their solution is a hard optimisation task that can be profitably tackled by Operations Research methods. Educational timetabling is a sub-field of timetabling that considers the scheduling of meetings between teachers and students. A large number of variants of educational timetabling problems have been proposed in the literature, which differ from each other based on the type of institution involved (university, school, or other), the type of meeting (course lectures, exams, seminars, …), and the constraints imposed. The *university course timetabling* (CTT) problem is one of the most studied educational timetabling problems and consists in scheduling a sequence of events or lectures of university courses in a prefixed period of time (typically a week), satisfying a set of various constraints on rooms and students.
Many formulations have been proposed for the CTT problem over the years. Indeed, it is impossible to write a single problem formulation that suits all cases, since every institution has its own rules, features, costs, and fixations. Nevertheless, two formulations have recently received more attention than others, mainly thanks to the two timetabling competitions, ITC 2002 and ITC 2007 [@MSPM10], which have used them as competition ground. These are the so-called *curriculum-based course timetabling* (CB-CTT) and *post-enrolment course timetabling* (PE-CTT). The main difference between the two formulations is that in the CB-CTT all constraints and objectives are related to the concept of *curriculum*, which is a set of courses that form the complete workload for a set of students. On the contrary, in PE-CTT this concept is absent and the constraints and objectives are based on the student enrolments to the courses. In this work we focus on the PE-CTT problem and we design a single-step metaheuristic approach based on Simulated Annealing (SA), working on a composite neighbourhood composed of moves that reschedule one event or swap two events. The solver is able to deal with all the different variants of the PE-CTT problem proposed in the literature. We experiment with our solver on all the instances that, to our knowledge, have been made publicly available. The outcome of our experimental analysis is that our general solver, properly engineered and tuned, is able to outperform most of the solvers specifically designed and tuned for a single specific formulation and/or a specific set of instances.

Problem Definition
==================

Over the years, different versions of the PE-CTT problem have been defined. We first illustrate (Section \[sec:definition\]) the most general version, which is the one that has been used for ITC 2007 and is described by @LePM07.
The other versions are obtained from this one by removing some of the features, and they are described in Section \[sec:instances\] along with a presentation of the available instances.

General Definition of PE-CTT {#sec:definition}
----------------------------

In the PE-CTT problem we are given a set $\mathcal{E} = \{ 1, \dots, E\}$ of events, a set $\mathcal{T} = \{ 1, \dots, T\}$ of timeslots, and a set $\mathcal{R} = \{ 1, \dots, R\}$ of rooms. A set of days $\mathcal{D} = \{ 1, \dots, D\}$ is also defined, such that each timeslot belongs to one day and each day is composed of $T/D$ timeslots. We are also given a set of students $\mathcal{S}$ and an enrolment relation $\mathcal{M} \subseteq \mathcal{E} \times \mathcal{S}$, such that $(e,s)\in \mathcal{M}$ if student $s$ attends event $e$. Furthermore, we are given a set of features $\mathcal{F}$ that may be available in rooms and are required by events. More precisely, we are given two relations $\Phi_R \subseteq \mathcal{R} \times \mathcal{F}$ and $\Phi_E \subseteq \mathcal{E} \times \mathcal{F}$ such that $(r,f)\in \Phi_R$ if room $r$ has feature $f$ and $(e,f)\in \Phi_E$ if event $e$ requires feature $f$, respectively. Each room $r\in \mathcal{R}$ has a fixed capacity $C_r$, expressed in terms of seats for students. In addition, a precedence relation $\Pi \subseteq \mathcal{E} \times \mathcal{E}$ is defined, such that if $(e_1,e_2) \in \Pi$, events $e_1$ and $e_2$ must be scheduled in timeslots $t_1$ and $t_2$ such that $t_1 < t_2$. Finally, there is an availability relation $\mathcal{A} \subseteq \mathcal{E} \times \mathcal{T}$, stating that event $e$ can be scheduled in timeslot $t$ only if $(e,t) \in \mathcal{A} $. The (hard) constraints of the problem are the following ones: 1. Events that share common students cannot be scheduled in the same timeslot. 2.
An event cannot be allocated in a room that is missing one of the features needed by the event, or in a room whose capacity is less than the number of students attending the event. 3. No more than one event per room per timeslot is allowed. 4. Timeslots must be assigned to events according to the availability relation $\mathcal{A}$. 5. Timeslots must be assigned to events according to the precedence relation $\Pi$. Since reaching feasibility could be non-trivial for some instances, the definition given for ITC 2007 includes the distinction between *valid* and *feasible* timetables [see @LePM07]. In a valid timetable all hard constraints must be satisfied, but it is allowed to leave some events unscheduled (i.e., they have no timeslot assigned). A feasible timetable is a valid one in which all events are scheduled. The ITC 2007 rules require all solutions to be valid, but they also allow infeasible ones. In formal terms, this means that the problem consists in finding an assignment $\mathcal{E} \rightarrow \mathcal{T} \times \mathcal{R} \cup \{(t_{\delta}, r_{\delta})\}$, where $t_\delta$ and $r_\delta$ are a *dummy timeslot* and a *dummy room*. The assignment of an event to these special entities identifies the unscheduled events. In addition, we introduce a new hard constraint type, accounting for the unscheduled events, which can be violated to some extent: 1. Events cannot be unscheduled.\[con:unscheduled\] The (integer-valued) objective function is the sum of the following soft constraints. Each violation of any of the three kinds counts as one point in the objective function. 1. A student should not attend an event in the last timeslot of a day. For each event in the last timeslot, we compute the sum of the number of students that have to attend it.\[obj:late-event\] 2.
A student should not attend more than two consecutive events in a day (i.e., the last timeslot of a day and the first one of the following day are not considered as consecutive). For each day and for each student, we compute the number of consecutive events subsequent to the second. For instance, if 3 students have to attend 4 consecutive events in a day, the penalty is $3\cdot(4-2) = 6$. \[obj:consecutive-event\] 3. A student should not attend only one single event in the whole day. For each day, we sum the number of students that have to attend isolated events.\[obj:isolated-event\] In conclusion, the quality of the solution is evaluated with an evaluation function that is composed of two measures: the *distance to feasibility* (H6) and the *objective function* (S1 + S2 + S3). The distance to feasibility is computed as the sum of the numbers of students that require unscheduled events. The evaluation function is hierarchical, in the sense that valid solutions with a lower distance to feasibility are better solutions. If two valid solutions have the same distance to feasibility, then the solution with the minimum value of the objective function is preferred.

Problem Variants and Available Instances for PE-CTT {#sec:instances}
---------------------------------------------------

The problem formulation presented above is the one defined by @LePM07 and used in the ITC 2007. Two other versions have been considered in the literature, which are obtained from the above one by removing some of the constraints. In particular, the first one is the original formulation, proposed by the Metaheuristics Network [@RSBC03] and used for the ITC 2002. This formulation does not consider the availability (H4) and precedence (H5) constraints. In addition, since feasibility was easy to obtain for all the ITC 2002 instances, the possibility of leaving some events unscheduled was not taken into account.
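To make the three soft penalties S1–S3 concrete, here is a minimal per-student sketch of their computation; the data layout (plain dicts keyed by event and student) and all names are our own illustrative choices, not the paper's representation.

```python
from collections import defaultdict

SLOTS_PER_DAY = 9  # each of the 5 days consists of 9 timeslots

def soft_penalty(assignment, attends):
    """Sum of the soft penalties S1 + S2 + S3 for a timeslot assignment.

    assignment: event -> timeslot (0-based; unscheduled events omitted)
    attends:    student -> set of attended events
    """
    penalty = 0
    for events in attends.values():
        slots = sorted(assignment[e] for e in events if e in assignment)
        per_day = defaultdict(list)
        for t in slots:
            per_day[t // SLOTS_PER_DAY].append(t)  # stays sorted per day
        for ts in per_day.values():
            # S1: events in the last timeslot of a day
            penalty += sum(1 for t in ts if t % SLOTS_PER_DAY == SLOTS_PER_DAY - 1)
            # S3: a single isolated event in the whole day
            if len(ts) == 1:
                penalty += 1
            # S2: one point per consecutive event subsequent to the second
            run = 1
            for a, b in zip(ts, ts[1:]):
                run = run + 1 if b == a + 1 else 1
                if run > 2:
                    penalty += 1
    return penalty
```

With 3 students each attending 4 consecutive events, the sketch reproduces the worked example above: a penalty of $3\cdot(4-2)=6$.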
[|l|\*[6]{}[c]{}|ccc|]{} Formulation & H1 & H2 & H3 & H4 & H5 & H6 & S1 & S2 & S3\
<span style="font-variant:small-caps;">Full</span> (ITC 2007) & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$\
<span style="font-variant:small-caps;">Original</span> (ITC 2002) & $\surd$ & $\surd$ & $\surd$ & — & — & — & $\surd$ & $\surd$ & $\surd$\
<span style="font-variant:small-caps;">Hard-Only</span> & $\surd$ & $\surd$ & $\surd$ & — & — & $\surd$ & — & — & —\
The other formulation has been proposed by [@LePa05], and is a further simplification, as it does not include H4 and H5 and it discards all soft constraints. Differently from the previous formulation, however, it considers the possibility of having unscheduled events. The formulations considered are summarised in Table \[tab:formulations\]. Four sets of instances are publicly available and have been used in the experimental analyses reported in the scientific literature so far. Table \[tab:instances\] lists for each set of instances the origin, the web site from which they can be downloaded, the formulation considered, the number of instances that compose the data set, and the year of publication. [|lp[0.4]{}cc|]{} Instance Family & Formulation & \# Instances & Year\
ITC 2007 & <span style="font-variant:small-caps;">Full</span> & 24 & 2007\
\
Lewis & Paechter & <span style="font-variant:small-caps;">Hard-Only</span> & 60 & 2005\
\
ITC 2002 & <span style="font-variant:small-caps;">Original</span> & 20 & 2002\
\
Metaheuristics Network & <span style="font-variant:small-caps;">Original</span> & 12 & 2001\
\
All instances are artificial, as they are created by a random generator based on realistic bounds for the problem features. For all of them, the set of timeslots is fixed to $T = 45$, split into $D = 5$ days of $9$ timeslots each, such that timeslots $\{1,\dots,9\}$ belong to day 1, timeslots $\{10,\dots,18\}$ belong to day 2, and so on.
Each instance is available in a single text-only file (for the sake of brevity, we do not report the format here). Two different file formats are used: one for the <span style="font-variant:small-caps;">Full</span> formulation, which includes the availability and precedence relations, and the other for the <span style="font-variant:small-caps;">Original</span> and <span style="font-variant:small-caps;">Hard-Only</span> formulations, without them. This means, for example, that the instances of @LePa05 could be used also for the <span style="font-variant:small-caps;">Original</span> formulation. However, we consider only the pairs Instance/Formulation that have been investigated in the past, so that we can compare with previous work. In addition, the proposers have released a validator for both the <span style="font-variant:small-caps;">Original</span> and the <span style="font-variant:small-caps;">Full</span> formulations. We have used it to certify the solution quality of all the results we have found in the experimental phase.

Related Work
============

In the last forty years, starting with @Gotl63, many papers related to educational timetabling have been published and several applications have been developed and employed in practice. In addition, many research surveys have been published, going from @Werr85, to @Scha99, to the most recent ones by [@BuPe02] and [@Lewi08]. We refer to them for an introduction to educational timetabling. With specific regard to course timetabling, the seminal works are those by @Hert91 [@Hert92], who uses a Tabu Search approach to solve two different versions of the problem. More recently, @MuMR07 tackle a very complex formulation of the problem and solve it by decomposition and constraint-based local search. However, most of the recent work on course timetabling, besides the one on PE-CTT, has focused on the other “standard” formulation, namely CB-CTT.
To this regard, @LuHa09 solve the CB-CTT problem by Tabu Search on a large neighbourhood. @LaLu10 and @BMPR10 both use an IP approach and find several lower bounds along with a few optimal solutions. @HaBe11 use a decomposition approach to improve on the lower bounds obtained with the model of @LaLu10. [@Mull09] uses a multi-step local search approach to find good solutions to the problem. Finally, our research group [@BeDS11] has proposed a hybrid Tabu Search/Simulated Annealing approach for this problem. The initial work on PE-CTT has been carried out inside the Metaheuristics Network by @RSBC03, who compare several metaheuristic techniques for the <span style="font-variant:small-caps;">Original</span> formulation on the set of instances specifically designed for their study. That work has been extended by @CBSR06, who apply the same techniques, suitably refined and tuned, to the instances defined for the ITC 2002. The same formulation has been tackled by @Kost04 using a multi-stage metaheuristic approach. Both @Kost04 and @CBSR06 consider a search space composed of assignments of events to timeslots only, leaving the rooms unassigned. The room assignment is performed by a specialised sub-solver that applies a matching algorithm. Finally, the ITC 2002 instances have been tested also by @BBNP03 and @DiSc06, who also used local search based techniques. Moving to the <span style="font-variant:small-caps;">Full</span> ITC 2007 formulation, @ChFH08 propose a solver, built on the previous work for ITC 2002 [@CBSR06], which consists of several heuristic modules in a two-phase solution process (dealing with hard and soft constraints, respectively). The modules have been assembled and tuned using the automated algorithm configuration procedure ParamILS [@HHLS09]. Another competitor designs a deterministic heuristic approach that builds an LP solution using column generation and then tries to improve it by solving ILP subproblems.
A further entry employs a three-stage strategy in which a constructive phase is followed by two separate phases of Simulated Annealing. The idea behind this method is to arrange constraints corresponding to different levels of importance in the different phases of the solution process. Yet another applies a constraint-based framework incorporating a series of algorithms based on local search techniques, which operate over feasible (but not necessarily complete) solutions. Finally, @CHOP10 proposes both a constraint-based technique and a multi-stage local search one. The latter method was the winner of the PE-CTT track of the ITC 2007. A few authors considered the <span style="font-variant:small-caps;">Hard-Only</span> formulation and the corresponding instances. The first works by @LePa05 [@LePa07] use evolutionary algorithms to tackle the problem. Subsequently, @TuBM07 use a graph-based heuristic to construct a feasible solution of the relaxed problem (where constraint H2 is partially relaxed) and then apply an SA approach relying on a Kempe-chain neighbourhood. Finally, @LiZC11 propose a clique-based heuristic that tries to identify cliques as sets of events that can be scheduled in the same timeslot.

Local Search for Post-Enrolment Course Timetabling
==================================================

We describe our local search technique by highlighting the six components of our solution method. Namely, these components are: preprocessing and constraint reformulation, search space, initial solution, neighbourhood relations, cost function, and the Simulated Annealing metaheuristic.

Preprocessing and constraint reformulation
------------------------------------------

By a careful analysis of the features and the constraints of the problem it is possible to identify some preliminary preprocessing and reformulation steps that can significantly improve the efficiency of the local search phase. This stage is composed of five steps.
The first three steps have already been proposed and employed in previous works [see, e.g. @Kost04]. The remaining two steps are, instead, our original ideas, and they have a substantial impact on the search strategy of our solver. 1. Creation of auxiliary matrices: : According to the features held by the rooms, the room capacities, and the features requested by the events, we create a Boolean-valued *event-room compatibility matrix* $\Theta_R$, which states whether a room is suitable for an event. The data about features and capacities can then be discarded and replaced completely by the $\Theta_R$ matrix. Similarly, according to the student enrolment data we create a symmetric Boolean-valued *event-event conflict matrix* $\Theta_E$, which accounts for the presence of common students between pairs of events. 2. Propagation of precedences: : Given the precedence relation $\Pi$, we perform a preliminary constraint propagation in order to restrict the availability for the events. For any pair of events $e_1$ and $e_2$ such that $(e_1,e_2)\in \Pi$, we mark timeslot $T$ as unavailable for $e_1$, and timeslot $1$ as unavailable for $e_2$. Pursuing this idea further, we consider all chains of events (also longer than two) in the graph obtained by the transitive closure of the precedence relation. Based on this process (known as arc-consistency in the constraint programming community), we determine for each event $e$ a minimum and a maximum assignable timeslot. The values outside this interval are considered unavailable for $e$, and thus removed from the availability relation $\mathcal{A}$. 3. Identification of 1-room events: : Looking at the $\Theta_R$ matrix, it is possible to identify events that are compatible only with a single room. We call these events *1-room events*. Obviously, two 1-room events that share the same compatible room $r$ cannot be scheduled in the same timeslot. We thus update the $\Theta_E$ matrix, adding these new conflicts. 4.
Identification of all-room events: : A further look at the $\Theta_R$ matrix allows us also to identify the events that are compatible with all the rooms. We call these events *all-room events*. For this kind of event it is not necessary to assign a room during search, and the actual room can be assigned in a simple post-processing phase. Therefore, these events are always assigned to the dummy room $r_{\delta}$. However, throughout the search it is still necessary to guarantee that the number of events assigned to each timeslot does not exceed the number of rooms $R$. 5. Sorting rooms by the number of compatible events: : In this step, we count for each room the number of events that are compatible with it (all-room events are not considered in this phase). This value represents a sort of “attractiveness” of the room. We create a list of rooms sorted in ascending order of attractiveness. This list is used in the search to assign rooms in such a way as to leave the most attractive rooms available for further events to be added.

Search Space
------------

As already mentioned, the *solution space* is composed of all the assignments of timeslots and rooms to events, extended with the pair composed of the dummy timeslot and the dummy room: $\mathcal{E} \rightarrow \left( \mathcal{T} \times \mathcal{R} \cup \{(t_\delta,r_\delta)\}\right)$. The *search space* employed by our algorithm is the solution space itself, with some restrictions. First, only available timeslots and compatible rooms can be assigned to each event. In addition, assignments are included in the search space only if no pair of events is assigned to the same timeslot and room, and the total number of events assigned to a timeslot is less than or equal to the number of rooms. Summarising, no assignment in the search space violates constraints H2, H3, and H4. On the other hand, H1, H5, and H6 can be violated, and thus they are included in the cost function.
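Returning to the preprocessing stage, the arc-consistency step on the precedence relation can be sketched as follows; the data layout is our own choice, and we assume the precedence graph is acyclic (otherwise the instance is trivially infeasible).

```python
def propagate_precedences(events, num_slots, prec):
    """Minimum and maximum assignable timeslot (1-based) for each event,
    derived from the longest precedence chains ending at / starting
    from it; prec is a set of pairs (e1, e2) with e1 before e2."""
    preds = {e: [a for (a, b) in prec if b == e] for e in events}
    succs = {e: [b for (a, b) in prec if a == e] for e in events}

    def chain(e, nbrs, memo):
        # length of the longest chain through nbrs, counting e itself
        if e not in memo:
            memo[e] = 1 + max((chain(n, nbrs, memo) for n in nbrs[e]),
                              default=0)
        return memo[e]

    lo = {e: chain(e, preds, {}) for e in events}
    hi = {e: num_slots - chain(e, succs, {}) + 1 for e in events}
    return lo, hi
```

For a chain of three events and 45 timeslots, the first event is confined to slots 1–43 and the last to slots 3–45, matching the interval restriction described in the text.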
Finally, in the search space all-room events are always assigned to the dummy room $r_\delta$, and actual rooms will be assigned during the post-processing phase.

Initial Solution
----------------

For the construction of the initial solution, we propose two different methods. The first one, denoted by $I_0$, is a greedy procedure that assigns each event $e$ to a random timeslot $t$, which is available for $e$ and is not already assigned to $R$ events. If a room $r$ compatible with $e$ is free in $t$, the pair $(t,r)$ is assigned to $e$, otherwise the event is assigned to $(t_\delta,r_\delta)$. Compatible rooms are visited in order of ascending attractiveness. The second method, denoted by $I_1$, is based on the same idea, but it tries to leave as few events as possible unscheduled. It proceeds in the same way, but when no room is available in $t$ for $e$, $I_1$ draws a new random timeslot. However, being a greedy procedure, it might happen at a given stage that there is no room compatible with $e$ in any timeslot. In order to avoid an infinite loop in such a situation, we stop the procedure after a finite number of draws and assign $e$ to $(t_\delta,r_\delta)$. For example, for the ITC 2007 instances, the number of unscheduled events of a state generated with $I_1$ is 0 most of the times, and occasionally it is 1 or 2. On the contrary, for $I_0$ up to 25% of the events might be left unscheduled in the most difficult instances.

Neighbourhood relations
-----------------------

Two different neighbourhood relations are considered in this work: $\mathsf{ME}$: : Move one event $e\in \mathcal{E}$ from its currently assigned timeslot to timeslot $t\in \mathcal{T}\cup \{t_\delta\}$. The move $\mathsf{ME}(e,t)$ is admissible if $t$ is available for $e$ and there is a compatible free room $r$ for $e$ in $t$. The pair $(t,r)$ is assigned to $e$. $\mathsf{SE}$: : Swap the timeslots $t_1,t_2\in \mathcal{T}\cup \{t_\delta\}$ assigned to two events $e_1,e_2 \in \mathcal{E}$.
The move $\mathsf{SE}(e_1,e_2)$ is admissible if $t_1 \neq t_2$ and $t_1$ (resp. $t_2$) is available for $e_2$ (resp. $e_1$) and there is a compatible free room $r_1$ (resp. $r_2$) for $e_2$ (resp. $e_1$) in $t_1$ (resp. $t_2$). The pair $(t_2,r_2)$ is assigned to $e_1$ and the pair $(t_1,r_1)$ is assigned to $e_2$. For both neighbourhoods, rooms are explored in ascending order of attractiveness. For events in timeslot $t_\delta$ and for all-room events the only room considered compatible is $r_\delta$. For the $\mathsf{ME}$ neighbourhood we also consider a restricted version that we call $\mathsf{ME}^-$. The move $\mathsf{ME^-}(e,t)$ is admissible only if $t \neq t_\delta$. Intuitively, $\mathsf{ME}^-$ excludes the moves that increase the number of unscheduled events in the current state.

Cost Function {#sec:cost-function-pectt}
-------------

The cost function that guides the search is a combination of the soft constraint penalty and the violation of hard constraints. In detail, H2, H3, and H4 are always satisfied in the search space, whereas H1, H5, and H6 can be violated, and therefore they are included in the cost function. In case of a violation of the H6 constraint, the formulation prescribes counting the number of students enrolled in the unscheduled event. Consequently, in order to have comparable values also for the other hard constraint components, in case of a violation of the H1 and H5 constraints we count the minimum between the numbers of students of the two events involved. However, for the purpose of having at the end of the run only possible violations of the H6 component (as required), in the last few iterations of the search the cost of H1 and H5 is doubled. This proved experimentally to be sufficient to ensure that there are no violations of a type different from H6. In conclusion, the cost function $F$ is the composition of two terms: the distance to feasibility, multiplied by a suitably high weight $W$, and the objective function $f$.
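A sketch of how the guiding cost could be assembled from these pieces; the function and argument names are ours, and the exact weighting of the conflict component relative to $W$ is an assumption (the text only fixes the endgame doubling and the $W$-weighted distance to feasibility).

```python
def guiding_cost(unscheduled_students, conflict_cost, soft_cost,
                 W=1, endgame=False):
    """Search-guiding cost F: weighted infeasibility plus soft objective.

    unscheduled_students: H6 component (students of unscheduled events)
    conflict_cost: H1/H5 component (min of the two events' student
                   counts per violation); doubled in the last iterations
    soft_cost: the soft objective f = S1 + S2 + S3
    """
    hard_weight = 2 if endgame else 1
    return W * unscheduled_students + hard_weight * conflict_cost + soft_cost
```

With the tuned value $W=1$ this reduces to a plain sum of the three components, except near the end of the run where conflicts are penalised twice.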
Given that we make one single search step and that the move acceptance is based on $\Delta F$, the value of $W$ is crucial for the performance of our solver. In fact, if $W$ is too high the start temperature needs to be set to a very high value, which would result in a waste of time for the search. On the other hand, if $W$ is too small it is possible that the solver follows trajectories that “prefer” infeasible solutions to feasible ones, if they have lower objective cost. In conclusion, $W$ needs to be set experimentally, as discussed in Section \[sec:exper-analys\].

Simulated Annealing
-------------------

Many versions of SA have been proposed in the literature [see, e.g., @KiGV83; @Egle90; @AaLe97; @HoSt05]. The version used here is the one with probabilistic acceptance and geometric cooling. In detail, at each iteration of the search process a random neighbour is selected. The move is performed either if it is an improving one or according to an exponential time-decreasing probability. If the cost of the move is $\Delta F > 0$, the move is accepted with probability $e^{-\Delta F/ T}$, where $T$ is a time-decreasing parameter called *temperature*. At each temperature level a number $N$ of neighbours of the current solution is sampled, and the new solution is accepted according to the above-mentioned probability distribution. The value of $T$ is modified using a *geometric* schedule, i.e., $T_{i+1} = \beta \cdot T_i$, in which the parameter $\beta < 1$ is called the *cooling rate*. The search starts at temperature $T_0$ and stops when it reaches $T_{min}$. Different settings of the parameters of SA would result in different running times. Instead, we want to compare them in a fair setting, giving to all of them the same amount of computational time. To this aim, we let the three parameters $T_0$, $T_{min}$, and $\beta$ vary, and we fix $N$ in such a way as to have exactly the same number of total iterations.
Calling $I$ the fixed total number of iterations, we compute $N$ from the following formula. $$N = I / \log_{\beta}{(T_{min} / T_0)}$$ In this way, the total running time is approximately the same for all combinations of parameters. We experiment with three solvers, which differ from each other in the neighbourhood used and in the initial solution procedure. The first solver we consider is SA using as neighbourhood the union of $\mathsf{ME}$ and $\mathsf{SE}$, and as the initial state method $I_0$. We denote it as $SA(I_{0}, \mathsf{ME} \oplus \mathsf{SE})$. Using a similar notation, the other two solvers are denoted by $SA(I_{0}, \mathsf{ME^-}\oplus\mathsf{SE})$ and $SA(I_{1}, \mathsf{ME^-}\oplus \mathsf{SE})$. Intuitively, the first one explores freely the full search space, which also includes states with unscheduled events. The second one starts with a state with unscheduled events, but leads the search as much as possible in the direction of feasible solutions. The third one focuses on the space in which all events are scheduled. The total number of iterations is set to $I = 1.14\cdot 10^8$, which corresponds approximately to the time granted for ITC 2007 and which results in a running time of about 300s on our PC, an Intel Core i7 @1.6 GHz (64 bit). We prefer to set the number of iterations, rather than using a real timeout, because, as advocated by @John96, the use of a timeout makes the experiments less reproducible. The software is written in C++, it uses the framework <span style="font-variant:small-caps;">EasyLocal++</span> [@DiSc03b], and it is compiled using the GNU C/C++ compiler, v. 4.4.3, under Ubuntu Linux. Experimental Analysis {#sec:exper-analys} ===================== For tuning the three solvers, we first select the parameters to be evaluated. In this regard, we decide to use $T_0$, $\beta$, and $\rho = T_0/T_{min}$, which turned out to provide a better selection of the configurations than using $T_{min}$ directly.
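As a concrete illustration of this computation, assuming the convention that the number of temperature levels between $T_0$ and $T_{min}$ at cooling rate $\beta$ is $\log_{\beta}(T_{min}/T_0)$, a positive quantity since $T_{min} < T_0$ and $\beta < 1$:

```cpp
#include <cmath>

// Neighbours sampled per temperature level, so that the total number of
// iterations matches the fixed budget `iters`. The number of levels,
// log_beta(tMin / t0) = log(tMin / t0) / log(beta), is positive because
// both logarithms are negative (tMin < t0 and beta < 1).
long samplesPerLevel(double iters, double t0, double tMin, double beta) {
  const double levels = std::log(tMin / t0) / std::log(beta);
  return static_cast<long>(iters / levels);
}
```

For example, with the budget $I = 1.14\cdot 10^8$ and the illustrative values $T_0 = 100$, $T_{min} = 1$, $\beta = 0.9999$, this gives roughly $2{,}500$ samples per temperature level.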
Given that we use two different types of moves, namely $\mathsf{ME}$ (or $\mathsf{ME^-}$) and $\mathsf{SE}$, we add an additional parameter, called $sr$ (for swap rate), which is the probability of drawing a move of type $\mathsf{SE}$. Finally, the parameter $W$ needs to be set. Preliminary Screening --------------------- In order to perform an effective tuning it is useful to have a screening based on preliminary experiments, which allows us to eliminate some of the five parameters and to focus on the most important ones. Preliminary experiments show that $\beta$ is not significant. This is not surprising, because in our setting $N$ is a function of the other parameters, and therefore $\beta$ only determines the size of the single step in the temperature and not the actual slope of the cooling trajectory, which is determined by $\rho$. We therefore set $\beta$ to the fixed value $0.9999$. Preliminary experiments show also that $sr$ is not significant, as long as it is set within the range $[0.1,0.5]$. Consequently, we fix $sr$ to the value $0.4$, which provided marginally better results. Regarding $W$, it turned out that the value $W = 1$ is large enough to ensure that the solver prefers feasible solutions to infeasible ones. Therefore this parameter is set to 1 for all experiments. This surprising finding is explained by the observation that a hard violation has a cost equal to the number of students involved, whereas only a fraction of the students involved in the move contributes to the soft constraints. Experimental Design and Tuning ------------------------------ For the remaining two parameters ($T_0$ and $\rho$), we have to select the configurations to be tested. Instead of using a classical *full factorial* design, which consists in regularly sampling the range of each parameter and testing all combinations, we resort to the *Nearly Orthogonal Latin Hypercubes (NOLH)* proposed by @CiLu07, which allow us to fill the space using far fewer configurations.
To generate the actual configurations we use the NOLH spreadsheet made available by @Sanc05, using the design with 33 points, within the ranges $T_0 \in [1,100]$ and $\rho \in [10,1000]$. For the comparison of the 33 configurations we resort to $F$-Race [@Bira05], which is a sequential testing procedure that uses the Friedman two-way analysis of variance by ranks to decide upon the elimination of inferior candidates. At each stage, a new instance is selected, all remaining configurations are run on it, and weaker configurations are discarded if enough statistical evidence has arisen against them. We use the canonical value $0.05$ as significance level in the tests. The transformation of results into ranks prescribed by $F$-Race guarantees that in the statistical procedure the aggregation of results over the instances is not influenced by differences in the scale of the cost function values, which depend on the instance at hand. Considering that each set of instances has different features and refers to a different problem formulation (see Table \[tab:instances\]), we decide to tune the parameters separately for each instance family. A tuning process directed towards a general and unique parameter setting is also possible, and it leads to only slightly inferior results, proving that the algorithm is robust enough. For each instance family and for each solver we first compare the 33 configurations resulting from the NOLH analysis, obtaining the best parameter configuration for each solver. Then, for each instance family, we compare the three solvers using their best configuration by means of the *Wilcoxon rank-sum test*. Table \[tab:comp\_solvers\] reports, for each family, the best solver, along with its best configuration. There are cases in which the difference between solvers or configurations is not statistically significant. In these situations, we consider as the best the one with the minimum average rank.
In order to get close to the setting of the ITC 2007, for the <span style="font-variant:small-caps;">Full</span> formulation we use the instances 1-16 for tuning the parameters, and the instances 17-24 for validation. In fact, for the competition the last 8 instances (Hidden Instances) were not given to the participants, but were used by the organisers for evaluating the solvers submitted.

  --------- ---------------------------------------------- ------- --------
  Family    Best solver                                     $T_0$   $\rho$
            $SA(I_{0}, \mathsf{ME^-}\oplus\mathsf{SE})$     20.41   33.88
  Med       $SA(I_{0}, \mathsf{ME}\oplus\mathsf{SE})$       31.62   257.63
  Big       $SA(I_{0}, \mathsf{ME}\oplus\mathsf{SE})$       36.30   295.12
            $SA(I_{1}, \mathsf{ME^-}\oplus \mathsf{SE})$    3.89    31.62
            $SA(I_{0}, \mathsf{ME}\oplus\mathsf{SE})$       3.89    31.62
  --------- ---------------------------------------------- ------- --------

  : Best settings of the SA-equipped solvers.[]{data-label="tab:comp_solvers"}

Comparison with Best Known Results ---------------------------------- We now compare the solvers that emerged from the tuning phase (Table \[tab:comp\_solvers\]) with the best results in the literature. Table \[tab:solvers\] summarises the solvers with which we compare. For each solver, we report the reference, the techniques used and the family of instances it solves. Notice that no solver previously presented in the literature has dealt with more than one family of instances.
### ITC 2007 instances Solver Reference Technique Family of instances -------- ----------- ------------------------------------------------------- ------------------------ A @ChFH08 Local Search + Matching ITC 2007 B @Mull09 Constructive + Local Search (HC, GD, SA) ITC 2007 C1 @CHOP10 Local Search (SA) ITC 2007 C2 @CHOP10 Local Search (SA) ITC 2007 D @Lewi10 Constructive + Iterated Heuristic + Local Search (SA) ITC 2007 E @BrHu10 Column Generation + ILP ITC 2007 F @MNCR08 Ant Colony Optimisation ITC 2007, ITC 2002 G1 @LePa07 Genetic Algorithm @LePa07 G2 @LePa07 Genetic Algorithm @LePa07 G3 @LePa07 Genetic Algorithm @LePa07 H @TuBM07 Constructive + Local Search (SA) @LePa07 I @LiZC11 Constructive @LePa07 J @BBNP03 Local Search (GD) ITC 2002 K @DiSc06 Local Search ITC 2002 L @Kost04 Constructive + Local Search (SA) ITC 2002 M @CBSR06 Constructive + Local Search (TS, SA) ITC 2002 N @SoKS02 Ant Colony Optimisation Metaheuristics Network O @AbBM07b Memetic algorithm Metaheuristics Network P @McMu07 Constructive + Local Search (GD) Metaheuristics Network Q @LaOb08 Local Search (GD) Metaheuristics Network R @TuSM09 Local Search (GD) Metaheuristics Network We first consider the ITC 2007 instances. Table \[tab:ITC2007\_results\] reports the values obtained by our method in 30 runs for instances 1–16, along with a comparison with respect to the available results reported in the literature (in bold the best results). The presence of the dash symbol means that no feasible solution has been found. The columns %Feas show the percentage of feasible solutions obtained. Notice that, for solver D, the paper in some cases reports only that this percentage is greater than 95%, instead of the precise value (it reports instead the average number of violations). The average values are computed considering all the solutions obtained in the experiments, including the infeasible ones. Obviously, the value of the objective function for the infeasible solutions is not very meaningful. 
However, for our solver the number of infeasible solutions is very small, therefore the average of the value of the objective function is still the most meaningful index. For instances 17–24, values are not reported in the cited papers [except for @BrHu10], therefore we compare our solver with the results extracted from the spreadsheet available from the ITC 2007 website. As mentioned above, our results on these instances are obtained with the best parameter configuration used for instances 1–16. From the results it is possible to see that our method outperforms all other solvers on 9 out of 24 instances; it is second to @CHOP10 on 11 instances, and second to @MNCR08 on the remaining 4. This positive result is confirmed by applying the ranking method of ITC 2007. The first row of Table \[tab:rankITC2007\] shows the average of the ranks obtained by each finalist of the competition (available from the ITC 2007 website), from which it follows that Cambazard *et al.* won the competition. Adding our solver a posteriori to the final of the competition[^1], we obtain the ranks of the second row, from which we see that our solver would have won the competition.

  ----------------- ----------- ------- ------- ------- -----------
  Atsuta *et al.*   C1          A       F       B       Us
  24.43             **13.90**   28.34   29.52   31.31   
  31.41             19.98       36.85   37.33   40.70   **16.73**
  ----------------- ----------- ------- ------- ------- -----------

  : Comparison using the ITC 2007 ranking system.[]{data-label="tab:rankITC2007"}

### @LePa05 instances Moving to the @LePa05 instances, Tables \[tab:Lewis\_media\_results\] and \[tab:Lewis\_big\_results\] report the results for the 20 medium and the 20 big instances (we do not report here results on the 20 small ones because they are not challenging, given that we solve all of them to optimality in all runs). Following @LePa07, for these instances in Tables \[tab:Lewis\_results\] we report the number of unscheduled events, rather than the total number of students attending them.
However, the solver, similarly to @TuBM07, still uses the number of students as the distance to feasibility. This version of the cost function proved experimentally to be more effective. Also for these instances we have been able to obtain new best results, and to be relatively close to the best known results in all the other cases. It is worth mentioning that these instances have a very different structure from the other data sets, and in these cases it is not always possible to find a feasible solution. Indeed, the authors who considered these instances used *ad hoc* techniques, which are rather different from those used by the authors who worked on the other instance families. ### ITC 2002 instances

  ------- -------- ------ -------- --------- -------- ------- -------- -------
  Inst.                                                                
  1       85       63     **16**   45        82       55      57.05    45
  2       42       46     **2**    14        64       43      33.20    20
  3       84       96     **17**   45        92       61      53.20    43
  4       119      166    **34**   71        208      134     109.90   87
  5       77       203    **42**   59        185      134     91.70    71
  6       6        92     **0**    1         59       32      14.05    2
  7       12       118    **2**    3         138      52      13.70    **2**
  8       32       66     **0**    1         107      48      20.00    9
  9       184      51     **1**    8         70       39      21.90    15
  10      90       81     **21**   52        118      77      60.70    41
  11      73       65     **5**    30        75       39      38.20    24
  12      79       119    **55**   75        143      102     83.65    62
  13      91       160    **31**   55        156      94      77.95    59
  14      36       197    **11**   18        175      109     34.20    21
  15      27       114    **2**    8         89       47      11.80    6
  16      300      38     **0**    55        45       26      16.70    6
  17      79       212    **37**   46        143      78      56.45    42
  18      39       40     **4**    24        59       35      25.85    11
  19      86       185    **7**    33        187      119     72.95    56
  20      **0**    17     **0**    **0**     38       19      1.75     **0**
  Avg                                        111.65           44.75    
  ------- -------- ------ -------- --------- -------- ------- -------- -------

  : Results on ITC 2002 instances for 20 runs.[]{data-label="tab:ITC2002_results"}

  ---------- ------- -------- ------ -------- ------ ------ ------ ----------- ------
  Instance                                                                     
  s1         1       **0**    0      0.8      0      3      0      **0**       0
  s2         3       **0**    0      2        0      4      0      0.03        0
  s3         1       **0**    0      1.3      0      6      0      **0**       0
  s4         1       **0**    0      1        0      6      0      0.06        0
  s5         0       **0**    0      0.2      0      0      0      **0**       0
  m1         195     224.8    221    101.4    80     140    175    **26.46**   9
  m2         184     150.6    147    116.9    105    130    197    **25.86**   15
  m3         248     252      246    162.1    139    189    216    **49.03**   36
  m4         164.5   167.8    165    108.8    88     112    149    **23.83**   12
  m5         219.5   135.4    130    119.7    88     141    190    **10.86**   3
  l1         851.5   552.4    529    834.1    730    876    912    **259.8**   208
  Avg                134.82          131.66                        **35.99**   
  l2         –       –        –      –        –      –      –      224.4       170
  ---------- ------- -------- ------ -------- ------ ------ ------ ----------- ------

  : Results on instances of the Metaheuristics Network for 30 runs.[]{data-label="tab:socha_results"}

The results of the experiments on the ITC 2002 instances are reported in Table \[tab:ITC2002\_results\]. Unfortunately, for all papers but the one by @MNCR08 only best results are available from the literature, therefore a fair comparison is not possible. Nevertheless, it is clear from the table that the results of @Kost04 are the overall bests. Regarding the comparison with @MNCR08, our solver clearly outperforms theirs in all 20 cases. ### Metaheuristics Network instances We finally move to the Metaheuristics Network instances, which have been tackled by yet other authors. For these instances, our solver greatly outperforms all the others on all the medium-size (m1–m5) and on the first large instance (l1), while it is only marginally inferior on the small ones. Instance l2 has not been tested recently in the literature (because of the use of an incomplete copy of the dataset). Discussion, Conclusions, and Future Work ======================================== We have presented a Simulated Annealing approach for a classical well-studied timetabling problem, namely the PE-CTT problem. The comprehensive comparison with the literature shows that our solver is able to outperform most of the previous approaches to the problem. This result is obtained despite the fact that our solution is based on a relatively simple single-step algorithm, whereas most of the previous solvers use complex multi-step solution methods. In addition, the method proved to be quite robust w.r.t. the parameter values.
In our opinion, the key ingredients for these good results are the following. First of all, the preprocessing and constraint reformulation steps improve the efficacy of the local search. In particular, the identification of the all-room events allows us to leave more space for placing the other events. Secondly, the room assignment procedure based on attractiveness allows us to refrain from using the matching algorithm, which is computationally expensive. Finally, the use of a single-step procedure that takes into account the soft constraints from the very beginning allows us to save computational time later on during the search. Only for the ITC 2002 instances are the results inferior to the best ones reported in the literature. Unfortunately, the reliability of this comparison is limited, since it is based only on the best results found by the other authors. For the future, we plan to extend our work in various directions: 1. Investigate the use of different versions of Simulated Annealing, for example using different cooling schemes and acceptance criteria. 2. Improve our use of the tuning tools, mainly NOLHs and RACE, with the twofold objective of making them more effective and of automatising part of the experimental process. 3. Apply the same technique in different contexts, such as CB-CTT and other timetabling problems, in order to confirm its applicability. 4. Analyse the relevant features of the problem instances, in the spirit of [@KoSo04; @SMLo11], with the aim of obtaining an adaptive tuning. The idea would be to set the specific parameters based on the analysis of the specific instance, and the extraction of the values of specific features. [^1]: Using the spreadsheet downloaded from the ITC 2007 website.
--- abstract: 'The existence of a counterexample to the infinite-dimensional Carleson embedding theorem has been established by Nazarov, Pisier, Treil, and Volberg. We provide an explicit construction of such an example. We also obtain a non-constructive example of particularly simple form; the density function of the measure (with respect to a certain weighted area measure) is the tensor-square of a Hilbert space-valued analytic function. This special structure of the measure has implications for Hankel-like operators appearing in control theory.' author: - Eskil Rydhe bibliography: - 'paper04bib.bib' title: Two more counterexamples to the infinite dimensional Carleson embedding theorem --- Introduction ============ Let ${\mathcal{H}}$ denote a separable Hilbert space with norm $\|\cdot\|_{\mathcal{H}}$ and inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$. We use $N\in[1,\infty]$ to denote the dimension of ${\mathcal{H}}$, and ${\mathcal{L}}_+({\mathcal{H}})$ the set of positive (bounded) linear operators on ${\mathcal{H}}$. We let $L^2({\mathbb{T}},{\mathcal{H}})$ denote the standard space of $2$-Lebesgue–Bochner integrable ${\mathcal{H}}$-valued functions defined on the unit circle ${\mathbb{T}}$, and $H^2({\mathbb{T}},{\mathcal{H}})$ the subspace of analytic functions in $L^2({\mathbb{T}},{\mathcal{H}})$. Throughout this paper, we let $\mu$ be an ${\mathcal{L}}_+({\mathcal{H}})$-valued measure on the open unit disc ${\mathbb{D}}$. By $L^2({\mathbb{D}},{\mathcal{H}},d\mu)$ we denote the space of strongly measurable functions $f:{\mathbb{D}}\to{\mathcal{H}}$ such that $$\int_{{\mathbb{D}}}\langle d\mu\, f,f\rangle_{\mathcal{H}}<\infty.$$ Given an arc $I\subset {\mathbb{T}}$, the corresponding Carleson square is the set $Q_I=\{w\in{\mathbb{D}};\frac{w}{|w|}\in I,1-|I|<|w|<1\}$. Here $|I|$ denotes the normalized Lebesgue measure of $I$, i.e. $|{\mathbb{T}}|=1$. 
The Carleson intensity of $\mu$ is defined as $$\|\mu \|_{\mathcal{I}}=\sup_{I\subset {\mathbb{T}},\|e\|_{\mathcal{H}}=1} \langle \mu(Q_I)e,e\rangle_{\mathcal{H}}.$$ Note that in order to obtain essentially the same quantity, it suffices to consider dyadic arcs. We define the harmonic extension operator for integrable functions $f:{\mathbb{T}}\to{\mathcal{H}}$ by $${\mathcal{P}}f(re^{2\pi i x})=\int_0^1f(e^{2\pi it}) P_r(x-t)\, dt,\quad w=re^{2\pi i x}\in {\mathbb{D}},$$ where $$P_r(t)=\frac{1-r^2}{1-2r\cos(2\pi t)+r^2},\quad t\in{\mathbb{R}},$$ is the usual Poisson kernel for ${\mathbb{D}}$. In the sequel, we shall typically write $f(w)$ as shorthand for ${\mathcal{P}}f(w)$. The top tile of $Q_I$ is the set $T_I=\{w\in Q_I;|w|<1-\frac{|I|}{2}\}$. We define the dyadic extension operator by $${\mathcal{P}}^d f(w)=\sum_{I\in{\mathcal{D}}} {\mathbbm{1}}_{T_I}(w)\frac{1}{|I|}\int_I f\, dm,\quad w\in {\mathbb{D}},$$ where ${\mathcal{D}}$ denotes the collection of dyadic arcs in ${\mathbb{T}}$, and ${\mathbbm{1}}_{T_I}$ is the characteristic function of $T_I$. Given $\mu$, we refer to ${\mathcal{P}}: H^2({\mathcal{H}})\to L^2({\mathbb{D}},{\mathcal{H}},d\mu)$ as a (harmonic) Carleson embedding. It is of interest to characterize the class of measures $\mu$ for which such embeddings are bounded. In the scalar-valued case, i.e. $N=1$, such measures are characterized by having finite Carleson intensity. Moreover, the corresponding norms are comparable. This characterization scales badly with the dimension of ${\mathcal{H}}$. 
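The reduction to dyadic arcs is the usual covering argument: any arc $I\subset{\mathbb{T}}$ with $|I|\le 1/2$ is covered by at most two dyadic arcs $J_1,J_2$ with $|I|\le|J_i|<2|I|$, whence $Q_I\subset Q_{J_1}\cup Q_{J_2}$, and positivity of $\mu$ gives

```latex
\langle \mu(Q_I)e,e\rangle_{\mathcal{H}}
\le \langle \mu(Q_{J_1})e,e\rangle_{\mathcal{H}}
  + \langle \mu(Q_{J_2})e,e\rangle_{\mathcal{H}}
\le 2\sup_{J\in{\mathcal{D}},\,\|e\|_{\mathcal{H}}=1}
     \langle \mu(Q_J)e,e\rangle_{\mathcal{H}},
```

so the supremum over all arcs is at most twice the supremum over dyadic arcs.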
A first result in this direction is due to Nazarov, Treil, and Volberg [@Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm]: \[Proposition:NTV\] There exists a universal constant $c>0$ with the following property: If ${\mathcal{H}}$ is a Hilbert space of dimension $N<\infty$, then there exists an ${\mathcal{L}}_+({\mathcal{H}})$-valued measure $\mu$ on ${\mathbb{D}}$, such that $$\frac{\|{\mathcal{P}}^d\|_{L^2({\mathbb{T}},{\mathcal{H}}) \to L^2({\mathbb{D}},{\mathcal{H}},d\mu)}}{\|\mu\|_{{\mathcal{I}}}}\ge c (\log N)^{1/2}.$$ Proposition \[Proposition:NTV\] was proved using a rather sophisticated, yet explicit, construction. A corollary to this result is that if ${\mathcal{H}}$ is infinite-dimensional, then there exists an ${\mathcal{L}}_+({\mathcal{H}})$-valued measure on ${\mathbb{D}}$, such that $\|\mu\|_{\mathcal{I}}<\infty$, while ${\mathcal{P}}^d:L^2({\mathbb{T}},{\mathcal{H}}) \to L^2({\mathbb{D}},{\mathcal{H}},d\mu)$ is unbounded. It has later been observed by Pott and Sadosky [@Pott-Sadosky2002:BMOBi-discOpBMO] that this result may be deduced from a geometric construction due to Carleson [@Carleson1974:CounterExMeasuresBddOnHpBi-disc]. A corresponding result for harmonic embeddings, along with a sharp estimate of the dimensional growth, was obtained by Nazarov, Pisier, Treil, and Volberg [@Nazarov-Pisier-Treil-Volberg2002:EstsVecCarlesonEmbThmVecParaprods]: \[Proposition:NPTV\] There exists a universal constant $c>0$ with the following property: If ${\mathcal{H}}$ is a Hilbert space of dimension $N<\infty$, then there exists an ${\mathcal{L}}_+({\mathcal{H}})$-valued measure on ${\mathbb{D}}$, such that $$\frac{\|{\mathcal{P}}\|_{H^2({\mathbb{T}},{\mathcal{H}}) \to L^2({\mathbb{D}},{\mathcal{H}},d\mu)}}{\|\mu\|_{{\mathcal{I}}}}\ge c \log N.$$ The methods used in [@Nazarov-Pisier-Treil-Volberg2002:EstsVecCarlesonEmbThmVecParaprods] yield the existence of such measures, but not an explicit representation.
The goal of this note is to adapt the explicit construction from [@Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm] to the setting of harmonic embeddings: \[Theorem:Main\] There exists a universal constant $c>0$ with the following property: If ${\mathcal{H}}$ is a Hilbert space of dimension $N<\infty$, then there exists an ${\mathcal{L}}_+({\mathcal{H}})$-valued measure $\mu$ on ${\mathbb{D}}$, such that $$\frac{\|{\mathcal{P}}\|_{H^2({\mathbb{T}},{\mathcal{H}}) \to L^2({\mathbb{D}},{\mathcal{H}},d\mu)}}{\|\mu\|_{{\mathcal{I}}}}\ge c (\log N)^{1/2}.$$ The measure $\mu$ may be explicitly constructed in such a way that $$d\mu(w)=\phi(w)\otimes \phi(w)(1-|w|^2)\, dA(w),$$ where $\phi:{\mathbb{D}}\to{\mathcal{H}}$ is analytic. Note that Theorem \[Theorem:Main\] asserts a smaller estimate of dimensional growth than Proposition \[Proposition:NPTV\]. It may still be that Theorem \[Theorem:Main\] is sharp for measures with a rank-one-valued density function. Apart from the explicit construction, the main novelty of this paper (compared to [@Nazarov-Pisier-Treil-Volberg2002:EstsVecCarlesonEmbThmVecParaprods]) is that the measure in Theorem \[Theorem:Main\] has a very simple form. The original motivation for this paper was to study a certain class of Hankel-like operators appearing naturally in control theory, see Section \[Sec:BMOA\]. In that setting, the particular form of the measure in Theorem \[Theorem:Main\] is indeed critical. We demonstrate two different ways of transferring Theorem \[Theorem:Main\] to the case where ${\mathcal{H}}$ is infinite-dimensional. The first one gives an explicit construction of the corresponding measure. We leave the proof as an exercise. \[Corollary:Explicit\] Let $c>0$ be the universal constant, whose existence is guaranteed by Theorem \[Theorem:Main\].
For each $N\in{\mathbb{N}}$, let ${\mathcal{H}}_N$ denote a Hilbert space of dimension $N$, and let $\mu_N$ be a measure such that $\|\mu_N\|_{{\mathcal{I}}}=1$, and $$\|{\mathcal{P}}\|_{H^2({\mathbb{T}},{\mathcal{H}}_N ) \to L^2({\mathbb{D}},{\mathcal{H}}_N,d\mu_N)}\ge c (\log N)^{1/2}.$$ Let ${\mathcal{H}}=\oplus_{N=1}^\infty {\mathcal{H}}_N$, and $\mu=\oplus_{N=1}^\infty \mu_N$. Then $\|\mu\|_{\mathcal{I}}=1$, while ${\mathcal{P}}:H^2({\mathbb{T}},{\mathcal{H}}) \to L^2({\mathbb{D}},{\mathcal{H}},d\mu)$ is unbounded. A feature of Theorem \[Theorem:Main\] which is lost in Corollary \[Corollary:Explicit\] is the simple form of the measure. We can preserve this feature, at the cost of losing the explicit representation. \[Corollary:Existence\] If ${\mathcal{H}}$ is infinite dimensional, then there exists an ${\mathcal{L}}_+({\mathcal{H}})$-valued measure $\mu$ on ${\mathbb{D}}$, such that $\|\mu\|_{\mathcal{I}}<\infty$, while ${\mathcal{P}}:H^2({\mathbb{T}},{\mathcal{H}}) \to L^2({\mathbb{D}},{\mathcal{H}},d\mu)$ is unbounded. Furthermore, $\mu$ has the property that $$d\mu(w)=\phi(w)\otimes \phi(w)(1-|w|^2)\, dA(w),$$ where $\phi:{\mathbb{D}}\to{\mathcal{H}}$ is analytic. The paper is structured as follows: In Section \[Sec:Notation\] we fix some further notation. In Section \[Sec:BMOA\] we discuss how our results relate to a certain class of Hankel-type operators appearing naturally in control theory, and to some vector-valued generalizations of bounded mean oscillation. The discussion incidentally leads to a proof of Corollary \[Corollary:Existence\]. In Section \[Sec:Proof\] we present the proof of Theorem \[Theorem:Main\]. Some parts of the paper are quite technical, and it is therefore written with the intention that the level of technicality should roughly be an increasing function of page number.
Notation {#Sec:Notation} ======== We use the standard notation ${\mathbb{Z}}$, ${\mathbb{R}}$, and ${\mathbb{C}}$ for the respective rings of integers, real numbers, and complex numbers. By ${\mathbb{N}}$ we denote the set of strictly positive elements of ${\mathbb{Z}}$, while ${\mathbb{N}}\cup\{0\}$ is denoted by ${\mathbb{N}}_0$. We let ${\mathbb{D}}=\{w\in{\mathbb{C}};|w|<1\}$, ${\mathbb{T}}=\{w\in{\mathbb{C}};|w|=1\}$, and ${\mathbb{C}}_+=\{z=x+iy\in{\mathbb{C}};y>0\}$. We identify ${\mathbb{C}}_+/{\mathbb{Z}}$ with ${\mathbb{D}}$ (and ${\mathbb{R}}/{\mathbb{Z}}$ with ${\mathbb{T}}$) using the map $z\mapsto e^{2\pi i z}$. Throughout this paper, we use the generic notation $z=x+iy$ for points in ${\mathbb{C}}_+$, and $w=e^{2\pi i z}$ for points in ${\mathbb{D}}$. The respective Lebesgue measures on ${\mathbb{R}}$ and ${\mathbb{C}}$ are denoted by $m$ and $A$. It will be convenient to define the weighted area measure $A_1$ on ${\mathbb{D}}$ by $dA_1(w)=(1-|w|^2)\, dA(w)$. Given two parametrized sets of nonnegative numbers $\{A_i\}_{i\in I}$ and $\{B_i\}_{i\in I}$, we use the notation $A_i\lesssim B_i$, $i\in I$ to indicate the existence of a positive constant $C$ such that $A_i\le CB_i$ whenever $i\in I$. We then say that $A_i$ is bounded by $B_i$, and refer to $C$ as a bound. Sometimes we allow ourselves to not mention the index set $I$ and instead let it be implicit from the context. If $A_i\lesssim B_i$ and $B_i\lesssim A_i$, then we write $A_i\approx B_i$. We then say that $A_i$ and $B_i$ are comparable. We let ${\mathcal{D}}({\mathbb{R}})$ denote the set $\{[2^{-j}k,2^{-j}(k+1));j,k\in{\mathbb{Z}}\}$ of dyadic sub intervals of ${\mathbb{R}}$. The set of dyadic sub intervals of $I\in{\mathcal{D}}({\mathbb{R}})$ is denoted by ${\mathcal{D}}(I)$. With the identification $[0,1)\simeq{\mathbb{T}}$ described above, ${\mathcal{D}}([0,1))$ is identified with the set ${\mathcal{D}}({\mathbb{T}})$ of dyadic sub arcs of ${\mathbb{T}}$. 
The Lebesgue measure of $I\in{\mathcal{D}}({\mathbb{R}})$ is denoted by $|I|$. The center point, left endpoint and right endpoint of $I\in{\mathcal{D}}({\mathbb{R}})$ are denoted by $C_I,L_I$ and $R_I$ respectively. The rank of $I\in{\mathcal{D}}({\mathbb{R}})$ is defined as rk$(I)=-\log _2(|I|)$. The $k$th generation of $I\in{\mathcal{D}}({\mathbb{R}})$ is defined as ${\mathcal{D}}_k(I)=\{J\in{\mathcal{D}}(I);|J|=2^{-k}|I|\}$. If $I,J\in{\mathcal{D}}({\mathbb{R}})$, and $|I|\le |J|$, then we define the relative distance between $I$ and $J$ as rd$(I,J)=|n|$, where $n$ is the unique number such that $I\subset J+n|J|$. Given $I\in{\mathcal{D}}({\mathbb{T}})$, the corresponding Carleson square is given by $$Q_I=\left\{w=e^{2\pi i (x+iy)}\in{\mathbb{D}};x\in I, 0\le y\le -\frac{\log (1-|I|)}{2\pi}\right\}.$$ We also define its half plane correspondent $$\tilde Q_I=\left\{x+iy\in{\mathbb{C}}_+;x\in I, 0\le y\le -\frac{\log (1-|I|)}{2\pi}\right\}.$$ The Poisson kernel for ${\mathbb{C}}_+$ is the function $$P_y^{{\mathbb{C}}_+}(t)=\frac{1}{\pi}\frac{y}{t^2+y^2},\quad y>0,\ t\in{\mathbb{R}}.$$ We define the Poisson extension (to ${\mathbb{C}}_+$) of a suitable function $f:{\mathbb{R}}\to{\mathbb{C}}$ as $$f(z)=\int_{\mathbb{R}}f(t) P_y^{{\mathbb{C}}_+}(x-t)\, dt,\quad z=x+iy\in{\mathbb{C}}_+.$$ The Fourier transform of an integrable function $f:{\mathbb{R}}\to{\mathbb{C}}$ is given by $${\mathcal{F}}f(\xi)=\hat f(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi i x\xi}\, dx,\quad \xi\in{\mathbb{R}}.$$ We recall that ${\mathcal{F}}P_y^{{\mathbb{C}}_+}(\xi)=e^{-2\pi |\xi|y}$. Let ${\mathcal{S}}$ denote the Schwartz class of functions defined on ${\mathbb{R}}$. For $f\in{\mathcal{S}}$, we define the analytic and the anti-analytic projections of $f$ as $f^+=P_+f={\mathcal{F}}^{-1}({\mathbbm{1}}_{\xi>0}\hat f)$ and $f^-=P_-f={\mathcal{F}}^{-1}({\mathbbm{1}}_{\xi<0}\hat f)$. As one might guess, the respective Poisson extensions of $f^+$ and $f^-$ are analytic and anti-analytic. 
We also define the Hilbert transform $Hf=-if^++if^-$. We define the Wirtinger type differential operators $\partial =\partial_x-i\partial_y$, $\bar \partial =\partial_x+i\partial_y$, and the Laplacian $\Delta=\partial\bar{\partial}$. If $f$ is the Poisson extension of a Schwartz function, then we define $Df=-i\partial f^++i\bar{\partial}f^-={\mathcal{F}}^{-1}(\xi\mapsto 4\pi |\xi| \hat f(\xi))$. Given a function $f:{\mathbb{R}}\to{\mathbb{C}}$, we define the periodization $g:{\mathbb{R}}\to{\mathbb{C}}$ by $$g(x)=\sum_{k\in{\mathbb{Z}}}f(x-k),\quad x\in{\mathbb{R}}.$$ If $f$ is integrable, then it holds that $$\int_0^1g(x)e^{-2\pi ixn}\, dx=\hat f(n),\quad n\in{\mathbb{Z}}.$$ This implies in particular that $P_{e^{-2\pi y}}$ is the periodization of $P_y^{{\mathbb{C}}_+}$. Thus, for the Poisson extension of $g$ it holds that $$\label{Eq:Periodization} g(w)=\sum_{n=0}^\infty \hat f(n) w^n+\sum_{n=-\infty}^{-1}\hat f(n)\bar w^n=\sum_{k\in{\mathbb{Z}}}f(z-k),\quad w=e^{2\pi i z}\in{\mathbb{D}}.$$ We will use this repeatedly. Given $x,y\in{\mathcal{H}}$, we define the linear rank one operator $x\otimes y:{\mathcal{H}}\ni z\mapsto x\langle z,y\rangle_{\mathcal{H}}\in{\mathcal{H}}$. Note that the tensor product defined in this way has anti-linear dependence on its second factor. Hankel-type operators, and $BMOA$ {#Sec:BMOA} ================================= Let $\phi:{\mathbb{D}}\to{\mathcal{H}}$ be an analytic function, with Taylor series representation $\phi(w)=\sum_{n=0}^\infty \hat \phi(n) w^n$, $w\in{\mathbb{D}}$. Given $\alpha>0$, we define the corresponding fractional derivative of order $\alpha$ by $D^\alpha \phi(w)=\sum_{n=0}^\infty (1+n)^\alpha\hat \phi(n) w^n$, $w\in{\mathbb{D}}$. Also, we define the Hankel operator $\Gamma_\phi$ by the action $$\Gamma_\phi f(w)=\sum_{n=0}^\infty \left(\sum_{m=0}^\infty \hat \phi(m+n)\hat f(m)\right) w^n,\quad w\in{\mathbb{D}},$$ where $f$ is ${\mathbb{C}}$-valued, and analytic in a neighborhood of $\overline {\mathbb{D}}$.
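To see where the name comes from: on monomials, a Hankel operator shifts the Taylor coefficients of its symbol, so that for $f(w)=w^k$,

```latex
\Gamma_\phi\, w^k = \sum_{n=0}^\infty \hat\phi(n+k)\, w^n,
\qquad
\big\langle \Gamma_\phi\, w^k , x\, w^n \big\rangle_{L^2({\mathbb{T}},{\mathcal{H}})}
 = \langle \hat\phi(n+k) , x \rangle_{\mathcal{H}}, \quad x\in{\mathcal{H}},
```

and the matrix of $\Gamma_\phi$ with respect to the monomial basis depends only on the index sum $n+k$, the defining property of a Hankel matrix.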
Operators of the type $\Gamma_\phi D^\alpha$ appear naturally in control theory, specifically in the study of weighted admissibility, e.g. . Hardy space boundedness properties of $\Gamma_\phi D^\alpha$ have been studied in [@Rydhe2016:VecHankOpsCarlesonEmbsBMOA] (see also [@Janson-Peetre1988:Paracomms] for the case ${\mathcal{H}}={\mathbb{C}}$). This study led to different notions of $BMOA$, bounded mean oscillation of analytic functions. Consider the following three conditions: - There exists $C>0$ such that for all $w_0\in{\mathbb{D}}$ $$\label{Eq:BMOAC} \int_{\mathbb{D}}\|(D\phi)(w)\|_{\mathcal{H}}^2\frac{(1-|w|^2)}{|1-\bar w w_0|^2}\, dA(w)\le \frac{C^2}{1-|w_0|^2}.$$ If condition $(i)$ is satisfied, then we say that $\phi\in BMOA_{\mathcal{C}}({\mathcal{H}})$. The space $BMOA_{\mathcal{C}}({\mathcal{H}})$ is equipped with the norm $\|\phi\|_{{\mathcal{C}}}=\inf\{C;\textnormal{ \eqref{Eq:BMOAC} holds}\}$. - There exists $C>0$ such that for all $f\in H^2({\mathcal{H}})$ it holds that $$\label{Eq:BMOAC*} \int_{\mathbb{D}}|\langle f(w),(D\phi)(w) \rangle_{\mathcal{H}}|^2(1-|w|^2)\, dA(w)\le C^2\|f\|_{H^2({\mathcal{H}})}^2.$$ If condition $(ii)$ is satisfied, then we say that $\phi\in BMOA_{{\mathcal{C}}^*}({\mathcal{H}})$. The space $BMOA_{{\mathcal{C}}^*}({\mathcal{H}})$ is equipped with the norm $\|\phi\|_{{\mathcal{C}}^*}=\inf\{C;\textnormal{ \eqref{Eq:BMOAC*} holds}\}$. - There exists $C>0$ such that for all $x\in {\mathcal{H}}$ and $w_0\in{\mathbb{D}}$ it holds that $$\label{Eq:BMOAW} \int_{\mathbb{D}}|\langle x,(D\phi)(w) \rangle_{\mathcal{H}}|^2\frac{(1-|w|^2)}{|1-\bar w w_0|^2}\, dA(w)\le \frac{C^2\|x\|_{\mathcal{H}}^2}{1-|w_0|^2}.$$ If condition $(iii)$ is satisfied, then we say that $\phi\in BMOA_{\mathcal{W}}({\mathcal{H}})$. We equip the space $BMOA_{\mathcal{W}}({\mathcal{H}})$ with the norm $\|\phi\|_{{\mathcal{W}}}=\inf\{C;\textnormal{ \eqref{Eq:BMOAW} holds}\}$. It is well-known, e.g.
[@Garnett2007:BddAnalFcnsBook], that $$BMOA_{\mathcal{C}}({\mathbb{C}})= BMOA_{{\mathcal{C}}^*}({\mathbb{C}})=BMOA_{\mathcal{W}}({\mathbb{C}}),$$ with equivalent norms, and, moreover, that $\Gamma_\phi D^\alpha$ is bounded on $H^2({\mathbb{T}},{\mathbb{C}})$ if and only if $D^\alpha \phi\in BMOA_{\mathcal{C}}({\mathbb{C}})$, e.g. [@Peller2003:HankOpsBook]. If ${\mathcal{H}}$ is infinite-dimensional, then we obtain instead the following chain of strict inclusions: $$BMOA_{\mathcal{C}}({\mathcal{H}})\subsetneq BMOA_{{\mathcal{C}}^*}({\mathcal{H}})\subsetneq BMOA_{\mathcal{W}}({\mathcal{H}}).$$ The first inclusion was obtained in [@Rydhe2016:VecHankOpsCarlesonEmbsBMOA]. We now justify the second inclusion: It holds that $$\|\phi\|_{{\mathcal{C}}^*}=\|{\mathcal{P}}\|_{H^2({\mathbb{T}},{\mathcal{H}})\to L^2({\mathbb{D}},{\mathcal{H}},(D\phi)\otimes (D\phi)\, dA_1)},$$ and $$\|\phi\|_{{\mathcal{W}}}\approx \|(D\phi) \otimes (D\phi)\, dA_1\|_{\mathcal{I}}.$$ The first identity is merely an algebraic reformulation, while the second is a typical exercise, cf. [@Garnett2007:BddAnalFcnsBook Lemma VI.3.3]. Furthermore, condition $(iii)$ just means that, for some $C>0$, \eqref{Eq:BMOAC*} is satisfied for the class of functions $\{k_{w_0}x\}_{w_0\in{\mathbb{D}},x\in{\mathcal{H}}}$, where $k_{w_0}(w)=\frac{1}{1-\overline{w_0}w}$ are the reproducing kernels for $H^2$. We thus obtain that $BMOA_{{\mathcal{C}}^*}({\mathcal{H}})\subseteq BMOA_{\mathcal{W}}({\mathcal{H}})$. Strictness of the inclusion follows from Theorem \[Theorem:Main\]. Indeed, if the inclusion were not strict, then the open mapping theorem would imply that the identity operator from $BMOA_{\mathcal{W}}({\mathcal{H}})$ into $BMOA_{{\mathcal{C}}^*}({\mathcal{H}})$ is bounded. This would contradict Theorem \[Theorem:Main\]. As a result we obtain Corollary \[Corollary:Existence\].
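The passage from condition $(ii)$ to condition $(iii)$ rests on the norm identity $\|k_{w_0}x\|_{H^2({\mathcal{H}})}^2=\|x\|_{\mathcal{H}}^2/(1-|w_0|^2)$, which follows from the Taylor expansion $k_{w_0}(w)=\sum_n\overline{w_0}^n w^n$. A quick numerical sketch of the scalar identity (illustrative only; the sample point $w_0$ is an arbitrary choice):

```python
# Check the reproducing-kernel norm identity used above:
# k_{w0}(w) = 1/(1 - conj(w0) w) = sum_n conj(w0)^n w^n, so
# ||k_{w0}||_{H^2}^2 = sum_n |w0|^{2n} = 1/(1 - |w0|^2).
w0 = 0.7 - 0.2j
coeffs = [w0.conjugate()**n for n in range(2000)]   # truncated Taylor coefficients
norm_sq = sum(abs(c)**2 for c in coeffs)
target = 1.0 / (1.0 - abs(w0)**2)
err = abs(norm_sq - target)
print(err)
```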
The above results also have implications for the existence of so-called reproducing kernel theses (RKT) for Hankel-like operators; another concept appearing naturally in control theory. We refer to the literature for details, but point out that by results in [@Jacob-Rydhe-Wynn2014:WeightWeissConjRKTGenHankOps], $D^\alpha \Gamma_\phi$ has an RKT while $\Gamma_\phi D^\alpha$ does not. The inclusion $BMOA_{{\mathcal{C}}^*}({\mathcal{H}})\subsetneq BMOA_{\mathcal{W}}({\mathcal{H}})$ implies, via results in [@Rydhe2016:VecHankOpsCarlesonEmbsBMOA], that the adjoint operator $(\Gamma_\phi D^\alpha)^*$ also does not have an RKT. Proof of Theorem \[Theorem:Main\] {#Sec:Proof} ================================= The heuristic idea of the proof is as follows: Let $\delta_{w}$ denote a point mass at $w\in{\mathbb{D}}$. The measure constructed in [@Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm] is of the form $d\mu = \sum_{I\in{\mathcal{D}}({\mathbb{T}})}\delta_{w_I} \langle \cdot , \omega_I \rangle \omega_I$, for some points $\{w_I\}_{I\in{\mathcal{D}}({\mathbb{T}})}$ and vectors $\{\omega_I\}_{I\in{\mathcal{D}}({\mathbb{T}})}$. If we formally define the function $F=\sum_{I\in{\mathcal{D}}({\mathbb{T}})}\delta_{w_I}^{1/2} \omega_I$, then $d\mu=F\otimes F\, dA$. The idea behind the construction to follow is to find functions that behave like “square roots of point masses” in the sense that they are well localized, and essentially orthogonal. Our examples of such functions are given by smooth wavelets. We give an outline of the proof: Let $N$ denote the dimension of ${\mathcal{H}}$. In Subsection \[Subsec:Harmonic\], we construct a measure $d\nu=\varphi \otimes \varphi\, dA_1$, where $\varphi:{\mathbb{D}}\to {\mathcal{H}}$ is harmonic. In Subsection \[Subsec:HarmIntensity\], we state three lemmas, and use these to prove that $\|\nu\|_{\mathcal{I}}$ is uniformly bounded in $N$.
In Subsection \[Subsec:Analytic\], we use $\nu$ to construct $\mu$ such that $d\mu=\phi \otimes \phi\, dA_1$, where $\phi:{\mathbb{D}}\to{\mathcal{H}}$ is analytic. It will follow easily that $\|\mu\|_{\mathcal{I}}$ is uniformly bounded in $N$. In Subsection \[Subsec:Embedding\], we prove that the corresponding embeddings are bounded below by $(\log N)^{1/2}$. In Subsection \[Subsec:Lemmata\], we prove the three lemmas used in Subsection \[Subsec:HarmIntensity\]. The harmonic construction {#Subsec:Harmonic} ------------------------- A Littlewood-Paley wavelet $\{\psi_I\}_{I\in{\mathcal{D}}({\mathbb{R}})}$ is an orthonormal basis for $L^2({\mathbb{R}})$ satisfying the dilation translation relation $$\psi_I(x)=\frac{1}{|I|^{1/2}}\psi\Big(\frac{x-C_I}{|I|}\Big) \label{Eq:DilationTranslation},$$ where $\psi$ is an even Schwartz function such that $\hat\psi\ge 0$, $\hat \psi$ has support on $[-\frac{4}{3},-\frac{1}{3}]\cup[\frac{1}{3},\frac{4}{3}]$, and $\hat\psi>0$ on $[\frac{3}{8},\frac{5}{4}]$. Such a wavelet is constructed in [@Meyer1992:WaveletsOpsBook Chapter 3]. For $I\in{\mathcal{D}}({\mathbb{R}})$, we define the functions $f_I=|I|^{1/2}(D\psi_I)$. For $I\in{\mathcal{D}}({\mathbb{T}})$, we define the corresponding periodizations $g_I$ by $$g_I(e^{2\pi i x})=\sum_{k\in{\mathbb{Z}}} f_I(x-k),\qquad x\in{\mathbb{R}}.$$ The family $\{g_I\}_{I\in{\mathcal{D}}({\mathbb{T}})}$ is the first of two main ingredients in the construction. The second ingredient is a family of vectors which is constructed as follows: Let $\{e_l\}_{l=1}^{N}$ be an orthonormal basis for ${\mathcal{H}}$, and define the numbers $a_l=\frac{1}{l(\log N)^{1/2}}$, where $1\le l\le N$. For $I\in{\mathcal{D}}({\mathbb{T}})$ with ${\textnormal{rk}}(I)=j\in[1,N]$, we define $$\omega_I=\sum_{l=0}^{j-1}a_{j-l}e_le^{2\pi i 2^lC_I}.$$ For intervals of other ranks we let $\omega_I=0$. 
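Two elementary properties of these vectors and coefficients are used later: $\|\omega_I\|_{\mathcal{H}}^2=\sum_{m=1}^{j}a_m^2<1$ (for $N$ not too small), and $\sum_{l=1}^{N}la_l^2=H_N/\log N$ is bounded uniformly in $N$, where $H_N$ is the harmonic number. A numerical sketch (illustrative only; the sample values of $N$ are arbitrary):

```python
import math

# a_l = 1/(l * sqrt(log N)); then ||omega_I||^2 = sum_{m=1}^{j} a_m^2,
# which is largest at the top rank j = N, and sum_{l=1}^{N} l*a_l^2 = H_N/log N.
max_norm_sq = 0.0
max_weighted = 0.0
for N in (10, 100, 1000, 10**5):
    a = [1.0 / (l * math.sqrt(math.log(N))) for l in range(1, N + 1)]  # a[l-1] = a_l
    norm_sq = sum(x * x for x in a)
    weighted = sum(l * a[l - 1] ** 2 for l in range(1, N + 1))
    max_norm_sq = max(max_norm_sq, norm_sq)
    max_weighted = max(max_weighted, weighted)
print(max_norm_sq, max_weighted)
```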
The function that we want is now given by $$\varphi=\sum_{I\in{\mathcal{D}}({\mathbb{T}})}g_I\omega_I.$$ The Carleson intensity is good {#Subsec:HarmIntensity} ------------------------------ To prove that $\|\nu \|_{\mathcal{I}}$ is uniformly bounded in $N$, we need three properties of the functions $g_I$. The first one is just a description of how the norms of these functions scale with the size of $I$, and follows more or less by a change of variables: \[Lemma:Scaling\] $$\int_{{\mathbb{D}}}|g_I|^2\, dA_1\lesssim |I|,\quad I\in{\mathcal{D}}({\mathbb{T}}).$$ Now consider the measure given by $d\nu=\varphi \otimes \varphi\, dA_1$. Note that $$\label{Eq:TensorSquare} \varphi \otimes \varphi =\sum_{I,J\in{\mathcal{D}}({\mathbb{T}})}g_I\overline{g_J} (\omega_I\otimes \omega_J ).$$ The diagonal terms of this sum can be estimated using the following: \[Lemma:DiagonalTerms\] $$\sum_{I\in{\mathcal{D}}(K)}|\langle \omega_I,e\rangle_{\mathcal{H}}|^2|I|\lesssim |K|\|e\|^2,\quad K\in{\mathcal{D}}({\mathbb{T}}),\ e\in{\mathcal{H}}.$$ In [@Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm], uniform boundedness of the Carleson intensity follows from what is essentially a dyadic version of Lemma \[Lemma:DiagonalTerms\]. Since the functions $\{g_I\}$ do not have disjoint supports, we will also need to estimate the off-diagonal terms in \eqref{Eq:TensorSquare}. This is the main technical complication of this paper: \[Lemma:OffDiagonalTerms\] $$\sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{T}})\\ \neg (I=J\in{\mathcal{D}}(K))}}|\int_{Q_K} g_I\overline{g_J}\, dA_1|\lesssim |K|,\quad K\in {\mathcal{D}}({\mathbb{T}}).$$ These lemmas yield a short proof that $\|\nu \|_{\mathcal{I}}$ is uniformly bounded: Assume that $\|e\|_{\mathcal{H}}\le 1$.
Then $$\begin{aligned} \int_{Q_K}\langle d\nu\, e,e\rangle_{\mathcal{H}}= {} & \underbrace{\sum_{I\in{\mathcal{D}}(K)}\int_{Q_K}|g_I|^2|\langle \omega_I,e\rangle_{\mathcal{H}}|^2\, dA_1}_{=:I_1} \\ &+ \underbrace{\sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{T}}) \\ \neg (I=J\in{\mathcal{D}}(K))}}\int_{Q_K}g_I\overline{g_J}\langle \omega_I,e\rangle_{\mathcal{H}}\langle e,\omega_J \rangle_{\mathcal{H}}\, dA_1}_{=:I_2}.\end{aligned}$$ By Lemma \[Lemma:Scaling\] and Lemma \[Lemma:DiagonalTerms\] $$\begin{aligned} I_1\le \sum_{I\in{\mathcal{D}}(K)}\int_{{\mathbb{D}}}|g_I|^2|\langle \omega_I,e\rangle_{\mathcal{H}}|^2\, dA_1 \lesssim \sum_{I\in{\mathcal{D}}(K)}|\langle \omega_I,e\rangle_{\mathcal{H}}|^2|I|\lesssim |K|.\end{aligned}$$ The vectors $\{\omega_I\}_{I\in{\mathcal{D}}({\mathbb{T}})}$ are easily seen to have less than unit norm, so by Lemma \[Lemma:OffDiagonalTerms\] $$I_2\le \sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{T}}) \\ \neg (I=J\in{\mathcal{D}}(K))}}|\int_{Q_K}g_I\overline{g_J}\, dA_1|\lesssim |K|.$$ Making things analytic {#Subsec:Analytic} ---------------------- Once we have the harmonic construction, the analytic counterpart is obtained quite easily. The proof that $\|\nu \|_{\mathcal{I}}$ is uniformly bounded relies on orthogonality and localization of the functions $\{f_I\}_{I\in {\mathcal{D}}({\mathbb{R}})}$. The localization in turn is obtained by the dilation translation relation \eqref{Eq:DilationTranslation}, combined with the fact that $\hat \psi$ vanishes in a neighborhood of $0$. The Hilbert transform preserves both orthogonality and Fourier supports. Let $\tilde f_I=Hf_I$, and $\tilde g_I$ be the corresponding periodization.
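The two properties of $H$ used here can be sanity-checked in a discrete model: on ${\mathbb{Z}}/n{\mathbb{Z}}$, the analogue of $H$ is the Fourier multiplier $-i\,\mathrm{sgn}(\xi)$, which is unitary away from the zero frequency, hence preserves inner products of mean-zero signals and the modulus of the Fourier transform. The sketch below is only this discrete analogue, not the operator on ${\mathbb{R}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def hilbert(u):
    # Discrete analogue of H: the Fourier multiplier -i*sgn(xi), with value 0 at xi = 0.
    mult = -1j * np.sign(np.fft.fftfreq(n))
    return np.fft.ifft(mult * np.fft.fft(u))

u = rng.standard_normal(n); u -= u.mean()   # remove the zero-frequency component
v = rng.standard_normal(n); v -= v.mean()

# Inner products and Fourier moduli should be preserved.
ip_err = abs(np.vdot(hilbert(u), hilbert(v)) - np.vdot(u, v))
supp_err = np.max(np.abs(np.abs(np.fft.fft(hilbert(u))) - np.abs(np.fft.fft(u))))
print(ip_err, supp_err)
```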
Repeating the proofs from Subsection \[Subsec:HarmIntensity\], with $\{\tilde f_I\}_{I\in{\mathcal{D}}({\mathbb{R}})}$ in place of $\{f_I\}_{I\in{\mathcal{D}}({\mathbb{R}})}$, one sees that the measures $$\bigg(\sum_{I\in{\mathcal{D}}({\mathbb{T}})}\tilde g_{I}\omega_{I} \bigg)\otimes \bigg(\sum_{I\in{\mathcal{D}}({\mathbb{T}})}\tilde g_{I}\omega_{I} \bigg)\, dA_1$$ have uniformly bounded Carleson intensity. We now define the analytic functions $f_I^+=\frac{1}{2}(f_I+i\tilde f_I)$, the corresponding periodizations $g_I^+$, $$\phi=\sum_{I\in{\mathcal{D}}({\mathbb{T}})}g_I^+\omega_I,$$ and $d\mu=\phi\otimes \phi\, dA_1$. The functions $f_I^+$ are analytic and well localized, but not orthogonal. However, for an arbitrary unit vector $e\in{\mathcal{H}}$ we have that $$\begin{aligned} |\langle \phi, e\rangle_{\mathcal{H}}|^2 \lesssim |\langle \sum_{I\in{\mathcal{D}}({\mathbb{T}})}g_I\omega_I, e\rangle_{\mathcal{H}}|^2+ |\langle \sum_{I\in{\mathcal{D}}({\mathbb{T}})}\tilde g_I\omega_I, e\rangle_{\mathcal{H}}|^2.\end{aligned}$$ It immediately follows that $\|\mu\|_{\mathcal{I}}$ is uniformly bounded in $N$. It may seem odd to the reader that we do not simply let $\{f_I\}_{I\in {\mathcal{D}}({\mathbb{R}})}$ be a family of analytic functions to begin with. The reason for this is that no analytic family, satisfying the additional regularity conditions that we need, is an orthonormal wavelet basis for $H^2({\mathbb{R}})$, as was demonstrated by Auscher [@Auscher1995:SolOfTwoProblsOnWavelets]. Breaking the embedding {#Subsec:Embedding} ---------------------- To prove that the embedding is bad, we follow closely [@Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm]. Consider the function $E(w)=\sum_{l=1}^Nw^{2^l}e_l$, $w\in{\mathbb{D}}$. Obviously $\|E\|_{H^2({\mathbb{T}},{\mathcal{H}})}^2=N$.
Now $$\begin{aligned} \int_{{\mathbb{D}}}\langle d\mu\, E,E\rangle_{\mathcal{H}}&= \sum_{I_1,I_2\in{\mathcal{D}}({\mathbb{T}})}\int_{{\mathbb{D}}}g_{I_1}^+\overline{g_{I_2}^+}\langle \omega_{I_1} ,E\rangle_{\mathcal{H}}\langle E,\omega_{I_2} \rangle_{\mathcal{H}}\, dA_1 \\ &= \sum_{\substack{0\le l_1 < j_1\le N \\0\le l_2 < j_2\le N \\ I_1\in{\mathcal{D}}_{j_1}({\mathbb{T}})\\I_2\in{\mathcal{D}}_{j_2}({\mathbb{T}})}}a_{j_1-l_1}a_{j_2-l_2}e^{2\pi i(2^{l_1}C_{I_1}-2^{l_2}C_{I_2})}\int_{\mathbb{D}}g_{I_1}^+\overline{g_{I_2}^+}\bar w^{2^{l_1}} w^{2^{l_2}}\, dA_1.\end{aligned}$$ The integrals are easily computed in terms of Taylor coefficients: $$\begin{aligned} \int g_{I_1}^+\overline{g_{I_2}^+}\bar w^{2^{l_1}} w^{2^{l_2}}\, dA_1 = \pi\sum_{m=-2^{l_1}}^\infty\frac{\hat g_{I_1}^+(m+2^{l_1})\overline{\hat g_{I_2}^+(m+2^{l_2})}}{(m+2^{l_1}+2^{l_2}+1)(m+2^{l_1}+2^{l_2}+2)}.\end{aligned}$$ We consider fixed $j_1,j_2,l_1,l_2$, and use that $\hat g_I(n)=4\pi |n||I|\hat\psi(n|I|)e^{-2\pi i nC_I}$ to compute $$\label{Eq:Sum} \sum_{\substack{I_1\in{\mathcal{D}}_{j_1}({\mathbb{T}})\\I_2\in{\mathcal{D}}_{j_2}({\mathbb{T}})}}e^{2\pi i(2^{l_1}C_{I_1}-2^{l_2}C_{I_2})}\int_{\mathbb{D}}g_{I_1}^+\overline{g_{I_2}^+}\bar w^{2^{l_1}} w^{2^{l_2}}\, dA_1 = \sum_{m=-2^{l_1}}^\infty\alpha_m\beta_m,$$ where $$\alpha_m=\frac{16\pi^32^{-j_1-j_2}(m+2^{l_1})(m+2^{l_2})\hat \psi^+ (\frac{m+2^{l_1}}{2^{j_1}})\overline{\hat \psi^+ (\frac{m+2^{l_2}}{2^{j_2}})}}{(m+2^{l_1}+2^{l_2}+1)(m+2^{l_1}+2^{l_2}+2)},$$ and $$\beta_m=\sum_{\substack{I_1\in{\mathcal{D}}_{j_1}({\mathbb{T}})\\I_2\in{\mathcal{D}}_{j_2}({\mathbb{T}})}}e^{-2\pi im(C_{I_1}-C_{I_2})}.$$ We parametrize $I\in{\mathcal{D}}_j({\mathbb{T}})$ by $C_I=(\frac{1}{2}+n)2^{-j}$, $0\le n\le 2^j-1$, and by geometric summation $$\label{Eq:SumOverGeneration} \beta_m= \left\{ \begin{array}{rl} 2^{j_1+j_2}e^{-i\pi\big(\frac{m}{2^{j_1}}-\frac{m}{2^{j_2}}\big)}, &\textnormal{if }m\in 2^{j_1}{\mathbb{Z}}\cap 2^{j_2}{\mathbb{Z}}, \\ 0,&\text{otherwise}.
\end{array} \right.$$ This shows that the terms in the right-hand side of vanish, unless $m=k_12^{j_1}=k_22^{j_2}$ for some $k_1,k_2\in{\mathbb{Z}}$. Assuming this restriction, we now consider $\alpha_m$. Exploiting the support of $\hat \psi^+$, we see that $\alpha_m$ vanishes, unless $\frac{1}{3}<k_1+2^{l_1-j_1},k_2+2^{l_2-j_2}<\frac{4}{3}$. Since $l< j$, this is only possible if $k_1,k_2\in\{0,1\}$. If $k_1=k_2=0$, then non-vanishing terms are precisely those for which $l_1= j_1-1$ and $l_2= j_2-1$. If $k_1=1$, and $k_2=0$, then $m=2^{j_1}=0$, which is of course impossible. Similarly, if $k_1=0$, and $k_2=1$, then all terms vanish. If $k_1=k_2=1$, then the terms vanish, unless $j_1=j_2=j$ and $l_1,l_2\le j-2$. Tracing back the calculations we have computed that $$\begin{aligned} \frac{1}{16\pi^3}&\int_{{\mathbb{D}}}\langle d\mu\, E,E\rangle_{\mathcal{H}}\\ = {} &\sum_{\substack{0\le l_1 < j_1 \le N\\ 0\le l_2 < j_2 \le N\\ l_1= j_1-1 \\ l_2= j_2-1}}a_{j_1-l_1}a_{j_2-l_2} \frac{2^{l_1+l_2}\hat \psi^+ (2^{l_1-j_1})\overline{\hat \psi^+ (2^{l_2-j_2})}}{(2^{l_1}+2^{l_2}+1)(2^{l_1}+2^{l_2}+2)} \\ &+ \sum_{\substack{1\le j\le N\\ 0\le l_1,l_2\le j-2}}a_{j-l_1}a_{j-l_2} \frac{(2^{j}+2^{l_1})(2^{j}+2^{l_2})\hat \psi^+ (1+2^{l_1-j})\overline{\hat \psi^+ (1+2^{l_2-j})}}{(2^j+2^{l_1}+2^{l_2}+1)(2^j+2^{l_1}+2^{l_2}+2)} \\ \gtrsim {} & \sum_{\substack{0\le j\le N\\0\le l_1,l_2\le j-2}}a_{j-l_1}a_{j-l_2}\gtrsim N\log N,\end{aligned}$$ where the last estimate is an elementary calculation. Assuming the validity of Lemma \[Lemma:Scaling\] through \[Lemma:OffDiagonalTerms\], this completes the proof of Theorem \[Theorem:Main\]. 
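The “elementary calculation” in the last step can be confirmed numerically: with $a_l=\frac{1}{l(\log N)^{1/2}}$, the inner double sum factors as $\sum_{0\le l_1,l_2\le j-2}a_{j-l_1}a_{j-l_2}=\big(\sum_{l=2}^{j}a_l\big)^2$, and the total grows like $N\log N$. A sketch (illustrative only; the sample values of $N$ are arbitrary):

```python
import math

# S(N) = sum_{j<=N} (sum_{l=2}^{j} a_l)^2 with a_l = 1/(l*sqrt(log N));
# the claim is S(N) >= c * N * log N for an absolute constant c > 0.
def S(N):
    logN = math.log(N)
    a = [0.0, 0.0] + [1.0 / (l * math.sqrt(logN)) for l in range(2, N + 1)]
    total, partial = 0.0, 0.0
    for j in range(2, N + 1):
        partial += a[j]          # partial = sum_{l=2}^{j} a_l
        total += partial ** 2
    return total

ratios = [S(N) / (N * math.log(N)) for N in (10**3, 10**4, 10**5)]
print(ratios)
```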
Properties of $g_I$ {#Subsec:Lemmata} ------------------- Before proving Lemmas \[Lemma:Scaling\] through \[Lemma:OffDiagonalTerms\], we establish the following Littlewood-Paley type identity: \[Lemma:Littlewood-Paley\] $$\label{Eq:Littlewood-Paley} \int_{{\mathbb{C}}_+}f_I(x+iy)\overline{f_J(x+iy)}y\, dxdy=|I|\delta_{IJ}.$$ Note that $\Delta |\psi_I|^2=|\partial \psi_I^+|^2+|\bar \partial \psi_I^-|^2$. By Cauchy’s theorem, $\partial \psi_I^+$ and $\bar \partial \psi_I^-$ are orthogonal with respect to the inner product $$(f,g)\mapsto \int_{{\mathbb{C}}_+}f(x+iy)\overline{g(x+iy)}y\, dxdy.$$ Applying Green’s formula, $$\int_\Omega (v\Delta u-u\Delta v )\, dV=\int_{\partial \Omega}(v\nabla u-u\nabla v )\, d\vec{S},$$ with $\Omega={\mathbb{C}}_+$, $u=|\psi_I|^2$ and $v=y$ yields $$\int_{{\mathbb{C}}_+}|D\psi_I(x+iy)|^2y\, dxdy=\int_{{\mathbb{R}}}|\psi_I(x)|^2\, dx.$$ Now \eqref{Eq:Littlewood-Paley} follows by polarization, and orthonormality of the system $\{\psi_I\}_{I\in{\mathcal{D}}({\mathbb{R}})}$. ### Proof of Lemma \[Lemma:Scaling\] This step is completely elementary. We use \eqref{Eq:Periodization}, together with the change of variables $w=e^{2\pi i z}$, and Lemma \[Lemma:Littlewood-Paley\]: $$\begin{aligned} \int_{{\mathbb{D}}}|g_I|^2\, dA_1 &= 4\pi ^2\int_{\tilde Q_{{\mathbb{T}}}} |\sum_{k\in{\mathbb{Z}}} f_I(z-k)|^2(1-e^{-4\pi y})e^{-2\pi y}\, dxdy \\ &\lesssim \int_{\tilde Q_{{\mathbb{T}}}} |\sum_{k\in{\mathbb{Z}}} f_I(z-k)|^2y\, dxdy \\ &= \sum_{k,l\in{\mathbb{Z}}}\int_{\tilde Q_{{\mathbb{T}}}} f_I(z-k)\overline{f_I(z-l)}y\, dxdy \\ &= \sum_{l\in{\mathbb{Z}}}\int_{{\mathbb{C}}_+} f_I(z)\overline{f_I(z-l)}y\, dxdy \\ &= \int_{{\mathbb{C}}_+} |f_I(z)|^2y\, dxdy = |I|.\end{aligned}$$ ### Proof of Lemma \[Lemma:DiagonalTerms\] Once again, we follow closely [@Nazarov-Treil-Volberg1997:CounterExInfDimCarlesonEmbThm]. Let $e=\sum_{l=1}^N b_le_l$ be a unit vector in ${\mathcal{H}}$, and $K\in{\mathcal{D}}({\mathbb{T}})$ with ${\textnormal{rk}}(K)=k$.
We begin by choosing $j\ge k$, and summing over ${\mathcal{D}}(K)\cap{\mathcal{D}}_j({\mathbb{T}})$: $$\sum_{I\in{\mathcal{D}}(K)\cap {\mathcal{D}}_j({\mathbb{T}})}|\langle \omega_I,e\rangle|^2=\sum_{l_1,l_2=0}^{j-1}a_{j-l_1}a_{j-l_2}\overline{b_{l_1}}b_{l_2}\sum_{I\in{\mathcal{D}}(K)\cap {\mathcal{D}}_j({\mathbb{T}})}e^{2\pi i(2^{l_1}-2^{l_2})C_I}.$$ If $l_1=l_2$, then $$\sum_{I\in{\mathcal{D}}(K)\cap {\mathcal{D}}_j({\mathbb{T}})}e^{2\pi i(2^{l_1}-2^{l_2})C_I}=2^{j-k}=\frac{|K|}{|I|}.$$ If $l_1\ne l_2$, then, as in the calculation of \eqref{Eq:SumOverGeneration}, we obtain $$\sum_{I\in{\mathcal{D}}(K)\cap {\mathcal{D}}_j({\mathbb{T}})}e^{2\pi i(2^{l_1}-2^{l_2})C_I} =\frac{1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-k}}}{1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-j}}}.$$ The above right-hand side will be approximated using the elementary estimate $$|1-e^{2\pi i x}|\approx |x|,\quad|x|\le \frac{1}{2}.$$ By symmetry, it suffices to consider the case $l_1>l_2$. If $l_1,l_2\ge k$, then $1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-k}}=0$, so any such terms vanish. If $j>l_1\ge k>l_2$, then $$|\frac{1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-k}}}{1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-j}}}| \lesssim |\frac{2^{l_2-k}}{(2^{l_1}-2^{l_2})2^{-j}}|\lesssim 2^{l_2-l_1}\frac{|K|}{|I|}.$$ If $l_1,l_2 < k$, then $$|\frac{1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-k}}}{1-e^{2\pi i (2^{l_1}-2^{l_2})2^{-j}}}| \lesssim \frac{|K|}{|I|}.$$ With these results $$\sum_{I\in{\mathcal{D}}(K)}|I||\langle \omega_I, e\rangle|^2 = \sum_{j=k}^N\sum_{I\in{\mathcal{D}}(K)\cap {\mathcal{D}}_j({\mathbb{T}})}|I||\langle \omega_I, e\rangle|^2 \lesssim C|K|,$$ where $$\begin{aligned} C=\sum_{j=k}^N\left(\sum_{l=0}^{j-1}a_{j-l}^2|b_{l}|^2 + \sum_{l_1=k}^{j-1}\sum_{l_2=0}^{k-1}a_{j-l_1}a_{j-l_2}|b_{l_1}||b_{l_2}|2^{l_2-l_1}\right. \\ + \sum_{\substack{l_1,l_2=0 \\l_1\ne l_2}}^{k-1}a_{j-l_1}a_{j-l_2}|b_{l_1}||b_{l_2}|\Bigg).
\end{aligned}$$ We now make use of the Cauchy–Schwarz inequality, and a rearrangement of terms: First $$\sum_{j=k}^N\sum_{l=0}^{j-1}a_{j-l}^2|b_{l}|^2\le \sum_{l=0}^N\sum_{j=l+1}^Na_{j-l}^2|b_l|^2 \lesssim \sum_{l=0}^N|b_l|^2 = 1.$$ Second $$\begin{aligned} \sum_{l_1=k}^{j-1}\sum_{l_2=0}^{k-1}a_{j-l_1}a_{j-l_2}|b_{l_1}||b_{l_2}|2^{l_2-l_1} &= \Bigg(\sum_{l_1=k}^{j-1}a_{j-l_1}|b_{l_1}|2^{-l_1}\Bigg)\Bigg(\sum_{l_2=0}^{k-1}a_{j-l_2}|b_{l_2}|2^{l_2}\Bigg) \\ &\le \Bigg(\sum_{l_1=k}^{j-1}a_{j-l_1}^24^{-l_1}\Bigg)^{1/2} \Bigg(\sum_{l_2=0}^{k-1}a_{j-l_2}^24^{l_2}\Bigg)^{1/2} \\ &= \Bigg(\sum_{l_1=k}^{j-1}a_{j-l_1}^24^{k-l_1}\Bigg)^{1/2} \Bigg(\sum_{l_2=0}^{k-1}a_{j-l_2}^24^{l_2-k}\Bigg)^{1/2}. \end{aligned}$$ Note that $\sum_{l_2=0}^{k-1}a_{j-l_2}^24^{l_2-k}\lesssim \frac{1}{\log N}$, while $$\sum_{l_1=k}^{j-1}a_{j-l_1}^24^{k-l_1} = \sum_{l=0}^{j-k-1}a_{j-k-l}^24^{-l} \lesssim \sup_{0\le l\le j-k-1}2^{-l}a_{j-k-l}^2 \lesssim \frac{1}{(1+j-k)^2\log N}.$$ Thus $$\begin{aligned} \sum_{j=k}^N \sum_{l_1=k}^{j-1}\sum_{l_2=0}^{k-1}a_{j-l_1}a_{j-l_2}|b_{l_1}||b_{l_2}|2^{l_2-l_1}\lesssim \frac{1}{\log N}\sum_{j=k}^N\frac{1}{1+j-k}\lesssim 1. \end{aligned}$$ Third $$\sum_{j=k}^N \sum_{\substack{l_1,l_2=0 \\l_1\ne l_2}}^{k-1}a_{j-l_1}a_{j-l_2}|b_{l_1}||b_{l_2}| \le \sum_{j=k}^N\sum_{l=j-k+1}^ja_l^2 \le \sum_{l=1}^Nla_l^2 \lesssim 1.$$ This completes the proof of Lemma \[Lemma:DiagonalTerms\]. ### Proof of Lemma \[Lemma:OffDiagonalTerms\] We now address the main technical difficulty of this paper. As a preliminary to Lemma \[Lemma:OffDiagonalTerms\], we prove the following result on localization of Poisson extensions for certain Schwartz functions: \[Lemma:Localization\] Let $\varphi\in\mathcal{S}$ be such that $d=\textnormal{dist}(\textnormal{spt}(\hat\varphi),0)>0$, and let $p$ be a polynomial of degree $n$.
Then $$|p(x)(\varphi\ast P_y)(x)|\lesssim (1+y^n)\frac{e^{-2\pi dy}}{y^{1/2}},\quad x+iy\in{\mathbb{C}}_+.$$ By the Fourier inversion formula, and the Leibniz rule, $$\begin{aligned} p(x)(\varphi\ast P_y)(x) &= \int\bigg(p\Big(\frac{1}{2\pi i}\frac{d}{d\xi}\Big)\big(\hat \varphi(\xi) e^{-2\pi |\xi|y}\big)\bigg)e^{2\pi i x\xi}\, d\xi \\ &= \int \bigg(\sum_{k,l=0}^n a_{kl}\Big(\frac{y\xi}{|\xi|}\Big)^l\hat \varphi^{(k)}(\xi ) e^{-2\pi |\xi|y}\bigg)e^{2\pi ix\xi}\, d\xi, \end{aligned}$$ for some numbers $(a_{kl})_{k,l=0}^n$. Using the decay of $\hat \varphi$ (and its derivatives) along with the Cauchy–Schwarz inequality one obtains $$\begin{aligned} |p(x)(\varphi\ast P_y)(x)| &\lesssim \sum_{k,l=0}^n|a_{kl}||y|^l\int_{d}^\infty \frac{1}{\xi}e^{-2\pi \xi y}\, d\xi \\ &\lesssim (1+y^n)\bigg(\int_d^\infty e^{-4\pi \xi y}\, d\xi \bigg)^{1/2} =(1+y^n)\frac{e^{-2\pi dy}}{\sqrt{4\pi y}}. \end{aligned}$$ A few simple manipulations show that $$f_I(x+iy)=\frac{1}{|I|}(D\psi)\ast P_{y/{|I|}}\Big(\frac{x-C_I}{|I|}\Big).$$ Applying Lemma \[Lemma:Localization\], with $\varphi=D\psi$, $p(x)=1+x^2$, and $d=\frac{1}{3}$, yields $$\label{Eq:Localization} |f_I(x+iy)|\lesssim \frac{1}{|I|^{1/2}}\frac{1+\big(\frac{y}{|I|}\big)^2}{1+\big(\frac{x-C_I}{|I|}\big)^2}\frac{e^{-\frac{2\pi y}{3|I|}}}{y^{1/2}}.$$ As in the proof of Lemma \[Lemma:Scaling\] we obtain that $$\begin{aligned} \int_{Q_K}g_I\overline{g_J}\, dA_1 = 4\pi ^2\int_{\tilde Q_K} \sum_{k,l\in{\mathbb{Z}}} f_I(z-k)\overline{f_J(z-l)}(1-e^{-4\pi y})e^{-2\pi y}\, dxdy.\end{aligned}$$ By Taylor’s formula, $(1-e^{-4\pi y})e^{-2\pi y}=4\pi y+R(y)$, where $|R(y)|\lesssim y^2$.
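The exponential localization expressed by the lemma can be observed numerically. The sketch below is illustrative only: it uses a particular smooth bump $\hat f$ supported in $[\frac{1}{3},\frac{4}{3}]$ (mirrored to negative frequencies, so that $f$ is real and even) in place of $\widehat{D\psi}$, and checks that $M(y)=\sup_x\,(1+x^2)|(f\ast P_y)(x)|$ decays at least geometrically in $y$, consistent with the factor $e^{-2\pi dy}$, $d=\frac{1}{3}$:

```python
import numpy as np

xi = np.linspace(1/3, 4/3, 1502)[1:-1]              # open interval (1/3, 4/3)
bump = np.exp(-1.0 / ((xi - 1/3) * (4/3 - xi)))     # C^infty bump, vanishes at ends
dxi = xi[1] - xi[0]
x = np.linspace(-40.0, 40.0, 2001)

def M(y):
    # (f * P_y)(x) = 2 Re int_{1/3}^{4/3} bump(xi) e^{-2 pi xi y} e^{2 pi i x xi} dxi
    osc = np.exp(2j * np.pi * np.outer(x, xi))
    vals = 2.0 * np.real(osc @ (bump * np.exp(-2.0 * np.pi * xi * y))) * dxi
    return np.max((1.0 + x**2) * np.abs(vals))

m1, m2, m3 = M(1.0), M(2.0), M(3.0)
print(m2 / m1, m3 / m2)
```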
Applying the triangle inequality a few times we obtain that $$\begin{aligned} \sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{T}}) \\ \neg (I=J\in{\mathcal{D}}(K))}} |\int_{Q_K}g_I\overline{g_J }\, dA_1| \lesssim {} & \sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{R}}) \\ \neg (I=J\in{\mathcal{D}}(K)) \\ |I|,|J|\le 1}} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dxdy| \label{Eq:MainTerms} \\ &+ \sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{R}}) \\ \neg (I=J\in{\mathcal{D}}(K)) \\ |I|,|J|\le 1}} \int_{\tilde Q_K}|f_I(z)\overline{f_J(z)}|y^2\, dxdy \label{Eq:RemainderTerms}\end{aligned}$$ The terms in the first sum on the right-hand side will be referred to as the main terms, and the terms in the second sum as the remainder terms. We prove that the main terms are controlled by $|K|$. Once this is done, the remainder terms are easily handled. By symmetry we may assume that $|I|\le |J|$. We treat a number of different cases, roughly in order of difficulty. #### Case $(i)$: $|K|<|I|\le|J|\le 1$. If $|K|=1$, then this case is trivial. If not, then $-\log (1-|K|)\lesssim |K|$.
Using that the integrand is bounded $$\label{Eq:y-Estimate(i)} \int_0^{C|K|} \Big(1+\Big(\frac{y}{|I|}\Big)^2\Big)\Big(1+\Big(\frac{y}{|J|}\Big)^2\Big)e^{-\frac{2\pi y}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}\, dy\lesssim C|K|.$$ By the definition of relative distance $$\label{Eq:x-Estimate(i)} \int_{x\in K} \frac{1}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)\big(1+\big(\frac{x-C_J}{|J|}\big)^2\big)}\, dx\lesssim \frac{|K|}{(1+{\textnormal{rd}}(I,K)^2)(1+{\textnormal{rd}}(J,K)^2)}.$$ By , , and , $$\begin{aligned} &|\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \\ &\lesssim \frac{1}{|I|^{1/2}|J|^{1/2}} \int_{\tilde Q_K}\frac{\big(1+\big(\frac{y}{|I|}\big)^2\big)\big(1+\big(\frac{y}{|J|}\big)^2\big)e^{-\frac{2\pi y}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)\big(1+\big(\frac{x-C_J}{|J|}\big)^2\big)}\, dxdy \\ &\lesssim \frac{|K|^2}{|I|^{1/2}|J|^{1/2}} \frac{1}{(1+{\textnormal{rd}}(I,K)^2)(1+{\textnormal{rd}}(J,K)^2)}.\end{aligned}$$ The lengths $|I|$ and $|J|$ are of the form $2^k|K|$ and $2^l|K|$, $k,l\ge 1$. Summing over all lengths and all relative distances one obtains $$\begin{aligned} &\sum_{\substack{ I,J\in{\mathcal{D}}({\mathbb{R}}) \\ |K|<|I|,|J|\le 1}} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \\ &\lesssim \sum_{k,l=1}^\infty\sum_{m,n\in{\mathbb{Z}}}\frac{|K|}{2^{(k+l)/2}} \frac{1}{(1+m^2)(1+n^2)}\lesssim |K|.\end{aligned}$$ #### Case $(ii)$: $|I|\le|K|<|J|\le 1$. 
By the change of variables $\frac{|I|+|J|}{|I||J|}y\mapsto y$ $$\label{Eq:y-Estimate(ii)} \int_0^\infty \Big(1+\Big(\frac{y}{|I|}\Big)^2\Big)\Big(1+\Big(\frac{y}{|J|}\Big)^2\Big)e^{-\frac{2\pi y}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}\, dy\lesssim \frac{|I||J|}{|I|+|J|} \approx |I|.$$If ${\textnormal{rd}}(I,K)\le 1$, then $$\int_{x\in K}\frac{1}{1+\big(\frac{x-C_I}{|I|}\big)^2}\, dx\le \int_{{\mathbb{R}}}\frac{1}{1+\big(\frac{x}{|I|}\big)^2}\, dx=\pi|I|.$$ If ${\textnormal{rd}}(I,K)\ge 2$, then $$\int_{x\in K}\frac{1}{1+\big(\frac{x-C_I}{|I|}\big)^2}\, dx\le \int_{x\in K}\frac{|I|^2}{|x-C_I|^2}\, dx\le \frac{|I|^2}{|K|({\textnormal{rd}}(I,K)-1)^2}.$$ In either case $$\label{Eq:x-Estimate(ii)} \int_{x\in K}\frac{1}{1+\big(\frac{x-C_I}{|I|}\big)^2}\, dx\lesssim \frac{|I|}{1+{\textnormal{rd}}(I,K)^2}.$$ By , , the definition of relative distance, and $$\begin{aligned} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dxdy| \lesssim \frac{|I|^{1/2}}{|J|^{1/2}} \frac{1}{1+{\textnormal{rd}}(J,K)^2}\frac{|I|}{1+{\textnormal{rd}}(I,K)^2}.\end{aligned}$$ Now $|J|=2^l|K|$, for $l\ge 1$, while $I\in {\mathcal{D}}_k\big(K+m|K|\big)$, $k\ge 0$, $m\in{\mathbb{Z}}$. $$\begin{aligned} &\sum_{\substack{ I,J\in{\mathcal{D}}({\mathbb{R}}) \\ |I|\le|K|<|J|\le 1}} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \\ &\lesssim \sum_{k,l=0}^\infty\sum_{\substack{m,n\in{\mathbb{Z}}\\ I\in{\mathcal{D}}_k(K+m|K|)}} \frac{2^{-k/2}2^{-l/2}}{1+n^2}\frac{|I|}{1+{\textnormal{rd}}(I,K)^2} \\ &\lesssim \sum_{l=0}^\infty\sum_{m,n\in{\mathbb{Z}}}\sum_{k=0}^\infty 2^k\frac{2^{-k/2}2^{-l/2}}{1+n^2}\frac{2^{-k}|K|}{1+m^2} \lesssim |K|.\end{aligned}$$ #### Case $(iii)$: $|I|\le |J|\le |K|$, ${\textnormal{rd}}(J,K)\ge 2$.
By , , the definition of relative distance, and $$\begin{aligned} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| &\lesssim \frac{|I|^{1/2}}{|J|^{1/2}}\frac{1}{1+\frac{|K|^2}{|J|^2}{\textnormal{rd}}(J,K)^2}\frac{|I|}{1+{\textnormal{rd}}(I,K)^2} \\ &\le \frac{|I|^{3/2}|J|^{3/2}}{|K|^2{\textnormal{rd}}(J,K)^2(1+{\textnormal{rd}}(I,K)^2)}.\end{aligned}$$ Now $J\in {\mathcal{D}}_l\big(K+n|K|\big)$, $l\ge 0$, $|n|\ge 2$, while $I\in {\mathcal{D}}_k\big(K+m|K|\big)$, $k\ge l$, $m\in{\mathbb{Z}}$. $$\begin{aligned} \sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{R}}) \\ |I|\le |J|\le |K| \\ {\textnormal{rd}}(J,K)\ge 2}} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| &\lesssim \sum_{k,l=0}^\infty\sum_{\substack{m\in{\mathbb{Z}}, |n|\ge 2 \\ I\in{\mathcal{D}}_k(K+m|K|) \\ J\in{\mathcal{D}}_l(K+n|K|)}} \frac{|I|^{3/2}|J|^{3/2}}{|K|^2n^2(1+m^2)} \\ &= \sum_{k,l=0}^\infty\sum_{\substack{m\in{\mathbb{Z}}\\ |n|\ge 2}} 2^k2^l \frac{2^{-3k/2}2^{-3l/2}|K|}{n^2(1+m^2)} \lesssim |K|.\end{aligned}$$ #### Case $(iv)$: $|I|\le |J|\le |K|$, ${\textnormal{rd}}(J,K)\le 1$, ${\textnormal{rd}}(I,K)\ge 2$. 
By \eqref{Eq:Localization}, \eqref{Eq:y-Estimate(ii)}, the definition of relative distance, and \eqref{Eq:x-Estimate(ii)}, with $J$ in place of $I$, $$\begin{aligned} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \lesssim \frac{|I|^{1/2}|J|^{1/2}}{1+\frac{|K|^2}{|I|^2}{\textnormal{rd}}(I,K)^2}\le \frac{|I|^{3/2}|J|^{3/2}}{|K|^2{\textnormal{rd}}(I,K)^2}.\end{aligned}$$ Summing over $J\in {\mathcal{D}}_l\big(K+n|K|\big)$, $l\ge 0$, $|n|\le 1$, and $I\in {\mathcal{D}}_k\big(K+m|K|\big)$, $k\ge l$, $|m|\ge 2$, $$\begin{aligned} \sum_{\substack{I,J\in{\mathcal{D}}({\mathbb{R}}) \\ |I|\le |J|\le |K| \\ {\textnormal{rd}}(J,K)\le 1 \\ {\textnormal{rd}}(I,K)\ge 2}} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| &\lesssim \sum_{k,l=0}^\infty\sum_{\substack{|m|\ge 2, |n|\le 1 \\ I\in{\mathcal{D}}_k(K+m|K|) \\ J\in{\mathcal{D}}_l(K+n|K|)}} \frac{|I|^{3/2}|J|^{3/2}}{|K|^2{\textnormal{rd}}(I,K)^2} \\ &= \sum_{k,l=0}^\infty \sum_{\substack{|m|\ge 2 \\ |n|\le 1}} 2^k2^l \frac{2^{-3k/2}2^{-3l/2}|K|} {m^2}\lesssim |K|.\end{aligned}$$ #### Case $(v)$: $I\in {\mathcal{D}}\big(K+m|K|\big)$, $|m|\le 1$, $J\in {\mathcal{D}}\big(K+n|K|\big)$, $|n|=1$. By symmetry, it suffices to handle the case $n=1$. We split this case into subcases. ##### Subcase $(v1)$: $I=J\in {\mathcal{D}}\big(K+|K|\big)$. By \eqref{Eq:Localization} and \eqref{Eq:y-Estimate(ii)} $$\begin{aligned} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \lesssim \int_{-\infty}^{R_K}\frac{1}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)^2}\, dx.\end{aligned}$$ By computation $$\begin{aligned} \int_{-\infty}^{R_K}\frac{1}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)^2}\, dx &= \frac{|I|}{2}\bigg(\arctan \Big(\frac{|I|}{C_I-R_K}\Big)-\frac{|I|(C_I-R_K)}{|I|^2+(C_I-R_K)^2}\bigg) \\ &\lesssim \frac{|I|^2}{C_I-R_K}.\end{aligned}$$ We parametrize $I\in{\mathcal{D}}_k\big(K+|K|\big)$ by $C_I=R_K+(m+\frac{1}{2})|I|$, where $0\le m\le 2^k-1$.
$$\begin{aligned} \sum_{I\in{\mathcal{D}}(K+|K|)}\int_{\tilde Q_K}|f_I(z)|^2y\, dA(z) &\lesssim \sum_{k=0}^\infty \sum_{I\in{\mathcal{D}}_k(K+|K|)} \frac{|I|^2}{C_I-R_K} \\ &\lesssim \sum_{k=0}^\infty \sum_{m=0}^{2^k-1} 2^{-k}|K|\frac{1}{m+\frac{1}{2}} \\ &\lesssim \sum_{k=0}^\infty \log(2^k)2^{-k}|K| \lesssim |K|.\end{aligned}$$ ##### Subcase $(v2)$: $I\in {\mathcal{D}}(J)$, $I\ne J$. By and $$\label{Eq:HybridEstimate} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \lesssim \frac{|I|^{1/2}}{|J|^{1/2}}\int_{-\infty}^{R_K}\frac{1}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)\big(1+\big(\frac{x-C_J}{|J|}\big)^2\big)}\, dx.$$ By computation $$\label{Eq:x-Estimate(v2)} \int_{-\infty}^{R_K}\frac{1}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)\big(1+\big(\frac{x-C_J}{|J|}\big)^2\big)}\, dx=\sum_{k=1}^3s_k,$$ where $$\begin{aligned} s_1 &= \frac{|I||J|(|J|^2-|I|^2)\Big(|J|\arctan \big(\frac{|I|}{C_I-R_K}\big)-|I|\arctan \big(\frac{|J|}{C_J-R_K}\big)\Big)}{2(|I|^2+|J|^2)(C_I-C_J)^2+(C_I-C_J)^4+(|I|^2-|J|^2)^2}, \\ s_2 &= \frac{|I||J|(C_I-C_J)^2\Big(|J|\arctan \big(\frac{|I|}{C_I-R_K}\big)+|I|\arctan \big(\frac{|J|}{C_J-R_K}\big)\Big)}{2(|I|^2+|J|^2)(C_I-C_J)^2+(C_I-C_J)^4+(|I|^2-|J|^2)^2}, \\ s_3 &= \frac{(C_I-C_J)|I|^2|J|^2\log\big(\frac{|J|^2+(C_J-R_K)^2}{|I|^2+(C_I-R_K)^2}\big)}{2(|I|^2+|J|^2)(C_I-C_J)^2+(C_I-C_J)^4+(|I|^2-|J|^2)^2}.\end{aligned}$$ Approximate the denominators of $s_1$ through $s_3$ by $|J|^4$, and $C_I-C_J$ appearing in the numerators by $|J|$. Then $$\begin{aligned} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \lesssim {} & \frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{C_I-R_K} \\ &+ \frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{C_J-R_K} \\ & + \frac{|I|^{5/2}}{|J|^{3/2}}|\log\Big(\frac{|J|^2+(C_J-R_K)^2}{|I|^2+(C_I-R_K)^2}\Big)|.\end{aligned}$$ We begin by summing the terms $\frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{C_I-R_K}$. If $J$ is adjacent to $K$, then we parametrize $I\in{\mathcal{D}}_k(J)$ by $C_I=R_K+(m+\frac{1}{2})|I|$, where $0\le m\le 2^k-1$. 
$$\begin{aligned} \sum_{\substack{I\in{\mathcal{D}}(J)\\ I\ne J}}\frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{C_I-R_K} = \sum_{k=1}^\infty \sum_{I\in{\mathcal{D}}_k(J)}\frac{2^{-5k/2}|J|^2}{C_I-R_K} \\ = |J|\sum_{k=1}^\infty 2^{-3k/2}\sum_{m=0}^{2^k-1}\frac{1}{m+\frac{1}{2}}\lesssim |J|\sum_{k=1}^\infty k2^{-3k/2}\lesssim |J|.\end{aligned}$$ Clearly the sum of $|J|$ for $J$ adjacent to $K$ is controlled by $|K|$. If $J$ is not adjacent to $K$, then $\frac{1}{C_I-R_K}\approx \frac{1}{C_J-R_K}$. We parametrize $J\in {\mathcal{D}}_l\big(K+|K|\big)$, with ${\textnormal{dist}}(K,J)>0$, by $C_J=R_K+(n+\frac{1}{2})|J|$, where $0\le n\le 2^l-1$. $$\begin{aligned} \sum_{\substack{J\in{\mathcal{D}}(K+|K|)\\ {\textnormal{dist}}(K,J)>0\\ I\in{\mathcal{D}}(J), I\ne J}}\frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{C_J-R_K} = |K|\sum_{l=0}^\infty \sum_{k=1}^\infty \sum_{n=0}^{2^l-1}2^{-3k/2}2^{-l}\frac{1}{n+\frac{1}{2}}\lesssim |K|.\end{aligned}$$ Summing the terms $\frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{C_J-R_K}$ is similar. In order to control the logarithmic terms, note that $$\begin{aligned} |\log\Big(\frac{|J|^2+(C_J-R_K)^2}{|I|^2+(C_I-R_K)^2}\Big)| \lesssim \left\{ \begin{array}{rl} \log\big(\frac{|J|}{|I|}\big) & \text{if $J$ is adjacent to $K$,}\\ \frac{|J|^2}{(C_J-R_K)^2} & \text{if $J$ is not adjacent to $K$.} \end{array} \right.\end{aligned}$$ The terms may now be summed as before. ##### Subcase $(v3)$: $|I|\le|J|$, $I\in{\mathcal{D}}\big(K+|K|\big)\setminus{\mathcal{D}}(J)$.
Again we use \eqref{Eq:HybridEstimate} and \eqref{Eq:x-Estimate(v2)}, but we approximate the denominators of $s_1$ through $s_3$ by $(C_I-C_J)^4$ instead of $|J|^4$: $$\begin{aligned} |\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \lesssim {} & \frac{|I|^{5/2}|J|^{7/2}}{(C_I-C_J)^4}\frac{1}{C_I-R_K}\label{iii'1} \\ &+ \frac{|I|^{5/2}|J|^{7/2}}{(C_I-C_J)^4}\frac{1}{C_J-R_K}\label{iii'2} \\ &+ \frac{|I|^{5/2}|J|^{3/2}}{(C_I-C_J)^2}\frac{1}{C_I-R_K}\label{iii'3} \\ &+ \frac{|I|^{5/2}|J|^{3/2}}{(C_I-C_J)^2}\frac{1}{C_J-R_K}\label{iii'4} \\ &+ \frac{|I|^{5/2}|J|^{3/2}}{|C_I-C_J|^3}|\log \Big(\frac{|J|^2+(C_J-R_K)^2}{|I|^2+(C_I-R_K)^2}\Big)|\label{iii'5}.\end{aligned}$$ We begin by summing the right-hand side of \eqref{iii'1}. Let $J_0=[R_K,R_K+|J|)$. We use the parametrization $J\in{\mathcal{D}}_l\big(K+|K|\big)$, $C_J=R_K+(n+\frac{1}{2})|J|$, $0\le n\le 2^l-1$, $I\in{\mathcal{D}}_k\big(J_0+m|J|\big)$, $k\ge 0$, $0\le m\le 2^l-1$, $J_0+m|J|\ne J$. If $m\ge 1$, then $$\frac{|I|^{5/2}|J|^{7/2}}{(C_I-C_J)^4}\frac{1}{C_I-R_K} \approx\frac{2^{-5k/2}2^{-l}|K|}{m(n-m)^4}.$$ If $m=0$, then we use the additional parametrization $C_I=R_K+(p+\frac{1}{2})|I|$, $0\le p\le 2^k-1$, and $$\frac{|I|^{5/2}|J|^{7/2}}{(C_I-C_J)^4}\frac{1}{C_I-R_K} \approx \frac{2^{-3k/2}2^{-2l}|K|}{(p+\frac{1}{2})n^4}.$$ Summing, $$\begin{aligned} &\sum_{\substack{J\in{\mathcal{D}}(K+|K|) \\ |I|\le |J| \\I\in{\mathcal{D}}(K+|K|)\setminus{\mathcal{D}}(J)}}\frac{|I|^{5/2}|J|^{7/2}}{(C_I-C_J)^4}\frac{1}{C_I-R_K} \\ &\lesssim \sum_{k,l=0}^\infty\bigg( \sum_{p=0}^{2^k-1}\sum_{n=1}^{2^l-1}\frac{2^{-3k/2}2^{-2l}|K|}{(p+\frac{1}{2})n^4} + \sum_{m=1}^{2^l-1}\sum_{\substack{n=0\\ n\ne m}}^{2^l}\frac{2^{-3k/2}2^{-l}|K|}{m(n-m)^4}\bigg) \lesssim |K|.\end{aligned}$$ Summing the terms in \eqref{iii'2}, \eqref{iii'3} and \eqref{iii'4} is similar.
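The convergence mechanism in the last display can be checked numerically: the inner double sum grows only logarithmically in the range $2^l$ of the indices, so the geometrically weighted sum over scales converges. A stdlib-only sketch (the cut-offs are illustrative choices, not from the proof):

```python
def double_sum(M):
    """sum_{m=1}^{M-1} sum_{n=0, n != m}^{M} of 1/(m (n-m)^4)."""
    return sum(1.0 / (m * (n - m) ** 4)
               for m in range(1, M) for n in range(M + 1) if n != m)

# the inner sum grows only like l for M = 2^l, so the 2^{-l}-weighted
# series over scales converges -- the mechanism behind the bound by |K|
weighted = sum(2.0 ** (-l) * double_sum(2 ** l) for l in range(1, 10))
assert weighted < 20.0

# convergence: the l >= 7 tail already contributes little
tail = sum(2.0 ** (-l) * double_sum(2 ** l) for l in range(7, 10))
assert tail < 0.5
```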
For the logarithmic terms \eqref{iii'5}, we have that $$\frac{|I|^{5/2}|J|^{3/2}}{|C_I-C_J|^3}|\log \Big(\frac{|J|^2+(C_J-R_K)^2}{|I|^2+(C_I-R_K)^2}\Big)| \approx \frac{|I|^{5/2}}{|J|^{3/2}}\frac{|\log\big(\frac{|J|^2+(n+\frac{1}{2})^2|J|^2}{|I|^2+(C_I-R_K)^2}\big)|}{|m-n|^3}.$$ If $m=0$, then $$|\log\Big(\frac{|J|^2+(n+\frac{1}{2})^2|J|^2}{|I|^2+(C_I-R_K)^2}\Big)|\le \log \Big(\big(1+(2^l-\frac{1}{2})^2\big)\frac{|J|^2}{|I|^2}\Big)\lesssim (1+k)(1+l).$$ Similarly, if $n=0$, then $$|\log\Big(\frac{|J|^2+(n+\frac{1}{2})^2|J|^2}{|I|^2+(C_I-R_K)^2}\Big)| = | \log \Big(\frac{\frac{5}{4}|J|^2}{|I|^2+(C_I-R_K)^2}\Big)|\lesssim (1+k)(1+l).$$ If $m,n\ge 1$, then $$1\le \frac{|J|^2+(n+\frac{1}{2})^2|J|^2}{|I|^2+(C_I-R_K)^2}\le \Big(\frac{n+1}{m}\Big)^2,$$ whenever $m\le n-1$, and $$\Big(\frac{n}{m+1}\Big)^2\le \frac{|J|^2+(n+\frac{1}{2})^2|J|^2}{|I|^2+(C_I-R_K)^2}\le 1,$$ whenever $m\ge n+1$. It follows that $$|\log\Big(\frac{|J|^2+(n+\frac{1}{2})^2|J|^2}{|I|^2+(C_I-R_K)^2}\Big)| \lesssim \frac{|n-m|}{\min\{m,n\}}.$$ We now compute the sum $$\begin{aligned} &\sum_{\substack{J\in{\mathcal{D}}(K+|K|) \\ |I|\le |J| \\I\in{\mathcal{D}}(K+|K|)\setminus{\mathcal{D}}(J)}}\frac{|I|^{5/2}|J|^{3/2}}{|C_I-C_J|^3}|\log \Big(\frac{|J|^2+(C_J-R_K)^2}{|I|^2+(C_I-R_K)^2}\Big)| \\ &\lesssim |K|\sum_{k,l=0}^\infty 2^{-3k/2-l}\bigg(\sum_{m=1}^{2^l-1}\frac{(1+k)(1+l)}{m^3}+\sum_{\substack{m,n=1\\m\ne n}}^{2^{l}-1}\frac{1}{|n-m|^2\min\{m,n\}}\bigg) \\ &\lesssim |K|\sum_{k,l=0}^\infty 2^{-3k/2-l}(1+k)(1+l)\lesssim |K|.\end{aligned}$$ ##### Subcase $(v4)$: $J\in{\mathcal{D}}\big(K+|K|\big)$, $I\in{\mathcal{D}}\big(K+m|K|\big)$, $m\in\{-1,0\}$.
This case is similar to $(v3)$, but when $C_I<R_K$, we need to replace \eqref{Eq:x-Estimate(v2)} with $$\int_{-\infty}^{R_K}\frac{1}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)\big(1+\big(\frac{x-C_J}{|J|}\big)^2\big)}\, dx=\sum_{k=1}^4s_k,$$ with $s_1,s_2,s_3$ as before, and $$s_4 = \pi \frac{|I||J|^2(|J|^2-|I|^2+(C_I-C_J)^2)}{2(|I|^2+|J|^2)(C_I-C_J)^2+(C_I-C_J)^4+(|I|^2-|J|^2)^2}.$$ The terms $s_1,s_2,s_3$ are summed as in subcase $(v3)$. To sum the terms $s_4$, we parametrize $J\in{\mathcal{D}}_l\big(K+|K|\big)$, $l\ge 0$, by $C_J=R_K+(n+\frac{1}{2})|J|$, $0\le n\le 2^{l}-1$, and let $I\in {\mathcal{D}}_k\big(J_0-p|J|\big)$, $k\ge 0$, $1\le p\le 2^{l+1}$. $$\begin{aligned} \sum_{J\in{\mathcal{D}}(K+|K|)}\sum_{m=-1}^0\sum_{\substack{I\in{\mathcal{D}}(K+m|K|)\\ |I|\le|J|}}s_4 &\lesssim \sum_{k,l=0}^\infty \sum_{p=1}^{2^{l+1}}\sum_{\substack{J\in{\mathcal{D}}_l(K+|K|)\\ I\in {\mathcal{D}}_k(J_0-p|J|)}}\frac{|I|^{3/2}|J|^{3/2}}{(C_I-C_J)^2} \\ &\lesssim \sum_{k,l=0}^\infty \sum_{p=1}^{2^{l+1}} \sum_{n=0}^{2^l-1}2^k\frac{2^{-3k/2}2^{-3l/2}|K|}{(n+p)^2}\lesssim |K|.\end{aligned}$$ #### Case $(vi)$: $J\in{\mathcal{D}}(K)$, $I\in{\mathcal{D}}\big(K+m|K|\big)$, $|m|=1$. This case is similar to case $(v)$. #### Case $(vii)$: $I,J\in{\mathcal{D}}(K)$, $I\ne J$.
By Lemma \[Lemma:Littlewood-Paley\] we have that $$\begin{aligned} &\sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}|\int_{\tilde Q_K}f_I(z)\overline{f_J(z)}y\, dA(z)| \nonumber \\ = {} & \sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}|\int_{\tilde Q_K^c}f_I(z)\overline{f_J(z)}y \, dA(z)| \nonumber \\ \le {} & \sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}|\int_{x\notin K}\int_{y=0}^{\infty}f_I(z)\overline{f_J(z)}y\, dA(z)| \label{v1} \\ &+ \sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}|\int_{x\in K}\int_{y=|K|}^{\infty}f_I(z)\overline{f_J(z)}y\, dA(z)|.\label{v2}\end{aligned}$$ The sum \eqref{v1} is rewritten as $$\begin{aligned} &\sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}|\int_{x\notin K}\int_{y=0}^{\infty}f_I(z)\overline{f_J(z)}y\, dA(z)| \\ &\le \sum_{n\ne 0}\sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}|\int_{x\in K+n|K|}\int_{y=0}^{\infty}f_I(z)\overline{f_J(z)}y\, dA(z)| \\ &= \sum_{n\ne 0}\sum_{\substack{I,J\in{\mathcal{D}}(K+n|K|) \\ I\ne J}}|\int_{x\in K}\int_{y=0}^{\infty}f_I(z)\overline{f_J(z)}y\, dA(z)| \\ &\le \sum_{n\ne 0}\sum_{m\in{\mathbb{Z}}}\sum_{\substack{J\in{\mathcal{D}}(K+n|K|) \\ I\in{\mathcal{D}}(K+m|K|)}}\int_{x\in K}\int_{y=0}^{\infty}|f_I(z)\overline{f_J(z)}|y\, dA(z) \lesssim |K|,\end{aligned}$$ as follows from the cases $(i)-(vi)$.
The terms in \eqref{v2} are approximated by $$\begin{aligned} &\int_{x\in {\mathbb{R}}}\int_{y=|K|}^{\infty}\frac{\big(1+\big(\frac{y}{|I|}\big)^2\big)\big(1+\big(\frac{y}{|J|}\big)^2\big)e^{-\frac{2\pi y}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}}{\big(1+\big(\frac{x-C_I}{|I|}\big)^2\big)\big(1+\big(\frac{x-C_J}{|J|}\big)^2\big)}\, dydx \\ &\lesssim \frac{|I|^{3/2}}{|J|^{1/2}}e^{-\frac{\pi |K|}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}\int_{y=0}^\infty \Big(1+\Big(\frac{y}{|I|}\Big)^2\Big)\Big(1+\Big(\frac{y}{|J|}\Big)^2\Big)e^{-\frac{\pi y}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}\, dy \\ &\lesssim \frac{|I|^{3/2}}{|J|^{1/2}}e^{-\frac{\pi |K|}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}.\end{aligned}$$ Summing, one now gets that $$\begin{aligned} \sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}}&\int_{x\in K}\int_{y=|K|}^{\infty}|f_I(z)\overline{f_J(z)}|y\, dA(z) \\ &\lesssim \sum_{k,l=0}^\infty 2^k2^l2^{-3k/2}2^{-l}e^{-\frac{\pi}{3}\left(2^k+2^l\right)}|K|\lesssim |K|.\end{aligned}$$ We have proved that the right-hand side above is controlled by $|K|$. This leaves us with . We have already done most of the computational work, except that we gain an additional factor $y$. In case $(i)$ we get instead $$\begin{aligned} \int_{y=0}^{2|K|}\Big(1+\Big(\frac{y}{|I|}\Big)^2\Big)\Big(1+\Big(\frac{y}{|J|}\Big)^2\Big)e^{-\frac{2\pi y}{3}\left(\frac{1}{|I|}+\frac{1}{|J|}\right)}y\, dy \lesssim |K|^2.\end{aligned}$$ This is an additional factor $|K|$ which does not give a worse estimate. In cases $(ii)-(vi)$ we instead gain an extra factor $\frac{|I||J|}{|I|+|J|}\approx |I|\le |K|$, so these cases are also fine.
Only in case $(vii)$ do we need to do a little more work: $$\begin{aligned} \sum_{\substack{I,J\in{\mathcal{D}}(K) \\ I\ne J}} \int_{\tilde Q_K}|f_I(z)\overline{f_J(z)}|y^2\, dA(z) &\lesssim \sum_{\substack{I,J\in{\mathcal{D}}(K)}} \frac{|I|^{5/2}}{|J|^{1/2}}\frac{1}{1+{\textnormal{rd}}(I,J)^2} \\ &\le \sum_{k,l=0}^\infty \sum_{n\in{\mathbb{Z}}}2^k2^l2^{-5k/2}2^{-2l}\frac{|K|^2}{1+n^2} \lesssim |K|.\end{aligned}$$ This completes the proof of Lemma \[Lemma:OffDiagonalTerms\], and thus of Theorem \[Theorem:Main\].
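As a closing remark, the summation mechanism that recurs throughout the case analysis — inner harmonic sums of size roughly $k$ beaten by geometric weights $2^{-k}$ — is easy to sanity-check numerically; a minimal stdlib-only sketch (the cut-offs are illustrative):

```python
import math

def inner_harmonic(k):
    """sum_{m=0}^{2^k - 1} of 1/(m + 1/2); grows like k*log(2) + O(1)."""
    return sum(1.0 / (m + 0.5) for m in range(2 ** k))

# the inner sum is comparable to log(2^k) ...
for k in range(1, 15):
    assert inner_harmonic(k) <= 2.0 * (k + 1) * math.log(2) + 2.0

# ... so geometric weights win and the series converges, as in the
# bounds of the form "<= |K|" above (here with |K| normalized to 1)
total = sum(2.0 ** (-k) * inner_harmonic(k) for k in range(19))
assert total < 10.0
```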
--- abstract: 'Diffusion on a diluted hypercube has been proposed as a model for glassy relaxation and is an example of the more general class of stochastic processes on graphs. In this article we determine numerically through large scale simulations the eigenvalue spectra for this stochastic process and calculate explicitly the time evolution for the autocorrelation function and for the return probability, all at criticality, with hypercube dimensions $N$ up to $N=28$. We show that at long times both relaxation functions can be described by stretched exponentials with exponent $1/3$ and a characteristic relaxation time which grows exponentially with dimension $N$. The numerical eigenvalue spectra are consistent with analytic predictions for a generic sparse network model.' author: - 'N. Lemke' - 'Ian A. Campbell' date: 'Received: date / Revised version: date' title: Stretched exponential behavior and random walks on diluted hypercubic lattices --- Introduction {#introduction .unnumbered} ============ In 1854 R. Kohlrausch used a phenomenological expression $$\label{kohl} q_{K}(t)=\exp(-(t/\tau)^\beta)$$ to parametrize the non-exponential decay of the electric polarization of Leyden jars (primitive capacitors)[@RK]; his son F. Kohlrausch later used the same expression to analyse creep in galvanometer suspensions [@FK]. A century later, in 1951 Weibull introduced [@weibull] the closely related Weibull function; this survival probability function [@eliazar] which is widely used in the engineering literature is strictly of the Kohlrausch form, Eqn. (\[kohl\]). In 1970 Williams and Watts re-discovered the Kohlrausch function in the context of dielectric relaxation[@WW]. Under the name of “stretched exponential” [@chamberlin] the KWW (Kohlrausch-Williams-Watts) function has become ubiquitous in phenomenological analyses of non-exponential relaxation data, experimental or numerical. 
In particular the KWW form was used by Ogielski in a phenomenological fit to the decay of the autocorrelation function at equilibrium for a $3d$ Ising spin glass model [@ogi1985]. Many arguments have been given as to why, under certain assumptions, specific systems should show KWW relaxation [@phil; @havl; @rasa; @dons1; @dons2; @gras; @gotz; @ian1; @ian2; @ian3], but there have always been lingering suspicions that for most cases the KWW expression is nothing more than a convenient fitting function of no fundamental significance. It was conjectured [@ian1] that KWW relaxation is the signature of a complex configuration space. Thus, from the argument which follows, it was suggested that random walks on a diluted hypercube (a hypercube with a fraction $p$ of vertices occupied at random) near the critical concentration for percolation $p_{c}$ [@erdos1979] would lead to an autocorrelation function decay of the form $q(t) \sim \exp[-(t/\tau)^{\beta}]$, with a specific value of the exponent, $\beta = 1/3$. For random walks at percolation threshold in a randomly occupied Euclidean (flat) space of dimension $d$ such as $\mathbb{Z}^{d}$, the familiar Fickian diffusion law $\langle R^2\rangle \sim t$ is replaced by a sub-linear diffusion $\langle R^2\rangle \sim t^{\beta_{d}}$, with $\beta_{d} \equiv 1/3$ for $d\geq 6$ [@alexander:82]. Random walks on the surface of a full \[hyper\]sphere $\mathbb{S}_{d-1}$ in any dimension $d$ are characterized by the generic law $\langle \cos(\theta)\rangle = \exp(-(t/\tau))$ where $\theta$ denotes the generalized angular displacement of the walker [@debye; @caillol]. It was argued [@ian1] that random walks on percolation clusters at threshold inscribed on \[hyper\]spheres would be characterized by relaxation of the form $\langle \cos(\theta)\rangle = \exp(-(t/\tau)^{\beta_{d}})$ with the same exponents $\beta_{d}$ as in the corresponding Euclidean space. This was demonstrated numerically for $d = 3$ to $8$ [@jund].
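The generic law $\langle\cos\theta\rangle=\exp(-t/\tau)$ on the full sphere follows from the exact one-step identity $\mathbb{E}[v'\cdot v_0]=\cos\delta\,(v\cdot v_0)$ for a fixed step angle $\delta$ taken in a uniformly random tangent direction, so that $\langle\cos\theta(t)\rangle=(\cos\delta)^t$ with $\tau^{-1}=-\ln\cos\delta$. A minimal numerical check of the identity on $\mathbb{S}_2$ (the vectors and the step angle below are arbitrary illustrative choices, not from the cited works):

```python
import math

# current position v on the unit sphere, reference direction v0,
# and an orthonormal tangent frame (e1, e2) at v
v  = (0.6, 0.0, 0.8)
v0 = (0.0, 0.0, 1.0)
e1 = (0.8, 0.0, -0.6)
e2 = (0.0, 1.0, 0.0)
delta = 0.3                      # fixed angular step size

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# average v'.v0 over M equally spaced tangent directions; the sin(delta)
# part averages to zero, leaving cos(delta)*(v.v0) exactly
M = 360
avg = 0.0
for k in range(M):
    phi = 2 * math.pi * k / M
    vp = tuple(math.cos(delta) * v[i]
               + math.sin(delta) * (math.cos(phi) * e1[i] + math.sin(phi) * e2[i])
               for i in range(3))
    avg += dot(vp, v0) / M

assert abs(avg - math.cos(delta) * dot(v, v0)) < 1e-12
```

Iterating the identity gives the pure exponential decay quoted above; on a diluted cluster inscribed in the sphere, the blocked moves are what stretch this exponential.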
A hypercube being topologically equivalent to a hypersphere, for random walks on a diluted hypercube at threshold one then expects stretched exponential relaxation with exponent $\beta = 1/3$. The diluted hypercube at threshold can alternatively be considered as a specific example of a sparse graph. Remarkably, analytic expressions for diffusion on general sparse graphs [@bray1988; @samukhin2008], derived from a quite different line of argument, also lead to stretched exponential relaxation expressions with the same specific value $\beta=1/3$ for the exponent. Here we present numerical data for random walks on the diluted hypercube at threshold up to dimension $N=28$ which are consistent with these conclusions. We argue that the KWW relaxation observed phenomenologically in numerous complex systems just above their respective critical temperatures is not an artifact, but is the signature of a universal form of coarse grained configuration space morphology which precedes a glass transition. Laplace transforms and random networks {#laplace-transforms-and-random-networks .unnumbered} ====================================== Quite generally, any relaxation function $q(t)$ can equivalently be characterized by a relaxation mode density (or eigenvalue density) function $\rho(s)$, its inverse Laplace transform, defined by: $$\label{eq:relaxationfunction} q(t) \equiv \int_0^\infty \rho(s)e^{-s t}ds$$ with the normalization condition $$\int_0^\infty \rho(s)ds =1.$$ In model systems it can be possible to establish analytically or numerically the distribution $\rho(s)$, from which $q(t)$ follows directly via (\[eq:relaxationfunction\]). The inverse Laplace transform of a numerical or experimental $q_{K}(t)$ to obtain $\rho(s)$ is much more difficult unless $q(t)$ is known to very high precision over a wide range of $t$. This is an ill-conditioned problem as different $\rho(s)$ distributions can lead to almost indistinguishable $q(t)$.
Pollard [@pollard] (see Berberan-Santos [@berberan2008]) provided an exact inversion of the pure stretched exponential relaxation function $q_K(t)$, Eqn. (\[kohl\]): $$\label{eq:laplace} \rho_{K,\beta}(s)=\frac{\tau}{\pi}\int_0^\infty \exp\left[ -u^\beta \cos\left( \frac{\beta \pi}{2} \right) \right] \cos\left[u^\beta\sin\left( \frac{\beta \pi}{2} \right) \right] \cos(s\tau u) \;du$$ For $\beta < 1$, $\rho_{K,\beta}(s)$ can be expressed in terms of elementary functions only for $\beta = 1/2$ [@pollard]; in that case $$\label{eq:half} \rho_{K,1/2}(s)= \frac{\tau}{2\pi^{1/2}(s\tau)^{3/2}}\exp\left(-\frac{1}{4s\tau}\right)$$ To a good approximation, for general $\beta$ the large $s$ (short time) limit takes the form $\rho_{K,\beta} \sim s^{-(1+\beta)}$ and the small $s$ (long time) limit the form $\rho_{K,\beta}(s) \sim \exp[-c\, s^{-\beta/(1-\beta)}]$, up to a power-law prefactor, with $c$ a $\beta$-dependent constant. It should be kept in mind that at short times observed relaxation functions usually deviate from the “asymptotic” form. Also at very long times for finite sized systems the relaxation is controlled by the smallest non-zero value of $s$, $s_{1}$. For time $t > s_{1}^{-1}$ the relaxation will tend to a pure exponential, $q(t) \sim \exp[-t s_{1}]$, but for large systems this condition corresponds to extremely long times and we will not consider it. What we are interested in is to establish the form of the relaxation in the regime where the mode distribution is no longer affected by short time effects and where $\rho(s)$ can be considered continuous. Random networks {#random-networks .unnumbered} =============== Random walks on the diluted $N$-simplex or hypertetrahedron, which is an Erdös-Rényi graph having dead ends and vertices with two connections, were studied theoretically by Bray and Rodgers [@bray1988] using Replica theory. They showed that in this model the return function $p_{ret}(t)$, the probability that the walker will have returned to the origin after $t$ steps, behaves like a stretched exponential with exponent $1/3$.
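Returning briefly to the $\beta=1/2$ inversion: the closed form (\[eq:half\]) can be checked directly by numerically Laplace transforming it, which should reproduce $\exp(-\sqrt{t/\tau})$. A stdlib-only sketch with $\tau=1$ (the grid limits and step count are ad hoc numerical choices):

```python
import math

def rho_half(s, tau=1.0):
    """Pollard's closed form for beta = 1/2, Eqn. (eq:half)."""
    return tau / (2.0 * math.sqrt(math.pi) * (s * tau) ** 1.5) \
        * math.exp(-1.0 / (4.0 * s * tau))

def laplace(t, n=20000, u_min=-25.0, u_max=5.0):
    """Trapezoidal rule for q(t) = int_0^inf rho(s) e^{-st} ds, on s = e^u."""
    du = (u_max - u_min) / n
    total = 0.0
    for i in range(n + 1):
        s = math.exp(u_min + i * du)
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho_half(s) * math.exp(-s * t) * s * du  # ds = s du
    return total

# the transform of rho_{K,1/2} recovers the stretched exponential exactly
for t in (1.0, 4.0, 9.0):
    assert abs(laplace(t) - math.exp(-math.sqrt(t))) < 1e-5
```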
Samukhin [*et al*]{} [@samukhin2008] have made analytic studies of random walks and relaxation processes on uncorrelated Random Networks. They considered a stochastic process governed by the Laplacian operator occurring on a random graph with $N^{*}$ nodes, taking the limit as $N^{*} \to \infty$. They find that the determining parameter in this problem is the minimum degree $q_{m}$ of vertices (i.e. the minimum number of neighbors to any given vertex). For $q_m = 2$, meaning that the network is “sparse”, the graph tends to a random Bethe lattice in which almost all finite subgraphs are trees, i.e., they contain almost no closed loops. In the present context the essential statement of Samukhin [*et al*]{} [@samukhin2008] is that when $q_m = 2$ the mode density function $\rho_{S}(s)$ for this very general model can be approximated by $$\label{eq:laplacian} \rho_{S}(s) = s^{-4/3}\exp(-a/\sqrt{s})$$ where $$\label{eq:defa} a=\sqrt{\frac{4\tau^{-1}}{3}}$$ with a similar expression for $q_{m} = 1$ (graphs with dead ends). Then for a graph with $N^{*}$ vertices the asymptotics at $t > \ln N^{*}$ for the probability of return to the starting point at time $t$ during a random walk on the network (the “autocorrelator” [@samukhin2008]) will be $$\label{pretnet} p_{ret,S}(t) \sim t^{\eta}\exp[-3(a/2)^{2/3}t^{1/3}],$$ a stretched exponential having exponent $1/3$, multiplied by a mildly time dependent prefactor ($\eta$ is small). This limit should be observable if the network size satisfies $(\ln N^{*})^{2/3} \gg 1$. Hypercube model =============== We have already addressed the hypercube problem numerically through Monte Carlo techniques [@lemke1996] and through the explicit solution of Master equations [@lemke2000; @almeida2000].
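The step from the mode density (\[eq:laplacian\]) to the stretched exponential (\[pretnet\]) is a saddle-point estimate, and it can be verified numerically before turning to the hypercube itself: Laplace transforming $\rho_{S}$ and extracting the slope of $-\ln p(t)$ against $t^{1/3}$ at large $t$ should give $3(a/2)^{2/3}$, the prefactor $t^{\eta}$ contributing only slowly varying corrections. A sketch with the arbitrary illustrative choice $a=1$:

```python
import math

A = 1.0  # assumed value of the constant a, purely for illustration

def rho_s(s):
    return s ** (-4.0 / 3.0) * math.exp(-A / math.sqrt(s))

def p(t, n=6000, u_min=-18.0, u_max=3.0):
    """Trapezoidal Laplace transform of rho_S on a log grid s = e^u."""
    du = (u_max - u_min) / n
    total = 0.0
    for i in range(n + 1):
        s = math.exp(u_min + i * du)
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho_s(s) * math.exp(-s * t) * s * du
    return total

# slope of -ln p(t) versus t^{1/3} between two large times should approach
# the saddle-point value 3*(a/2)^{2/3}
t1, t2 = 1.0e5, 1.0e6
slope = -(math.log(p(t2)) - math.log(p(t1))) / (t2 ** (1 / 3) - t1 ** (1 / 3))
assert abs(slope - 3.0 * (A / 2.0) ** (2.0 / 3.0)) < 0.05
```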
In this paper we extend these results by investigating the time evolution for the autocorrelation function $q(N,t)$, the return probability $p_{ret}(N,t)$, and the eigenvalue spectrum $\rho(N,s)$ for diffusion on diluted hypercubes of dimension $N$ near the critical occupation probability $p_c(N)$, for $N$ up to $28$. Consider a hypercube (or n-cube) in \[high\] dimension $N$, $\mathbb{Q}_{N}$, with a fraction $p$ of its $2^N$ vertices occupied at random. It is well established [@erdos1979; @bollobas; @borgs] that there is a critical threshold at $p_{c}(N) \sim 1/N$. For $p > p_c(N)$ the occupied vertices having one or more occupied vertices as neighbors make up a giant spanning cluster; for $p<p_c$ there exist only small clusters (each with fewer than $N$ elements). By analogy with the equivalent situation in randomly occupied Euclidean space we will refer to $p_{c}$ as the “percolation” threshold. Gaunt and Brak [@gaunt1984] predict that the dependency of the critical site percolation concentration $p_c$ on a hypercubic lattice of dimension $d$, $\mathbb{Z}^d$, or on a hypercube of dimension $N$, $\mathbb{Q}_N$, is given to order $4$ by: $$p_c(\sigma) =\sigma+\frac{3}{2}\sigma^2+\frac{15}{4}\sigma^3+\frac{83}{4}\sigma^4\ldots \label{eq:pc}$$ where $\sigma(d)=1/(2d -1)$ for the hypercubic lattice and $\sigma(N)=1/(N-1)$ for the hypercube [@gaunt1976]. Although the terms in this expression are expected to be exact, the demonstration is not entirely rigorous [@gaunt1984], and the series is obviously truncated. Grassberger [@grassberger2003] tested equation (\[eq:pc\]) through large scale Monte Carlo simulations on $\mathbb{Z}^d$ and verified that for $d > 10$ it represents the numerically determined $p_{c}(d)$ to within a small correction term. We will work with samples having vertex concentrations $p(N)$ equal to the values $p_c(N)$ given by the truncated series equation (\[eq:pc\]).
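The truncated series (\[eq:pc\]) is straightforward to evaluate for the hypercube, with $\sigma=1/(N-1)$; a minimal sketch (the $\sigma^3$ coefficient is taken as $15/4$, the standard value in the high-dimensional site percolation series):

```python
def p_c(N):
    """Truncated series (eq:pc) for the hypercube Q_N, sigma = 1/(N-1)."""
    s = 1.0 / (N - 1)
    return s + 1.5 * s ** 2 + (15.0 / 4.0) * s ** 3 + (83.0 / 4.0) * s ** 4

assert 0.039 < p_c(28) < 0.040               # the concentration used at N = 28
assert all(p_c(n + 2) < p_c(n) for n in range(10, 28, 2))
assert abs(p_c(1000) * 1000 - 1.0) < 0.01    # p_c(N) ~ 1/N at large N
```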
For different samples $k$ the individual critical values $p_c(k)$ will in fact be distributed about the average value [@borgs]. For $p > p_c(N)$ we can define a random walk along edges on the giant cluster. Start at any vertex $i$ on the giant cluster. Choose at random a vertex $j$ on the hypercube, near neighbor to $i$. If the vertex $j$ is also on the giant cluster and so accessible, move to $j$; otherwise the walker remains one time step longer at the vertex $i$. This evolution rule is chosen to mimic Monte Carlo simulations using Metropolis dynamics. We can compare the autocorrelation function $q(N,t)$ obtained from this procedure ($q(N,t)$ is defined in Eq. (\[eq:correlation\]) below) to the time dependent autocorrelation $\langle S_i(t)\cdot S_i(0)\rangle$ measured in thermodynamic models for systems of Ising spins $S_{i}$ [@ogi1985] and even to experimental magnetization decay results. From a theoretical point of view it is often more convenient to investigate the “return probability” $p_{ret}(t)$ that is basically the probability of finding the walker at the origin of the system after $t$ steps ($p_{ret}(N,t)$ is defined in Eq. (\[eq:pret\]) below). For any network $p_{ret}(t)$ can be defined, while $q(t)$ can be defined conveniently only on models such as the hypercube which have a suitable metric. The numerical data near criticality show that the long time relaxations of the autocorrelation parameter $q(N,t)$ and of the return probability $p_{ret}(N,t)$ are consistent with stretched exponentials having an exponent $\beta = 1/3$ over many orders of magnitude in time. Algorithm {#algorithm .unnumbered} ========= The time evolution of the entire probability distribution for the walker after $t$ steps, $\vec{\Pi}(t)$, can be described by a Master Equation.
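Before passing to the master-equation formulation, the evolution rule described above — pick a random neighbor by flipping one bit; move only if it lies on the giant cluster, otherwise wait — can be sketched as a direct Monte Carlo walk, with vertices encoded as integers and neighbors obtained by XOR with a bit mask. This is an illustrative reimplementation (the dimension, seed, and the rough concentration $p=1/N$ are arbitrary choices), not the code used for the results reported here:

```python
import random
from collections import deque

random.seed(1)
N = 12
p = 1.0 / N                      # rough threshold concentration ~ 1/N
occupied = {v for v in range(2 ** N) if random.random() < p}

def cluster_of(start):
    """BFS over occupied vertices connected by single bit flips."""
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for b in range(N):
            w = v ^ (1 << b)
            if w in occupied and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

# largest (giant) cluster of the diluted hypercube
giant = max((cluster_of(v) for v in occupied), key=len)

def walk(steps, start):
    """Lazy walk: attempt a random bit flip, stay put if target is blocked."""
    v = start
    for _ in range(steps):
        w = v ^ (1 << random.randrange(N))
        if w in giant:
            v = w
    return v

start = next(iter(giant))
end = walk(1000, start)
assert end in giant              # the walker never leaves the cluster
```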
At $t=0$ the walker is localized on a single vertex $i_o$ on the hypercube; the probability distribution then diffuses over the system at each time step following the equation: $$\label{eq:master} \Pi_i(t) =\Pi_i(t-1) + \sum_{j}\left[ \Pi_j(t-1)W(j\to i)- \Pi_i(t-1)W(i\to j) \right]$$ where $W(i\to j)$ represents the transition probability that is given by: $$\label{eq:transition} W(i\to j)=\left\{ \begin{array}{l l} \frac{1}{N} & \mbox{if $i$ and $j$ are neighboring allowed vertices} \\ 0 & \mbox{otherwise} \end{array} \right.$$ Equation (\[eq:master\]) can be rephrased as: $$\label{eq:operator} \vec{\Pi}(t)=F\vec{\Pi} (t-1)$$ where $F$ is the linear evolution operator. Since $F$ is real and symmetric we can diagonalize it; the largest eigenvalue, corresponding to the infinite time equilibrium limit (where all sites on the giant cluster become equally populated), is 1. We can determine $U$ and $D$ satisfying: $$\label{eq:diag} F=U^TDU$$ where $D$ is a diagonal matrix. For practical reasons it is convenient to diagonalize $F$ so as to investigate the temporal evolution of the relevant quantities. We use: $$\label{eq:time-evol} \Pi(t)=F^t\Pi(0)=U^TD^tU\Pi(0)$$ We choose the initial condition as: $$\label{eq:initial} \Pi_i(0)=\delta_{ii_o}$$ where $i_o$ is a vertex on the giant cluster. The value of the normalized autocorrelation function $q(t)$ after time $t$ for a given walk starting from $i_o$ and arriving at $i$ after time $t$ can be defined by: $$\label{eq:correlation} q(t) =\left\langle \frac{1}{N_G}\sum_{i_o} \sum_{i} \Pi_i(t) \frac{N-2d_H(i,i_o)}{N}- q_\infty \right\rangle$$ where $d_H(i,i_o)$ is the Hamming distance between vertex $i$ and the initial state, $N_G$ is the number of vertices on the giant cluster, $q_{\infty}$ for a given realization is given by: $$\label{eq:correlation2} q_{\infty} =\frac{1}{N_G^2}\sum_{ii_o} \frac{N-2d_H(i,i_o)}{N}$$ and the averages are over different realizations of the diluted hypercube.
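A minimal explicit check of the master-equation formulation: build $F$ for a hand-picked occupied set on $\mathbb{Q}_3$ (the four vertices $\{000,001,011,111\}$, an illustrative toy path, not a sample from our simulations), verify that probability is conserved at every step, and confirm that $\vec{\Pi}(t)$ relaxes to the uniform distribution on the cluster:

```python
N = 3
cluster = [0b000, 0b001, 0b011, 0b111]   # a toy path inside Q_3
idx = {v: i for i, v in enumerate(cluster)}
n = len(cluster)

# F[i][j] = probability of going from vertex j to vertex i in one step:
# 1/N per allowed bit flip, remainder stays on the diagonal (lazy walk)
F = [[0.0] * n for _ in range(n)]
for v in cluster:
    for b in range(N):
        w = v ^ (1 << b)
        if w in idx:
            F[idx[w]][idx[v]] += 1.0 / N
    F[idx[v]][idx[v]] = 1.0 - sum(F[i][idx[v]] for i in range(n) if i != idx[v])

pi = [1.0, 0.0, 0.0, 0.0]                # walker starts localized at vertex 000
for _ in range(2000):
    pi = [sum(F[i][j] * pi[j] for j in range(n)) for i in range(n)]
    assert abs(sum(pi) - 1.0) < 1e-9     # probability is conserved

# equilibrium: all cluster sites equally populated
assert all(abs(x - 1.0 / n) < 1e-9 for x in pi)
```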
We also calculated $p_{ret}$ defined by: $$\label{eq:pret} p_{ret}(t)=\left\langle \frac{1}{N_G}\sum_{i_o} \Pi_{i_o}(t)-\frac{1}{N_G}\right\rangle$$ We can show that: $$\label{eq:pret2} p_{ret}(t)=\left\langle \frac{1}{N_G}\Big( \sum_{j}\lambda_j^t -1 \Big) \right\rangle$$ This quantity is easier to calculate theoretically than $q(t)$, but it is less useful for comparison with results on model spin systems or experiments. We can write this equation in a more convenient form: $$\label{eq:pret3} p_{ret}(t)=\left\langle \frac{1}{N_G}\sum_{s_i\neq 0}e^{-s_it}\right\rangle$$ where $s_i=-\ln \lambda_i$ and we have excluded the $\lambda =1$ eigenvalue. Another convenient form for investigating $p_{ret}$ is: $$\label{eq:dens-exp} p_{ret}(t)=\int_0^\infty ds\rho(s) e^{-ts}$$ where the density $\rho$ is defined by: $$\label{eq:density} \rho(s)=\left\langle \frac{1}{N_G-1} \sum_i \delta(s-s_i ) \right\rangle$$ Our numerical workflow can be summarized as follows:

1. generation of a diluted hypercube

2. determination of the giant cluster

3. determination of the eigenvalues and eigenvectors of $F$

4. calculation of $\rho(s)$, $q(t)$ and $p_{ret}(t)$

The algorithm was implemented in Mathematica 8.0 and the simulations were performed on an Intel Xeon 2.27 GHz machine with 24 GB of RAM; a single run for $N=28$ took 12 hours and required the full 24 GB of memory. Calculations were made with hypercubes of dimension $N=10, 12, 14, 16, 18, 20, 22, 24, 26$ and $28$. All the calculations were performed at the $p_c(N)$ values given by equation (\[eq:pc\]); this condition is important since it allows us to scale conveniently data for systems having different dimensions $N$. It is useful to be able to include data for smaller $N$ in the global analysis as in these samples we deal with much smaller matrices, which is simpler computationally.
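The equivalence of (\[eq:pret\]) and (\[eq:pret2\]) — writing $\sum_{i_o}\Pi_{i_o}(t)$ as the trace $\mathrm{Tr}\,F^t=\sum_j\lambda_j^t$ — can be verified on the same kind of toy cluster; here using numpy's symmetric eigensolver (an illustrative check, not the workflow used for the results reported here):

```python
import numpy as np

N = 3
cluster = [0b000, 0b001, 0b011, 0b111]       # toy occupied path in Q_3
idx = {v: i for i, v in enumerate(cluster)}
n = len(cluster)

F = np.zeros((n, n))
for v in cluster:
    for b in range(N):
        w = v ^ (1 << b)
        if w in idx:
            F[idx[w], idx[v]] = 1.0 / N
np.fill_diagonal(F, 1.0 - F.sum(axis=0))     # lazy-walk staying probability

lam = np.linalg.eigvalsh(F)                  # F is symmetric
t = 25

# p_ret from the eigenvalues, (1/N_G)(sum_j lambda_j^t - 1) ...
p_eig = (np.sum(lam ** t) - 1.0) / n

# ... equals the direct trace of F^t averaged over starting points
p_dir = np.trace(np.linalg.matrix_power(F, t)) / n - 1.0 / n

assert abs(p_eig - p_dir) < 1e-12
assert abs(lam.max() - 1.0) < 1e-12          # the equilibrium eigenvalue
```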
All vertices on the giant cluster were used as starting points, except for the largest systems $N=26$ and $28$, where we have not used all possible initial states $i_o$. For these sizes we approximated $q(t)$ and $p_{ret}(t)$ by using only $1000$ randomly chosen initial states for each realization. We have tested the accuracy of this approximation and concluded that the error was very small (even for the smaller sizes). We studied $1000$ different realizations of the hypercube for all sizes $N$ except for $N=28$, where we have studied $100$. Numerical data ============== Figure \[fig:net\] shows a graphical representation of a diluted $\mathbb{Q}_{24}$; for this particular sample the graph is a tree, supporting the validity of the approximation proposed in Ref. [@samukhin2008]. ![ A graphical representation of a diluted $\mathbb{Q}_{24}$ exactly at $p_c$. The picture shows that the network contains no loops.[]{data-label="fig:net"}](fig1.pdf) The time evolution for the autocorrelation functions $q(N,t)$ (\[eq:correlation\]) is depicted in Figure \[fig:corrlog\] against $\log(t)$. In Figure \[fig:pretlog\] we show the equivalent results for the return probability $p_{ret}(N,t)$. In all cases we have fitted the long time part of the curves using the expression: $$f(t)=A\exp\left[- \left( \frac{t}{\tau}\right)^{1/3} \right]$$ ![ The relaxation of the autocorrelation function $\log q(N,t)$, Eqn. (\[eq:correlation\]), against $\log(t)$ for $N$ from $10$ to $28$. []{data-label="fig:corrlog"}](fig2.pdf) ![ The decay of the return probability $\log p_{ret}(N,t)$, Eqn. (\[eq:pret\]), against $\log t$ for $N$ from $10$ to $28$. []{data-label="fig:pretlog"}](fig3) In Figures \[fig:corrt\] and \[fig:prett\] we present the same results in a different manner so as to demonstrate the stretched exponential long time behavior.
On the $x$ axis the time scale is normalized with $x(t) = (t/\tau(N))^{1/3}$ and on the $y$ axis the measured $q(N,t)$ or $p_{ret}(N,t)$ are normalized so $y(N,t) = \ln (q(N,t)/A_{q}(N))$ and $y(N,t) = \ln (p_{ret}(N,t)/A_{ret}(N))$ respectively. In these plots a stretched exponential with exponent $1/3$ is a straight line, as observed; we have chosen the normalization factors $\tau(N)$ and $A_{q}(N), A_{ret}(N)$ so that data for different hypercube dimensions $N$ collapse. This form of plot allows one to distinguish clearly between the short time regime and the stretched exponential regime; the latter can be seen to extend over a wide time range until measurements are limited by the statistical noise. The effective exponent $\beta = 1/3$ is independent of $N$ to within the statistics. ![ The decay of the normalized autocorrelation function $\ln (q(N,t)/A_{q}(N))$ against $(t/\tau)^{1/3}$. For stretched exponentials with exponent $\beta=1/3$ in the long time regime the data should lie on a straight line in this form of plot, as observed. []{data-label="fig:corrt"}](fig4) ![ The decay of the normalized return probability $\log (p_{ret}(N,t)/A_{ret}(N))$ against $(t/\tau(N))^{1/3}$. For stretched exponentials with exponent $\beta=1/3$ in the long time regime the data should lie on a straight line on this form of plot, as observed. []{data-label="fig:prett"}](fig5.pdf) In Figure \[fig:tau\] we show the dependence on dimension $N$ of the time scale parameter $\tau(N)$ from the fits of the autocorrelation $q(t,N)$ and the return probability $p_{ret}(t,N)$ data. The data can be fitted by $$\tau(N)=B 10^{\gamma N} \label{eq:tau}$$ with the fit parameters $\gamma = 0.24\pm 0.1$ and $B= 1.5\pm 0.1$ for the autocorrelation function, and $\gamma = 0.24\pm 0.05$ and $B= 1.7\pm 0.2$ for the return probability. The values of the time scaling parameters $\tau(N)$ for the two different observables are identical within the precision of the measurements.
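Since $\log_{10}\tau = \log_{10}B + \gamma N$, the fit (\[eq:tau\]) reduces to ordinary least squares in log space. A sketch on synthetic data generated from the quoted best-fit values (the data points below are fabricated for illustration, not our measured $\tau(N)$):

```python
import math

# synthetic tau(N) generated from the quoted fit gamma = 0.24, B = 1.5
gamma_true, B_true = 0.24, 1.5
Ns = [10, 12, 14, 16, 18, 20, 22, 24, 26, 28]
taus = [B_true * 10 ** (gamma_true * N) for N in Ns]

# least squares for log10(tau) = log10(B) + gamma * N
xs = Ns
ys = [math.log10(tau) for tau in taus]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
gamma = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
B = 10 ** (ybar - gamma * xbar)

assert abs(gamma - gamma_true) < 1e-9
assert abs(B - B_true) < 1e-6
```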
![ The dependence of the time scale $\tau(N)$ on dimension for the return probability $p_{ret}(N,t)$ (in red) and autocorrelation $q(N,t)$ (in blue). []{data-label="fig:tau"}](fig6) The most fundamental way to understand the system dynamics is through investigating the eigenvalue spectra; the stretched exponential long time behavior depends exclusively on the density of the eigenvalues above the smallest eigenvalue, in the region where the distribution for a finite size sample can still be considered to be continuous. A given spectrum leads unambiguously to a unique relaxation function, while it is much more difficult to determine the precise form of a mode spectrum from a relaxation function. In Figure \[fig:rho\] we compare the mode density $\rho(s)$ obtained through the present simulations with the theoretical expressions. All the numerical results were obtained using $1000$ different realizations of the diluted hypercube at each dimension $N$. Unfortunately in practice the calculations of $\rho(s)$ are numerically demanding because of strong sample to sample fluctuations. The spectra were first binned in the form of histograms. We defined a cut-off $\lambda_{min}(N)$ or equivalently $s_{max}(N)=-\ln \lambda_{min}(N)$ to eliminate the short time effects and selected the eigenvalues on the interval $s\in (0, s_{max}(N))$. We choose $s_{max}=2/\tau(N)$ for all dimensions. We divided this interval into bins equally spaced on a logarithmic scale and then calculated the densities for each interval, normalizing the frequencies by the length of the intervals. The continuous curves were calculated from the expression (\[eq:laplace\]) for $\rho_{K,1/3}(s)$ and from the approximate analytic expression (\[eq:laplacian\]) for $\rho_{S}(s)$ using $\tau(N)$ estimated from equation (\[eq:tau\]).
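The binning procedure just described — log-spaced bins on $(0,s_{max}]$, frequencies normalized by bin width — can be sketched as follows; the eigenvalue sample is synthetic and illustrative, and the check is that the resulting density integrates to unity over the binned range:

```python
import math

# synthetic "eigenvalue" sample on (0, s_max] (illustrative, not real spectra)
s_max = 2.0
sample = [s_max * (k / 500.0) ** 2 for k in range(1, 501)]

# log-spaced bin edges between the smallest sample value and s_max
nbins = 20
lo, hi = math.log(min(sample)), math.log(s_max)
edges = [math.exp(lo + (hi - lo) * i / nbins) for i in range(nbins + 1)]

counts = [0] * nbins
for s in sample:
    i = int(nbins * (math.log(s) - lo) / (hi - lo))
    counts[min(max(i, 0), nbins - 1)] += 1   # clamp boundary values

# density = frequency / bin width, so it integrates to 1 over the range
density = [c / (len(sample) * (b - a))
           for c, a, b in zip(counts, edges, edges[1:])]
integral = sum(d * (b - a) for d, a, b in zip(density, edges, edges[1:]))
assert abs(integral - 1.0) < 1e-9
```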
To compare with simulation results we normalized the $\rho(s)$ functions using: $$C^{-1}=\int_{0}^{s_{max}} \rho(s)ds$$ and $$\rho^\prime(s)=C\rho(s)$$ Over the ranges for which reliable data points have been obtained the measured mode spectrum densities $\rho(N,s)$ closely resemble the corresponding parts of the calculated spectra from the Laplace transform $\rho_{K,1/3}(s)$, (\[eq:laplace\]), or the analytic $\rho_{S}(s)$ spectrum (\[eq:laplacian\]) [@samukhin2008] (which are in fact very similar to each other). The numerical spectra for the hypercube model are indeed consistent with the mode density spectral form derived analytically for the more general random network model [@samukhin2008]. ![Spectral density data $\rho(N,s)$ from the hypercube evaluations together with the exact Laplace transforms $\rho_{1/3}(s)$ (\[eq:laplace\]) for stretched exponentials with $\beta=1/3$ and $\tau(N)$ values equal to the numerical estimates (\[eq:tau\]) (dashed lines), and the analytic sparse network expression (\[eq:laplacian\]) [@samukhin2008] (full lines). The values are normalized (see text). []{data-label="fig:rho"}](fig7) Discussion and conclusions {#discussion-and-conclusions .unnumbered} ========================== We have studied numerically relaxation through random walks along near neighbor edges on the giant cluster of vertices in randomly diluted hypercubes of dimensions up to $N=28$ near the percolation threshold for the cluster. The data show clearly that at the percolation threshold concentration $p_c(N)$, the relaxation mode spectrum, the time dependence of the autocorrelation $q(N,t)$, and the return probability $p_{ret}(N,t)$, are all consistent with asymptotic stretched exponential relaxation $\exp[-(t/\tau(N))^\beta]$ having exponent $\beta = 1/3$. The time scale $\tau(N)$ increases exponentially with dimension $N$, Eqn. (\[eq:tau\]).
The observed eigenvalue spectra demonstrate that the dynamical $q(N,t)$ behavior previously obtained from Monte Carlo simulations and from numerical solutions of the master equation [@lemke1996; @lemke2000; @almeida2000] does not represent a crossover between different exponential regimes, but that it is the consequence of a specific wide eigenvalue spectrum. A final long time crossover to a pure exponential (which would correspond to a regime where the effective relaxation mode spectrum is reduced to a gap between the ground state and the lowest mode) is not visible in the data. This diluted hypercube model at threshold can be considered as the limiting high dimensional case of percolation on sphere-like spaces. Alternatively, it can be considered as a specific explicit example of a generic sparse random network. The observed stretched exponential behavior with exponent $\beta =1/3$ on the dilute hypercube at the percolation threshold is consistent with the predictions of the sphere-like percolation approach [@ian1] and with studies of random walks on sparse random networks [@bray1988; @samukhin2008], where the same stretched exponential relaxation with the same exponent $\beta = 1/3$ has been derived analytically. For a physical system, configuration space can be imagined as a very high dimensional graph. The system’s dynamics are equivalent to a random walk of the point representing the instantaneous state of the system among those vertices of the graph which are thermodynamically accessible. We suggest that when the stretched exponential $\exp[-(t/\tau)^{1/3}]$ form of limiting relaxation with diverging $\tau$ is observed numerically or experimentally for the autocorrelation function relaxation $q(t)$ in complex physical systems (which is often the case, see for instance [@ogi1985; @angelani; @billoire]) it is the signature of a configuration space tending to a percolation threshold and having a sparse random network topology.
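The one-to-one correspondence between a mode spectrum and its relaxation function, which underlies the analysis above, can be illustrated numerically. The sketch below is ours; it uses $\beta=1/2$, the one exponent with an elementary closed-form mode density (the one-sided Lévy-stable law of index $1/2$), rather than the $\beta=1/3$ density of (\[eq:laplace\]), which requires Pollard's integral representation. It verifies that $q(t)=\int_0^\infty e^{-\lambda t}\rho(\lambda)\,d\lambda$ reproduces $\exp[-(t/\tau)^{1/2}]$ with $\tau=1$:

```python
import numpy as np

def rho_half(lam):
    """Mode density whose Laplace transform is exp(-sqrt(t)): the
    one-sided Levy-stable density of index 1/2 (here tau = 1)."""
    return lam**-1.5 * np.exp(-1.0 / (4.0 * lam)) / (2.0 * np.sqrt(np.pi))

def relaxation(t, n=4000):
    """q(t) = int_0^inf exp(-lam*t) rho(lam) dlam, evaluated by the
    trapezoidal rule on a logarithmic grid (d lam = lam du, u = ln lam)."""
    u = np.linspace(np.log(1e-5), np.log(1e4), n)
    lam = np.exp(u)
    f = np.exp(-lam * t) * rho_half(lam) * lam
    du = u[1] - u[0]
    return float(du * (np.sum(f) - 0.5 * (f[0] + f[-1])))
```

For $t$ of order unity the quadrature reproduces $\exp(-\sqrt{t})$ to high accuracy, illustrating that the broad mode density, not a crossover between exponentials, produces the stretched form.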
Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by FAPESP grant no. 09/10382-2. This research was supported by resources supplied by the Center for Scientific Computing (NCC/GridUNESP) of the São Paulo State University (UNESP). [99]{} R. Kohlrausch, Pogg. Ann. Phys. Chem. [**91**]{}, 179 (1854). F. Kohlrausch, Pogg. Ann. Phys. Chem. [**119**]{}, 337 (1863). W. Weibull, J. Appl. Mech. [**18**]{}, 293 (1951). I. Eliazar and J. Klafter, Phys. Rev. E [**77**]{}, 061125 (2008). G. Williams and D.C. Watts, Trans. Faraday Soc. [**66**]{}, 80 (1970). R.V. Chamberlin, G. Mozurkewich, and R. Orbach, Phys. Rev. Lett. [**52**]{}, 867 (1984). A.T. Ogielski, Phys. Rev. B [**32**]{}, 7384 (1985). J.C. Phillips, Rep. Prog. Phys. [**59**]{}, 1133 (1996). A. Bunde, S. Havlin, J. Klafter, G. Gräff, and A. Shehter, Phys. Rev. Lett. [**78**]{}, 3338 (1998). J.C. Rasaiah, J. Zhu, J.B. Hubbard, and R.J. Rubin, J. Chem. Phys. [**93**]{}, 5768 (1990). M.D. Donsker and S.R.S. Varadhan, Commun. Pure Appl. Math. [**28**]{}, 525 (1975). M.D. Donsker and S.R.S. Varadhan, Commun. Pure Appl. Math. [**32**]{}, 721 (1979). P. Grassberger and I. Procaccia, J. Chem. Phys. [**77**]{}, 6281 (1982). W. Götze and L. Sjögren, Rep. Prog. Phys. [**55**]{}, 241 (1992). I.A. Campbell, J.M. Flesselles, R. Jullien, and R. Botet, J. Phys. C: Solid State Phys. [**20**]{}, L47 (1987). I.A. Campbell, Europhys. Lett. [**21**]{}, 959 (1993). I.A. Campbell and L. Bernardi, Phys. Rev. B [**50**]{}, 12643 (1994). P. Erdös and A. Rényi, Magyar Tud. Akad. Mat. Kutató Int. Közl. [**5**]{}, 17 (1960); P. Erdös and J. Spencer, Comput. Math. Appl. [**5**]{}, 33 (1979). S. Alexander and R. Orbach, J. Phys. Lett. [**43**]{}, L625 (1982); Y. Gefen, A. Aharony and S. Alexander, Phys. Rev. Lett. [**50**]{}, 77 (1983). P. Debye, [*Polar Molecules*]{}, Dover, London (1929). J.-M. Caillol, J. Phys. A: Math. Gen. [**37**]{}, 3077 (2004). P. Jund, R. Jullien and I.A. Campbell, Phys. Rev.
E [ **63**]{} 036131 (2001). A. N. Samukhin, S.N. Dorogovtsev and J.F.F. Mendes, Physical Review E [**77**]{}, 036115 (2008). A.J. Bray and G.J. Rodgers, Phys Rev B [**38**]{}, 11461(1988). H. Pollard, Bull. Am. Math. Soc. [**52**]{}, 908 (1946). M. N. Berberan-Santos, Chem. Phys. Lett. [ **460**]{}, 146-150 (2008). N. Lemke and I. A. Campbell, Physica A [**230**]{}, 554 (1996). R. M. C. de Almeida and N. Lemke and I. A. Campbell, [**30**]{}, 701, Brazilian Journal of Physics (2000). R.M.C. de Almeida and N. Lemke and I. A. Campbell, Eur. J. Phys. B, [**18**]{}, 513, 2000. B. Bollobás, Trans. Amer. Math. Soc. [**286**]{}, 257 (1984). C. Borgs, J. T. Chayes, R. van der Hofstad, G. Slade and J. Spencer, Combinatorica [**26**]{}, 395 (2006). D. Gaunt and R. Brak, J. Phys. A-Math. Gen. [ **17**]{}, 1761 (1984). D. S. Gaunt, M. F. Skiis, and H. Ruskin, J. Phys A [**9**]{}, 1899 (1976). P. Grassberger, Physical Review E [**67**]{}, 036101 (2003). L. Angelani, G.Parisi, G. Ruocco and G. Viliani, Phys. Rev. Lett. [**81**]{}, 4648 (1998). Alain Billoire and I.A. Campbell, arXiv:1105.1902.
--- abstract: 'We extend slow manifolds near a transcritical singularity in a fast-slow system given by the explicit Euler discretization of the corresponding continuous-time normal form. The analysis uses the blow-up method and direct trajectory-based estimates. We prove that the qualitative behaviour is preserved by a time-discretization with sufficiently small step size. This step size is fully quantified relative to the time scale separation. Our proof also yields the continuous-time results as a special case and provides more detailed calculations in the classical (or scaling) chart.' author: - 'Maximilian Engel[^1]  and Christian Kuehn' bibliography: - 'mybibfile.bib' title: - | Discretized Fast-Slow Systems\ near Transcritical Singularities --- [**Keywords:**]{} transcritical bifurcation, slow manifolds, invariant manifolds, loss of normal hyperbolicity, blow-up method, discretization, maps. [**Mathematics Subject Classification (2010):**]{} 34E15, 34E20, 37M99, 37G10, 34C45, 39A99. Introduction ============ We study the dynamics of the two-dimensional quadratic map $$\label{map_intro} p: \begin{pmatrix} x \\ y \end{pmatrix} \mapsto \begin{pmatrix} \tilde{x} \\ \tilde y \end{pmatrix} = \begin{pmatrix} x + h(x^2 - y^2 + \lambda {\varepsilon}) \\ y + {\varepsilon}h \end{pmatrix}$$ for $h, {\varepsilon}> 0$. We interpret ${\varepsilon}$ as a small time scale separation parameter between the fast variable $x$ and the slow variable $y$. The parameter $h$ can be viewed as the step size for the explicit Euler discretization of the corresponding ordinary differential equation (ODE) $$\begin{aligned} \label{ODE_intro} \begin{array}{r@{\;\,=\;\,}l} x' & x^2 - y^2 + \lambda {\varepsilon}\,, \\ y' & {\varepsilon}\,, \end{array}\end{aligned}$$ which represents the normal form of a fast-slow system exhibiting a *transcritical singularity* at the origin.
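As a quick numerical illustration (the parameter values below are ours and purely indicative), one can iterate the map $p$ and compare the result with a fine-step reference trajectory, confirming that the Euler iterates track the attracting part of the critical manifold $x \approx y$, $y < 0$:

```python
def euler_step(x, y, h, eps, lam):
    """One step of the explicit Euler map p for the transcritical normal form."""
    return x + h * (x * x - y * y + lam * eps), y + eps * h

def iterate(x0, y0, h, eps, lam, t_end):
    """Iterate the map for t_end/h steps (t_end measured in fast time t)."""
    x, y = x0, y0
    for _ in range(int(round(t_end / h))):
        x, y = euler_step(x, y, h, eps, lam)
    return x, y
```

Starting at $(x,y)=(-1,-1)$ with ${\varepsilon}=0.01$ and $\lambda=0.5$, the iterates stay ${\mathcal{O}}({\varepsilon})$-close to the diagonal $x=y$, and shrinking $h$ reproduces a much finer reference trajectory, as expected for the Euler discretization.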
The term transcritical refers to the fact that, if $y$ is seen as a bifurcation parameter for the flow in the $x$-variable, a transcritical bifurcation occurs at the origin $(x,y)=(0,0)$. The origin is singular since hyperbolicity of the dynamics breaks down at this point. The same holds for the map . In the case of , Krupa and Szmolyan [@ks2001/2] analyze the dynamics around the origin by using the *blow-up method* for vector fields with singularities. The key idea to use the blow-up method [@Du78; @Du93] for fast-slow systems goes back to Dumortier and Roussarie [@DuRo96]. They observed that this technique may convert non-hyperbolic singularities at which fast and slow directions interact into partially hyperbolic problems. The method inserts a suitable manifold, e.g. a sphere, at the singularity and describes the extension of hyperbolic objects through a neighbourhood of the singularity via the partially hyperbolic dynamics on this manifold; see e.g. [@ku2015 Chapter 7] for an introduction and [@DeMaesschalckDumortier7; @DeMaesschalckDumortier4; @DeMaesschalckWechselberger; @GucwaSzmolyan; @ks2011; @KuehnUM; @KuehnHyp] for a few, yet by no means exhaustive, examples of different applications to planar fast-slow systems. Krupa and Szmolyan also used the blow-up method for the normal form of *fold singularities* in fast-slow systems [@ks2011]. For this case Nipp and Stoffer [@ns2013] transfer the blow-up technique to the corresponding explicit Runge-Kutta, in particular Euler, discretization and prove the extension of slow manifolds for the discrete time system around the singularity. They treat the discretized dynamics in the vicinity of the fold singularity as an application of a more general existence theory for invariant manifolds they develop in [@ns2013]. In this paper, we use instead a direct approach to analyze how trajectories induced by  pass the singularity at the origin.
This approach allows us to obtain pathwise control over the transitions in and within the different blow-up charts. The singularity is blown up to a sphere on which trajectories can be described directly via the map. This leads to the main result of the paper, Theorem \[transcritical\_discrete\], which is the discrete-time extension of [@ks2001/2 Theorem 2.1]. In this context, we only impose that $h$ is bounded by ${\varepsilon}$ and prove that there is no further restriction on the step size. Our theorem states explicitly how, for the cases $\lambda < 1$ and $\lambda > 1$ in , attracting slow manifolds extend beyond the singularity at the origin. It gives estimates on the contraction rates of neighbourhoods of the manifolds. The case $\lambda=1$ corresponds to the problem of *canard solutions* [@BenoitCallotDienerDiener; @DuRo96; @ks2001/3] and will be dealt with in future work. It should be noted that, by letting $h \to 0$, our proof of Theorem \[transcritical\_discrete\] can also be seen as a different way of proving [@ks2001/2 Theorem 2.1], and our proof makes the results of [@ks2001/2] for the scaling chart more explicit. Additionally, the blow-up method provides the insight that only in one chart around the singularity is the preservation of stability behaviour bound to the stability criteria of the Euler discretization derived from the Dahlquist test equation, while in the other charts there is no such restriction. This work lays the foundation of a broader effort to apply the blow-up method, which has so far mainly been used for flows, to fast-slow dynamical systems induced by maps. First, it is insightful to look at key examples that can be compared to continuous-time analogues, as in the case of the transcritical singularity. In the future, multiscale discrete-time problems that have no correspondence to fast-slow flows will also be considered. The remainder of the paper is structured as follows.
In Section \[continuoustime\], we summarize the results of Krupa and Szmolyan for transcritical singularities in continuous time [@ks2001/2]. We state and explain their main result, Theorem \[transcritical\_classic\], in Section \[contdynamics\], and we outline the main ingredients of the proof in Section \[contblowup\], thereby introducing the basic ideas of the blow-up technique. In Section \[discretemain\], we discuss the problem in discrete time associated with . Our main result is Theorem \[transcritical\_discrete\]. The ingredients of the proof are developed in the following subsections. Section \[sec:blow\] introduces the blow-up method for the new discrete setting and, subsequently, the dynamics are analyzed in three different charts of the manifold that blows up the singularity, leading to the proof of Theorem \[transcritical\_discrete\]. In Section \[secK1\], we describe how trajectories enter a neighbourhood of the origin via the *entrance chart* $K_1$ and leave this neighbourhood in the case of $\lambda < 1$. Section \[secK2\] forms the core of the proof: we analyze how trajectories pass the origin depending on $\lambda$ in the *scaling chart* $K_2$. In Section \[secK3\], the *exit* dynamics through chart $K_3$ are described for the case $\lambda >1$. Finally, in Section \[secproof\] we combine the findings of the previous sections to finish the proof of Theorem \[transcritical\_discrete\]. We conclude with a short summary of our results and an outlook on future work in Section \[summaryoutlook\]. **Acknowledgements:** CK and ME gratefully acknowledge support by the DFG via the SFB/TR109 Discretization in Geometry and Dynamics. Transcritical singularity in continuous time {#continuoustime} ============================================ We start with a brief review and notation for continuous-time fast-slow systems.
Consider a system of singularly perturbed ordinary differential equations (ODEs) of the form $$\begin{aligned} \label{slowequ} \begin{array}{rcrcl} {\varepsilon}\frac{{\mathrm{d}}x}{{\mathrm{d}}\tau} &=& {\varepsilon}\dot{x} &=& f(x,y,{\varepsilon})\,, \\ \frac{{\mathrm{d}}y}{{\mathrm{d}}\tau}&=&\dot{y} &=& g(x,y,{\varepsilon})\,, \quad \ x \in \mathbb{R}^m, \quad y \in \mathbb R^n, \quad 0 < {\varepsilon}\ll 1\,, \end{array}\end{aligned}$$ where $f,g$ are $C^k$-functions with $k \geq 3$. Since ${\varepsilon}$ is a small parameter, the variables $x$ and $y$ are often called the *fast* variable(s) and the *slow* variable(s), respectively. The time variable in , denoted by $\tau$, is termed the *slow* time scale. The change of variables to the *fast* time scale $t:= \tau / {\varepsilon}$ transforms the system  into the ODEs $$\begin{aligned} \label{fastequ} \begin{array}{r@{\;\,=\;\,}r} x' & f(x,y,{\varepsilon})\,, \\ y' & {\varepsilon}g(x,y,{\varepsilon})\,. \end{array}\end{aligned}$$ Each of the two formulations has a corresponding limiting problem for ${\varepsilon}= 0$: the *reduced problem* (or *slow subsystem*) is given by $$\begin{aligned} \label{redequ} \begin{array}{r@{\;\,=\;\,}l} 0 & f(x,y,0)\,, \\ \dot{y} & g(x,y,0)\,, \end{array}\end{aligned}$$ and the *layer problem* (or *fast subsystem*) is $$\begin{aligned} \label{layerequ} \begin{array}{r@{\;\,=\;\,}l} x' & f(x,y,0)\,, \\ y' & 0\,. \end{array}\end{aligned}$$ We can understand the reduced problem  as a dynamical system on the *critical manifold* $$S_0= \{(x,y) \in \mathbb{R}^{n+m} \,:\, f(x,y,0) = 0 \}\,.$$ Observe that the manifold $S_0$ consists of equilibria of the layer problem . $S_0$ is called *normally hyperbolic* if, for all $p\in S_0$, the matrix $\textnormal{D}_xf(p)\in\mathbb{R}^{m\times m}$ has no spectrum on the imaginary axis.
For a normally hyperbolic $S_0$ *Fenichel Theory* [@Fenichel4; @Jones; @ku2015; @WigginsIM] implies that for ${\varepsilon}$ sufficiently small, there is a locally invariant slow manifold $S_{{\varepsilon}}$ such that the restriction of  to $S_{{\varepsilon}}$ is a regular perturbation of the reduced problem . Furthermore, it follows from Fenichel’s perturbation results that $S_{{\varepsilon}}$ possesses an invariant stable and unstable foliation, where the dynamics behave as a small perturbation of the layer problem . A challenging phenomenon is the breakdown of normal hyperbolicity of $S_0$ such that Fenichel Theory cannot be applied. Typical examples of such a breakdown are found at bifurcation points $p\in S_0$, where the Jacobian ${\mathrm{D}}_x f(p)$ has at least one eigenvalue with zero real part. The simplest examples are folds or points of transversal self-intersection of $S_0$ in planar systems ($m=1=n$). In the following, we focus on the *transcritical* bifurcation in planar systems. Dynamics near the transcritical singularity {#contdynamics} ------------------------------------------- We briefly recall the main results for transcritical fast-slow singularities in the continuous-time setting from [@ks2001/2]. Without loss of generality, i.e., up to a translation of coordinates, we may assume that the transcritical point coincides with the origin. Consider the system of planar ODEs on the fast time scale $$\begin{aligned} \label{fastequtrans} \begin{array}{r@{\;\,=\;\,}r} x' & f(x,y,{\varepsilon})\,, \\ y' & {\varepsilon}g(x,y,{\varepsilon})\,, \end{array}\end{aligned}$$ where the vector field $f$ satisfies the following conditions at the origin: $$f(0,0,0) = 0, \qquad \frac{\partial }{\partial x}f (0,0,0) = 0, \qquad \frac{\partial} {\partial y}f (0,0,0) = 0\,.$$ These conditions imply that the critical manifold $S:=S_0= \{ (x,y)\in\mathbb{R}^2:f(x,y,0) = 0 \}$ has a self-intersection at the origin.
The condition $$\det H_f(0,0,0) < 0\,,$$ where $H_f$ denotes the Hessian matrix of $f$ in $x,y$, implies that this intersection is non-degenerate. Moreover, we require $$\frac{\partial^2 f}{\partial x^2} (0,0,0) \neq 0$$ to guarantee that $S$ is transverse to the critical fibre $\{(x,0):x \in \mathbb{R} \}$. Furthermore, we assume that $$g(0,0,0) \neq 0$$ to ensure transversal slow dynamics at the origin. The transcritical bifurcation at the origin, induced by such a system, can be brought [@ks2001/2] to the normal form $$\begin{aligned} \label{normalform} \begin{array}{r@{\;\,=\;\,}l} x' & x^2 - y^2 + \lambda {\varepsilon}+ \mathcal{R}_1(x,y,{\varepsilon})\,, \\ y' & {\varepsilon}(1 + \mathcal{R}_2(x,y,{\varepsilon}))\,, \end{array}\end{aligned}$$ where $\lambda > 0$ is a constant and $$\mathcal{R}_1(x,y, {\varepsilon}) = \mathcal{O}(x^3, x^2 y, xy^2, y^3, {\varepsilon}x, {\varepsilon}y, {\varepsilon}^2) \,, \quad \mathcal{R}_2 (x,y, {\varepsilon}) = \mathcal{O}(x,y, {\varepsilon}) \,.$$ The critical manifold $S$ is the union of four branches. We denote them by $S_\txta^+, S_\txta^-, S_\txtr^+, S_\txtr^-$ where $\txta$ means attracting and $\txtr$ repelling with respect to the fast variables and $+$ and $-$ correspond to the sign of the $y$-variable, see also Figure \[fig:1\]. We denote the corresponding slow manifolds for small ${\varepsilon}> 0$ by $S_{\txta, {\varepsilon}}^+, S_{\txta, {\varepsilon}}^-, S_{\txtr, {\varepsilon}}^+, S_{\txtr, {\varepsilon}}^-$. We focus on the fate of $S_{\txta, {\varepsilon}}^-$, when it is continued through a neighbourhood of $(0,0)$. For that purpose, we fix $\rho > 0$ and let $J$ be a small open interval around $0$ in $\mathbb{R}$, potentially depending on ${\varepsilon}$.
Then one can define $$\Delta^{\textnormal{in}} := \{(- \rho, y), \, y + \rho \in J \}, \ \Delta_\txta^{\textnormal{out}} = \{(- \rho, y), \, y - \rho \in J \}, \ \Delta_\txte^{\textnormal{out}} = \{( \rho, y), \, y \in J \}\,.$$ If $\Pi_\txta$ and $\Pi_\txte$ denote the transition maps from $\Delta^{\textnormal{in}}$ to $\Delta_\txta^{\textnormal{out}}$ and $\Delta_\txte^{\textnormal{out}}$ respectively, we can formulate the main result on the transcritical singularity [@ks2001/2 Theorem 2.1]. \[transcritical\_classic\] Fix $\lambda \neq 1$. There exists ${\varepsilon}_0 > 0$ such that the following assertions hold for ${\varepsilon}\in [0, {\varepsilon}_0)$. 1. If $ \lambda > 1$, then the manifold $S_{\txta, {\varepsilon}}^-$ passes through $\Delta_\txte^{\textnormal{out}}$ at a point $(\rho, h({\varepsilon}))$ where $h({\varepsilon}) = \mathcal{O}(\sqrt{{\varepsilon}})$. The section $\Delta^{\textnormal{in}}$ is mapped by $\Pi_\txte$ to an interval containing $S_{\txta, {\varepsilon}}^{-} \cap \Delta_\txte^{\textnormal{out}}$ of size $\mathcal{O}( \txte^{-C/{\varepsilon}})$, where $C$ is a positive constant. 2. If $ \lambda < 1$, then the section $\Delta^{\textnormal{in}}$ (including the point $\Delta^{\textnormal{in}} \cap S_{\txta, {\varepsilon}}^-$) is mapped by $\Pi_\txta$ to an interval about $S_{\txta, {\varepsilon}}^{+}$ of size $\mathcal{O}(\txte^{-C/{\varepsilon}})$, where $C$ is a positive constant. 
(Figure \[fig:1\]: the sections $\Delta^{\textnormal{in}}$, $\Delta_\txta^{\textnormal{out}}$, $\Delta_\txte^{\textnormal{out}}$, the critical manifold branches $S_\txta^{\pm}$, $S_\txtr^{\pm}$ and the slow manifold $S_{\txta, {\varepsilon}}^{-}$, for $\lambda<1$ (left) and $\lambda>1$ (right).) In the following, we sketch the proof of Theorem \[transcritical\_classic\] to illustrate the setting for the continuous-time case [@ks2001/2] and to have this case available for comparison. To start, we consider ${\varepsilon}$ as a variable, and write the problem in three variables $$\begin{aligned} \label{threevariables} x' &= x^2 - y^2 + \lambda {\varepsilon}+ \mathcal{R}_1(x,y,{\varepsilon})\,, \nonumber \\ y' &= {\varepsilon}(1+ \mathcal{R}_2(x,y,{\varepsilon}))\,, \\ {\varepsilon}' &= 0\,.\nonumber\end{aligned}$$ The total derivative of the above vector field $X$ in $(x,y,{\varepsilon})$ has only zero eigenvalues at the origin $(x,y,{\varepsilon})=(0,0,0)$. In particular, the origin is a non-hyperbolic equilibrium. To gain hyperbolicity at the singularity one can use the *blow-up method*, which maps the equilibrium point to an entire manifold, on which the dynamics can be desingularized. The shortest proof of Theorem \[transcritical\_classic\] uses the quasi-homogeneous *blow-up transformation* $$x = r \overline x, \quad y = r \overline y, \quad {\varepsilon}= r^2 \overline {\varepsilon},$$ where $(\overline{x}, \overline{y}, \overline{{\varepsilon}}, r) \in B := S^2 \times [0, r_0]$ for some $r_0 > 0$. The transformation can be formalized as a map $\Phi: B \to \mathbb R^3$, where $r_0$ is small enough such that the dynamics on $\Phi(B)$ can be described by the normal form approximation.
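The weights $(1,1,2)$ in the blow-up transformation reflect the quasi-homogeneity of the leading-order terms of the vector field; for the normal form without the remainder terms this can be checked in one line (a sketch of ours, with arbitrary sample values):

```python
def f(x, y, eps, lam):
    """Leading-order fast vector field of the transcritical normal form."""
    return x * x - y * y + lam * eps

# under (x, y, eps) = (r*xb, r*yb, r**2*eb) the fast equation gains the
# common factor r**2, which is what permits the desingularization by r
r, xb, yb, eb, lam = 0.3, -0.8, 0.25, 1.7, 0.6
assert abs(f(r * xb, r * yb, r**2 * eb, lam) - r**2 * f(xb, yb, eb, lam)) < 1e-14
```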
The map $\Phi$ induces a vector field $\overline{X}$ on $B$ by $\Phi_{*}(\overline{X}) = X$, where $\Phi_{*}$ is the pushforward induced by $\Phi$. Note that, since $\Phi(B)$ is a full neighbourhood of the origin, it suffices to study the vector field $\overline{X}$. The associated dynamics on $B$ are best analysed in three charts $K_i$, $i = 1,2,3$, an entrance ($\overline x = -1$), a scaling ($\overline {\varepsilon}= 1$) and an exit ($ \overline x = 1$) chart, given by $$\begin{aligned} K_1 : \quad x &= - r_1, \quad y = r_1 y_1, \quad {\varepsilon}= r_1^2 {\varepsilon}_1, \label{K1} \\ K_2 : \quad x &= r_2 x_2, \quad y = r_2 y_2, \quad {\varepsilon}= r_2^2 , \label{K2} \\ K_3 : \quad x &= r_3, \quad y = r_3 y_3, \quad {\varepsilon}= r_3^2 {\varepsilon}_3. \label{K3}\end{aligned}$$ The changes of coordinates between the charts look as follows: $\kappa_{12}: K_1 \to K_2$ is given by $$\label{kappa12} x_2 = - {\varepsilon}_1^{-1/2}, \quad y_2 ={\varepsilon}_1^{-1/2} y_1, \quad r_2 = {\varepsilon}_1^{1/2} r_1,$$ $\kappa_{21}: K_2 \to K_1$ is given by $$\label{kappa21} {\varepsilon}_1 = x_2^{-2}, \quad y_1 = -x_2^{-1} y_2, \quad r_1 = - x_2 r_2,$$ $\kappa_{32}: K_3 \to K_2$ is given by $$\label{kappa32} x_2 = {\varepsilon}_3^{-1/2}, \quad y_2 ={\varepsilon}_3^{-1/2} y_3, \quad r_2 = {\varepsilon}_3^{1/2} r_3,$$ and $\kappa_{23}: K_2 \to K_3$ is given by $$\label{kappa23} {\varepsilon}_3 = x_2^{-2}, \quad y_3 = x_2^{-1} y_2, \quad r_3 = x_2 r_2.$$ Describing the dynamics via the blow-up method {#contblowup} ---------------------------------------------- The dynamics in $K_1$ and $K_3$ can be desingularised.
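The charts and transition maps can be checked mechanically. The following sketch (ours, with arbitrary sample points) verifies that a point in $K_1$ and its image under $\kappa_{12}$ blow down to the same point $(x,y,{\varepsilon})$, that $\kappa_{21}\circ\kappa_{12}=\mathrm{id}$, and the analogous consistency of $\kappa_{23}$:

```python
def down_K1(r1, y1, e1):   # blow-down from the entrance chart K_1
    return (-r1, r1 * y1, r1**2 * e1)

def down_K2(x2, y2, r2):   # blow-down from the scaling chart K_2
    return (r2 * x2, r2 * y2, r2**2)

def down_K3(r3, y3, e3):   # blow-down from the exit chart K_3
    return (r3, r3 * y3, r3**2 * e3)

def kappa_12(r1, y1, e1):  # K_1 -> K_2, defined for e1 > 0
    return (-e1**-0.5, e1**-0.5 * y1, e1**0.5 * r1)

def kappa_21(x2, y2, r2):  # K_2 -> K_1, defined for x2 < 0
    return (-x2 * r2, -y2 / x2, 1.0 / (x2 * x2))

def kappa_23(x2, y2, r2):  # K_2 -> K_3, defined for x2 > 0
    return (x2 * r2, y2 / x2, 1.0 / (x2 * x2))
```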
Indeed, the origin is mapped to a sphere $S^2\times \{r=0\}$, and dividing the vector fields by $r_i$ for $i =1,3$, which corresponds to a time change, yields: $$\begin{aligned} \label{K1_dynamics} r_1' &= - r_1 F_1(r_1, y_1, {\varepsilon}_1), \nonumber \\ y_1' &= {\varepsilon}_1 + y_1 F_1(r_1, y_1, {\varepsilon}_1) + \mathcal{O}(r_1), \nonumber \\ {\varepsilon}_1' &= 2 {\varepsilon}_1 F_1(r_1, y_1, {\varepsilon}_1),\end{aligned}$$ where $F_1(r_1, y_1, {\varepsilon}_1) = 1- y_1^2 + \lambda {\varepsilon}_1 + \mathcal{O}(r_1)$, and $$\begin{aligned} \label{K3_dynamics} r_3' &= r_3 F_3(r_3, y_3, {\varepsilon}_3), \nonumber \\ y_3' &= {\varepsilon}_3 - y_3 F_3(r_3, y_3, {\varepsilon}_3) + \mathcal{O}(r_3), \nonumber \\ {\varepsilon}_3' &= -2 {\varepsilon}_3 F_3(r_3, y_3, {\varepsilon}_3),\end{aligned}$$ where $F_3(r_3, y_3, {\varepsilon}_3) = 1- y_3^2 + \lambda {\varepsilon}_3 + \mathcal{O}(r_3)$. The $\mathcal{O}(r_i)$-terms are of higher order, derived from $\mathcal{R}_1$ and $\mathcal{R}_2$ in the original equation . There are six equilibria on the invariant circle $\{ r = \overline {\varepsilon}= 0\}$; see also Figure \[fig:2\]. We denote by $$p_{\txta,1}^- = (0, -1,0), \quad q_1^{\textnormal{in}} = (0,0,0), \quad p_{\txta,1}^+ = (0,1,0)$$ the resulting equilibria in $K_1$. The points $p_{\txta,1}^+$ and $p_{\txta,1}^-$ have a one-dimensional stable eigenspace along the $y_1$-direction with eigenvalue $-2$ and a two-dimensional centre eigenspace. The point $q_1^{\textnormal{in}}$ is a saddle with eigenvalues $-1,1,2$. Similarly, we denote by $$p_{\txtr,3}^- = (0, -1,0), \quad q_3^{\textnormal{out}} = (0,0,0), \quad p_{\txtr,3}^+ = (0,1,0)$$ the equilibria in $K_3$. The points $p_{\txtr,3}^+$ and $p_{\txtr,3}^-$ have a one-dimensional unstable eigenspace along the $y_3$-direction with eigenvalue $2$ and a two-dimensional centre eigenspace. The point $q_3^{\textnormal{out}}$ is a saddle with eigenvalues $1,-1,-2$.
Note that we have two hyperbolic equilibria and four partially hyperbolic equilibria on $\{ r = \overline {\varepsilon}= 0\}$, as opposed to the complete lack of hyperbolicity in the original problem, see Figure \[fig:2\]. The centre manifolds $M_{\txta,1}^{\pm}$ of $p_{\txta,1}^{\pm}$ in chart $K_1$ and $M_{\txtr,3}^{\pm}$ of $p_{\txtr,3}^{\pm}$ in chart $K_3$ are the unique extensions of slow manifolds $S_{\txta,1}^{\pm}$ and $S_{\txtr,3}^{\pm}$, which correspond to $S_{\txta,{\varepsilon}}^{\pm}$ and $S_{\txtr,{\varepsilon}}^{\pm}$ in the original coordinates. Furthermore, $ M_{\txta,1}^{\pm}$ and $ M_{\txtr,3}^{\pm}$ correspond to locally invariant manifolds $\overline M_{\txta}^{\pm}$ and $\overline M_{\txtr}^{\pm}$ of the blown-up vector field $\overline{X}$ at the equilibria $\overline p_{\txta}^{\pm}$ and $\overline p_{\txtr}^{\pm}$. The idea is to track $M_{\txta}^{-}$ as it moves across the sphere $S^2$ guided by the dynamics in chart $K_2$. For that purpose, it is helpful to introduce the centre manifolds $ N_{\txta,1}^{\pm} = M_{\txta,1}^{\pm} \cap \{r_1 = 0\}$ and $ N_{\txtr,3}^{\pm} = M_{\txtr,3}^{\pm} \cap \{r_3 = 0\}$ and their global counterparts $N_{\txta}^{\pm}$ and $N_{\txtr}^{\pm}$. In that way, the dynamics can be analyzed first for $r=0$, i.e. at the origin now blown up to a sphere where partial hyperbolicity is gained, and then for small $r > 0$ in order to connect the dynamics around the origin. (Figure \[fig:2\]: the blown-up vector field on $S^1 \times \{0\} \times [0,r_0]$ with the equilibria $\overline p_{\txta}^{\pm}$, $\overline p_{\txtr}^{\pm}$, $\overline q^{\textnormal{in}}$, $\overline q^{\textnormal{out}}$ (left), and the branches $S_{\txta}^{\pm}$, $S_{\txtr}^{\pm}$ of the critical manifold for the original vector field (right).) The crucial dynamics happen in the scaling chart $K_2$.
The desingularized equations have the form $$\begin{aligned} \label{K2_dyanmics} x_2' &= x_2^2 - y_2^2 + \lambda + \mathcal{O}(r_2)\,, \nonumber \\ y_2' &= 1 + \mathcal{O}(r_2)\,, \nonumber \\ r_2' &= 0\,.\end{aligned}$$ Firstly, taking $r_2 = 0$, one can derive the typical behaviour of trajectories around the canard case $\lambda =1$. In this case, there is the solution $x_2 = y_2 = t$, denoted by $\gamma_2(t)$. We observe that $$\lim_{t \to \infty} \kappa_{23}(\gamma_2(t)) = p_{\txtr,3}^+\,, \quad \lim_{t \to -\infty} \kappa_{21}(\gamma_2(t)) = p_{\txta,1}^-\,.$$ Hence, for $\lambda =1 $, there is a connection between the equilibrium points $\overline p_\txta^-$ and $\overline p_\txtr^+$ by a trajectory $ \overline \gamma$ which equals $\gamma_2$ in $K_2$. Krupa and Szmolyan [@ks2001/2 Lemma 2.2] argue with the rotational property of the vector field to observe that a connection from $\overline p_\txta^-$ to $\overline p_\txtr^+$ only exists for $\lambda =1$. They conclude [@ks2001/2 Proposition 2.1] that for $\lambda < 1$ there is a unique trajectory connecting $\overline p_\txta^-$ to $\overline p_\txta^+$ corresponding to $\overline N_\txta^-$, and for $\lambda > 1$ there is a unique trajectory connecting $\overline p_\txta^-$ to $\overline q^{\textnormal{out}}$ corresponding to $\overline N_\txta^-$; see Figure \[fig:2\]. Moreover, one can argue that for $\lambda \neq 1$ the $\mathcal{O}(r_2)$ terms do not change the qualitative picture: if $\lambda < 1$, it follows from the previous considerations and perturbation theory that $\overline M_\txta^-$ reaches the vicinity of $\overline M_\txta^+$ and that both centre manifolds are exponentially attracting. Analyzing the dynamics of  on $\overline M_\txta^{\pm}$ shows that the passage times near the equilibria $\overline p_\txta^{\pm}$ are of order $\mathcal{O}(1/{\varepsilon})$.
Hence, one can conclude that the relevant contraction rate is of order $\mathcal{O}(\txte^{-C/{\varepsilon}})$ for some $C > 0$, as stated in Theorem \[transcritical\_classic\]. If $\lambda > 1$, it follows from the previous considerations and perturbation theory that $\overline M_\txta^-$ passes near $\overline q^{\textnormal{out}}$. Contraction towards $\overline M_\txta^-$ works as for $\lambda < 1$. The fact that $h({\varepsilon}) = \mathcal{O}(\sqrt{{\varepsilon}})$ has to be worked out from the asymptotics of $\overline N_\txta^-$ in chart $K_2$ and an analysis of the linearization of $\overline{X}$ at $\overline q^{\textnormal{out}}$ in chart $K_3$. The latter is delicate due to the resonance of the eigenvalues $-1,1$ and $2$. Krupa and Szmolyan discuss this problem in detail for the fold singularity [@ks2011 Section 2.6] but not for the transcritical problem and claim that the statement follows analogously. We are going to give a detailed argument for the discrete-time problem below. Transcritical singularity in discrete time {#discretemain} ========================================== We can now turn to the main part, i.e., we want to analyze the discrete-time problem obtained via an explicit Euler method. For that purpose, we first set the higher order terms in , represented by $\mathcal{R}_1$ and $\mathcal{R}_2$, to zero. 
We introduce the step size $h > 0$ of the Euler method as an additional variable and obtain a map $P: \mathbb{R}^4 \to \mathbb{R}^4$, whose iterations $P^n(z_0)$, for $n \in \mathbb{N}$ and $z_0 \in \mathbb{R}^4$, we are going to analyze close to the origin with $h, {\varepsilon}> 0$: $$\label{map_transcritical} P: \begin{pmatrix} x \\ y \\ {\varepsilon}\\ h \end{pmatrix} \mapsto \begin{pmatrix} \tilde{x} \\ \tilde y \\ \tilde {\varepsilon}\\ \tilde h \end{pmatrix} = \begin{pmatrix} x + h(x^2 - y^2 + \lambda {\varepsilon}) \\ y + {\varepsilon}h \\ {\varepsilon}\\ h \end{pmatrix}.$$ Also in this case there is a normally hyperbolic critical manifold $$S_0 = \{(x,y,{\varepsilon},h)\in\mathbb{R}^4:x^2 = y^2\} \setminus \{0\},$$ which splits into the attracting branches $S_\txta^-, S_\txta^+$ and repelling branches $S_\txtr^-, S_\txtr^+$. It follows from [@HPS77 Theorem 4.1] that for ${\varepsilon}, h > 0$ small enough there are corresponding forward invariant slow manifolds $S_{\txta, {\varepsilon},h}^-, S_{\txta, {\varepsilon},h}^+$ and $S_{\txtr, {\varepsilon},h}^-, S_{\txtr, {\varepsilon},h}^+$. Note that ${\mathrm{D}}P(0)$ has only the quadruple eigenvalue $1$, which means a complete loss of hyperbolicity at the origin, as in the ODE case. Similarly to the problem in continuous time, we fix some $\rho > 0$ and let $J$ be a small open interval around $0$ in $\mathbb{R}$. We define $$\Delta^{\textnormal{in}} = \{(- \rho, y), \, y + \rho \in J \}, \ \Delta_\txta^{\textnormal{out}} = \{(- \rho, y), \, y - \rho \in J \}, \ \Delta_\txte^{\textnormal{out}} = \{( \rho, y), \, y \in J \},$$ where $\varepsilon$ and $h$ are fixed as prescribed by the map $P$; see also Figure \[fig:discrete1\]. In contrast with flows, the intervals $\Delta_\txta^{\textnormal{out}}$, $\Delta_\txte^{\textnormal{out}}$ are not necessarily hit by $P^n(-\rho,y)$ for fixed $y \in \Delta^{\textnormal{in}}$ and some $n > 0$.
Notice that we use the abbreviated notation $P^n(x,y,\varepsilon,h)=P^n(x,y)$ for the map $P$, and that points in $\Delta^{\textnormal{in}}$ are denoted by their $y$-coordinate alone. We define the transition maps from $\Delta^{\textnormal{in}}$ to the vicinity of $\Delta_\txta^{\textnormal{out}}$ and $\Delta_\txte^{\textnormal{out}}$ by $$\begin{aligned} \label{maps} \Pi_\txta(y) = P^{n^*(y)}(-\rho,y)\,, \text{ where } n^*(y) &= \operatorname*{arg\,min}_{n \in \mathbb{N}} \operatorname{dist}( P^{n}(-\rho,y), \Delta_\txta^{\textnormal{out}})\,, \ y \in \Delta^{\textnormal{in}}\,, \\ \Pi_\txte(y) = P^{m^*(y)}(-\rho,y)\,, \text{ where } m^*(y) &= \operatorname*{arg\,min}_{n \in \mathbb{N}} \operatorname{dist}( P^{n}(-\rho,y), \Delta_\txte^{\textnormal{out}})\,, \ y \in \Delta^{\textnormal{in}}.\end{aligned}$$ We can formulate the main result on the transcritical singularity in discrete time (see Figure \[fig:discrete1\] for an illustration): \[transcritical\_discrete\] Fix $\lambda \neq 1$ and $\rho > 0$. There exists ${\varepsilon}_0 > 0$ such that the following assertions hold for all ${\varepsilon}\in [0, {\varepsilon}_0]$ and $h > 0$ with $0 < h \rho^3 < {\varepsilon}$, and any $0 < c < \rho h$. 1. If $ \lambda < 1$, then the section $\Delta^{\textnormal{in}}$ (including the point $\Delta^{\textnormal{in}} \cap S_{\txta, {\varepsilon}, h}^-$) is mapped by $\Pi_\txta$ to a set about $S_{\txta, {\varepsilon}, h}^{+}$ of $y$-width $\mathcal{O}\left( (1- c)^{\frac{C \rho }{h {\varepsilon}}} \right)$, where $C$ is a positive constant, such that every point in $\Pi_\txta \left( \Delta^{\textnormal{in}} \right)$ is $\mathcal{O}(h {\varepsilon})$-close to $\Delta_\txta^{\textnormal{out}}$. 2. If $ \lambda > 1$, then the manifold $S_{\txta, {\varepsilon},h}^-$ passes through $\Delta_\txte^{\textnormal{out}}$ at a point $(\rho, k({\varepsilon}))$ where $k({\varepsilon}) = \mathcal{O}({\varepsilon}^{1/3})$.
The section $\Delta^{\textnormal{in}}$ is mapped by $\Pi_\txte$ to a set about $S_{\txta, {\varepsilon},h}^{-}$ of $y$-width $\mathcal{O}\left( (1- c)^{\frac{C \rho}{ h {\varepsilon}}}\right)$, where $C$ is a positive constant, such that every point in $\Pi_\txte \left( \Delta^{\textnormal{in}} \right)$ is $\mathcal{O}(h ({\varepsilon}+ \rho^2))$-close to $\Delta_\txte^{\textnormal{out}}$. [Figure \[fig:discrete1\]: illustration of $\Pi_\txta$ acting on $y \in \Delta^{\textnormal{in}}$ near $S_{\txta, {\varepsilon},h}^{-}$ for $\lambda < 1$ (left) and of $\Pi_\txte$ for $\lambda > 1$ (right).] Note carefully that Theorem \[transcritical\_discrete\] imposes a precise relation between the three parameters, namely $0 < h \rho^3 < {\varepsilon}$, which means that the choice of step size for the Euler scheme is crucial. Since we only work in the normal form, the parameter $\rho$ does not have to be small and can, for example, be chosen equal to $1$ such that the requirement reads $0 < h < {\varepsilon}$. Our aim is to prove Theorem \[transcritical\_discrete\] using the blow-up method for the problem in four variables and to track individual trajectories inside the slow manifolds. Blow-up transformation {#sec:blow} ---------------------- We conduct the quasihomogeneous blow-up transformation $$x = \bar r \bar x, \quad y = \bar r \bar y, \quad {\varepsilon}= \bar r^2 \bar {\varepsilon}, \quad h = \bar h/\bar r\,,$$ where $(\bar x, \bar y, \bar {\varepsilon}, \bar h, \bar r) \in B := S^2 \times (0, h_0] \times (0, \rho]$ for some $h_0, \rho > 0$. The change of variables in $h$ is chosen such that the map is desingularized in the relevant charts. We exclude $0$ from the domain of $\bar h$ since at $\bar h = 0$ every point is a neutral fixed point.
Due to the transformation $h = \bar h/\bar r$ we have to exclude $0$ from the domain of $\bar r$ as well. The whole transformation can be formalised as a map $\Phi: B \to \mathbb R^4$. The map $\Phi$ induces a map $\overline{P}$ on $B$ by $\Phi \circ \overline{P} \circ \Phi^{-1} = P$. Analogously to the continuous time case, we are using the charts $K_i$, $i=1,2,3$, to describe the dynamics. The chart $K_1$ focuses on the entry of trajectories for any value of $\lambda$ and the exit of trajectories for $\lambda < 1$, and is given by $$\label{K1d} x = - r_1, \quad y = r_1 y_1, \quad {\varepsilon}= r_1^2 {\varepsilon}_1, \quad h = h_1/r_1\,.$$ In the scaling chart $K_2$ the dynamics arbitrarily close to the origin are analyzed. It is given via the mapping $$\label{K2d} x = r_2 x_2, \quad y = r_2 y_2, \quad {\varepsilon}= r_2^2 , \quad h = h_2/r_2\,.$$ The exit chart $K_3$ plays a role for the dynamics emerging from a neighbourhood of the origin for $\lambda > 1$ and is given by $$\label{K3d} x = r_3, \quad y = r_3 y_3, \quad {\varepsilon}= r_3^2 {\varepsilon}_3, \quad h = h_3/r_3\,.$$ There are four relevant changes of coordinates between the charts.
The map $k_{12}: K_1 \to K_2$ is given by $$\label{kappa12d} x_2 = - {\varepsilon}_1^{-1/2}, \quad y_2 ={\varepsilon}_1^{-1/2} y_1, \quad r_2 = {\varepsilon}_1^{1/2} r_1, \quad h_2 = {\varepsilon}_1^{1/2} h_1\,,$$ $k_{21}: K_2 \to K_1$ is given by $$\label{kappa21d} {\varepsilon}_1 = x_2^{-2}, \quad y_1 = -x_2^{-1} y_2, \quad r_1 = - x_2 r_2, \quad h_1 = - x_2 h_2\,,$$ $k_{32}: K_3 \to K_2$ is given by $$\label{kappa32d} x_2 = {\varepsilon}_3^{-1/2}, \quad y_2 ={\varepsilon}_3^{-1/2} y_3, \quad r_2 = {\varepsilon}_3^{1/2} r_3, \quad h_2 = {\varepsilon}_3^{1/2} h_3\,,$$ and $k_{23}: K_2 \to K_3$ is given by $$\label{kappa23d} {\varepsilon}_3 = x_2^{-2}, \quad y_3 = x_2^{-1} y_2, \quad r_3 = x_2 r_2, \quad h_3 = x_2 h_2.$$ Dynamics in the chart $K_1$ {#secK1} --------------------------- We choose $\delta > 0$ small such that $ \left| \lambda \delta \right| \leq 1$; its precise value will be determined later, and it also fixes ${\varepsilon}_0 = \rho^2 \delta$. Furthermore, we assume $\nu := \rho h < \delta$ for fixed $h \in (0, h_0]$. We are interested in trajectories entering $B$ at $\bar r = \rho$, which is best analyzed in the entering chart $K_1$. At $\bar r = \rho$ we have $h_1 = \nu$. We investigate the dynamics within the domain $$D_1 := \{(r_1, y_1, {\varepsilon}_1, h_1) \in \mathbb{R}^4 : r_1 \in [ 0, \rho], {\varepsilon}_1 \in [0, 2 \delta], h_1 \in [ 0, \nu] \}\,.$$ Note that we have to bound $h_1$ from below since for $h_1 = 0$ every point is fixed, and it is helpful to choose a uniform bound to get estimates on the contraction rates. A suitable choice is $h_1 \geq \nu/2$. The proportionality $h= h_1/r_1$ implies that $r_1 \geq \rho/2$. Furthermore, we want to see what happens for ${\varepsilon}_1 = \delta$. Due to the invariant relation ${\varepsilon}_1 r_1 h_1 = {\varepsilon}h$, this implies taking ${\varepsilon}_1 \geq \delta/4$.
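As a quick consistency check, the chart changes and the blow-down to the original variables can be verified numerically. In the following sketch (function names and sample values are our own) we confirm that $k_{12}$ and $k_{21}$ are mutually inverse and that a point of $K_1$ and its image in $K_2$ blow down to the same point $(x, y, {\varepsilon}, h)$:

```python
# Function names are ours; the formulas follow the chart definitions above.

def k12(r1, y1, eps1, h1):                      # K_1 -> K_2, needs eps1 > 0
    return (-eps1**-0.5, eps1**-0.5 * y1, eps1**0.5 * r1, eps1**0.5 * h1)

def k21(x2, y2, r2, h2):                        # K_2 -> K_1, needs x2 < 0
    return (-x2 * r2, -y2 / x2, x2**-2, -x2 * h2)   # (r1, y1, eps1, h1)

def blowdown_K1(r1, y1, eps1, h1):              # chart K_1 -> (x, y, eps, h)
    return (-r1, r1 * y1, r1**2 * eps1, h1 / r1)

def blowdown_K2(x2, y2, r2, h2):                # chart K_2 -> (x, y, eps, h)
    return (r2 * x2, r2 * y2, r2**2, h2 / r2)

p1 = (0.8, -1.1, 0.04, 0.02)                    # sample point in K_1
p2 = k12(*p1)
assert all(abs(a - b) < 1e-12 for a, b in zip(k21(*p2), p1))
assert all(abs(a - b) < 1e-12 for a, b in zip(blowdown_K1(*p1), blowdown_K2(*p2)))
```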
These considerations lead to introducing the subdomain $\hat D_1 \subset D_1$ which is given as $$\hat D_1 := \{(r_1, y_1, {\varepsilon}_1, h_1) \in \mathbb{R}^4 : r_1 \in [ \rho/2, \rho], {\varepsilon}_1 \in [\delta/4, \delta], h_1 \in [ \nu/2, \nu] \}\,.$$ We will later restrict $y_1$ to obtain a small neighbourhood of $\Delta^{\textnormal{in}}$ as entering domain. To derive the blown-up map we calculate $$\begin{aligned} \tilde r_1 &= - \tilde x = - x - h(x^2 - y^2 + \lambda {\varepsilon}) \\ &= r_1 - \frac{h_1}{r_1}(r_1^2 - r_1^2 y_1^2 + \lambda r_1^2 {\varepsilon}_1) \\ &= r_1(1 - h_1(1-y_1^2 + \lambda {\varepsilon}_1))\,.\end{aligned}$$ Similarly, we can derive the maps for the other variables in chart $K_1$ leading to the following dynamics, desingularised by choosing $h = h_1/r_1$: $$\begin{aligned} \label{K1dynamics} \tilde{r}_1 &= r_1(1 - h_1F_1(y_1, {\varepsilon}_1)), \nonumber \\ \tilde y_1 &= (y_1 + {\varepsilon}_1 h_1)(1 - h_1F_1(y_1, {\varepsilon}_1))^{-1},\nonumber \\ \tilde {\varepsilon}_1 &= {\varepsilon}_1 (1 - h_1F_1(y_1, {\varepsilon}_1))^{-2},\nonumber \\ \tilde{h}_1 &= h_1(1 - h_1F_1(y_1, {\varepsilon}_1)),\end{aligned}$$ where $F_1(y_1, {\varepsilon}_1) = 1 - y_1^2 + \lambda {\varepsilon}_1$. Now we have to analyze the dynamics of  in detail. For any $h_1 \in [ 0, \nu]$ system  has the fixed points $$v_{\txta,1}^-(h_1) = (0,-1,0,h_1), \quad v_{\txta,1}^+(h_1) = (0,1,0,h_1).$$ The points $v_{\txta,1}^-$ and $v_{\txta,1}^+$ have a three-dimensional centre eigenspace and a one-dimensional eigenspace spanned by $(0,1,0,0)^\top$ with the eigenvalue $\lambda_1 = 1-2h_1$, which is stable as long as $h_1 < 1$. Note that the set $$\{ w^{\textnormal{in}}(h_1) := (0,0,0,h_1) \, ; \, h_1 \in [ 0, \nu ] \}$$ is an invariant set for system  within $D_1$. 
The points $w^{\textnormal{in}}(h_1)$ have two stable and two unstable eigenvalues $$\lambda_1 = 1 - 2h_1, \quad \lambda_2 = 1 - h_1 , \quad \lambda_3 =(1 - h_1)^{-1}, \quad \lambda_4 =(1 - h_1)^{-2}\,,$$ such that again the stability depends on $h_1$ and is analogous to the time-continuous case, if $h_1 < 1$. The eigenvalues $\lambda_1, \lambda_2$ correspond to the $h_1$- and $r_1$-directions and $\lambda_3, \lambda_4$ to the $y_1$- and ${\varepsilon}_1$-directions. Moreover, we remark that we can re-interpret the stability conditions to obtain the same behaviour as in the continuous time case, such as $$1>|\lambda_1| = |1 - 2h_1|,$$ which is precisely the stability criterion of the Euler method derived from the Dahlquist test equation [@Dahlquist] within each eigenspace of the continuous-time blow-up problem in chart $K_1$. We observe that the two-dimensional planes $$S_{\txta,1}^{\pm} = \{(r_1, y_1, {\varepsilon}_1, h_1) \in D_1 \,:\, y_1 = \pm 1, \ {\varepsilon}_1 = 0 \}$$ are invariant manifolds in $D_1$ consisting only of fixed points, attracting in the $y_1$-direction and neutral in the other directions. One can extend these manifolds $S_{\txta,1}^{\pm}$ to centre-stable invariant manifolds $M_{\txta,1}^{\pm}$ (see Figure \[fig:K1\]), which are given in $D_1$ by graphs $y_1 = l_{\pm}({\varepsilon}_1, h_1)$ for mappings $l_\pm$. We can derive $l_{\pm}$ from the discrete invariance equation $$\label{Invariance_equ} l_{\pm} (\tilde {\varepsilon}_1, \tilde h_1)= \frac{l_{\pm} ({\varepsilon}_1, h_1) + {\varepsilon}_1 h_1}{1 - h_1F_1( l_{\pm} ({\varepsilon}_1, h_1), {\varepsilon}_1)}\,.$$ Solving this equation allows us to make the following statement. \[Invariance\_Prop\] Equation \[Invariance\_equ\] has the solutions $$\begin{aligned} l_{-} ({\varepsilon}_1, h_1) &= -1 + \frac{1 - \lambda}{2} {\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2 h_1)\,, \label{lminus}\\ l_{+} ({\varepsilon}_1, h_1) &= 1 + \frac{1 + \lambda}{2} {\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2 h_1)\,.
\label{lplus}\end{aligned}$$ These solutions characterize $M_{\txta,1}^{-}$ and $M_{\txta,1}^{+}$, respectively. Furthermore, ${\varepsilon}_1$ is increasing on $M_{\txta,1}^{-}$ and decreasing on $M_{\txta,1}^{+}$, whereas $h_1, r_1$ are decreasing on $M_{\txta,1}^{-}$ and increasing on $M_{\txta,1}^{+}$. It is easy to derive that for $l_{-} ({\varepsilon}_1, h_1)$ given by \[lminus\], we have $$\label{F_manifold} F_1(l_{-} ({\varepsilon}_1, h_1), {\varepsilon}_1) = {\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2).$$ Hence, we observe that $$\tilde {\varepsilon}_1 = {\varepsilon}_1 (1 - h_1F_1(l_{-} ({\varepsilon}_1, h_1), {\varepsilon}_1))^{-2} = {\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2 h_1)$$ and $$\tilde h_1 = h_1 + \mathcal{O}(h_1^2 {\varepsilon}_1)\,.$$ Therefore, we deduce that $$\begin{aligned} \frac{l_{-} ({\varepsilon}_1, h_1) + {\varepsilon}_1 h_1}{1 - h_1F_1( l_{-} ({\varepsilon}_1, h_1), {\varepsilon}_1)} &= (l_{-} ({\varepsilon}_1, h_1) + {\varepsilon}_1 h_1)(1 + h_1 {\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2 h_1))\\ &= l_{-} ({\varepsilon}_1, h_1) - h_1 {\varepsilon}_1 + \frac{1 - \lambda}{2} {\varepsilon}_1^2 h_1 + {\varepsilon}_1 h_1 + \mathcal{O}({\varepsilon}_1^2 h_1)\\ &= l_{-} ({\varepsilon}_1, h_1) + \mathcal{O}({\varepsilon}_1^2 h_1) = -1 + \frac{1 - \lambda}{2} {\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2 h_1) = l_{-} (\tilde {\varepsilon}_1, \tilde h_1),\end{aligned}$$ which shows the claim for $l_{-} ({\varepsilon}_1, h_1)$. Since we can assume that $h_1 {\varepsilon}_1 < 1$, the dynamics on $M_{\txta,1}^{-}$ follow as stated. Similarly we can derive that for $l_{+} ({\varepsilon}_1, h_1)$ given by \[lplus\], we have $$F_1(l_{+} ({\varepsilon}_1, h_1), {\varepsilon}_1) = -{\varepsilon}_1 + \mathcal{O}({\varepsilon}_1^2).$$ The statements then follow analogously to before.
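Proposition \[Invariance\_Prop\] can also be probed numerically. The sketch below (our own code; the tolerance constants are ad hoc) inserts the quadratic truncation of $l_-$ into the invariance equation and checks that the residual is of the stated order $\mathcal{O}({\varepsilon}_1^2 h_1)$:

```python
# Residual of the discrete invariance equation for the truncation of l_-;
# the values of lam, eps1, h1 are arbitrary test values.

def residual(eps1, h1, lam):
    l = lambda e: -1 + (1 - lam) / 2 * e       # truncation of l_-(eps1, h1)
    F1 = 1 - l(eps1)**2 + lam * eps1           # F_1 along the graph
    q = 1 - h1 * F1
    eps1_new = eps1 / q**2                     # updated eps_1
    # RHS of the invariance equation minus l_- at the updated eps_1:
    return (l(eps1) + eps1 * h1) / q - l(eps1_new)

lam, h1 = 0.5, 1e-3
res = residual(1e-2, h1, lam)
assert abs(res) < 10 * (1e-2)**2 * h1           # residual is O(eps1^2 h1)
assert abs(residual(5e-3, h1, lam)) < abs(res)  # and shrinks with eps1
```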
For all trajectories, as explained above, we have to consider the entry region $$\Sigma_{1,-}^{\textnormal{in}} := \{(r_1, y_1, {\varepsilon}_1, h_1) \in D_1 \,:\, r_1 = \rho, \ h_1 = \nu, {\varepsilon}_1 = \delta/4 \}\,.$$ Before exiting $\hat D_1$ for the first time, the dynamics must reach the set $$\Sigma_{1,-}^{\textnormal{out}} = \left\{(r_1, y_1, {\varepsilon}_1, h_1) \in \mathbb{R}^4 \,:\, \frac{\rho}{2}\leq r_1 \leq \frac{\rho}{2}(1+ \nu), \ \frac{\nu}{2}\leq h_1 \leq \frac{\nu}{2}(1+\nu), \ \delta(1 -2 \nu) \leq {\varepsilon}_1 \leq \delta\right\}\,,$$ since $F_1(y_1, {\varepsilon}_1) \leq2$. Next, we want to find a set $R \subset \Sigma_{1,-}^{\textnormal{in}} $ such that $M_{\txta,1}^- \cap \Sigma_{1,-}^{\textnormal{in}} \subset R$ and there is a well-defined map $\Pi_{1,-}: R \to \Sigma_{1,-}^{\textnormal{out}}$ that maps points in $R$ along a trajectory of  to a first entry point in $\Sigma_{1,-}^{\textnormal{out}}$. By Proposition \[Invariance\_Prop\], this is feasible for $R$ small enough such that trajectories through $R$ stay sufficiently close to $M_{\txta,1}^-$ in the first part of the passage in $K_1$. We choose $R$ to be the interval $$\label{R1} R_1 := \left\{(r_1, y_1, {\varepsilon}_1, h_1) \in \Sigma_{1,-}^{\textnormal{in}} \,;\, -1- \beta_1(\lambda) \leq y_1 \leq -1 + \hat \beta_1 \right\}$$ with, for example, $$\label{betas} \hat \beta_1 := \left| \lambda - 1\right| \delta\,, \quad \beta_1(\lambda) := \begin{cases} \frac{\lambda}{16} \delta &\text{ if } 0 < \lambda < 1 \\ \frac{2 \lambda -1}{16} \delta &\text{ otherwise.} \end{cases}$$ Note that with these choices we have $M_{\txta,1}^- \cap \Sigma_{1,-}^{\textnormal{in}} \subset R_1$ for $\nu, \delta$ sufficiently small. Furthermore, these choices guarantee that the trajectories stay close to $M_{\txta,1}^-$ such that $F_1(y_1, {\varepsilon}_1)$ is positive, and, hence, we can formulate the following Proposition (see Figure \[fig:K1\]). 
\[Justification\] Trajectories in $\hat{D}_1$ starting in $R_1$ are increasing in ${\varepsilon}_1$ and decreasing in $h_1, r_1$. Hence, the transition map $\Pi_{1,-}: R_1 \to \Sigma_{1,-}^{\textnormal{out}}$ is well-defined. It is enough to show that in this case $F_1(y_1, {\varepsilon}_1)$ is positive. If $ \lambda \geq 1$ or $\lambda \leq 0$, we observe that $ \beta_1(\lambda) = \frac{2 \lambda -1}{16} \delta$ implies $F_1 \geq \frac{\delta - \mathcal{O}(\delta^2)}{8}$. If $ 0 < \lambda < 1$, we have $$F_1(y_1, {\varepsilon}_1) \geq 1 - \left(-1 - \frac{\lambda}{16} \delta\right)^2 + \lambda {\varepsilon}_1 \geq \frac{\lambda}{8} \delta - \left( \frac{\lambda}{16} \right)^2 \delta^2 \,.$$ Together with the considerations above, we can conclude the claim. [Figure \[fig:K1\]: the entry sets $R_1$, $R_2$, the sections $\Sigma_{1,-}^{\textnormal{out}}$, $\Sigma_{1,+}^{\textnormal{out}}$ and the manifolds $M_{\txta,1}^{-}$, $M_{\txta,1}^{+}$ in the $(r_1, y_1, {\varepsilon}_1)$-coordinates of chart $K_1$.] We can make the following statement about the transition time from $R_1$ to $\Sigma_{1,-}^{\textnormal{out}}$ which will be crucial for estimates on the contraction close to $M_{\txta,1}^-$. Define $\gamma := 2 \left| \lambda -1\right| + \left| \lambda \right|$ and assume without loss of generality that $ \nu < \frac{1}{8}$.
\[transitiontime\_K1\] The transition time $N$ of system \[K1dynamics\] from a point $p = (\rho, y_1, \delta/4, \nu)$ in $R_1$ to the point $\Pi_{1,-}(p)$ in $\Sigma_{1,-}^{\textnormal{out}}$ satisfies $$N \geq \frac{1}{17 \gamma } \frac{1}{\nu \delta} \,.$$ Let $({\varepsilon}_1(n))_{n\in \mathbb{N}}$ denote the trajectory starting at ${\varepsilon}_1(0) = \delta/4 $ with $${\varepsilon}_1(n+1) = {\varepsilon}_1(n) (1 - h_1(n)F_1(y_1(n), {\varepsilon}_1(n)))^{-2}.$$ We can show by induction that for all $n \in \mathbb{N}$ such that ${\varepsilon}_1(n) \leq \delta$ we have $$\label{claim1} {\varepsilon}_1(n) \leq \frac{\delta}{4} + n \left( 2\gamma\nu \delta^2 + f(\nu, \delta) \right)\,,$$ where $f(\nu, \delta) = \mathcal{O}(\nu \delta^3)$ does not depend on $n$. In more detail, we observe that $$\begin{aligned} h_1(n)F_1(y_1(n), {\varepsilon}_1(n)) &\leq h_1(n) \left[ 1 - (-1 + \left| \lambda - 1 \right| \delta)^2 + \lambda {\varepsilon}_1(n) \right] \\ & \leq \nu \left[ 2\left| \lambda - 1 \right| \delta + \left| \lambda \right| \delta - ( \lambda - 1 )^2 \delta^2 \right] \\ & = \nu \gamma \delta - \nu ( \lambda - 1 )^2 \delta^2\,.\end{aligned}$$ Hence, we conclude with a first order Taylor approximation that for some $g(\nu, \delta) = \mathcal{O}(\nu \delta^2)$ we have $$\begin{aligned} {\varepsilon}_1(1) &\leq \frac{\delta}{4} \left( 1 + \nu \gamma \delta + g(\nu, \delta) \right)^2 = \frac{\delta}{4} + \frac{\gamma}{2}\nu \delta^2 + \frac{\delta}{4} \left(2 g(\nu, \delta) + g(\nu, \delta)^2 + \nu^2 \gamma^2 \delta^2 + 2 g(\nu,\delta) \nu \gamma \delta \right) \\ & \leq \frac{\delta}{4} + 2\gamma\nu \delta^2 + f(\nu, \delta) \,,\end{aligned}$$ where $f(\nu, \delta) = \delta (2 g(\nu, \delta) + \mathcal{O}(\nu^2 \delta^2) )= \mathcal{O}(\nu \delta^3)$.
Similarly, the step from $n$ to $n +1$ can be written as $$\begin{aligned} {\varepsilon}_1(n+1) &\leq {\varepsilon}_1(n) \left( 1 + \nu \gamma \delta + g(\nu, \delta) \right)^2 \leq {\varepsilon}_1(n) + 2 \gamma \nu \delta {\varepsilon}_1(n) + f(\nu,\delta)\,, \\ &\leq \frac{\delta}{4} + n \left( 2\gamma\nu \delta^2 + f(\nu, \delta) \right) + 2 \gamma \nu \delta^2 + f(\nu,\delta)\,, \\ &= \frac{\delta}{4} + (n+1) \left( 2 \gamma \nu \delta^2 + f(\nu,\delta) \right)\,.\end{aligned}$$ This shows \[claim1\] for all $n \in \mathbb{N}$ such that ${\varepsilon}_1(n) \leq \delta$. We can rewrite the right-hand side of \[claim1\], using a geometric series, as $$\frac{\delta}{4} + n \left( 2\gamma\nu \delta^2 + f(\nu, \delta) \right) = \frac{\frac{\delta}{4}}{1 - n\left( 8 \gamma \nu \delta + \tilde{f}(\nu, \delta) \right)}\,,$$ where $ \tilde{f}(\nu, \delta) = \mathcal{O}\left(\nu \delta^2\right)$. By definition of the transition time $N$ we have $ {\varepsilon}_1(N) \geq \delta(1 - 2 \nu)$. Hence, we deduce that $$\delta(1 - 2 \nu) \leq \frac{\frac{\delta}{4}}{1 - N ( 8 \gamma \nu \delta + \tilde{f}(\nu, \delta) )}\,,$$ and therefore $$\delta \left(1 - 2 \nu- \frac{1}{4} \right) \leq N \delta(1 - 2 \nu)( 8 \gamma \nu \delta + \tilde{f}(\nu, \delta) ) \,.$$ Finally, for $\delta$ sufficiently small and due to $\nu < \frac{1}{8}$, this leads to $$N \geq \frac{\frac{\delta}{2}}{\delta(1 - 2 \nu)( 8 \gamma \nu \delta + \tilde{f}(\nu, \delta) )} \geq \frac{1}{17 \gamma \nu \delta } \,,$$ which concludes the proof. In addition to the first passage moving up the sphere, we already anticipate that for $\lambda<1$ trajectories eventually re-enter $K_1$ from $K_2$.
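The lower bound of Proposition \[transitiontime\_K1\] can be checked by direct iteration. In the following sketch (our own parameter choices, arranged so that $h\rho^3 < {\varepsilon}$ and $\nu < 1/8$) we iterate the chart-$K_1$ dynamics from a point of $R_1$, compare the observed transition time with the bound, and confirm that $h_1$ and $r_1$ stay in the ranges defining $\hat D_1$:

```python
# Our own parameter choices: lam != 1, nu = rho * h < delta.
lam, rho, delta, h = 0.5, 1.0, 0.01, 0.001
nu = rho * h
gamma = 2 * abs(lam - 1) + abs(lam)

# Start at a point of R_1 on Sigma_{1,-}^in and iterate the K_1 dynamics
# until eps_1 reaches the exit range [delta(1 - 2 nu), delta].
r1, y1, eps1, h1 = rho, -1.0, delta / 4, nu
N = 0
while eps1 < delta * (1 - 2 * nu):
    F1 = 1 - y1**2 + lam * eps1
    q = 1 - h1 * F1
    r1, y1, eps1, h1 = r1 * q, (y1 + eps1 * h1) / q, eps1 / q**2, h1 * q
    N += 1
    assert N < 10**7            # safety guard against non-termination

assert N >= 1 / (17 * gamma * nu * delta)           # bound of the proposition
assert nu / 2 <= h1 <= nu and rho / 2 <= r1 <= rho  # stays inside hat{D}_1
assert abs(y1 - (-1 + (1 - lam) / 2 * eps1)) < 1e-3  # y_1 locks onto l_-
```

For these values the observed $N$ exceeds the bound by a large margin, which is consistent with the bound being a (rough) worst-case estimate.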
With more precision to be added after the analysis in chart $K_2$, we define $$\begin{aligned} \Sigma_{1,+}^{\textnormal{in}} &:= \{(r_1, y_1, {\varepsilon}_1, h_1) \in D_1 \,:\, \left| {\varepsilon}_1 - \delta \right| \text{ small}, \left| r_1 - \frac{\rho}{2} \right| \text{ small}, \left| h_1 - \frac{\nu}{2} \right| \text{ small} \}\,, \\ \Sigma_{1,+}^{\textnormal{out}} &:= \{(r_1, y_1, {\varepsilon}_1, h_1) \in D_1 \,:\, r_1 = \rho, \ h_1 = \nu, {\varepsilon}_1 = \delta/4, \ y_1 > 0 \}\,,\end{aligned}$$ and denote by $\Pi_{1,+}: \Sigma_{1,+}^{\textnormal{in}} \to \Sigma_{1,+}^{\textnormal{out}}$ the map that sends points in $\Sigma_{1,+}^{\textnormal{in}}$ along a trajectory of  to the point of this trajectory, which is closest to $\Sigma_{1,+}^{\textnormal{out}}$. Note that $\Pi_{1,+}$ is well-defined sufficiently close to $M_{\txta,1}^+$ according to Proposition \[Invariance\_Prop\]. In more detail, for $\beta_1^+$ and $\hat \beta_1^+$ to be determined more precisely in the analysis of chart $K_2$, there is $$\label{R2} R_2 := \left\{(r_1, y_1, {\varepsilon}_1, h_1) \in \Sigma_{1,+}^{\textnormal{in}} \,;\, 1- \hat \beta_1^+\leq y_1 \leq 1 + \beta_1^+ \right\}$$ such that $M_{\txta,1}^+ \cap \Sigma_{1,+}^{\textnormal{in}} \subset R_2$ and $\Pi_{1,+}$ is well-defined on $R_2$; see also Figure \[fig:K1\]. Of course, a completely analogous result for the passage time as stated in Proposition \[transitiontime\_K1\] also holds for the map $\Pi_{1,+}$. We can use the lower bounds on the transition times to find the following lower bounds for the contraction rates of $\Pi_{1,-}|R_1$ and $\Pi_{1,+}|R_2$. \[contraction\_K1\] There are constants $K_1, K_2 > 0$ such that for any $c$ with $0 < c < \nu = \rho h $ 1. the map $\Pi_{1,-}|R_1$ is a contraction (in the $y_1$-direction) with a rate stronger than $$K_1 (1- c)^{\frac{1}{17 \gamma } \frac{1}{\nu \delta} }.$$ 2. 
the map $\Pi_{1,+}|R_2$ is a contraction (in the $y_1$-direction) with a rate stronger than $$K_2 (1- c)^{\frac{1}{17 \gamma } \frac{1}{\nu \delta} }.$$ The statement about $\Pi_{1,-}$ follows from Proposition \[transitiontime\_K1\] and the fact that the stable eigenvalue at the fixed points in $S_{\txta,1}^- \subset M_{\txta,1}^-$ is given by $1 - 2h_1 \leq 1 - \nu$, in combination with standard perturbation arguments. The estimate for $\Pi_{1,+}$ uses the symmetry of system \[K1dynamics\] with respect to the dynamics around $M_{\txta,1}^-$ and $M_{\txta,1}^+$: the transition time from $\Sigma_{1,+}^{\textnormal{in}}$ to $\Sigma_{1,+}^{\textnormal{out}}$ is of the same order as the transition time from $\Sigma_{1,-}^{\textnormal{in}}$ to $\Sigma_{1,-}^{\textnormal{out}}$, and the eigenvalues at $S_{\txta,1}^+$ are the same as at $S_{\txta,1}^-$. Dynamics in the scaling chart $K_2$ {#secK2} ----------------------------------- We turn to analyzing the dynamics in the scaling chart $K_2$ in order to understand the behaviour of trajectories past the origin. The chart $K_2$ covers the upper part of the sphere, where we can desingularize with respect to ${\varepsilon}$. Recall from \[kappa12d\] that the change of coordinates from $K_1$ to $K_2$ is given by $k_{12}: K_1 \to K_2$ with $$x_2 = - {\varepsilon}_1^{-1/2}, \quad y_2 ={\varepsilon}_1^{-1/2} y_1, \quad r_2 = {\varepsilon}_1^{1/2} r_1, \quad h_2 = {\varepsilon}_1^{1/2} h_1\,.$$ It becomes clear from this transformation that the set of interest can be restricted to $$D_2 := \bigg\{(x_2,y_2,r_2,h_2) \in \mathbb{R}^4 \, :\, \delta^{1/2}\frac{\rho}{2} \leq r_2 \leq \delta^{1/2}\rho, \ \delta^{1/2}\frac{\nu}{2} \leq h_2 \leq \delta^{1/2}\nu \bigg\}\,.$$ First of all, we need to make sure that $k_{12} \left(\Pi_{1,-} \left( R_1\right) \right) \subset \Sigma_2^{\textnormal{in}}$ for the entering set $\Sigma_2^{\textnormal{in}}$.
From the analysis in $K_1$ we derive that this is satisfied for $$\begin{aligned} \label{Sigma_2_in} \Sigma_2^{\textnormal{in}} := \bigg\{(x_2, y_2,r_2, h_2) \in D_2 \, :\, & -(\delta(1 - 2 \nu))^{-1/2} \leq x_2 \leq -\delta^{-1/2}, \nonumber \\ &\delta^{-1/2}(-1 - \beta_2(\lambda)) \leq y_2 \leq \delta^{-1/2}(-1 + \hat \beta_2) \bigg\}\,,\end{aligned}$$ where $$\label{betas2} \hat \beta_2 := \left| \lambda - 1\right| \delta\,, \quad \beta_2(\lambda):= \begin{cases} \frac{\lambda}{8} \delta &\text{ if } 0 < \lambda < 1 \\ \frac{2 \lambda -1}{4} \delta &\text{ otherwise.} \end{cases}$$ We derive the desingularized equations and thereby justify the choice of blow-up in $h$. Observe that $\tilde r_2 = r_2$ since $\tilde {\varepsilon}= {\varepsilon}$ and $ {\varepsilon}= r_2^2$. Similarly, we have $\tilde h_2 = h_2$. Furthermore observe that $$\tilde y_2 = \frac{\tilde y}{\tilde r_2} = \frac{y + {\varepsilon}h}{\tilde r_2} = \frac{r_2 y_2 + r_2^2 h_2 r_2^{-1}}{ r_2} = y_2 + h_2.$$ In addition to that, we obtain $$\tilde x_2 = \frac{\tilde x}{\tilde r_2} = \frac{r_2 x_2 + h_2 r_2^{-1}(r_2^2x_2^2 - r_2^2 y_2^2 + \lambda r_2^2)}{ r_2} = x_2 + h_2(x_2^2 - y_2^2 + \lambda).$$ Hence, summarising, the dynamics in chart $K_2$ are given by iterating the map $$\begin{aligned} \label{K2_discrete} \tilde x_2 &= x_2 + h_2(x_2^2 - y_2^2 + \lambda)\,, \nonumber \\ \tilde y_2 &= y_2 + h_2\,, \nonumber \\ \tilde r_2 &= r_2\,, \nonumber \\ \tilde h_2 &= h_2\,.\end{aligned}$$ The transition areas from $K_2$ to another chart depend on $\lambda$. For $\lambda < 1$, we will return to chart $K_1$.
Recall from \[kappa21d\] that the change of coordinates $k_{21}: K_2 \to K_1$ is given by $${\varepsilon}_1 = x_2^{-2}, \quad y_1 = -x_2^{-1} y_2, \quad r_1 = - x_2 r_2, \quad h_1 = - x_2 h_2\,.$$ We need to choose $\Sigma_{2,\txta }^{\textnormal{out}}$ and the cuboid $R_2$ in $\Sigma_{1,+}^{\textnormal{in}}$ in chart $K_1$ (see \[R2\]) such that, firstly, trajectories starting in $\Sigma_2^{\textnormal{in}}$ reach $\Sigma_{2,\txta }^{\textnormal{out}}$ and, secondly, $k_{21}(\Sigma_{2,\txta }^{\textnormal{out}}) \subset R_2 $. It turns out (see proof of Proposition \[propK2\] for the first criterion) that a suitable choice is given by $$\begin{aligned} \label{Sigma_2a_out} \Sigma_{2,\txta }^{\textnormal{out}} := \bigg\{(x_2, y_2,r_2, h_2) \in D_2 \, :\,& - \delta^{-1/2} - \frac{h_2}{2} \leq x_2 \leq - \delta^{-1/2} + \frac{h_2}{2}, \nonumber \\ & \delta^{-1/2} ( 1- \hat \beta_2^+)\leq y_2 \leq \delta^{-1/2}(1 + \beta_2^+) \bigg\}\,,\end{aligned}$$ where we define $\beta_2^+ : = \frac{\left|\lambda + 1\right|}{2} \delta$ and $\hat \beta_2^+ : = \frac{\left|\lambda\right| + 1}{2} \delta$; see also Figure \[fig:transcrit\]. The second criterion is then satisfied by adapting $\Sigma_{1,+ }^{\textnormal{in}}$ in the $({\varepsilon}_1, r_1, h_1)$-components accordingly via $k_{21}$ and choosing, for example, $\beta_1^+ : = \frac{3\left|\lambda + 1\right|}{4} \delta$ and $\hat \beta_1^+ : = \frac{3(\left|\lambda\right| + 1)}{4} \delta$ in the definition of $R_2$. For $\lambda > 1$, we set the area of exit as $$\begin{aligned} \label{Sigma_2e_out} \Sigma_{2,\txte }^{\textnormal{out}} := \{(x_2, y_2,r_2, h_2) \in D_2 \, :\,& \delta^{-1/2} \leq x_2 \leq \delta^{-1/2} + h_2(\lambda + \delta^{-1})\,, \nonumber \\ & 0 \leq y_2 < \Omega(\lambda)\delta^{-1/6} \}\,,\end{aligned}$$ where $\Omega(\lambda) > 0$ is a constant for fixed $\lambda$; see also Figure \[fig:transcrit\].
In the situation of continuous time, the $y_2$-component in $\Sigma_{2,\txte }^{\textnormal{out}}$ can be bounded by a constant independent of $\delta$, by using the Riccati equation [@ks2011 Proposition 2.3]. As we do not have such a tool in the case of maps, we give an estimate from a first order expansion in $h$ of the iterated maps (see proof of Proposition \[propK2\]). Let us denote the sequence induced by iterating \[K2\_discrete\] for some initial condition $(x_0,y_0)$ as $(x_2(n), y_2(n))$ for $n \in \mathbb N$, and call such a sequence a trajectory. As for continuous time, the special case is the canard problem, i.e., when $\lambda =1$. In this case, for any $c=x_0 = y_0 \in \mathbb{R}$ the system of maps has the obvious solution $\gamma_2^c(n)$ with $x_2(n) = y_2(n) = c + n h_2$. For $\lambda \neq 1$ we can make the following direct observations about the dynamics of the maps, where we recall that $\nu := \rho h < \delta$, in particular $\nu < \frac{1}{8}$. [Figure \[fig:transcrit\]: trajectories in chart $K_2$ entering through $\Sigma_{2}^{\textnormal{in}}$ near $x_2 = -\delta^{-1/2}$, for $\lambda < 0$ and $0 < \lambda < 1$ (exit through $\Sigma_{2,\txta }^{\textnormal{out}}$) and for $\lambda > 1$ (exit through $\Sigma_{2,\txte }^{\textnormal{out}}$ near $x_2 = \delta^{-1/2}$).] \[propK2\] The following results hold: 1. If $\lambda < 1$, every trajectory starting in $\Sigma_2^{\textnormal{in}}$ passes through $\Sigma_{2,\txta }^{\textnormal{out}}$. 2. If $\lambda > 1$, every trajectory starting in $\Sigma_2^{\textnormal{in}}$ passes through $\Sigma_{2,\txte }^{\textnormal{out}}$. The proof of this proposition is based on a couple of lemmas, which are shown in the following.
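Before turning to the lemmas, the dichotomy of Proposition \[propK2\] and the canard solution can be observed numerically. In the sketch below (our own illustrative values, not tied to the precise $\delta$, $\nu$ regime of the proofs) we iterate the planar part of the chart-$K_2$ map:

```python
def step_K2(x2, y2, h2, lam):
    # planar part of the desingularized K_2 map; r_2 and h_2 are
    # unchanged by the map, so we track only (x_2, y_2)
    return x2 + h2 * (x2**2 - y2**2 + lam), y2 + h2

# lambda = 1: the diagonal x_2 = y_2 is invariant (the canard solution)
x2 = y2 = -3.0
for _ in range(1000):
    x2, y2 = step_K2(x2, y2, 0.01, 1.0)
assert x2 == y2

# lambda < 1 versus lambda > 1: the trajectory either climbs along the
# branch x_2 = -y_2 (exit upward) or escapes through the right
for lam, exits_right in ((0.5, False), (1.5, True)):
    x2, y2 = -3.02, -3.0            # x_2 < y_2 < 0, roughly as in Sigma_2^in
    while abs(x2) < 4.0 and y2 < 4.0:
        x2, y2 = step_K2(x2, y2, 0.01, lam)
    assert (x2 >= 4.0) == exits_right
```

For $\lambda = 0.5$ the loop terminates with $y_2$ large and $x_2 \approx -y_2$, for $\lambda = 1.5$ with $x_2$ large and $y_2$ still moderate, matching the two exit sets.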
We divide the diagonals $\{x=y\}$ and $\{x = -y\}$ into the subsets $$S_{\txta,2}^-:=\{(x,y) \in \mathbb{R}^2 \,:\, y \leq 0, \ x = y\}\,, \quad S_{\txta,2}^{+} := \{(x,y) \in \mathbb{R}^2 \,:\, y \geq 0, \ x = -y\}$$ and $$S_{\txtr,2}^-:=\{(x,y) \in \mathbb{R}^2 \,:\, y \leq 0, \ x = -y\}\,, \quad S_{\txtr,2}^{+} := \{(x,y) \in \mathbb{R}^2 \,:\, y \geq 0, \ x = y\}\,.$$ Furthermore, we write as a shorthand $x_{2,n} = x_2(n), y_{2,n} = y_2(n)$ for $n \in \mathbb{N}$ and investigate the behaviour of the trajectories $$\begin{aligned} y_{2,n+1} &= y_{2,n} + h_2\,, \\ x_{2,n+1} &= x_{2,n} + h_2 \lambda + h_2 (x_{2,n}^2 - y_{2,n}^2)\end{aligned}$$ for different values of $\lambda$. In fact, even for $\lambda<1$, there are subtle differences in the paths of trajectories (see Figure \[fig:transcrit\]). \[lambda01\] The following cases occur for $\lambda<1$: - Let $ 0 < \lambda < 1$ and $\delta$ be sufficiently small. Then any trajectory starting in $\Sigma_2^{\textnormal{in}}$ is strictly increasing in $x$ as long as $x_{2,n}, y_{2,n} < 0$, and will be above the diagonal $\{x = y\}$ from a certain time on and stay there forever afterwards. In particular, if $(x_{2,0}, y_{2,0})$ in $\Sigma_2^{\textnormal{in}}$ with $y_{2,0} < x_{2,0} < 0$, there exists $n^* \in \mathbb{N}$ such that $x_{2,n^*} \leq y_{2,n^*} < 0$ and $$\frac{n^* h_2}{n^*h_2 + \frac{\delta^{1/2}}{8}} \geq \lambda\,.$$ - If $\lambda \leq 0$, any trajectory starting in $\Sigma_2^{\textnormal{in}}$ is strictly increasing in $x$ as long as $x_{2,n}, y_{2,n} < 0$ and stays above the diagonal $\{x = y\}$ for all times. We start with the case $ 0 < \lambda < 1$. Consider an initial condition $(x_{2,0}, y_{2,0})$ in $\Sigma_2^{\textnormal{in}}$ with $y_{2,0} < x_{2,0} < 0$, i.e., below $S_{\txta,2}^-$. We obviously have $x_{2,1} < x_{2,0} + \lambda h_2$.
Furthermore, observe that $$\begin{aligned} x_{2,0}^2 - y_{2,0}^2 + \lambda &\geq \delta^{-1} - ( \delta^{-1/2}(-1 - \beta_2(\lambda)))^2 + \lambda \\ &= \delta^{-1} - \delta^{-1} - \frac{1}{4} \lambda - \frac{1}{64} \lambda^2 \delta + \lambda \\ & = \frac{3}{4} \lambda - \frac{1}{64} \lambda^2 \delta \geq \frac{1}{2} \lambda.\end{aligned}$$ Hence, $x_{2,1} \geq x_{2,0} + \frac{\lambda}{2} h_2$. Either we already have $ x_{2,1} \leq y_{2,1} < 0$. If not, we can infer from the facts $ 0 > y_{2,1} > y_{2,0}$, $ 0 > x_{2,1} > x_{2,0}$ and $ x_{2,1} - y_{2,1} < x_{2,0} - y_{2,0}$ that $y_{2,1}^2 - x_{2,1}^2 < y_{2,0}^2 - x_{2,0}^2$. Hence, we have $x_{2,0} + \lambda h_2 < x_{2,2} < x_{2,0} + 2 \lambda h_2$ and obviously $ y_{2,2} = y_{2,0} + 2 h_2$. Therefore, we see inductively that for $0 > x_{2,n} > y_{2,n}$ both sequences are increasing and we either already have $ x_{2,n+1} \leq y_{2,n+1} < 0$ or $$\begin{aligned} x_{2,n+1} - y_{2,n+1} &< x_{2,0} - y_{2,0} -(n+1)(1- \lambda)h_2 < \frac{\lambda}{8} \delta^{1/2} -(n+1)(1- \lambda)h_2 \\ &= \lambda \left( (n+1) h_2 + \frac{\delta^{1/2}}{8} \right) - (n+1) h_2.\end{aligned}$$ Thus, we can conclude that $x_{2,n^*} \leq y_{2,n^*} < 0$ for some $n^*$ such that $\lambda \leq \frac{n^* h_2}{n^*h_2 + \frac{\delta^{1/2}}{8}}$, if $\delta$ is chosen small enough such that $\frac{\delta^{-1/2}}{\delta^{-1/2} + \frac{\delta^{1/2}}{8}} > \lambda$. Namely, if there was $\hat n \in \mathbb{N}$ such that $0 > x_{2,n} > y_{2,n}$ for all $n < \hat n$ and $x_{2,\hat n} > 0$, we would have, for $h_2, \delta$ small enough, that $\frac{\hat n -1}{\hat n} \geq \lambda$ and would obtain $$\frac{(\hat n-1) h_2}{ (\hat n -1) h_2 + \frac{\delta^{1/2}}{8}} \geq \frac{\lambda \hat n h_2}{ \lambda \hat n h_2 + \frac{\delta^{1/2}}{8}} > \frac{\delta^{-1/2}}{\delta^{-1/2} + \frac{\delta^{1/2}}{8}} > \lambda\,,$$ which is a contradiction, since this would imply $x_{2, \hat n -1} < y_{2,\hat n -1} $ with the above. 
Assume now that, at time $n \in \mathbb{N}$, the trajectory is above the diagonal $\{x = y\}$, i.e., $x_{2,n} \leq y_{2,n}$. In particular, this covers the case when $x_{2,n} \leq y_{2,n} < 0$, when the trajectory lies above $S_{\txta,2}^-$, which is relevant for the initial data. We have $y_{2,n+1} = y_{2,n} + h_2$ and $x_{2,n+1} \geq x_{2,n} + h_2 \lambda$. If $x_{2,n} = y_{2,n}$, then obviously $ x_{2,n+1} < y_{2,n+1}$. If $x_{2,n} < y_{2,n}$, we observe that $$\label{x2biggery2} x_{2,n+1} \geq y_{2,n+1} \quad \text{iff} \quad 1 + \frac{h_2(1- \lambda)}{y_{2,n}-x_{2,n}} \leq - h_2 ( x_{2,n} + y_{2,n} )\,.$$ Hence, since $ h_2 \left| x_{2,n} + y_{2,n} \right| < 2 (1 - 2 \nu)^{-1/2} \delta^{-1/2} \delta^{1/2} \nu < 1 $, we have $ x_{2,n+1} < y_{2,n+1}$ and the argument goes on inductively. We can also see that, for $x_{2,n} \leq y_{2,n} < 0$, the trajectories stay close to $S_{\txta,2}^-$ since, if $ x_{2,n}^2 - y_{2,n}^2 > 1 - \lambda$, we have that $ y_{2,n+1} - x_{2,n+1} < y_{2,n} - x_{2,n}$. This concludes the proof of the first statement. Next, we consider the case $\lambda \leq 0$. Again, assume that at time $n \in \mathbb{N}$ the trajectory is above the diagonal $\{x = y\}$, i.e., $x_{2,n} \leq y_{2,n}$. In particular, this covers the case when $x_{2,n} \leq y_{2,n} < 0$ as relevant for the initial data. If $x_{2,n} = y_{2,n}$, then obviously $ x_{2,n+1} < y_{2,n+1}$. If $x_{2,n} < y_{2,n}$, we observe as before that \[x2biggery2\] holds and that $ h_2 \left| x_{2,n} + y_{2,n} \right| < 1 $. Hence, we have $ x_{2,n+1} < y_{2,n+1}$ and the argument goes on inductively. Therefore trajectories stay above the diagonal.
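The crossing-and-staying behaviour established so far can be checked directly by iterating the map. The sketch below (the values of $\lambda$, $\delta$, $h_2$ and the entry point, as well as the function name `k2_step`, are our own illustrative choices, not prescribed by the analysis) verifies the claims of Lemma \[lambda01\] for $0 < \lambda < 1$: $x$ is strictly increasing while $x_{2,n}, y_{2,n} < 0$, and the trajectory crosses the diagonal $\{x = y\}$ while both coordinates are still negative and stays above it afterwards.

```python
# Illustrative simulation of the map in the scaling chart K_2 for 0 < lambda < 1.
# Parameter values are ad hoc choices for demonstration purposes.

def k2_step(x, y, lam, h):
    """One step of the explicit Euler map in the chart K_2."""
    return x + h * (lam + x * x - y * y), y + h

lam, delta, h = 0.5, 0.01, 1e-3
beta2 = lam * delta / 8                 # size of the entry interval in y
x, y = -delta ** -0.5, -delta ** -0.5 * (1 + beta2)   # entry point with y < x < 0

history = [(x, y)]
for _ in range(19000):
    x, y = k2_step(x, y, lam, h)
    history.append((x, y))

# index of the first point above the diagonal {x = y}
n_star = next(n for n, (px, py) in enumerate(history) if px <= py)

assert history[n_star][0] <= history[n_star][1] < 0      # crossing with x, y < 0
assert all(history[n + 1][0] > history[n][0] for n in range(n_star))  # x increasing
assert all(px <= py for px, py in history[n_star:])      # stays above the diagonal
```

The same loop run with $\lambda \leq 0$ illustrates the second case of the lemma.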
Furthermore, observe that $y_{2,0}^2 \leq \left( \delta^{-1/2} \left( -1 - \frac{2\lambda - 1}{4}\delta \right) \right)^2 $ and therefore $$x_{2,0}^2 - y_{2,0}^2 + \lambda \geq \delta^{-1} - \delta^{-1} - \lambda + \frac{1}{2} - \frac{(2 \lambda -1)^2}{16} \delta + \lambda = \frac{1}{2} - \frac{(2 \lambda -1)^2}{16} \delta\,,$$ which is greater than $0$ for $\delta$ small enough, depending on $\lambda$. Hence, $x_{2,1} > x_{2,0}$. We show that $x_{2,n+1} > x_{2,n}$ as long as $x_{2,n} < y_{2,n} < 0$ by proving that $\xi_n := x_{2,n}^2 - y_{2,n}^2 + \lambda > 0$ implies $x_{2,n+1}^2 - y_{2,n+1}^2 + \lambda > 0$. Assuming that $x_{2,n} < y_{2,n} < 0$ and $\xi_n > 0$ yields $$\begin{aligned} x_{2,n+1}^2 - y_{2,n+1}^2 + \lambda &= (x_{2,n} + h_2 \xi_n)^2 - (y_{2,n} + h_2)^2 + \lambda \\ &= \xi_n + h_2^2(\xi_n^2 -1) + 2 h_2 ( \left| y_{2,n} \right| - \xi_n \left| x_{2,n} \right|)\,.\end{aligned}$$ From there, it is easy to observe that for $h_2$ small enough the claim follows. Although the argument is quite technical, the proof of the last lemma shows that the key steps in the scaling chart involve the sign of the nonlinear term for the $x_2$-variable. This idea can also be carried out in the case $\lambda>1$. \[lambda1big\] Let $\lambda > 1$ and $\delta$, $\nu$ sufficiently small. Then all trajectories starting in $\Sigma_{2}^{\textnormal{in}}$ are strictly increasing in $x$ as long as $ x_{2,n}, y_{2,n} < 0$, will be below the diagonal $\{x = y\}$ at a certain point of time and stay there forever afterwards. In particular, if $(x_{2,0}, y_{2,0})$ in $\Sigma_2^{\textnormal{in}}$ with $x_{2,0} < y_{2,0} < 0$, there is a $n^* \in \mathbb{N}$ such that $y_{2,n^*} \leq x_{2,n^*} < 0$ and $$n^* h_2 \geq \delta^{1/2}\,.$$ We consider two cases, trajectories below and above the diagonal. First, we assume that, at time $n \in \mathbb{N}$, the trajectory is below the diagonal $\{x = y\}$ so that $ y_{2,n} \leq x_{2,n}$. 
In particular, this covers the case when $ y_{2,n} \leq x_{2,n} < 0$, i.e., the trajectory lies below $S_{\txta,2}^-$, which is relevant for the initial data. If $x_{2,n} = y_{2,n}$, then obviously $y_{2,n+1} < x_{2,n+1}$. If $y_{2,n} < x_{2,n}$, we observe similarly to  that $$y_{2,n+1} \geq x_{2,n+1} \quad \text{iff} \quad 1 + \frac{h_2(\lambda - 1)}{x_{2,n}-y_{2,n}} \leq - h_2 ( x_{2,n} + y_{2,n} )\,.$$ Hence, since $ h_2 \left| x_{2,n} + y_{2,n} \right| < (2 (1 - 2 \nu)^{-1/2}\delta^{-1/2} + \frac{2 \lambda -1}{4} \delta^{1/2} )\delta^{1/2} \nu < 1$ due to $\delta \lambda \leq 1$ and $\nu < \frac{1}{8}$, we have $ y_{2,n+1} < x_{2,n+1} $ and the argument goes on inductively. Moreover, we check that the sequences are increasing in this case. We consider an initial condition $(x_{2,0}, y_{2,0})$ in $\Sigma_2^{\textnormal{in}}$ with $y_{2,0} < x_{2,0} < 0$, i.e., below $S_{\txta,2}^-$. We obviously have $x_{2,1} < x_{2,0} + \lambda h_2$. Furthermore, observe that $$\begin{aligned} x_{2,0}^2 - y_{2,0}^2 + \lambda &\geq \delta^{-1} - ( \delta^{-1/2}(-1 - \beta_2(\lambda)))^2 + \lambda \\ &= \delta^{-1} - \delta^{-1} - \lambda + \frac{1}{2} - \frac{(2 \lambda - 1)^2}{16} \delta + \lambda \\ & = \frac{1}{2} - \frac{(2 \lambda - 1)^2}{16} \delta\geq \frac{1}{4}\,,\end{aligned}$$ if $\delta$ is chosen sufficiently small in comparison to $\lambda$. Hence, $x_{2,1} \geq x_{2,0} + \frac{1}{4} h_2$. Note in particular that if $y_{2,0} = \delta^{-1/2}(-1 - \beta_2(\lambda)) =: y^*$, we have $x_{2,0}^2 - y_{2,0}^2 + \lambda < \frac{1}{2}$. From here, it is easy to observe that, for $\nu$ small enough, $ x_{2,1} - y_{2,1} < x_{2,0} - y^* $. This together with the fact that $ \left| y_{2,1} + x_{2,1} \right| < \left| y_{2,0} + x_{2,0} \right|$ yields $ x_{2,1}^2-y_{2,1}^2 + \lambda > \frac{1}{4}$. Hence, we have $ x_{2,2} \geq \frac{1}{2} h_2$ and obviously $ y_{2,2} = y_{2,0} + 2 h_2$. 
Therefore, we see inductively that for $0 > x_{2,n} > y_{2,n}$ both sequences are increasing. As the second case, we consider a trajectory with initial condition $(x_{2,0}, y_{2,0})$ in $\Sigma_2^{\textnormal{in}}$ and $x_{2,0} < y_{2,0} < 0$, i.e., above $S_{\txta,2}^-$. Either we already have $ y_{2,1} \leq x_{2,1} < 0$. If not, we have $x_{2,1} > x_{2,0} + \lambda h_2$ and obviously $ y_{2,1} = y_{2,0} + h_2$. Therefore, we see inductively that for $x_{2,n} < y_{2,n} <0$ both sequences are increasing and we either already have $ y_{2,n+1} \leq x_{2,n+1} < 0$ or $$\begin{aligned} x_{2,n+1} - y_{2,n+1} & > x_{2,0} - y_{2,0} +(n+1)(\lambda -1)h_2 \\ & \geq \delta^{-1/2} - \delta^{-1/2}(1 - 2 \nu)^{-1/2} - (\lambda-1)\delta^{1/2} +(n+1)(\lambda -1)h_2 .\end{aligned}$$ Thus, we can conclude that $ y_{2,n^*} \leq x_{2,n^*} < 0$ for some $n^*$ such that $n^* h \geq \delta^{1/2}\left( 1+ \frac{(1 - 2 \nu)^{-1/2}-1}{\lambda -1} \right)$. Using $(1 - 2 \nu)^{-1/2} > 1$, the claim follows. Finally, we turn to the proof of Proposition \[propK2\]. We distinguish three cases: (I) $ 0 < \lambda < 1$, (II) $\lambda\leq 0$, and (III) $\lambda>1$. The case distinction is going to allow us to apply each of the preliminary results obtained above. *Case (I) $0<\lambda<1$:* From Lemma \[lambda01\] we know that trajectories starting in $\Sigma_2^{\textnormal{in}}$ will be above the diagonal $\{x = y\}$ at certain point of time and stay there forever afterwards. From this result and the fact that $y_{2,n}$ and $x_{2,n}$ are both strictly increasing uniformly as long as $\left|y_{2,n} \right| \leq \left|x_{2,n}\right|$, we can conclude that any such trajectory reaches a point $(x_{2,\tilde n}, y_{2,\tilde n})$, with $y_{2,\tilde n} > 0$, such that $y_{2,\tilde n}^2 > x_{2,\tilde n}^2$. We can conclude that there must be a minimal $n^* > \tilde n$ such that $y_{2,n^*}^2 > x_{2,n^*}^2 + \lambda$. Note that $(x_{2,n^*}, y_{2,n^*})$ lies between $ S_{\txta,2}^{+}$ and $ S_{\txtr,2}^{+}$. 
As long as this is the case for $n > n^*$, we have $x_{2,n+1} < x_{2,n}$. Additionally, we observe that for $y_{2,n}^2 - x_{2,n}^2 > \lambda +1$ we have $y_{2,n+1} + x_{2,n+1} < y_{2,n} + x_{2,n}$. Hence, trajectories are rapidly approaching the vicinity of $S_{2,\txta }^{+}$. Similarly to , we find that for any such $y_{2,n} > - x_{2,n} > 0$ $$\left|x_{2,n+1} \right| \geq y_{2,n+1} \quad \text{iff} \quad 1 + \frac{h_2(1+\lambda)}{y_{2,n}-\left|x_{2,n}\right|} \leq h_2 ( \left|x_{2,n}\right| + y_{2,n} )\,.$$ Hence, since $ h_2 \left| x_{2,n}\right| + y_{2,n} < (2 + \delta + h_2) \nu < 1 $ before hitting $\Sigma_{2,\txta }^{\textnormal{out}}$, we have $ \left|x_{2,n+1} \right| < y_{2,n+1}$ and the argument goes on inductively before hitting $\Sigma_{2,\txta }^{\textnormal{out}}$. The fact that the trajectory will actually be located within $\Sigma_{2,\txta }^{\textnormal{out}}$ at a certain point of time can be inferred as follows: we observe from the above that $(x_{2,n},y_{2,n})$ satisfies $0 \leq y_{2,n}^2 - x_{2,n}^2 \leq \lambda +1$ for large enough $n$. First, we can conclude that $x_{2,n} - x_{2,n+1} \leq h_2$. Hence, there is an $m \in \mathbb{N}$ such that $x_{2,m} \in [-\delta^{-1/2} - \frac{h_2}{2}, -\delta^{-1/2} +\frac{h_2}{2}]$. Therefore we have $y_{2,m} \geq \delta^{-1/2} - \frac{h_2}{2}$ and $$y_{2,m}^2 \leq \delta^{-1} + (\lambda +1) - h_2 \delta^{-1/2} + \frac{h_2^2}{4} \leq \delta^{-1} + 2 \beta_2^+ \delta^{-1} + \delta^{-1} (\beta_2^+)^2 = \left(\delta^{-1/2}(1 + \beta_2^+)\right)^2.$$ Figure \[fig:transcrit\] (b) illustrates the behaviour of trajectories starting in $\Sigma_{2,\txta }^{\textnormal{in}}$ for $ 0 < \lambda <1$. *Case (II) $\lambda\leq 0$:* We know from Lemma \[lambda01\] that any trajectory starting in $\Sigma_2^{\textnormal{in}}$ is strictly increasing in $x$ as long as $x_{2,n}, y_{2,n} < 0$ and will be above the diagonal $\{x = y\}$ at any point of time. 
Analogously to before, we can conclude that there exists a minimal $n^* \in \mathbb{N}$ such that $y_{2,n^*} > 0$ and $y_{2,n^*}^2 > x_{2,n^*}^2 + \lambda$. Note that if $(x_{2,n^*}, y_{2,n^*})$ lies between $ S_{\txta,2}^{+}$ and $ S_{\txtr,2}^{+}$ and stays in this region for all $n > n^*$ before hitting $\Sigma_{\txta,2}^{\textnormal{out}}$, as for example for $ -1\leq \lambda \leq 0$, the arguments go exactly as before. Otherwise we observe, symmetrically to before, that $(x_{2,n},y_{2,n})$ satisfies $0 \geq x_{2,n}^2 - y_{2,n}^2 + \lambda \geq - 1$ for large enough $n$. Again, we infer that $x_{2,n} - x_{2,n+1} \leq h_2$ and conclude that there is an $m \in \mathbb{N}$ such that $x_{2,m} \in [-\delta^{-1/2} - \frac{h_2}{2}, -\delta^{-1/2} +\frac{h_2}{2}]$. Therefore we have $y_{2,m} \leq \delta^{-1/2} + \frac{h_2}{2}$ and for $\delta$ sufficiently small depending on $\lambda$ $$y_{2,m}^2 \geq \delta^{-1} - h_2 \delta^{-1/2} + \frac{h_2^2}{4} + \lambda \geq \delta^{-1} + \lambda + \frac{(\left| \lambda \right| + 1)^2}{4} \delta - 1 = \left(\delta^{-1/2}(1 - \hat \beta_2^+)\right)^2.$$ Figure \[fig:transcrit\] (a) illustrates the behaviour of trajectories starting in $\Sigma_{2,\txta }^{\textnormal{in}}$ for $ \lambda < 0$. *Case (III) $\lambda>1$:* We can conclude from Lemma \[lambda1big\] that trajectories starting in $\Sigma_2^{\textnormal{in}}$ will be below $S_{\txta,2}^-$ at a certain point of time and stay below the diagonal $\{x = y\}$ forever afterwards. From that and the fact that $y_{2,n}$ is strictly increasing for all time, we can conclude that any such trajectory will reach a point $(x_{2,n^*}, y_{ 2, n^*})$ with $ x_{2,n^*}> y_{2,n^*} > 0$. Then the trajectory will increase its distance from the diagonal in each time step by $$h_2 (\lambda -1) + h_2 (x_{2,n}^2 - y_{2,n}^2)\,.$$ Let us take the largest $n$ such that $ x_{2,n}> 0 \geq y_{2,n} $. 
It is now obvious that there is an $m \in \mathbb{N}$ such that $\delta^{-1/2} \leq x_{2,n+m} \leq \delta^{-1/2} + h_2(\lambda + \delta^{-1}) $. We give an upper bound for $y_{2,m+n}$ by expanding $x_{2,n}$ up to $h_2^3$, which is the first order estimate in this case: $$\begin{aligned} (1 + 2\nu) \delta^{-1/2} &\geq x_{2,m +n} > mh_2 \lambda + \left(\sum_{k=1}^{m-1} k^2 \right) (\lambda^2 -1) h_2^3 \\ &\geq y_{2,m+n} \lambda + \frac{1}{6} (\lambda^2 - 1)y_{2,m+n}(y_{2,m+n} -h_2)(2y_{2,m+n} -h_2) \\ &\geq \lambda y_{2,m+n} + \frac{1}{6} (\lambda^2 - 1)y_{2,m+n}^3\,.\end{aligned}$$ Hence, we conclude that $y_{2,m+n} = \mathcal{O}(\delta^{-1/6})$ and $(x_{2,n+m}, y_{2,m+n}) \in \Sigma_{2,\txte }^{\textnormal{out}}$. Figure \[fig:transcrit\] (c) illustrates such a trajectory.

Dynamics in the chart $K_3$ {#secK3}
---------------------------

We investigate the dynamics in the chart $K_3$  for $\lambda > 1$. First, recall from  that the change of coordinates $k_{23}: K_2 \to K_3$ is given by $${\varepsilon}_3 = x_2^{-2}, \quad y_3 = x_2^{-1} y_2, \quad r_3 = x_2 r_2, \quad h_3 = x_2 h_2\,.$$ Symmetrically to the chart $K_1$, we define $$D_3 := \{(r_3, y_3, {\varepsilon}_3, h_3) \in \mathbb{R}^4 \,:\, r_3 \in [ 0, \rho], {\varepsilon}_3 \in [0, \delta], h_3 \in [ 0, \nu] \}$$ and $$\hat D_3 := \{(r_3, y_3, {\varepsilon}_3, h_3) \in \mathbb{R}^4 \,:\, r_3 \in [ \rho/2, \rho], {\varepsilon}_3 \in [\delta/4, \delta], h_3 \in [ \nu/2, \nu] \}\,.$$ Since we need to have $k_{23}\left( \Sigma_{2,\txte}^{\textnormal{out}} \right) \subset \Sigma_3^{\textnormal{in}}$, a suitable choice is given by $$\Sigma_{3}^{\textnormal{in}} := \{(r_3, y_3, {\varepsilon}_3, h_3) \in D_3 \,:\, \left(\delta^{-1} + 4 \nu \delta^{-1}\right)^{-1} \leq {\varepsilon}_3 \leq \delta \}\,.$$ Furthermore, we will simply set $$\Sigma_{3}^{\textnormal{out}} := \{(r_3, y_3, {\varepsilon}_3, h_3) \in D_3 \,:\, r_3 = \rho, h_3 = \nu, {\varepsilon}_3 = \delta/4, \ y_3 > 0 \}\,,$$ and will end the analysis with the
point of the trajectory which is closest to $\Sigma_{3}^{\textnormal{out}}$. The dynamics, desingularised by choosing $h = h_3/r_3$, look as follows: $$\begin{aligned} \label{K3discrete} \tilde{r}_3 &= r_3(1 + h_3F_3(y_3, {\varepsilon}_3)), \nonumber \\ \tilde y_3 &= (y_3 + {\varepsilon}_3 h_3)(1 + h_3F_3(y_3, {\varepsilon}_3))^{-1}\,,\nonumber \\ \tilde {\varepsilon}_3 &= {\varepsilon}_3 (1 + h_3F_3(y_3, {\varepsilon}_3))^{-2}\,,\nonumber \\ \tilde{h}_3 &= h_3(1 + h_3F_3(y_3, {\varepsilon}_3)),\end{aligned}$$ where $F_3(y_3, {\varepsilon}_3) = 1 - y_3^2 + \lambda {\varepsilon}_3$. For any $h_3$ system \[K3discrete\] has the fixed points $$v_{\txtr,3}^- = (0,-1,0,h_3), \quad v_{\txtr,3}^+ = (0,1,0,h_3).$$ The points $v_{\txtr,3}^-$ and $v_{\txtr,3}^+$ have a three-dimensional centre eigenspace and a one-dimensional unstable eigenspace with the eigenvalue $1+2h_3$. Hence, unlike the analogous case in the chart $K_1$, the stability does not depend on the size of $h_3$. The most relevant manifold for our problem is given by $$W := \{w^{\textnormal{out}}(h_3) := (0,0,0,h_3) \,:\, h_3 \in [0, \nu]\}\,,$$ which is a line for system \[K3discrete\] within $D_3$. The points $w^{\textnormal{out}}(h_3)$ have two stable and two unstable eigenvalues $$\lambda_1 = (1 +h_3)^{-2}, \quad \lambda_2 = (1 + h_3)^{-1}, \quad \lambda_3 = 1+ h_3 , \quad \lambda_4 = 1 + 2h_3\,,$$ such that the stability corresponds to the time-continuous problem independently from $h_3$. Note that the chart $K_3$ differs in that respect from the chart $K_1$ where preservation of stability is bound to the stability criteria of the Euler method known from the Dahlquist test equation. The eigenvalues $\lambda_1, \lambda_2$ correspond with the ${\varepsilon}_3$- and $y_3$-directions and $\lambda_3, \lambda_4$ with the $r_3$- and $h_3$-directions. We extend the set $W$ to the attracting invariant manifold $M_{\txta,3}$, which is given in $D_3$ by a graph $y_3 = l_{3}({\varepsilon}_3, h_3)$.
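The fixed points and the multipliers listed above can be verified numerically. The sketch below (the values of $\lambda$ and $h_3$ and the function name `P3` are our own illustrative choices) checks that $v_{\txtr,3}^{\pm}$ are fixed and recovers the four multipliers at $w^{\textnormal{out}}(h_3)$ by finite differences along the coordinate directions:

```python
# Numerical sanity check of the fixed points and multipliers of the K_3 map,
# where F_3(y, eps) = 1 - y^2 + lambda*eps; lambda and h3 are ad hoc choices.

def P3(r, y, eps, h, lam):
    fac = 1.0 + h * (1.0 - y * y + lam * eps)
    return r * fac, (y + eps * h) / fac, eps / fac ** 2, h * fac

lam, h3 = 2.0, 0.1

# v_{r,3}^- = (0,-1,0,h3) and v_{r,3}^+ = (0,1,0,h3) are fixed points
for y0 in (-1.0, 1.0):
    assert P3(0.0, y0, 0.0, h3, lam) == (0.0, y0, 0.0, h3)

# directional multipliers at w_out(h3) = (0,0,0,h3) via finite differences
d = 1e-6
mult_r = P3(d, 0.0, 0.0, h3, lam)[0] / d           # expected 1 + h3
mult_y = P3(0.0, d, 0.0, h3, lam)[1] / d           # expected (1 + h3)^-1
mult_e = P3(0.0, 0.0, d, h3, lam)[2] / d           # expected (1 + h3)^-2
mult_h = (P3(0.0, 0.0, 0.0, h3 + d, lam)[3]
          - P3(0.0, 0.0, 0.0, h3, lam)[3]) / d     # expected 1 + 2*h3

assert abs(mult_r - (1 + h3)) < 1e-5
assert abs(mult_y - 1 / (1 + h3)) < 1e-5
assert abs(mult_e - 1 / (1 + h3) ** 2) < 1e-5
assert abs(mult_h - (1 + 2 * h3)) < 1e-5
```

Note that the multipliers stay on the correct side of $1$ for every $h_3 > 0$, in line with the observation that stability here does not depend on the step size.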
One can derive $l_{3}$ from the discrete invariance equation $$\label{invarianceK3} l_{3} (\tilde {\varepsilon}_3, \tilde h_3)= \frac{l_{3} ({\varepsilon}_3, h_3) + {\varepsilon}_3 h_3}{1 + h_3F_3( l_{3} ({\varepsilon}_3, h_3), {\varepsilon}_3)}\,.$$ Note that, analogously to the continuous time case, there is the resonance $\lambda_1 \lambda_3 = \lambda_2$, which makes the description of the dynamics close to $W$ and $M_{\txta,3}$ a delicate problem. However, the exiting behaviour can still be estimated by a relatively simple argument without a full analysis of the resonance as follows. Let $P_3$ denote the map given by \[K3discrete\] and $\pi_y$ the projection to the $y$-component. \[TranstionK3\] The transition map $\Pi_3$ from $\Sigma_3^{\textnormal{in}}$ to the vicinity of $\Sigma_3^{\textnormal{out}}$ given by $$\Pi_3(z) = P_3^{m^*(z)}(z)\,, \text{ where } m^*(z) = \operatorname*{arg\,min}_{n \in \mathbb{N}} \operatorname{dist}( P_3^{n}(z), \Sigma_3^{\textnormal{out}})\,, \ z \in \Sigma_3^{\textnormal{in}}\,,$$ is well-defined on $k_{23}\left( \Sigma_{2,\txte}^{\textnormal{out}} \right) $. Furthermore, for $z \in k_{23}\left( \Sigma_{2,\txte}^{\textnormal{out}} \right) \subset \Sigma_3^{\textnormal{in}}$ we have $\pi_y(\Pi_3(z)) = \mathcal{O}(\delta^{1/3})$. By the construction of $\Sigma_{2,\txte }^{\textnormal{out}}$, Proposition \[propK2\], and the fact that $y_3 = x_2^{-1} y_2$ we have $\pi_y(z) = \mathcal{O}\left(\delta^{1/3}\right)$ for any $z \in k_{23}\left( \Sigma_{2,\txte}^{\textnormal{out}} \right)$. Further note that $F_3$ is clearly positive as long as $y_3$ maintains some positive order of $\delta$. Since ${\varepsilon}_3 h_3 = \mathcal{O}(\delta \nu)$ in $D_3$, we can immediately infer that $$\pi_y(P_3^n(z)) = \mathcal{O}\left(\delta^{1/3}\right)\qquad \text{for all $n \leq m^*(z)$.}$$ This implies both statements as $F_3$ stays positive along the trajectory.
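Before connecting the charts, the qualitative picture for $\lambda > 1$ in the scaling chart can be illustrated end to end by following a single trajectory of the $K_2$ map (all parameter values below are our own ad hoc choices): the trajectory crosses below the diagonal $\{x = y\}$ while both coordinates are negative, stays below it afterwards, and $x$ escapes to the exit region near $\delta^{-1/2}$ while $y$ remains of moderate size there.

```python
# Illustrative end-to-end run of the K_2 map for lambda > 1.

def k2_step(x, y, lam, h):
    return x + h * (lam + x * x - y * y), y + h

lam, delta, h = 2.0, 0.01, 1e-3
x, y = -delta ** -0.5, -delta ** -0.5 + 0.01     # entry point with x < y < 0

crossing = None
stays_below = True
for _ in range(60000):
    if x >= delta ** -0.5:                        # reached the exit region
        break
    if crossing is None:
        if y <= x:
            crossing = (x, y)                     # first point below the diagonal
    elif y > x:
        stays_below = False
    x, y = k2_step(x, y, lam, h)

assert crossing is not None and crossing[1] <= crossing[0] < 0
assert stays_below                                # below the diagonal afterwards
assert x >= delta ** -0.5                         # x escaped within the budget
assert abs(y) < 2 * delta ** (-1 / 6)             # y of moderate order at exit
```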
(Figure: dynamics in the chart $K_3$, showing the sections $\Sigma_{3}^{\textnormal{in}}$ and $\Sigma_{3}^{\textnormal{out}}$, the manifold $M_{\txta,3}$, and the $r_3$, $y_3$, ${\varepsilon}_3$ directions.)

Connecting the charts and proof of the Theorem {#secproof}
----------------------------------------------

Finally, we can prove Theorem \[transcritical\_discrete\] by combining the dynamics in $K_1$, $K_2$ and $K_3$ into a global picture. We have proven the statements in charts $K_1, K_2, K_3$ for $\bar {\varepsilon}= \delta$ with $\delta$ sufficiently small. Hence, we choose ${\varepsilon}_0 = \rho^2 \delta_0$, where $\delta_0$ is the largest value of $\delta/4$ such that the statements hold. We did not use any further restrictions on $h$ apart from $\rho h < \delta$ and $\rho h < \frac{1}{8}$. Hence, it is enough to assume $ \rho^3 h < {\varepsilon}$. As before, we distinguish several cases. First, we consider $\lambda < 1$. We define the map $\bar \Pi_\txta$ from $ R_1 \subset \Sigma_{1,-}^{\textnormal{in}}$ to the vicinity of $\Sigma_{1,+}^{\textnormal{out}}$ by $$\bar \Pi_\txta := \Pi_{1,+} \circ k_{21} \circ \Pi_2 \circ k_{12} \circ \Pi_{1, -}\,,$$ where $\Pi_2: \Sigma_2^{\textnormal{in}} \to \Sigma_{2,\txta }^{\textnormal{out}}$ is the map well-defined by Proposition \[propK2\]. We have seen that $$k_{12} \left(\Pi_{1,-} \left( R_1\right) \right) \subset \Sigma_2^{\textnormal{in}} \ \text{ and } \ k_{21} \left( \Sigma_{2,\txta }^{\textnormal{out}} \right) \subset R_2 \subset \Sigma_{1, +}^{\textnormal{in}}\,.$$ Hence, $\bar \Pi_\txta$ is indeed a well-defined map. We have that $ \Pi_\txta = \Phi \circ \bar \Pi_\txta \circ \Phi^{-1}$, where $\Delta^{\textnormal{in}} = \Phi(R_1) $ and $\Delta_\txta^{\textnormal{out}} \subset \Phi \left(\Sigma_{1,+}^{\textnormal{out}}\right)$ is an interval about $S_\txta^+$ of the same size as $\Delta^{\textnormal{in}}$.
We observe with Proposition \[Invariance\_Prop\] that $ \Phi \left(M_{\txta,1}^- \right) \subset S_{\txta, {\varepsilon},h}^{-}$ and $ \Phi \left(M_{\txta,1}^+\right) \subset S_{\txta, {\varepsilon},h}^{+}$, and, by the choices of $R_1$ and $R_2$, that $\Delta^{\textnormal{in}} \cap S_{\txta,{\varepsilon},h}^-$ and $ \Pi_\txta \left( \Delta^{\textnormal{in}} \right) \cap S_{\txta,{\varepsilon},h}^+$ are nonempty. Summarizing, we can conclude that $\Pi_\txta$ maps $\Delta^{\textnormal{in}}$ including $\Delta^{\textnormal{in}} \cap S_{\txta,{\varepsilon},h}^-$ to a set about $S_{\txta, {\varepsilon},h}^{+}$. The distance between any point in $ \Pi_\txta \left( \Delta^{\textnormal{in}} \right)$ and $\Delta_\txta^{\textnormal{out}}$ is of order $\mathcal{O}(h {\varepsilon})$ since for $(x,y) \in \Delta_\txta^{\textnormal{out}} \cap S_{\txta,{\varepsilon},h}^+$ it is bounded by $h(x^2 - y^2 + \lambda {\varepsilon})$ due to the definition of $\Pi_\txta$ and we have by  that $$\left| x^2 - y^2 \right| = \left| x - y \right| \left| x + y \right| = \mathcal{O}(\delta \rho)\mathcal{O}( \rho) = \mathcal{O}\left(\frac{{\varepsilon}}{\rho} \right)\mathcal{O}( \rho)= \mathcal{O}({\varepsilon})\,.$$ Furthermore, Proposition \[contraction\_K1\] says that $\Pi_{1,-}|R_1$ and $\Pi_{1,+}|R_2$ are contractions in the $y_1$ direction with rates of order at least $ \mathcal{O} \left((1- c)^{\frac{C}{\rho h \delta} } \right)$ for some constant $C>0$. Since $\Pi_2$ is also contracting and due to $\mathcal{O}(\delta) = \mathcal{O} \left(\frac{{\varepsilon}}{\rho^2} \right)$, we obtain that $\Pi_\txta \left(\Delta^{\textnormal{in}}\right)$ has $y$-width at most of order $\mathcal{O}\left( (1- c)^{\frac{C \rho }{h {\varepsilon}}} \right)$. Let now $\lambda > 1$. 
We define the map $\bar \Pi_\txte$ from $ R_1 \subset \Sigma_{1,-}^{\textnormal{in}}$ to the vicinity of $\Sigma_{3}^{\textnormal{out}}$ by $$\bar \Pi_\txte := \Pi_{3} \circ k_{23} \circ \Pi_2 \circ k_{12} \circ \Pi_{1, -}\,.$$ Again, we know that $k_{12} \left(\Pi_{1,-} \left( R_1\right) \right) \subset \Sigma_2^{\textnormal{in}}$, and furthermore from Proposition \[TranstionK3\] that $\Pi_3$ is well-defined on $k_{23} \left( \Sigma_{2,\txte }^{\textnormal{out}} \right) \subset \Sigma_{3}^{\textnormal{in}}$. Hence, $\bar \Pi_\txte$ is indeed a well-defined map. We have that $ \Pi_\txte = \Phi \circ \bar \Pi_\txte \circ \Phi^{-1}$, where again $\Delta^{\textnormal{in}} = \Phi(R_1) $ and $\Delta_\txte^{\textnormal{out}} \subset \Phi \left(\Sigma_{3}^{\textnormal{out}}\right)$ is an interval perpendicular to the $x$-axis. It follows immediately that $S_{\txta, {\varepsilon},h}^-$ passes through $\Delta_\txte^{\textnormal{out}}$ at a point $(\rho, k({\varepsilon}))$. Using Proposition \[TranstionK3\] we can characterize $k({\varepsilon}) = \rho \mathcal{O}(\delta^{1/3}) = \rho^{1/3} \mathcal{O}({\varepsilon}^{1/3})$. The fact that $\Pi_\txte \left( \Delta^{\textnormal{in}} \right)$ has $y$-width $\mathcal{O}\left( (1- c)^{\frac{C \rho}{ h {\varepsilon}}}\right)$ follows as for $\lambda < 1$. The distance between any point in $ \Pi_\txte \left( \Delta^{\textnormal{in}} \right)$ and $\Delta_\txte^{\textnormal{out}}$ is of order $\mathcal{O}(h ({\varepsilon}+ \rho^2))$ since for $(x,y) \in \Delta_\txte^{\textnormal{out}} \cap S_{\txta,{\varepsilon},h}^-$ it is bounded by $h(x^2 - y^2 + \lambda {\varepsilon})$ due to the definition of $\Pi_\txte$ and $ x^2 - y^2 = \mathcal{O}\left(\rho^2 - {\varepsilon}^{2/3}\rho^{2/3} \right)$. This finishes the proof.

Summary and Outlook {#summaryoutlook}
===================

We have applied the blow-up method to the Euler discretization of a fast-slow system with a transcritical singularity at the origin.
We have shown that the qualitative behaviour of the slow manifolds is preserved by the discretization for any choice of $0 < h < {\varepsilon}$ (setting $\rho =1$), where $h$ denotes the time step size and ${\varepsilon}$ the small time scaling parameter of the fast-slow system. The central part of the proof lies in the scaling chart $K_2$ of the manifold corresponding with the blown up singularity and is expressed in Proposition \[propK2\]. The proof of the proposition uses direct analysis of the map and, by that, can be seen as an alternative way of also showing the continuous analogue of the result when $h \to 0$. Furthermore, we are able to estimate transition times of trajectories by the analysis of the entering chart $K_1$ and give a bound for the $y$-component in the exiting chart $K_3$. In fact, our estimates provide a very fine control on individual trajectories, which is a potential advantage of the discrete-time framework for fast-slow systems. We consider the work presented in this paper as one of the key steps towards a more comprehensive analysis of non-hyperbolic fixed points and non-hyperbolic submanifolds of fixed points in maps with multiple time scales. Whereas the normally hyperbolic theory for discrete-time multiple time scale systems is already quite well developed in [@HPS77; @ns2013; @Poe03], the geometric desingularisation of non-hyperbolic objects for maps still needs several extensions. For example, our problem  is based on an explicit Euler discretization, which is obviously the most straightforward scheme. We conjecture that the more direct blow-up approach we use here for maps corresponding to ODEs can also be used for other time-discretization schemes. There are several reasons why particular schemes should be checked: it is well-known from the area of geometric integration and the general theory of structure-preserving discretizations [@HLW06] that only certain discrete-time schemes preserve relevant dynamical properties, e.g.
adiabatic invariants for the Hamiltonian systems case [@HLW06] or certain asymptotic dynamics for the dissipative case [@Jin99]. For multiple time scale maps, Runge-Kutta methods have been studied from a geometric viewpoint [@ns95; @ns96]. It remains to clarify more systematically for which discretizations the geometric blow-up approach can be applied and what the relation between the two small parameters $0 <h, {\varepsilon}\ll 1$ must be. In this context, an interesting problem is that of canard explosions in discrete time [@ER03; @Fru88], where a third parameter will also play a key role. Working out this case starting from the geometric approach by Krupa and Szmolyan for fast-slow ODEs [@ks2001/3] is currently work in progress by the authors of this paper. [^1]: Zentrum Mathematik der TU München, Boltzmannstr. 3, D-85748 Garching bei München
--- abstract: 'In this paper, we show that it is possible for a commutative ring with identity to be non-atomic (that is, there exist non-zero nonunits that cannot be factored into irreducibles) and yet have a strongly atomic polynomial extension. In particular, we produce a commutative ring with identity, $R$, that is antimatter (that is, $R$ has no irreducibles whatsoever) such that $R[t]$ is strongly atomic. What is more, given any nonzero nonunit $f(t)\in R[t]$ then there is a factorization of $f(t)$ into irreducibles of length no more than $\text{deg}(f(t))+2$.' address: - | Department of Mathematical Sciences\ Clemson University\ Clemson, SC 29634 - | Department of Science and Mathematics\ Northern State University\ Aberdeen, SD 57401 author: - Jim Coykendall - Stacy Trentham bibliography: - 'biblio2.bib' title: 'Spontaneous Atomicity for Polynomial Rings with Zero-Divisors' --- Introduction and Background =========================== The last two and a half decades have seen a renaissance in the study of the theory of factorization. The main focus of this research has been in the theater of integral domains, but much work has also been done in the more general setting of commutative rings with identity; for example, the interested reader should consult the papers [@DDAZ],[@DDAZ2], and [@DDAZ3]. Even for factorization in integral domains, rather surprising effects can occur. For example, Roitman has produced an example of an atomic domain, $R$, whose polynomial extension $R[t]$ is not atomic ([@R1]). Of course, for domains it is the case that if $R[t]$ is atomic, then $R$ must be atomic, but even now, the subtle interplay of atomicity between a domain and its polynomial extension is not completely understood. Perhaps at least as surprising is Roitman’s result showing that for the conditions “$R$ is atomic" and “$R[[x]]$ is atomic" neither one implies the other ([@R2]).
The intent of this note is to provide a companion to the Roitman papers [@R1] and [@R2], and to provide a cautionary tale of the subtleties of factorization without the assumption of “integral domain". We will provide an example of a non-atomic (in fact, with no irreducibles whatsoever) commutative ring with identity, $R$, whose polynomial extension $R[t]$ [*is*]{} atomic (and is, in fact, strongly atomic). We first recall the distinction between “atom” and “strong atom” in a ring with zero divisors (we note that we will be using the terminology “(strong) irreducible” and “(strong) atom” interchangeably). Let $R$ be a commutative ring with identity. We say that $a\in R$ is an atom if $a=bc$ implies that $a$ is associated to either $b$ or $c$ (in the sense that $(a)=(b)$ or $(a)=(c)$). We say that $a\in R$ is a strong atom if $a=bc$ implies that $a$ is strongly associated to either $b$ or $c$ (in the sense that either $b$ or $c$ is a unit in $R$). We say that a ring is (strongly) atomic if every nonzero nonunit is a product of (strong) atoms. These two notions are distinct (see [@DDAZ] for example). But a simple example of the distinction occurs in the ring $\mathbb{Z}/6\mathbb{Z}$ where the element $\overline{3}$ is an atom, but not a strong atom (hence $\mathbb{Z}/6\mathbb{Z}$ is atomic, but not strongly atomic). Preliminaries and the Example ============================= We first outline the construction of the ring that we will consider throughout this paper. Let $\mathbb{F}$ be a perfect field of characteristic $p$ and $\{x_1, x_2,\cdots, x_n,\cdots\}$ be a countable collection of indeterminates. We first define the domain $T$ as follows: $$T:=\mathbb{F}[x_1^{\alpha_1}, x_2^{\alpha_2},\cdots, x_n^{\alpha_n},\cdots]$$ where the exponents $\alpha_i\in\mathbb{Q}^+\bigcup\{0\}$ range over the non-negative rationals for all $i{\geqslant}1$.
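The $\mathbb{Z}/6\mathbb{Z}$ example can be confirmed by brute force. The following sketch (our own illustration; the helper names `ideal`, `is_atom`, `is_strong_atom` are not from the text) checks the two definitions directly on all factorizations of $\overline{3}$:

```python
# Brute-force check that 3 is an atom but not a strong atom in Z/6Z.

N = 6
units = {u for u in range(N) if any(u * v % N == 1 for v in range(N))}

def ideal(a):
    """The principal ideal (a) in Z/NZ."""
    return {a * k % N for k in range(N)}

def is_atom(a):
    # every factorization a = b*c has (a) = (b) or (a) = (c)
    return all(ideal(a) == ideal(b) or ideal(a) == ideal(c)
               for b in range(N) for c in range(N) if b * c % N == a)

def is_strong_atom(a):
    # every factorization a = b*c has a unit factor
    return all(b in units or c in units
               for b in range(N) for c in range(N) if b * c % N == a)

assert units == {1, 5}
assert is_atom(3)
assert not is_strong_atom(3)      # e.g. 3 = 3*3 with neither factor a unit
```

The witness here is the factorization $\overline{3} = \overline{3}\cdot\overline{3}$: both factors generate the ideal $(\overline{3})$, so the atom condition holds, yet neither factor is a unit.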
We now define the ideal $$I:=\langle\{\prod_{i=1}^\infty x_i^{\beta_i}\}\rangle$$ where $\beta_i=0$ for all but finitely many $i$, and $\sum_{i=1}^\infty\beta_i>1$ (essentially $I$ is the ideal generated by monomials of total degree greater than 1). The ring of our focus will be the ring $$R:=T/I.$$ We record some results concerning the properties of the ring $R$ for later use. We first remark that a typical element (coset) of $R$ can be represented in the form $$\epsilon_0+\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n+I$$ where each $\epsilon_i\in\mathbb{F}$ and each $\overline{X}_i$ is a monomial from $R$ of the form $\overline{X}_i=x_{i,1}^{a_{i,1}}x_{i,2}^{a_{i,2}}\cdots x_{i,t_i}^{a_{i,t_i}}$. Additionally, if $\overline{X}_i=x_{i,1}^{a_{i,1}}x_{i,2}^{a_{i,2}}\cdots x_{i,t_i}^{a_{i,t_i}}$, we say that $\overline{X}_i$ is [*composed*]{} of the elements $\{x_{i,1}, x_{i,2},\cdots , x_{i, t_i}\}$, and has [*potential*]{} $\sum_{j=1}^{t_i} a_{i,j}$, and we will write $\text{pot}(\overline{X}_i)=\sum_{j=1}^{t_i} a_{i,j}$. If we want to specify a single $x_{i,j}$ we write $\text{pot}_{x_{i,j}}(\overline{X}_i)=a_{i,j}$. Also in the sequel, we will abuse the notation and represent elements of $R$ as elements of $T$ and suppress the coset notation. \[0dim\] $R$ is $0-$dimensional and quasi-local. In particular, every element of $R$ is either nilpotent or a unit. Using the notation from above, we let $d_i=\sum_{j=1}^{t_i}a_{i, j}$ be the potential of the monomial $\overline{X}_i$. If $m=\min_{1{\leqslant}i{\leqslant}n}(d_i)$ then there is an $N\in\mathbb{N}$ such that $p^Nm> 1$ and hence $p^Nd_i> 1$ for all $1{\leqslant}i{\leqslant}n$.
Note first that if $\epsilon_0=0$ then because the characteristic of $R$ is $p$, we have that $$(\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n)^{p^N}=\epsilon_1^{p^N}\overline{X}_1^{p^N}+\cdots +\epsilon_n^{p^N}\overline{X}_n^{p^N}=0.$$ Hence $\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n$ is nilpotent. This computation shows that every nonunit is nilpotent and the statements of the lemma follow. \[nonatomic\] $R$ has no irreducible elements. In particular, $R$ is non-atomic. If $$\overline{X}=x_1^{a_1}x_2^{a_2}\cdots x_k^{a_k}$$ then $\overline{X}^{\frac{1}{p}}=x_1^{\frac{a_1}{p}}x_2^{\frac{a_2}{p}}\cdots x_k^{\frac{a_k}{p}}\in R$. Since any monomial has a nontrivial (nonassociate) $p^{\text{th}}$ root in $R$, an arbitrary nonzero, nonunit $\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n$ has the nonassociate $p^{\text{th}}$ root $\epsilon_1^{\frac{1}{p}}\overline{X}_1^{\frac{1}{p}}+\cdots +\epsilon_n^{\frac{1}{p}}\overline{X}_n^{\frac{1}{p}}$ (since $\mathbb{F}$ is a perfect field). Hence $R$ contains no irreducibles and is therefore non-atomic. The following lemma is straightforward, but will be useful later. The content basically asserts that multiplying the two lowest potential terms (of highest degree) yields a nonzero term in the product of two polynomials (assuming, of course, that the product is not identically $0$). \[survive\] Let $f(t)=\sum_{i=0}^nf_it^i, g(t)=\sum_{i=0}^mg_it^i\in R[t], (f_i,g_i\in R)$ be such that $f(t)g(t)\neq 0$. If $f_j$ contains a monomial that minimizes potential among all monomials in $f(t)$ (and $j$ is maximized in the case that there are multiple monomials of minimal potential) and $g_{j^{\prime}}$ is the analog term for $g(t)$, then the coefficient of $t^{j+j^{\prime}}$ has a (surviving) monomial that is the sum of these minimal potentials. 
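Lemmas \[0dim\] and \[nonatomic\] can be illustrated computationally in a small model of $R$ (our own ad hoc encoding, with two variables and $p=3$): an element is a dictionary mapping monomials, i.e. pairs of rational exponents, to coefficients in $\mathbb{F}_3$, and multiplication discards any monomial of total degree greater than $1$, which models the ideal $I$:

```python
# Toy model of R with two variables and p = 3. We check that x_1 has the
# p-th root x_1^{1/3} (so no element is irreducible, as in the lemma) and
# that e = x_1^{1/3} + x_2^{1/3} satisfies e^3 = x_1 + x_2 (Frobenius) and
# e^9 = 0 (every nonunit is nilpotent).
from fractions import Fraction as F

p = 3

def mul(f, g):
    """Multiply two elements of R, killing monomials of degree > 1."""
    out = {}
    for (a1, a2), c in f.items():
        for (b1, b2), d in g.items():
            e1, e2 = a1 + b1, a2 + b2
            if e1 + e2 > 1:              # the ideal I
                continue
            out[(e1, e2)] = (out.get((e1, e2), 0) + c * d) % p
    return {m: c for m, c in out.items() if c}

def power(f, n):
    out = {(F(0), F(0)): 1}              # the identity of R
    for _ in range(n):
        out = mul(out, f)
    return out

x1 = {(F(1), F(0)): 1}
root = {(F(1, 3), F(0)): 1}              # x_1^{1/3}
assert power(root, 3) == x1              # every monomial has a p-th root

e = {(F(1, 3), F(0)): 1, (F(0), F(1, 3)): 1}       # x_1^{1/3} + x_2^{1/3}
assert power(e, 3) == {(F(1), F(0)): 1, (F(0), F(1)): 1}   # e^3 = x_1 + x_2
assert power(e, 9) == {}                 # e is nilpotent
```

In the last computation the cross terms of the cube carry binomial coefficients divisible by $p=3$ and vanish, exactly as in the Frobenius argument of the proof, and raising once more pushes every surviving monomial past total degree $1$.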
Suppose that $f_j$ contains a monomial of minimal potential in $f(t)$ (and in the case of multiple minima, we assume that $j$ is maximal) and $g_{j^{\prime}}$ is the analog for $g(t)$. We will call these monomials (reordering if necessary), $z_1:=x_1^{a_1}x_2^{a_2}\cdots x_k^{a_k}$ and $z_2:=x_1^{b_1}x_2^{b_2}\cdots x_k^{b_k}$ respectively. Here each $a_i, b_i{\geqslant}0$ and $a_i+b_i>0$ for all $1{\leqslant}i{\leqslant}k$. Since the potential of every term in $f(t)$ (resp. $g(t)$) at degree greater than $j$ (resp. $j^{\prime}$) strictly exceeds $\text{pot}(z_1)$ (resp. $\text{pot}(z_2)$), it remains only to show that there is a monomial of potential $\text{pot}(z_1)$ and one of potential $\text{pot}(z_2)$ whose product cannot be cancelled by the product of two other monomials. To this end, we reselect $z_1$ and $z_2$ as follows. Among all monomials of minimal potential, select to maximize $a_1$ (resp. $b_1$). If there are multiple solutions in either case select from among these to maximize $a_2$ (resp. $b_2$). The process terminates for either monomial if a unique maximum is found, and in any case it will terminate for both by the $k^{\text{th}}$ step. We now observe that if we can find two other monomials of $f_j$ and $g_{j^{\prime}}$ respectively, say $w_1, w_2$ such that $\text{pot}(w_i)=\text{pot}(z_i)$ for $i=1,2$ and $w_1w_2=z_1z_2$ then given our selection of $z_1$ and $z_2$, we can see that $\text{pot}_{x_i}(w_1)=\text{pot}_{x_i}(z_1), 1{\leqslant}i{\leqslant}k$. Hence $w_1=z_1$ and $w_2=z_2$, and this establishes the lemma. For simplicity, we consider a two-variable analog of the ring we constructed earlier. \[two\] Let $A:=K[x^{\alpha},y^{\beta}]$ where $\alpha, \beta$ range over the non-negative rationals. If $I$ is the ideal generated by all monomials of degree strictly greater than 1, then in the ring $(A/I)[t]$, the polynomial $x+yt$ (abusing the notation) is strongly irreducible. 
Let $K$ be a field and let $K[x,y;M]$ be the monoid domain in the indeterminates $x$ and $y$ with exponents from the monoid $M$. In $R:=K[x,y;\mathbb{Q}^+]$ we impose the deglex order (see, for example, [@adams]) as follows. If $a,b,c,d\in\mathbb{Q}$ are positive we declare that $x^ay^b\prec x^cy^d$ if $a+b<c+d$. If $a+b=c+d$ we again say that $x^ay^b\prec x^cy^d$ if $a<c$. So this totally orders the subset of nonzero monomials. To simplify the argument, we argue from the point of view of domains as follows. We now suppose that $x+yt=fg +h$ where $f,g\in R[t]$ are two nonunits $\text{mod}(IR[t])$, where $I$ is the ideal of $R$ generated by all monomials of total degree greater than 1 and $h\in IR[t]$. We denote by $\text{min}(f)$ the monomial(s) of least degree in $f$. Since we have that $x+yt=fg+h$, it must be the case that $1=\text{deg}(\text{min}(fg))=\text{deg}(\text{min}(f))+\text{deg}(\text{min}(g))$. Letting $a=\text{deg}(\text{min}(f))$ and $b=\text{deg}(\text{min}(g))$, we take $u,v$ to be two monomials occurring in $f,g$ respectively such that $uv\neq 0\ \text{mod}(IR[t])$. Note that $1{\geqslant}\text{deg}(u)+\text{deg}(v){\geqslant}a+b=1$ forces $\text{deg}(u)=a$ and $\text{deg}(v)=b$. We now throw out all monomials of $f$ of degree larger than $a$ and all monomials of $g$ with degree larger than $b$, and observe that this means $\text{deg}(u)=a$ for every monomial $u$ occurring in $f$ (respectively $\text{deg}(v)=b$ for every monomial $v$ occurring in $g$). From this, we conclude that $x+yt=fg$ (hence this factorization is analogous to a factorization in an integral domain). Without loss of generality, we can assume that $g\in R$ and so if we write $f=f_0+f_1t$ then $x=\text{min}(f_0)\text{min}(g)$ and $y=\text{min}(f_1)\text{min}(g)$. Hence $g$ has minimal monomials of the form $x^b$ and $y^b$. We first assume that $0<a,b< 1$. 
So we consider a (minimal) monomial of $f_0$ (say $z$) that maximizes $\text{deg}_y(z)$ and write $z=x^\alpha y^\beta$ with $\alpha+\beta=a$ and note that the monomial $zy^b=x^\alpha y^{\beta+b}$ must survive in the product and this is our contradiction. Hence either $a=0$ or $b=0$. If $a=0$ then each coefficient of $f$ is a unit in which case the previous argument demonstrates that the degree $0$ term of $fg$ cannot be (just) $x$. Hence we conclude that $b=0$ and hence $g$ is a unit. So we see that $x+yt$ is a strong atom. \[unit\] Any element $f(t)\in R[t]$ that has at least one unit coefficient is either a unit or has a factorization into no more than $n$ strong atoms where $n$ is the highest degree at which $f(t)$ has a unit coefficient. Let $\mathfrak{M}$ be the maximal ideal of $R$ generated by all the monomials $x_i$ and consider the image of $f(t)$ in the domain $R[t]/\mathfrak{M}[t]\cong\mathbb{F}[t]$, which we will denote by $\overline{f}(t)$. Any factorization of $f(t)\in R[t]$ must have the property that each factor must have at least one unit coefficient. Hence given any decomposition $$f(t)=f_1(t)f_2(t)\cdots f_m(t)$$ there is a corresponding factorization in $\mathbb{F}[t]$ $$\overline{f}(t)=\overline{f}_1(t)\overline{f}_2(t)\cdots \overline{f}_m(t).$$ Note that if $\text{deg}(\overline{f}_i(t))=0$, then $f_i(t)$ is a unit in $R[t]$ and so we will discount this possibility and assume that each $\text{deg}(\overline{f}_i(t)){\geqslant}1$. Since $\mathbb{F}[t]$ is a UFD, this puts an upper bound (namely $\text{deg}(\overline{f}(t))$) on the length of the second decomposition. Since each factor of $f(t)\in R[t]$ must have a (positive degree) unit coefficient, we see that factoring $f(t)$ must terminate after no more than $n$ steps where $n$ is the largest degree at which $f(t)$ has a unit coefficient. Note this argument also demonstrates that each $f_i(t)$ must be strongly irreducible. 
Indeed if $f_i(t)=g(t)h(t)$ then $\overline{f}_i(t)=\overline{g}(t)\overline{h}(t)$. Then one of these factors (say $\overline{g}(t)$) is a unit and hence its degree is $0$. So $g(t)$ is a unit (constant term) plus a sum of nilpotent elements (higher degree terms) and hence is a unit in $R[t]$. This establishes the proposition. The ring $R$ is a non-atomic ring such that $R[t]$ is strongly atomic. What is more, given $f(t)\in R[t]$, a nonzero, nonunit polynomial, one of the following occurs. 1. If $f(t)$ has a unit coefficient, then $f(t)$ can be written as a product of no more than $n$ strong atoms where $n$ is the highest degree at which $f(t)$ has a unit coefficient. 2. If $f(t)\in\mathfrak{M}[t]$ has a factorization $f(t)=g(t)h(t)$ with both $g(t), h(t)\in\mathfrak{M}[t]$ then $f(t)$ can be factored into two strong atoms. 3. If $f(t)\in\mathfrak{M}[t]$ does not have a factorization $f(t)=g(t)h(t)$ with both $g(t), h(t)\in\mathfrak{M}[t]$ then $f(t)$ has a factorization into no more than $\text{deg}(f(t))+2$ strong atoms. The fact that $R$ is non-atomic is from Proposition \[nonatomic\]. To verify that $R[t]$ is strongly atomic, it suffices to show that if $f(t)\in R[t]$ is a nonzero nonunit, then one of the three statements holds. As the first statement is immediate from Proposition \[unit\], we focus on the last two. To verify the last two statements, we argue in tandem. First suppose that $$f(t)=g(t)h(t)$$ with both $g(t), h(t)\in\mathfrak{M}[t]$. Suppose also that $g(t)$ and $h(t)$ are composed of the elements $x_1,x_2,\cdots, x_m$ and let $y$ and $z$ be two other elements (homomorphic images of the original indeterminates $\{x_i\}$) that are distinct from the elements composing $g(t)$ and $h(t)$. Since $y, z$ annihilate all of $\mathfrak{M}$, we have the factorization $$f(t)=(g(t)+yt+z)(h(t)+yt+z).$$ It now suffices to show (without loss of generality) that $g(t)+yt+z$ is strongly irreducible. 
By way of contradiction assume that $g(t)+yt+z=p(t)q(t)$ and consider the ideal $\mathfrak{N}[t]$ where $\mathfrak{N}$ is the ideal generated by all positive rational powers of the elements $x_i$ with the exception of $y$ and $z$. Passing to the homomorphic image $R[t]/\mathfrak{N}[t]\cong(R/\mathfrak{N})[t]$, we obtain the equation $$yt+z=\overline{p}(t)\overline{q}(t).$$ But Proposition \[two\] assures us that $yt+z$ is strongly irreducible. Hence, without loss of generality, $\overline{p}(t)$ is a unit in $(R/\mathfrak{N})[t]$ and since $\mathfrak{N}$ is generated by nilpotents, $p(t)$ is a unit in $R[t]$. For the final case, we assume that $f(t)$ cannot be factored into a product of two elements from $\frak{M}[t]$. If $f(t)$ is strongly irreducible, then we are done. If not, we can assume $f(t)=g(t)h(t)$ with $g(t)\in(\mathfrak{M}, t)$ and $h(t)\in\mathfrak{M}[t]$. Additionally, it must be the case that $g(t)$ has a term with unit coefficient. Applying Lemma \[survive\] to the product $g(t)h(t)$, we see that the highest degree at which $g(t)$ can possibly have a unit coefficient is some $m{\leqslant}\text{deg}(f(t))$. Hence, Proposition \[unit\] shows that $g(t)$ can be factored into no more than $m{\leqslant}\text{deg}(f(t))$ strongly irreducible factors. Once we have produced a $g(t)$ whose unit coefficient occurs at the largest possible degree, we see that $h(t)$ cannot be decomposed with a factor that is not in $\mathfrak{M}[t]$ (lest we would be able to produce a factor of $f(t)$ with a unit coefficient at a higher degree than the one in $g(t)$). So if $h(t)$ is not irreducible, we apply the previously proved statement of this theorem to see that $h(t)$ can be decomposed into two strong irreducibles. Putting it all together, $f(t)$ has a factorization into no more than $\text{deg}(f(t))+2$ strong irreducibles. 
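The behavior established above can be checked in a small computer model. The following Python sketch is a toy implementation written for this exposition (with $p=2$ and hand-picked rational exponents; the helper names `mono`, `add`, `mul` are ad hoc, not from the paper). It encodes elements of $R[t]$ as dictionaries and illustrates the nilpotency of Lemma \[0dim\], the $p$-th roots of Lemma \[nonatomic\], and the padding trick $f=(g+yt+z)(h+yt+z)$ from the proof above.

```python
# Toy model of R = T/I in characteristic p = 2: monomials carry non-negative
# rational exponents, and any product of total degree > 1 is killed by I.
from fractions import Fraction

P = 2  # the characteristic p of the base field F = GF(2)

def _mul_mono(m1, m2):
    """Multiply two monomials (dicts var -> exponent); None if the product lies in I."""
    out = dict(m1)
    for v, e in m2.items():
        out[v] = out.get(v, Fraction(0)) + e
    if sum(out.values(), Fraction(0)) > 1:   # total degree > 1  =>  0 in R
        return None
    return frozenset(out.items())

def mono(tdeg, **exps):
    """A single monomial (prod x_i^{e_i}) t^{tdeg} with coefficient 1."""
    return {(tdeg, frozenset((v, Fraction(e)) for v, e in exps.items())): 1}

def add(f, g):
    h = dict(f)
    for k, c in g.items():
        h[k] = (h.get(k, 0) + c) % P
    return {k: c for k, c in h.items() if c}

def mul(f, g):
    h = {}
    for (d1, m1), c1 in f.items():
        for (d2, m2), c2 in g.items():
            m = _mul_mono(dict(m1), dict(m2))
            if m is not None:
                k = (d1 + d2, m)
                h[k] = (h.get(k, 0) + c1 * c2) % P
    return {k: c for k, c in h.items() if c}

# (i) Nonunits with zero constant term are nilpotent (Lemma [0dim]):
m = add(mono(0, x="1/2"), mono(0, y="1/3"))
print(mul(mul(m, m), mul(m, m)))             # {}  -- m^(p^2) = 0

# (ii) Every monomial has a p-th root, so R has no irreducibles (Lemma [nonatomic]):
print(mul(mono(0, x="1/4"), mono(0, x="1/4")) == mono(0, x="1/2"))  # True

# (iii) For fresh degree-1 generators y, z annihilating M, the padding trick
# f = g*h = (g + yt + z)(h + yt + z) used in the theorem's proof:
g = mono(0, x="1/2")
h = add(mono(0, x="1/2"), mono(1, x="1/4"))
ytz = add(mono(1, y="1"), mono(0, z="1"))
print(mul(add(g, ytz), add(h, ytz)) == mul(g, h))  # True
```

The cross terms in (iii) vanish because every product involving $y$ or $z$ and a monomial of positive degree exceeds total degree 1, which is exactly why the padded factors are again honest factors of $f(t)$.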
It is interesting to point out that the fact that our example is “full of” nilpotents should not be too surprising given the following observation (prompted by a remark of the referee). Let $R$ be reduced and $a\in R$ be a nonzero nonunit. If $a=fg$ with $f,g\in R[t]$, and $f$ is a strong atom in $R[t]$, then $g\in R$. We assume that $g\notin R$ and let $c$ be the leading coefficient of $g$. If $P$ is any prime ideal of $R$, we consider the reduction to $(R/P)[t]\cong R[t]/PR[t]$. Since $(R/P)[t]$ is a domain, we must conclude that $cf\in PR[t]$. Hence $cf$ is in every prime ideal of $R[t]$ and so must be nilpotent. As $R$ (and hence $R[t]$) is reduced, it must be the case that $cf=0$. So $f=f(1+ct)$. Since $f$ is a strong atom, this means that $1+ct$ is a unit in $R[t]$, but $R$ is reduced, so we conclude that $c=0$. Hence $g\in R$. If $R$ is a reduced ring and $R[t]$ is strongly atomic, then $R$ is (strongly) atomic. Suppose that $a=f_1f_2\cdots f_n$ with each $f_i\in R[t]$ being a strong atom. By the previous lemma, we can inductively see that each $f_i$ is a (strong) atom of $R$. Acknowledgement {#acknowledgement .unnumbered} =============== The authors express gratitude to the referee whose careful reading facilitated an improved version of this paper. Additionally, we thank the referee for the observation that made the last result on strong atomicity possible.
--- abstract: 'The objective of the present paper is to prove a cluster multiplication theorem in the quantum cluster algebra of type $A_{2}^{(2)}$. As corollaries, we obtain bar-invariant $\mathbb{Z}[q^{\pm\frac{1}{2}}]$-bases established in [@cds], and naturally deduce the positivity of the elements in these bases. One bar-invariant basis, the triangular basis of this quantum cluster algebra, is also explicitly described.' address: - 'Department of Applied Mathematics, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, P. R. China' - | Department of Mathematics, University of Wisconsin-Whitewater\ 800 W. Main Street, Whitewater, WI.53190. USA - 'School of Mathematical Sciences and LPMC, Nankai University, Tianjin, P. R. China' - | Department of Mathematical Sciences\ Tsinghua University\ Beijing 100084, P. R. China author: - 'Liqian Bai, Xueqing Chen, Ming Ding and Fan Xu' title: 'Cluster multiplication theorem in the quantum cluster algebra of type $A_{2}^{(2)}$' --- [^1] Introduction ============ The concept of cluster algebras was introduced by Fomin and Zelevinsky [@ca1][@ca2] in order to develop an algebraic framework for understanding total positivity and canonical bases in semisimple algebraic groups. As a noncommutative analogue of cluster algebras, quantum cluster algebras were defined by Berenstein and Zelevinsky in  [@BZ2005]. Under the specialization $q=1,$ the quantum cluster algebras degenerate to cluster algebras. For the classical cluster algebras, Sherman and Zelevinsky [@SZ] first gave the cluster multiplication formula in rank $2$ cluster algebras of finite and affine types. For the general case, the cluster categories were introduced in  [@BMRRT] as the categorification of acyclic cluster algebras. Cluster algebras have a close link to quiver representations via cluster categories. The link is explicitly characterized by the Caldero-Chapoton map [@CC] and the Caldero-Keller multiplication theorems [@CK][@CK-2]. 
The Caldero-Chapoton map associates to the objects in the cluster categories certain Laurent polynomials and, in particular, sends indecomposable rigid objects to cluster variables. The remarkable Caldero-Keller multiplication theorems show the multiplication rules between images of objects under the Caldero-Chapoton map. For simply laced Dynkin quivers, Caldero and Keller constructed a cluster multiplication formula (of finite type) between two generalized cluster variables in [@CK]. This multiplication is similar to the multiplication in a dual Hall algebra and unifies homological and geometric properties of cluster categories and combinatorial properties of cluster algebras. Since cluster algebras were introduced in order to study canonical bases, it is important to construct integral bases of cluster algebras. In the cluster theory, the Caldero-Chapoton map and the Caldero-Keller cluster multiplication theorem open a new way to construct cluster algebras from 2-Calabi-Yau categories and play a very important role in obtaining structural results such as bases with good properties, the positivity conjecture, the denominator conjecture and so on  [@CK][@dx][@DXX]. The cluster multiplication formula of finite type was generalized to affine type in [@Hubery2005] and to any type in [@XiaoXu]. Palu [@Palu2] further extended the formula to 2-Calabi-Yau categories with cluster tilting objects. In [@DWZ], the full generalization of the Caldero-Chapoton map was obtained for quivers with potentials. Following this link, some good bases have been constructed for finite and affine cluster algebras [@CK][@calzel][@DXX]. It is natural to ask the question: what is the quantum analogue of this link? 
Recently, Rupel [@rupel] defined a quantum version of the Caldero-Chapoton map for the quantum cluster algebras over finite fields associated with valued acyclic quivers and he conjectured that cluster variables could be expressed as images of indecomposable rigid objects under the quantum Caldero-Chapoton formula. A key ingredient of the conjecture is to confirm the mutation rules between quantum cluster variables given by [@BZ2005]. Most recently, the conjecture has been proved by Qin [@fanqin] for acyclic equally valued quivers. There the author constructed a quantum cluster multiplication formula and then confirmed the mutation rules between quantum cluster variables. Note that Qin verified the formula for the usual quantum cluster algebras through the existence of counting polynomials instead of working over the finite field. The quantum Caldero-Chapoton maps are further generalized in  [@DMSS][@Davison]. In  [@dx-2], Ding and Xu proved a multiplication theorem for acyclic quantum cluster algebras which generalized the quantum cluster multiplication formula in [@fanqin] and could be viewed as a quantum analogue of the one-dimensional Caldero-Keller multiplication theorem discussed in [@CK-2]. Compared to the role which the Caldero-Keller multiplication theorems play for cluster algebras, the quantum multiplication theorem is worth highlighting and also reflects the difficulty of proving the more general quantum analogue of the Caldero-Keller multiplication theorems. By using this multiplication theorem, it is not too difficult to construct some good $\mathbb{ZP}$-bases in quantum cluster algebras of finite and affine types. By specializing $q$ and coefficients to $1$, these bases induce the good bases for cluster algebras of finite [@CK] and affine types [@DXX], respectively. 
One may expect to explicitly express the multiplication of two basis elements in terms of basis elements in quantum cluster algebras, i.e., to get structure constants clearly and explicitly. Ding and Xu  [@dx] gave the cluster multiplication formula in the quantum cluster algebra of the Kronecker quiver. By using the multiplication formula, they constructed bar-invariant bases of the quantum cluster algebra of the Kronecker quiver as quantum analogues of the canonical basis, semicanonical basis and dual semicanonical basis of the corresponding cluster algebra. As a byproduct, they also proved positivity of the elements in these bases. In this paper, we construct a nontrivial cluster multiplication formula in the quantum cluster algebra of the non-simply-laced valued quiver $A_{2}^{(2)}$, which is parallel to the results obtained for the Kronecker quiver but is not a trivial generalization. This formula yields some important properties of the quantum cluster algebra of type $A_{2}^{(2)}$. For example, we construct three integral bases of this quantum cluster algebra. It is worth highlighting that the basis $\mathcal{B}$ we obtained coincides with the “quantum greedy basis" (or “quantum atomic basis", “quantum theta basis") defined in  [@LLRZ]. In general, the quantum greedy basis does not have positive structure constants, but it does in this case, as follows from the main result, Theorem  \[theorem2\], of the present paper. Whether quantum greedy bases exist for general cases is still not known. The basis $\mathcal{S}$ is closely related to the dual canonical basis of a quantum unipotent cell (a subset of the dual canonical basis of $U_q(n)$), when suitable frozen variables are attached to the valued quiver. In the last section, we prove that $\mathcal{S}$ is the triangular basis in the sense of Qin [@fanqin2], or the triangular basis in the sense of Berenstein-Zelevinsky  [@BZ2014]. 
Note that Qin [@fanqin3] proved that both definitions of the triangular bases are equivalent for the seeds associated with acyclic quivers and for the seeds associated with bipartite skew-symmetrizable matrices. The basis $\mathcal{D}$ is similar to the quantum generic basis (quantum dual semi-canonical basis) in  [@KQ]. Constructing explicit multiplication formulas for general cases remains open. Preliminaries ============= For the terminology related to quantum cluster algebras, one can refer to  [@BZ2005] for more details; for the quantum cluster algebra of type $A_2^{(2)}$, refer to  [@cds]. In this paper, we consider the valued quiver associated to the compatible pair $(\Lambda,B)$ where $$\Lambda=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right) \text{~and~} B=\left( \begin{array}{cc} 0 & 1 \\ -4 & 0 \\ \end{array} \right).$$ Note that $\Lambda^{T}B=\left( \begin{array}{cc} 4 & 0 \\ 0 & 1 \\ \end{array} \right)$. Now let $q$ denote the formal variable and $\mathcal{F}$ be the skew field of fractions of the *quantum torus* $\mathcal{T}=\ZZ[q^{\pm \frac{1}{2}}]\langle X_1,X_2~|~X_1X_2=qX_2X_1\rangle$. The *quantum cluster algebra* $\A_q(1,4)$ is the $\ZZ[q^{\pm \frac{1}{2}}]$-subalgebra of $\mathcal{F}$ generated by the cluster variables $X_k$ for $k\in \mathbb{Z}$, recursively defined by $$X_{k-1}X_{k+1}=\left\{ \begin{aligned} q^{\frac{1}{2}}X_k+1,& ~\text{ if }~k~\text{is odd}; \\ q^{2}X_{k}^{4}+1,& ~\text{ if }~k~\text{is even}. \end{aligned} \right.$$ Note that $X_k\in\mathcal{T}$ by the well-known quantum Laurent phenomenon  [@BZ2005]. For each $(a,b)\in\ZZ^{2}$, if we set $X^{(a,b)}=q^{-\frac{1}{2}ab}X_{1}^{a}X_{2}^{b}$, then $X^{(a,b)}X^{(c,d)}=q^{-\frac{1}{2}(bc-ad)}X^{(a+c,b+d)}$. 
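The recursion and the normalized monomials can be modeled directly. The following SymPy sketch is a toy implementation written for this exposition (the dictionary encoding and the helper names `mul`, `add`, `scal`, `eq` are our own, not from the paper); the symbol `s` stands for $q^{\frac{1}{2}}$, and an element of the quantum torus is stored as a dictionary sending $(a,b)$ to the coefficient of $X^{(a,b)}$.

```python
import sympy as sp

s = sp.symbols('s')  # s = q^{1/2}

def mul(f, g):
    # X^{(a,b)} X^{(c,d)} = q^{-(bc-ad)/2} X^{(a+c,b+d)} = s^{ad-bc} X^{(a+c,b+d)}
    h = {}
    for (a, b), cf in f.items():
        for (c, d), cg in g.items():
            k = (a + c, b + d)
            h[k] = sp.expand(h.get(k, 0) + s**(a*d - b*c) * cf * cg)
    return {k: v for k, v in h.items() if v != 0}

def add(f, g):
    h = dict(f)
    for k, v in g.items():
        h[k] = sp.expand(h.get(k, 0) + v)
    return {k: v for k, v in h.items() if v != 0}

def scal(c, f):
    return {k: sp.expand(c * v) for k, v in f.items()}

def eq(f, g):
    return add(f, scal(-1, g)) == {}

X1, X2, one = {(1, 0): 1}, {(0, 1): 1}, {(0, 0): 1}

# k = 2 (even): X_1 X_3 = q^2 X_2^4 + 1, so X_3 = X_1^{-1}(q^2 X_2^4 + 1)
X2_4 = mul(mul(X2, X2), mul(X2, X2))
X3 = mul({(-1, 0): 1}, add(scal(s**4, X2_4), one))

# k = 3 (odd): X_2 X_4 = q^{1/2} X_3 + 1, so X_4 = X_2^{-1}(q^{1/2} X_3 + 1)
X4 = mul({(0, -1): 1}, add(scal(s, X3), one))

# Both are Laurent and match the expansions used later in the paper:
# X_3 = X^{(-1,4)} + X^{(-1,0)},  X_4 = X^{(-1,3)} + X^{(-1,-1)} + X^{(0,-1)}.
print(eq(X3, {(-1, 4): 1, (-1, 0): 1}))               # True
print(eq(X4, {(-1, 3): 1, (-1, -1): 1, (0, -1): 1}))  # True
```

The inverses here are harmless because $X_1$ and $X_2$ are single normalized monomials, with $X^{(a,b)-1}$ simply $X^{(-a,-b)}$; the check illustrates the quantum Laurent phenomenon on the first two mutations.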
The $\mathbb{Z}$-linear bar-involution on the based quantum torus $\mathcal{T}$ is defined as follows: $$\overline{q^{\frac{r}{2}}X^{(a,b)}}=q^{-\frac{r}{2}}X^{(a,b)}, \ \ \ \text{for any}~r,a \text{ and } b\in \mathbb{Z}.$$ In [@cds], the authors constructed three kinds of bar-invariant $\ZZ[q^{\pm \frac{1}{2}}]$-bases of the quantum cluster algebra $\A_q(1,4)$ by using the standard monomials discussed in [@BZ2005]. We now briefly recall some notations and results in  [@cds]. We define that $$\begin{aligned} X_\delta:=&X^{(-1,-2)}+X^{(-1,2)}+X^{(1,-2)}+(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X^{(0,-2)} &\\ =&qX_{0}^{2}X_3-q^{2}(qX_1+q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{2}^{2}.&\end{aligned}$$ Let $$\begin{aligned} &\mathcal{B}=\{q^{-\frac{1}{2}ab}X^{a}_{m}X^{b}_{m+1}~|~m\in\ZZ,(a,b)\in\ZZ^{2}_{\geq0}\}\cup \{F_{n}(X_\delta)\},\\ &\mathcal{S}=\{q^{-\frac{1}{2}ab}X^{a}_{m}X^{b}_{m+1}~|~m\in\ZZ,(a,b)\in\ZZ^{2}_{\geq0}\}\cup \{S_{n}(X_\delta)\},\\ &\mathcal{D}=\{q^{-\frac{1}{2}ab}X^{a}_{m}X^{b}_{m+1}~|~m\in\ZZ,(a,b)\in\ZZ^{2}_{\geq0}\}\cup \{X_{\delta}^{n}\}, \end{aligned}$$ where $F_{n}(x)$ and $S_{n}(x)$ are well-known Chebyshev polynomials defined by $$F_0(x)=1,F_1(x)=x, F_2(x)=x^2-2, F_{n+1}(x)=F_{n}(x)x-F_{n-1}(x)~\text{for}~n\geq2,$$ $$S_0(x)=1,S_1(x)=x, S_2(x)=x^2-1, S_{n+1}(x)=S_{n}(x)x-S_{n-1}(x)~\text{for}~n\geq2,$$ and $F_n(x)=S_n(x)=0$ for $n<0$. The homomorphism $\sigma_2:\A_{q}(1,4) \rightarrow \A_{q}(1,4)$ defined by $X_m\mapsto X_{m+2}$ and $q^{\pm\frac{1}{2}}\mapsto q^{\pm\frac{1}{2}}$ is an automorphism of $\A_q(1,4)$ [@cds Section 4]. Note that $\sigma_2 (X_\delta)=X_\delta$. We can define a partial order $\leq$ on $\mathbb{Z}^2$ as follows: $(r_1,r_2)\leq(s_1,s_2)$ if $r_1\leq s_1$ and $r_2\leq s_2$ for $(r_1,r_2), (s_1,s_2)\in \mathbb{Z}^2$. Moreover if there exists some $i \in \{1,2\}$ such that $r_i< s_i$, we will write $(r_1,r_2)<(s_1,s_2)$. 
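The Chebyshev-like families $F_n$ and $S_n$ just defined satisfy the product identities used in the next section, and this is easy to confirm symbolically. The following SymPy sketch (an illustrative check written for this exposition; the cutoff `N` is arbitrary) generates both families from their recurrences, seeding $F_2=x^2-2$ separately since $F_1x-F_0=x^2-1$.

```python
import sympy as sp

x = sp.symbols('x')
N = 10  # arbitrary cutoff for the check

# S_n obeys the plain recurrence from S_0 = 1, S_1 = x.
S = [sp.Integer(1), x]
while len(S) <= N:
    S.append(sp.expand(S[-1]*x - S[-2]))

# F_n needs F_2 = x^2 - 2 seeded by hand; the recurrence then runs for n >= 2.
F = [sp.Integer(1), x, sp.expand(x**2 - 2)]
while len(F) <= N:
    F.append(sp.expand(F[-1]*x - F[-2]))

# Product identities from Section 3, for m > n >= 1:
#   F_n F_m = F_{m+n} + F_{m-n}   and   F_n F_n = F_{2n} + 2.
for m in range(1, 5):
    for n in range(1, m):
        assert sp.expand(F[n]*F[m] - F[m+n] - F[m-n]) == 0
for n in range(1, 5):
    assert sp.expand(F[n]*F[n] - F[2*n] - 2) == 0

# A standard Chebyshev-type relation linking the two families:
for n in range(2, N):
    assert sp.expand(S[n] - S[n-2] - F[n]) == 0

print("all identities verified")
```

The last relation, $F_n = S_n - S_{n-2}$ for $n\geq 2$, is the standard link between the two kinds of (normalized) Chebyshev polynomials and explains why the bases $\mathcal{B}$ and $\mathcal{S}$ differ only by a triangular change of the imaginary-root part.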
In  [@cds], the authors showed that every element in $\{X_n$ $(n\in \mathbb{Z}\setminus \{1,2\}), F_{n}(X_\delta)(n\geq 1), S_{n}(X_\delta)(n\geq 1)\}$ has a minimal non-zero term $X^{(a,b)}$ according to the partial order $\leq$. The vector $(-a,-b)$ associated to this minimal non-zero term $X^{(a,b)}$ of the corresponding element will be called the denominator vector. Then by using the standard monomials, they proved that $\mathcal{B}$, $\mathcal{S}$ and $\mathcal{D}$ are bar-invariant $\ZZ[q^{\pm \frac{1}{2}}]$-bases of the quantum cluster algebra $\A_q(1,4)$. Unfortunately, the structure constants and the positivity are not presented in this construction. This motivated our study of the multiplication formulas. Cluster multiplication theorem and positive bases ================================================= In this section, we mainly prove the cluster multiplication theorem of the quantum cluster algebra $\A_q(1,4)$. Since the element $X_\delta$ introduced in the previous section plays a crucial role in the cluster multiplication theorem, we first give another expression for this element. \[lem1\] In $\A_q(1,4)$, we have that $ X_\delta=q^{-1}X_{4}^{2}X_1-q^{-2}(q^{-1}X_3+q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{2}^{2}. 
$ Note that $X_3=X^{(-1,4)}+X^{(-1,0)}$ and $X_4=X^{(-1,3)}+X^{(-1,-1)}+X^{(0,-1)}$, then we have that $$\begin{aligned} &q^{-1}X_{4}^{2}X_1-q^{-2}(q^{-1}X_3+q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{2}^{2}\\ =& q^{-1}(X^{(-1,3)}+X^{(-1,-1)}+X^{(0,-1)})^{2}X^{(1,0)} -(q^{-3}(X_3+q^{\frac{1}{2}}+q^{\frac{3}{2}})X_{2}^{2}) \\ =&q^{-4}X^{(-1,6)}+X^{(-1,-2)}+X^{(1,-2)}+(q^{-\frac{1}{2}}+q^{\frac{1}{2}})q^{-2}X^{(0,2)}\\ &+(q^{-2}+q^{2})q^{-2}X^{(-1,2)} +(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X^{(0,-2)}\\ &-(q^{-4}X^{(-1,6)}+q^{-4}X^{(-1,2)}+(q^{-\frac{5}{2}}+q^{-\frac{3}{2}})X^{(0,2)})\\ =&X^{(-1,-2)}+X^{(1,-2)}+X^{(-1,2)}+ (q^{-\frac{1}{2}}+q^{\frac{1}{2}})X^{(0,-2)}= X_\delta.\end{aligned}$$ The following proposition is a special case discussed in [@BCDX], here we give an alternative proof by using the above lemma. \[prop-gene\] The quantum cluster algebra $\A_{q}(1,4)$ is the $\ZZ[q^{\pm\frac{1}{2}}]$-algebra generated by $\{X_m,X_{m+1},X_{m+2},X_{m+3}\}$ for any $m\in\ZZ$. By the definition of $X_\delta$, we know that $X_\delta\in\ZZ[q^{\pm\frac{1}{2}}]\langle X_0,X_1,X_2,X_3\rangle$. We have that $X_\delta\in\ZZ[q^{\pm\frac{1}{2}}]\langle X_1,X_2,X_3,X_4\rangle$ by Lemma \[lem1\]. Then through the automorphism $\sigma_2$, we can deduce that $X_\delta\in\ZZ[q^{\pm\frac{1}{2}}]\langle X_m,X_{m+1},X_{m+2},X_{m+3}\rangle$ for any $m\in\ZZ$. Note that for any $n\in\ZZ$, we have that $X_{2n}X_\delta=q^{-\frac{1}{2}}X_{2n-2}+q^{\frac{1}{2}}X_{2n+2}$ (see [@cds Proposition 4.2]). Then we can deduce that $X_{2n}\in\ZZ[q^{\pm\frac{1}{2}}]\langle X_m,X_{m+1},X_{m+2},X_{m+3}\rangle$ for any $n\in\ZZ$. Since $X_{2n-2}X_{2n}=q^{\frac{1}{2}}X_{2n-1}+1$, we obtain that all cluster variables belong to $\ZZ[q^{\pm\frac{1}{2}}]\langle X_m,X_{m+1},X_{m+2},X_{m+3}\rangle$. Thus $ \A_q(1,4)= \ZZ[q^{\pm\frac{1}{2}}]\langle X_m,X_{m+1},X_{m+2},X_{m+3}\rangle. 
$ For each $n\in\ZZ$, we denote by $$\begin{aligned} \langle n\rangle=\left\{ \begin{aligned} 1,&~\text{if~}n~\text{is~odd};\\ 2,&~\text{if~}n~\text{is~even}. \end{aligned} \right.\end{aligned}$$ Let $x\in\mathbb{R}$, we have the floor function $\lfloor x\rfloor:=\text{max}\{m\in\ZZ~|~m\leq x\}$ and the ceiling function $\lceil x\rceil:=\text{min}\{m\in\ZZ~|~m\geq x\}$. For any $m>n\geq1$, it is easy to show that (see [@cds Proposition 4.2]): $$F_n(X_\delta)F_m(X_\delta)=F_{m+n}(X_\delta)+F_{m-n}(X_\delta)~~ \text{ and } ~~F_n(X_\delta)F_n(X_\delta)=F_{2n}(X_\delta)+2.$$ The following cluster multiplication theorem is the main result of the present paper. \[theorem2\] In $\A_q(1,4)$, we have that 1. 1. if $m$ is even and $n$ is positive, then $$\label{equation1} X_{m}F_n(X_{\delta})=q^{-\frac{n}{2}}X_{m-2n}+q^{\frac{n}{2}}X_{m+2n}.$$ 2. if $m$ is odd and $n$ is positive, then $$\begin{aligned} \label{equation2} & X_mF_n(X_\delta)\\=&q^{-n}X_{m-n}^{\langle m-n\rangle}+q^{n}X_{m+n}^{\langle m+n\rangle}\nonumber +\sum\limits_{k\geq1} (\sum\limits_{l=1}^{k} (q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}}+q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}}))F_{n-2k}(X_{\de}).\end{aligned}$$ 2. if $m$ is even and $n$ is positive, then $$\label{equation3} X_mX_{m+2n}=q^{\frac{n}{2}}X_{m+n}^{\langle m+n\rangle}+\sum\limits_{k\geq1}\big(\sum\limits_{l=1}^{2k-1}q^{-\frac{n+1}{2}+l}\big)F_{n-2k+1}(X_{\delta}).$$ 3. 
if $m$ is even and $n$ is positive odd, then $$\begin{aligned} \label{equation4} \nonumber &X_{m-n}X_{m}\\=&\sum\limits_{1<2k<n}\big(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l}\big)X_{m-4k} +\left\{ \begin{aligned} q^{\frac{n}{2}}X^{3}_{m-\frac{2}{3}n},{\hskip 1.9cm}~~~~~~~~~ &~~ n\equiv 0~(\rm{mod}~3) \\ q^{\frac{n-1}{2}}X_{\lfloor m-\frac{2}{3}n\rfloor}X_{\lceil m-\frac{2}{3}n\rceil},&~~ \rm{otherwise,} \end{aligned} \right.\end{aligned}$$ and $$\begin{aligned} \label{equation5} \nonumber & X_{m+n}X_{m}\\=&\sum\limits_{1<2k<n}\big(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{\frac{1}{2}+k-l}\big)X_{m+4k} +\left\{ \begin{aligned} q^{-\frac{n}{2}}X^{3}_{m+\frac{2}{3}n},{\hskip 2.0cm}& ~n\equiv 0~(\rm{mod}~3)~~~~~~~ \\ q^{-\frac{n+1}{2}}X_{\lfloor m+\frac{2}{3}n\rfloor}X_{\lceil m+\frac{2}{3}n\rceil},&~\rm{otherwise.} \end{aligned} \right.\end{aligned}$$ 4. if $m$ is odd and $n$ is positive, then $$\begin{aligned} \label{equation6} \nonumber & X_mX_{m+2n}\\=&q^{2n}X_{m+n}^{2\langle m+n\rangle}+\sum\limits_{k=1}^{n-1}(\sum\limits\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l}) X_{m+2n-2k} +\sum\limits_{k\geq1}c_{n,k}F_{2n-2k}(X_{\delta}),\end{aligned}$$ where $$\begin{aligned} c_{n,k}=\sum\limits_{i=1}^{k}a_i(q^{-2(n-i)-1}+q^{4k-2(n+i)+1})+ \sum\limits_{i=1}^{k-1}b_i(q^{-2(n-i)}+q^{4k-2(n+i)}) +b_kq^{-2(n-k)}\end{aligned}$$ and $a_j=\frac{j(j-1)}{2}$, $b_j=\frac{j(j-1)}{2}+\lceil \frac{j}{2}\rceil$ for positive integer $j$. \(1) In order to prove (\[equation1\]), it suffices to show that $$X_2F_n(X_\delta)=q^{-\frac{n}{2}}X_{2-2n}+q^{\frac{n}{2}}X_{2+2n}.$$ We will prove the claim by induction on $n$. When $n=1$, it follows from [@cds Proposition 4.2]. 
When $n=2$, we have that $$\begin{aligned} &X_2F_2(X_\delta)=X_2(X_{\delta}^2-2)=q^{-\frac{1}{2}}X_0X_\delta+q^{\frac{1}{2}}X_4X_\delta-2X_2\\ =&(q^{-1}X_{-2}+X_2)+(X_2+qX_6)-2X_2 =q^{-1}X_{-2}+qX_6.\end{aligned}$$ Assume that $X_2F_n(X_\delta)=q^{-\frac{n}{2}}X_{2-2n}+q^{\frac{n}{2}}X_{2+2n}$ for $n\geq2$. Then $$\begin{aligned} &X_2F_{n+1}(X_\delta)=X_2F_n(X_\delta)X_\delta-X_2F_{n-1}(X_\delta)\\ =&q^{-\frac{n+1}{2}}X_{-2n}+q^{-\frac{n-1}{2}}X_{4-2n}+q^{\frac{n-1}{2}}X_{2n}+q^{\frac{n+1}{2}}X_{4+2n} -q^{-\frac{n-1}{2}}X_{4-2n}-q^{\frac{n-1}{2}}X_{2n}\\ =&q^{-\frac{n+1}{2}}X_{-2n}+q^{\frac{n+1}{2}}X_{4+2n}.\end{aligned}$$ To prove (\[equation2\]), it suffices to show that $$\begin{aligned} \label{equ2} &X_1F_n(X_\delta)\\ =&q^{-n}X_{1-n}^{\langle 1-n\rangle}+q^{n}X_{1+n}^{\langle 1+n\rangle}\nonumber+\sum\limits_{k\geq1} \big(\sum\limits_{l=1}^{k} (q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}}+q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}})\big)F_{n-2k}(X_{\de}).\end{aligned}$$ When $n=1$, we have that $X_1X_\delta=q^{-1}X_0^2+qX_2^2$ by [@cds Proposition 4.2]. When $n=2$, we have that $$\begin{aligned} &X_1F_2(X_\delta)=X_1(X_\delta^2-2)=(q^{-1}X_0^2+qX_{2}^{2})X_\delta-2X_1\\ =&q^{-\frac{3}{2}}(q^{-\frac{1}{2}}X_{-1}+1)+q^{-\frac{1}{2}}(q^{\frac{1}{2}}X_{1}+1)+ q^{\frac{1}{2}}(q^{-\frac{1}{2}}X_{1}+1)+q^{\frac{3}{2}}(q^{\frac{1}{2}}X_{3}+1)-2X_1\\ =&q^{-2}X_{-1}+q^{2}X_3+(q^{-\frac{3}{2}}+q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}}).\end{aligned}$$ Assume that (\[equ2\]) is true. Note that $X_1F_{n+1}(X_\delta)=X_1F_n(X_\delta)X_\delta-X_1F_{n-1}(X_\delta)$. 
When $n$ is even, $$\begin{aligned} &X_1F_{n+1}(X_\delta)\\ =&[q^{-n}X_{1-n}+q^{n}X_{1+n}+\sum\limits_{k\geq1}(\sum\limits_{l=1}^{k}(q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}} +q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}})F_{n-2k}(X_\delta))]X_\delta\\ &-q^{1-n}X^{2}_{2-n}-q^{n-1}X_{n}^{2}-\sum\limits_{k\geq1}(\sum\limits_{l=1}^{k}(q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}} +q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}})F_{n-1-2k}(X_\delta))\\ =&q^{-n-1}X_{-n}^{2}+q^{n+1}X^{2}_{n+2}+\sum\limits_{k=1}^{\frac{n}{2}-1} \sum\limits_{l=1}^{k}(q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}} +q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}})F_{n+1-2k}(X_\delta)\\ &+\sum\limits_{l=1}^{\frac{n}{2}}(q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}} +q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}})X_\delta \\ =&q^{-n-1}X^{2}_{-n}+q^{n+1}X_{n+2}^{2}+\sum\limits_{k=1}^{\frac{n}{2}}\sum\limits_{l=1}^{k} (q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}} +q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}})F_{n+1-2k}(X_\delta).\end{aligned}$$ The proof for the odd $n$ is similar. \(2) For $n\geq0$, it suffices to show that $$\label{equ3} X_2X_{2+2n}=q^{\frac{n}{2}}X_{2+n}^{\langle 2+n\rangle}+\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1}q^{-\frac{n+1}{2}+l}) F_{n-2k+1}(X_\delta).$$ When $n=1$, it is the exchange relation. When $n=2$, we have that $$\begin{aligned} X_2X_6 =&q^{-\frac{1}{2}}X_2X_4X_\delta-q^{-1}X_{2}^{2}=q^{-\frac{1}{2}}(q^{\frac{1}{2}}X_3+1)X_\delta-q^{-1}X_{2}^{2}\\ =&X_3X_\delta+q^{-\frac{1}{2}}X_\delta-q^{-1}X_{2}^{2}=qX_{4}^{2}+q^{-\frac{1}{2}}X_\delta.\end{aligned}$$ Assume that (\[equ3\]) is true. Now we calculate $X_2X_{4+2n}$. Since $$X_{2+2n}X_\delta=q^{-\frac{1}{2}}X_{2n}+q^{\frac{1}{2}}X_{2n+4},$$ we have $X_{2n+4}=q^{-\frac{1}{2}}X_{2+2n}X_\delta-q^{-1}X_{2n}$. 
When $n$ is even, it follows that $$\begin{aligned} &X_2X_{4+2n} =q^{-\frac{1}{2}}X_{2}X_{2+n}X_\delta-q^{-1}X_2X_{2n}\\ =&q^{\frac{n-1}{2}}X^{2}_{2+n}X_\delta+\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1}q^{-\frac{n}{2}-1+l}) F_{n+1-2k}(X_\delta)X_\delta -q^{\frac{n-3}{2}}X_{n+1}\\&-\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1}q^{-\frac{n}{2}-1+l}) F_{n-2k}(X_\delta).\end{aligned}$$ Note that $X_{2+n}^{2}X_\delta=q^{-1}X_{n+1}+qX_{n+3}+(q^{\frac{1}{2}}+q^{-\frac{1}{2}})$. Therefore, we have that $$\begin{aligned} &X_2X_{4+2n} \\ =&q^{\frac{n-3}{2}}X_{n+1}+q^{\frac{n+1}{2}}X_{n+3}+q^{\frac{n-1}{2}}(q^{-\frac{1}{2}}+q^{\frac{1}{2}}) +\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1}q^{-\frac{n}{2}-1+l})F_{n+1-2k}(X_\delta)X_\delta\\ &-q^{\frac{n-3}{2}}X_{n+1} -\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1}q^{-\frac{n}{2}-1+l})F_{n-2k}(X_\delta)\\ =&q^{\frac{n+1}{2}}X_{n+3}+(q^{\frac{n}{2}-1}+q^{\frac{n}{2}})+ \sum\limits_{k=1}^{\frac{n}{2}-1}(\sum\limits_{l=1}^{2k-1}q^{-\frac{n}{2}-1+l})F_{n+2-2k}(X_\delta) +\sum\limits_{l=1}^{n-1}q^{-\frac{n}{2}-1+l}X_{\delta}^{2}\\ &-\sum\limits_{l=1}^{n-1}q^{-\frac{n}{2}-1+l}\\ =&q^{\frac{n+1}{2}}X_{n+3}+ \sum\limits_{k=1}^{\frac{n}{2}+1} (\sum\limits_{l=1}^{2k-1}q^{-\frac{n}{2}-1+l})F_{n+2-2k}(X_\delta).\end{aligned}$$ Similarly, we can prove the statement for odd $n$. \(3) To prove (\[equation4\]), it suffices to show that for a positive odd integer $n$, we have that $$\begin{aligned} \label{equ4} \nonumber &X_{1}X_{1+n}\\ =&\sum\limits_{1<2k<n}\big(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l}\big)X_{n+1-4k} +\left\{ \begin{aligned} q^{\frac{n}{2}}X^{3}_{1+\frac{n}{3}},{\hskip 1.75cm}&~n\equiv 0~(\rm{mod}~3);\\ q^{\frac{n-1}{2}}X_{\lfloor 1+\frac{n}{3}\rfloor}X_{\lceil 1+\frac{n}{3}\rceil},&~\rm{otherwise.} \end{aligned} \right.\end{aligned}$$ When $n=1$, it is trivial. When $n=3$, note that $X_4=q^{-\frac{1}{2}}X_2X_\delta-q^{-1}X_0$ by (\[equation1\]). 
It follows that $$\begin{aligned} &X_1X_4=q^{-\frac{1}{2}}X_1X_2X_\delta-q^{-1}X_1X_0= q^{\frac{1}{2}}X_2X_1X_\delta-q^{-1}X_1X_0\\ =&q^{\frac{1}{2}}X_2(q^{-1}X^{2}_{0}+q^{}X^{2}_{2})-q^{-1}X_1X_0 =q^{-\frac{1}{2}}X_0+q^{\frac{3}{2}}X_{2}^{3}.\end{aligned}$$ Since $X_4X_\delta=q^{-\frac{1}{2}}X_2+q^{\frac{1}{2}}X_6$ and $X_6=q^{-\frac{1}{2}}X_4X_\delta-q^{-1}X_2$, when $n=5$, we have that $$\begin{aligned} &X_1X_6=X_1(q^{-\frac{1}{2}}X_4X_\delta-q^{-1}X_2)=q^{-1}X_0X_\delta+qX_{2}^{3}X_\delta-q^{-1}X_1X_2\\ =&q^{-\frac{3}{2}}X_{-2}+q^{-\frac{1}{2}}X_2+q^{\frac{1}{2}}X_{2}^{2}X_0+q^{\frac{3}{2}}X_{2}^{2}X_4-q^{-1}X_1X_2\\ =&q^{-\frac{3}{2}}X_{-2}+(q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}})X_2+q^2X_2X_3.\end{aligned}$$ Assume that (\[equ4\]) is true. Note that $X_1X_{3+n}=q^{-\frac{1}{2}}X_1X_{1+n}X_\delta-q^{-1}X_1X_{n-1}$. If $n\equiv0~(\rm{mod}~3)$, then $$X_1X_{n+1}=\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-4k)}q^{-\frac{1}{2}-k+l})X_{n+1-4k} +q^{\frac{n}{2}}X^{3}_{1+\frac{n}{3}}$$ and $$q^{-1}X_1X_{n-1}=\sum\limits_{1<2k<n-2} (\sum\limits_{l=1}^{\text{min}(4k,n-2-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{\frac{n-5}{2}}X_{\frac{n}{3}}X_{1+\frac{n}{3}}.$$ We then get $$\begin{aligned} &q^{-\frac{1}{2}}X_1X_{n+1}X_\delta\\ =&\sum\limits_{1<2k<n} (\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-1-k+l})(q^{-\frac{1}{2}}X_{n-1-4k} +q^{\frac{1}{2}}X_{n+3-4k})\\ &+q^{\frac{n-1}{2}}X^{2}_{1+\frac{n}{3}}(q^{-\frac{1}{2}}X_{\frac{n}{3}-1} +q^{\frac{1}{2}}X_{3+\frac{n}{3}})\\ =&\sum\limits_{1<2k<n} (\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +\sum\limits_{1<2k<n} (\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k}\\ &+q^{\frac{n-3}{2}}X_{1+\frac{n}{3}}X_{\frac{n}{3}}+q^{\frac{n+1}{2}}X_{1+\frac{n}{3}}X_{2+\frac{n}{3}} +(q^{\frac{n-2}{2}}+q^{\frac{n}{2}})X_{1+\frac{n}{3}}.\end{aligned}$$ Note that $\lfloor\frac{n-2}{6}\rfloor=\frac{n-3}{6}$, $\lceil\frac{n-2}{6}\rceil=\frac{n+3}{6}$, 
$\lfloor\frac{n}{6}\rfloor=\frac{n-3}{6}$, $\lceil\frac{n}{6}\rceil=\frac{n+3}{6}$, $\lfloor\frac{n+2}{6}\rfloor=\frac{n-3}{6}$ and $\lceil\frac{n+2}{6}\rceil=\frac{n+3}{6}$ since $n\equiv0$ (mod $3$). Then we have $$\begin{aligned} \label{reln0} \left\{ \begin{aligned} 4k<n-2-2k,&~\rm{if~}1\leq k\leq \frac{n-3}{6},\\ 4k>n-2-2k,&~\rm{if~}\frac{n+3}{6}\leq k\leq \frac{n-3}{2},\\ 4k<n-2k,{\hskip 0.6cm}&~\rm{if~}1\leq k\leq \frac{n-3}{6},\\ 4k>n-2k,{\hskip 0.6cm}&~\rm{if~}\frac{n+3}{6}\leq k\leq \frac{n-1}{2},\\ 4k<n+2-2k,&~\rm{if~}1\leq k\leq \frac{n-3}{6},\\ 4k>n+2-2k,&~\rm{if~}\frac{n+3}{6}\leq k\leq \frac{n+1}{2}. \end{aligned} \right.\end{aligned}$$ It follows that $$\begin{aligned} &\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} -\sum\limits_{1<2k<n-2}(\sum\limits_{l=1}^{\text{min}(4k,n-2-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k}\\ =&\sum\limits_{k=\frac{n-3}{6}}^{\frac{n-3}{2}}(\sum\limits_{l=n-1-2k}^{n-2k}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{-\frac{n}{2}}X_{1-n}-(q^{\frac{n-2}{2}}+q^{\frac{n}{2}})X_{1+\frac{n}{3}}.\end{aligned}$$ Hence $$\begin{aligned} X_1X_{n+3} =&\sum\limits_{k=\frac{n+3}{6}}^{\frac{n-1}{2}}(\sum\limits_{l=n+1-2k}^{n+2-2k}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{-\frac{n}{2}}X_{1-n}\\ &+\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{\frac{n+1}{2}}X_{1+\frac{n}{3}}X_{2+\frac{n}{3}}\\ =&\sum\limits_{1<2k<n+2}(\sum\limits_{l=1}^{\text{min}(4k,n+2-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{\frac{n+1}{2}}X_{1+\frac{n}{3}}X_{2+\frac{n}{3}}.\end{aligned}$$ If $n\equiv1~(\rm{mod}~3)$, then $$X_1X_{n+1}=\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+1-4k}+ q^{\frac{n-1}{2}}X_{\frac{n+2}{3}}X_{\frac{n+5}{3}},$$ $$q^{-1}X_1X_{n-1}=\sum\limits_{1<2k<n-2}(\sum\limits_{l=1}^{\text{min}(4k,n-2-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{\frac{n-5}{2}}X_{\frac{n-1}{3}}X_{\frac{n+2}{3}}.$$ It follows that $$\begin{aligned} 
&q^{-\frac{1}{2}}X_1X_{n+1}X_{\delta}\\ =&\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k}\\ &+q^{\frac{n-3}{2}}X_{\frac{n+2}{3}}X_{\frac{n-1}{3}}+q^{\frac{n-2}{2}}X_{\frac{n-1}{3}} +q^{\frac{n+2}{2}}X_{\frac{n+5}{3}}^{3}.\end{aligned}$$ Note that $\lfloor\frac{n-2}{6}\rfloor=\frac{n-7}{6}$, $\lceil\frac{n-2}{6}\rceil=\frac{n-1}{6}$, $\lfloor\frac{n}{6}\rfloor=\frac{n-1}{6}$, $\lceil\frac{n}{6}\rceil=\frac{n+5}{6}$, $\lfloor\frac{n+2}{6}\rfloor=\frac{n-1}{6}$ and $\lceil\frac{n+2}{6}\rceil=\frac{n+5}{6}$, therefore $$\begin{aligned} \label{reln1} \left\{ \begin{aligned} 4k<n-2-2k,&~\text{if~}1\leq k\leq \frac{n-7}{6},\\ 4k>n-2-2k,&~\text{if~}\frac{n-1}{6}\leq k\leq \frac{n-3}{2},\\ 4k<n-2k,{\hskip 0.6cm}&~\text{if~}1\leq k\leq \frac{n-1}{6},\\ 4k>n-2k,{\hskip 0.6cm}&~\text{if~}\frac{n+5}{6}\leq k\leq \frac{n-1}{2},\\ 4k<n+2-2k,&~\text{if~}1\leq k\leq \frac{n-1}{6},\\ 4k>n+2-2k,&~\text{if~}\frac{n+5}{6}\leq k\leq \frac{n+1}{2}. 
\end{aligned} \right.\end{aligned}$$ It follows that $$\begin{aligned} &\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} -\sum\limits_{1<2k<n-2}(\sum\limits_{l=1}^{\text{min}(4k,n-2-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k}\\ =&\sum\limits_{k=\frac{n+5}{6}}^{\frac{n-3}{2}}(\sum\limits_{l=n-1-2k}^{n-2k}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{\frac{n}{2}-2}X_{\frac{n-1}{3}}+q^{-\frac{n}{2}}X_{1-n}\\ =&\sum\limits_{k=\frac{n-1}{6}}^{\frac{n-3}{2}}(\sum\limits_{l=n-1-2k}^{n-2k}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{-\frac{n}{2}}X_{1-n}-q^{\frac{n}{2}-1}X_{\frac{n-1}{3}}.\end{aligned}$$ Hence $$\begin{aligned} X_1X_{n+3} =&\sum\limits_{k=\frac{n+5}{6}}^{\frac{n-1}{2}}(\sum\limits_{l=n+1-2k}^{n+2-2k}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{-\frac{n}{2}}X_{1-n}\\ &+\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{\frac{n+2}{2}}X_{\frac{n+5}{3}}^{3}\\ =&\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n+2-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{-\frac{n}{2}}X_{1-n} +q^{\frac{n+2}{2}}X_{\frac{n+5}{3}}^{3}\\ =&\sum\limits_{1<2k<n+2}(\sum\limits_{l=1}^{\text{min}(4k,n+2-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{\frac{n+2}{2}}X_{\frac{n+5}{3}}^{3}.\end{aligned}$$ If $n\equiv2~(\rm{mod}~3)$, then $$\begin{aligned} X_1X_{n+1}=\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+1-4k}+q^{\frac{n-1}{2}} X_{\frac{n+1}{3}}X_{\frac{n+4}{3}},\end{aligned}$$ and $$\begin{aligned} q^{-1}X_1X_{n-1}=\sum\limits_{1<2k<n-2}(\sum\limits_{l=1}^{\text{min}(4k,n-2-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{\frac{n-4}{2}} X^{3}_{\frac{n+1}{3}}.\end{aligned}$$ Note that $$\begin{aligned} &q^{-\frac{1}{2}}X_1X_{1+n}X_\delta\\ =&\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k}\\ 
&+q^{\frac{n-4}{2}}X_{\frac{n+1}{3}}^{3}+q^{\frac{n+1}{2}}X_{\frac{n+4}{3}}X_{\frac{n+7}{3}} +q^{\frac{n}{2}}X_{\frac{n+7}{3}}.\end{aligned}$$ Since $\lfloor\frac{n-2}{6}\rfloor=\frac{n-5}{6}$, $\lceil\frac{n-2}{6}\rceil=\frac{n+1}{6}$, $\lfloor\frac{n}{6}\rfloor=\frac{n-5}{6}$, $\lceil\frac{n}{6}\rceil=\frac{n+1}{6}$, $\lfloor\frac{n+2}{6}\rfloor=\frac{n+1}{6}$, $\lceil\frac{n+2}{6}\rceil=\frac{n+7}{6}$, we have that $$\begin{aligned} \label{reln2} \left\{ \begin{aligned} 4k<n-2-2k,&~\text{if~}1\leq k\leq \frac{n-5}{6},\\ 4k>n-2-2k,&~\text{if~}\frac{n+1}{6}\leq k\leq \frac{n-3}{2},\\ 4k<n-2k,{\hskip 0.6cm}&~\text{if~}1\leq k\leq \frac{n-5}{6},\\ 4k>n-2k,{\hskip 0.6cm}&~\text{if~}\frac{n+1}{6}\leq k\leq \frac{n-1}{2},\\ 4k<n+2-2k,&~\text{if~}1\leq k\leq \frac{n+1}{6},\\ 4k>n+2-2k,&~\text{if~}\frac{n+7}{6}\leq k\leq \frac{n+1}{2}. \end{aligned} \right.\end{aligned}$$ Note that $$\begin{aligned} &\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k} -\sum\limits_{1<2k<n-2}(\sum\limits_{l=1}^{\text{min}(4k,n-2-2k)}q^{-\frac{3}{2}-k+l})X_{n-1-4k}\\ =&\sum\limits_{k=\frac{n+1}{6}}^{\frac{n-3}{2}}(\sum\limits_{l=n-2k-1}^{n-2k}q^{-\frac{3}{2}-k+l})X_{n-1-4k} +q^{-\frac{n}{2}}X_{1-n}\end{aligned}$$ and $$\sum\limits_{l=1}^{\frac{2n+2}{3}}q^{-\frac{1}{2}-\frac{n+1}{6}+l}X_{\frac{n+7}{3}} -\sum\limits_{l=1}^{\frac{2n-1}{3}}q^{-\frac{1}{2}-\frac{n+1}{6}+l}X_{\frac{n+7}{3}}=q^{\frac{n}{2}}X_{\frac{n+7}{3}}.$$ It follows that $$\begin{aligned} &X_1X_{n+3}\\ =&\sum\limits_{k=\frac{n+7}{6}}^{\frac{n-1}{2}}(\sum\limits_{l=n+1-2k}^{n+2-2k}q^{-\frac{1}{2}-k+l})X_{n+3-4k} + q^{-\frac{n}{2}}X_{1-n}\\ &+\sum\limits_{1<2k<n}(\sum_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k} +q^{\frac{n}{2}}X_{\frac{n+7}{3}} +q^{\frac{n+1}{2}}X_{\frac{n+4}{3}}X_{\frac{n+7}{3}}\\ =&\sum\limits_{1<2k<n+2}(\sum_{l=1}^{\text{min}(4k,n+2-2k)}q^{-\frac{1}{2}-k+l})X_{n+3-4k}+ q^{\frac{n+1}{2}}X_{\frac{n+4}{3}}X_{\frac{n+7}{3}}.\end{aligned}$$ This completes 
the proof of (\[equation4\]). The proof of (\[equation5\]) is similar to the proof of (\[equation4\]). \(4) To prove (\[equation6\]), it suffices to prove that $$\label{equ6} X_1X_{1+2n}=q^{2n}X_{1+n}^{2\langle 1+n\rangle}+\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l})X_{2n-2k+1} +\sum\limits_{k\geq1}c_{n,k}F_{2n-2k}(X_{\delta}).$$ When $n=1$, we have that $X_1X_3=q^2X^{4}_{2}+1$ which is the exchange relation. When $n=2$, note that $ X_3F_{2}(X_\delta)=q^{-2}X_1+q^2X_5+(q^{-\frac{3}{2}}+ q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}}) $ by (\[equation2\]), then we have that $$\begin{aligned} &X_1X_5=q^{-2}X_1X_{3}F_{2}(X_\delta)-q^{-4}X_{1}^{2}- (q^{-\frac{7}{2}}+ q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})X_1\\ =&q^{-1}X_{2}^{3}X_{-2}+qX_{2}^{3}X_{6}+q^{-2}F_{2}(X_\delta)-q^{-4}X_{1}^{2}-(q^{-\frac{7}{2}}+ q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})X_1.\end{aligned}$$ Note that 1. $ q^{-1}X_{2}^{3}X_{-2}=q^{-2}X_{2}^{2}X_{0}^{2}+q^{-\frac{1}{2}}X_{2}^{2}X_{\delta}, $ 2. $ qX_{2}^{3}X_{6}=q^{2}X_{2}^{2}X_{4}^{2}+q^{\frac{1}{2}}X_{2}^{2}X_{\delta}, $ 3. $ X_{2}^{2}X_{0}^{2}=q^{-\frac{1}{2}}X_2X_1X_0+X_2X_0=q^{-2}X_{1}^{2}+(q^{-\frac{3}{2}}+q^{-\frac{1}{2}})X_1+1, $ 4. $ X_{2}^{2}X_{\delta}=X_2(q^{-\frac{1}{2}}X_0+q^{\frac{1}{2}}X_4)=q^{-1}X_1+qX_3+(q^{-\frac{1}{2}}+q^{\frac{1}{2}}), $ 5. $ X_{2}^{2}X_{4}^{2}=q^{\frac{1}{2}}X_2X_3X_4+X_2X_4=q^{2}X_{3}^{2}+(q^{\frac{1}{2}}+q^{\frac{3}{2}})X_3+1. $ It follows that $$X_1X_5=q^4X_{3}^{2}+(q^{\frac{1}{2}}+q^{\frac{3}{2}}+q^{\frac{5}{2}}+q^{\frac{7}{2}})X_3 +q^{-2}F_{2}(X_\delta) +(q^{-2}+q^{-1}+2+q+q^{2}).$$ Assume that (\[equ6\]) is true. 
We have that $$\begin{aligned} X_1X_{3+2n}=q^{-2}X_1X_{1+2n}F_{2}(X_\delta)-q^{-4}X_1X_{2n-1}-(q^{-\frac{7}{2}}+ q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})X_1.\end{aligned}$$ If $n$ is even, then $$\begin{aligned} &q^{-2}X_1X_{1+2n}F_{2}(X_\delta)\\ =&q^{2n-2}X_{1+n}^{2}F_{2}(X_\delta) +\sum\limits_{k=1}^{n-1} (\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{5}{2}+l})X_{2n+1-2k}F_{2}(X_\delta)\\ &+ \sum\limits_{k\geq1}q^{-2}c_{n,k}F_{2n-2k}(X_\delta)F_{2}(X_\delta).\end{aligned}$$ For convenience, we set $$\begin{aligned} A=&q^{2n-2}X_{1+n}^{2}F_{2}(X_\delta),\\ B=& \sum\limits_{k=1}^{n-1} (\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{5}{2}+l})X_{2n+1-2k}F_{2}(X_\delta),\\ C=&\sum\limits_{k\geq1}q^{-2}c_{n,k}F_{2n-2k}(X_\delta)F_{2}(X_\delta), \\ D=&q^{-4}X_1X_{2n-1},\\ E=& (q^{-\frac{7}{2}}+ q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})X_1.\end{aligned}$$ Then we have $$\begin{aligned} &A=q^{2n-2}X_{1+n}(q^{-2}X_{n-1}+q^2X_{n+3}+(q^{-\frac{3}{2}}+q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}}))\\ =&q^{2n-6}X_{n}^{4}+q^{2n+2}X_{n+2}^{4}+q^{2n-4}+q^{2n} + (q^{2n-\frac{7}{2}}+q^{2n-\frac{5}{2}}+q^{2n-\frac{3}{2}}+q^{2n-\frac{1}{2}})X_{n+1},\end{aligned}$$ $$\begin{aligned} &B=\sum\limits_{k=1}^{n-1} \sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{5}{2}+l} (q^{-2}X_{2n-1-2k}+q^2X_{2n+3-2k} +q^{-\frac{3}{2}}+q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}})\\ =&\sum\limits_{k=1}^{n-1} (\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k} +\sum\limits_{k=1}^{n-1} (\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k}\\ &+\sum\limits_{k=1}^{n-1} \sum\limits_{l=1}^{4\text{min}(k,n-k)}(q^{-4+l}+ q^{-3+l}+q^{-2+l}+q^{-1+l}),\end{aligned}$$ $$\begin{aligned} &C=\sum\limits_{k=1}^{n-2}c_{n+1,k}[F_{2n-2-2k}(X_\delta)+F_{2n+2-2k}(X_\delta)]+c_{n+1,n-1}(F_{4}(X_\delta)+2)+c_{n+1,n}F_2(X_\delta)\end{aligned}$$ and $$D=q^{2n-6}X_{n}^{4}+\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{\text{min}(k,n-1-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k} 
+\sum\limits_{k\geq1}c_{n+1,k}F_{2n-2-2k}(X_\delta).$$ It follows that $$\begin{aligned} &X_1X_{3+2n}=A+B+C-D-E\\ =&q^{2n-4}+q^{2n}+q^{2n+2}X_{n+2}^{4}+(q^{2n-\frac{7}{2}}+q^{2n-\frac{5}{2}}+q^{2n-\frac{3}{2}} +q^{2n-\frac{1}{2}}) X_{n+1}\\ &+\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k} +\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k}\\ &+\sum\limits_{k=1}^{n-1}\sum\limits_{l=1}^{4\text{min}(k,n-k)}(q^{-4+l}+q^{-3+l}+q^{-2+l}+q^{-1+l}) +\sum\limits_{k=1}^{n-2}c_{n+1,k}F_{2n+2-2k}(X_\delta)\\ &+c_{n+1,n-1}(F_{4}(X_\delta)+1)+c_{n+1,n}F_2(X_\delta) -\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{4\text{min}(k,n-1-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k}\\ &-(q^{-\frac{7}{2}}+q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})X_1\end{aligned}$$ $$\begin{aligned} =&q^{2n-4}+q^{2n}+q^{2n+2}X_{n+2}^{4}+(q^{2n-\frac{7}{2}}+q^{2n-\frac{5}{2}}+q^{2n-\frac{3}{2}} +q^{2n-\frac{1}{2}}) X_{n+1}\\ &+\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k} +\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k}\\ &+\sum\limits_{k=1}^{n-1}\sum\limits_{l=1}^{4\text{min}(k,n-k)}(q^{-4+l}+q^{-3+l}+q^{-2+l}+q^{-1+l}) +\sum\limits_{k=1}^{n}c_{n+1,k}F_{2n+2-2k}(X_\delta)\\ &+c_{n+1,n-1} -\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{4\text{min}(k,n-1-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k}.\end{aligned}$$ Since $$\begin{aligned} \text{min}(k,n-1-k)= \left\{ \begin{aligned} k,{\hskip 1.3cm}&~\text{if~}k\leq \frac{n}{2}-1;\\ n-1-k,&~\text{if~} k\geq \frac{n}{2}, \end{aligned} \right.\end{aligned}$$ $$\begin{aligned} \text{min}(k,n-k)= \left\{ \begin{aligned} k,{\hskip 0.6cm} &~\text{if~}k\leq \frac{n}{2}-1;\\ n-k,&~\text{if~} k\geq \frac{n}{2}, \end{aligned} \right.\end{aligned}$$ and $$\begin{aligned} \text{min}(k,n+1-k)= \left\{ \begin{aligned} k,{\hskip 1.3cm}&~\text{if~}k\leq \frac{n}{2};\\ n+1-k,&~\text{if~} k\geq \frac{n}{2}+1, 
\end{aligned} \right.\end{aligned}$$ we have that $$\begin{aligned} &\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k} -\sum\limits_{k=1}^{n-2}(\sum\limits_{l=1}^{4\text{min}(k,n-1-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k}\\ =&\sum\limits_{k=\frac{n}{2}}^{n-2}(\sum\limits_{l=1}^{4(n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k} -\sum\limits_{k=\frac{n}{2}}^{n-2}(\sum\limits_{l=1}^{4(n-1-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k}\\ =&\sum\limits_{k=\frac{n}{2}}^{n-2}(\sum\limits_{l=4(n-k)-3}^{4(n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k}.\end{aligned}$$ Note that $$\begin{aligned} &(q^{2n-\frac{7}{2}}+q^{2n-\frac{5}{2}}+q^{2n-\frac{3}{2}} +q^{2n-\frac{1}{2}}) X_{n+1} +\sum\limits_{k=\frac{n}{2}}^{n-2}(\sum\limits_{l=4(n-k)-3}^{4(n-k)}q^{-\frac{9}{2}+l})X_{2n-1-2k}\\ &+\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k}\\ =&\sum\limits_{k=\frac{n}{2}+1}^{n}(\sum\limits_{l=4(n-k)+1}^{4(n+1-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k} +\sum\limits_{k=1}^{n-1}(\sum\limits_{l=1}^{4\text{min}(k,n-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k}\\ =&\sum\limits_{k=1}^{n}(\sum\limits_{l=1}^{4\text{min}(k,n+1-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k}.\end{aligned}$$ Therefore $$\begin{aligned} &X_1X_{3+2n}\\ =&q^{2n+2}X_{n+2}^{4} +\sum\limits_{k=1}^{n}(\sum\limits_{l=1}^{4\text{min}(k,n+1-k)}q^{-\frac{1}{2}+l})X_{2n+3-2k} +\sum\limits_{k=1}^{n}c_{n+1,k}F_{2n+2-2k}(X_\delta) \\ &+q^{2n-4}+q^{2n}+\sum\limits_{k=1}^{n-1}\sum\limits_{l=1}^{4\text{min}(k,n-k)}(q^{-4+l}+q^{-3+l}+q^{-2+l}+q^{-1+l}) +c_{n+1,n-1}.\end{aligned}$$ We need to show that $$\begin{aligned} \label{equation7} q^{2n-4}+q^{2n}+\sum\limits_{k=1}^{n-1}\sum\limits_{l=1}^{4\text{min}(k,n-k)}(q^{-4+l}+q^{-3+l}+q^{-2+l}+q^{-1+l}) +c_{n+1,n-1} =c_{n+1,n+1}.\end{aligned}$$ Note that $c_{n+1,n-1}=\sum\limits_{i=1}^{n-1}a_i(q^{-2(n-i)-3}+q^{2(n-i)-5})+\sum\limits_{i=1}^{n-2}b_i(q^{-2(n-i)-2}+q^{2(n-i)-6}) +b_{n-1}q^{-4}$. 
Let $P_n:=\sum\limits_{k=1}^{n-1}\sum\limits_{l=1}^{4\text{min}(k,n-k)}(q^{-4+l}+q^{-3+l}+q^{-2+l}+q^{-1+l})$. We consider the coefficients of $q^{x}$ in the left hand side of (\[equation7\]) for $0\leq x\leq 2n$. When $n=2$, we have that $c_{3,1}=q^{-4}$ and $P_2=q^{-3}+2q^{-2}+3q^{-1}+4+3q+2q^{2}+q^{3}$. Thus $c_{3,3}=1+q^{4}+c_{3,1}+P_2$. When $n=4$, $c_{5,3}=q^{-8}+q^{-7}+2q^{-6}+3q^{-5}+5q^{-4}+3q^{-3}+2q^{-2}+q^{-1}+1$ and $P_4= 3q^{-3}+6q^{-2}+9q^{-1}+12 +10q+8q^{2}+6q^{3}+4q^{4}+3q^{5}+2q^{6}+q^{7}$. Hence $c_{5,5}=q^{4}+q^8+c_{5,3}+P_4$. Let $n\geq6$. We have that 1. when $x\geq 2n-7$, the coefficients of $q^{x}$ in $c_{n+1,n-1}$ are $0$. The coefficients of $q^{x}$ in $P_n$ are $0$ for $x\geq2n$. For $0\leq x\leq 2n-4$, the coefficients of $q^{x}$ in the polynomial $P_n$ are $4n-4-2x$; 2. it is easy to see that the coefficients of $q^{2n-4}$, $q^{2n-3},q^{2n-2},q^{2n-1}$ and $q^{2n}$ in $P_n$ are $4$, $3$, $2$, $1$ and $0$, respectively. Thus the coefficients of $q^{2n-4},q^{2n-3},q^{2n-2},q^{2n-1}$ and $q^{2n}$ in the left hand side of (\[equation7\]) are $5,3,2,1$ and $1$, respectively; 3. when $0\leq x\leq 2n-8$ and $x\equiv0~(\text{mod}~4)$, the coefficients of $q^{x}$ in $c_{n+1,n-1}$ are $b_{n-3-\frac{x}{2}}$, so the coefficients of $q^{x}$ in the left hand side of (\[equation7\]) are $b_{n-3-\frac{x}{2}}+(4n-4-2x)=b_{n+1-\frac{x}{2}}$ since $b_{n-3-\frac{x}{2}}=\frac{1}{2}(n^2-(6+x)n+(\frac{1}{4}x^{2}+3x+10))$ and $b_{n+1-\frac{x}{2}} =\frac{1}{2}(n^2+(2-x)n+(\frac{1}{4}x^{2}-x+2))$; 4. when $0\leq x\leq 2n-8$ and $x\equiv2~(\text{mod}~4)$, the coefficients of $q^{x}$ in $c_{n+1,n-1}$ are $b_{n-3-\frac{x}{2}}$, so the coefficients of $q^{x}$ in the left hand side of (\[equation7\]) are $b_{n-3-\frac{x}{2}}+(4n-4-2x)=b_{n+1-\frac{x}{2}}$ since $b_{n-3-\frac{x}{2}}=\frac{1}{2}(n^2-(6+x)n+(\frac{1}{4}x^2 +3x+9))$ and $b_{n+1-\frac{x}{2}} =\frac{1}{2}(n^2+(2-x)n+(\frac{1}{4}x^{2}-x+1))$; 5.
when $1\leq x\leq 2n-7$ and $x$ is odd, the coefficients of $q^{x}$ in $c_{n+1,n-1}$ are $a_{n-\frac{x+5}{2}}$, so the coefficients of $q^{x}$ in the left hand side of (\[equation7\]) are $a_{n-\frac{x+5}{2}}+(4n-4-2x)=\frac{1}{2}(n^{2}-(x+6)n+(\frac{1}{4}x^2+3x+\frac{35}{4})) +(4n-4-2x) =\frac{1}{2}(n^{2}+(2-x)n+(\frac{1}{4}x^2-x+\frac{3}{4})) =a_{n+\frac{3-x}{2}}$; 6. the coefficients of $q^{2n-6}$ in $P_n$ and $c_{n+1,n-1}$ are $8$ and $0$, respectively. It follows that the coefficient of $q^{2n-6}$ in the left hand side of (\[equation7\]) is $b_4=8$. The coefficients of $q^{2n-5}$ in $P_n$ and $c_{n+1,n-1}$ are $6$ and $0$, respectively. Thus the coefficient of $q^{2n-5}$ in the left hand side of (\[equation7\]) is $a_4=6$. It follows that the coefficients of $q^{x}$ in the left hand side of (\[equation7\]) are $$\begin{aligned} \left\{ \begin{aligned} b_{n+1-\frac{x}{2}},&~\text{if}~ x\in\{0,2,\ldots,2n\},\\ a_{n-\frac{x-3}{2}},&~\text{if}~ x\in\{1,3,\ldots,2n-1\}. \end{aligned} \right.\end{aligned}$$ We consider the coefficients of $q^{-x}$ for $1\leq x\leq 2n$. We have that 1. when $4\leq x\leq 2n$, the coefficients of $q^{-x}$ in the polynomial $P_n$ are $0$; 2. when $x=1,2,3$, the coefficients of $q^{-1},q^{-2},q^{-3}$ in the polynomial $P_n$ are $3(n-1)$, $2(n-1)$ and $n-1$, respectively. Note that the coefficients of $q^{-1},q^{-2},q^{-3}$ in $c_{n+1,n-1}$ are $a_{n-2},b_{n-2}$ and $a_{n-1}$, respectively. Hence the coefficients of $q^{-1},q^{-2},q^{-3}$ in the left hand side of (\[equation7\]) are $a_{n+1},b_n$ and $a_n$, respectively; 3. when $x=4,6,8,\ldots,2n$, the coefficients of $q^{-x}$ in $c_{n+1,n-1}$ are $b_{n+1-\frac{x}{2}}$. Thus the coefficients of $q^{-x}$ in the left hand side of (\[equation7\]) are $b_{n+1-\frac{x}{2}}$ for $x=4,6,8,\ldots,2n$; 4. when $x=5,7,\ldots,2n-1$, the coefficients of $q^{-x}$ in $c_{n+1,n-1}$ are $a_{n-\frac{x-3}{2}}$.
We obtain that the coefficients of $q^{-x}$ in the left hand side of (\[equation7\]) are $a_{n-\frac{x-3}{2}}$ for $x=5,7,\ldots,2n-1$. Hence the coefficients of $q^{-x}$ in the left hand side of (\[equation7\]) are $$\begin{aligned} \left\{ \begin{aligned} b_{n+1-\frac{x}{2}},&~\text{if}~ x\in\{2,\ldots,2n\},\\ a_{n-\frac{x-3}{2}}, &~\text{if}~ x\in\{1,3,\ldots,2n-1\}. \end{aligned} \right.\end{aligned}$$ The identity (\[equation7\]) is therefore true, which completes the induction and the proof when $n$ is even. When $n$ is odd, the proof proceeds by a similar detailed calculation. We conclude that (\[equation6\]) is true, and the entire proof of the theorem is complete. Using the definition of the bar-involution and Theorem  \[theorem2\], we can easily obtain similar cluster multiplication formulas for $F_n(X_{\delta})X_{m}$ and $X_{n}X_{m}$ in the other cases. As an immediate consequence of the above theorem, we obtain, in a simple way, the following result proved in  [@cds]. The sets $\mathcal{B}$, $\mathcal{S}$ and $\mathcal{D}$ are bar-invariant $\ZZ[q^{\pm\frac{1}{2}}]$-bases of $\A_{q}(1,4)$. Since there exist unipotent transformations between $F_{n}(X_\delta)$, $S_{n}(X_\delta)$ and $X^{n}_\delta$ for $n\geq 1$, we only need to prove that the set $\mathcal{B}$ is a bar-invariant $\ZZ[q^{\pm\frac{1}{2}}]$-basis of $\A_{q}(1,4)$. It suffices to show that $\mathcal{B}$ is linearly independent. Note that the denominator vectors of $F_n(X_\delta)$ in $\A_q(1,4)$ are $(n,2n)$ for $n\in\mathbb{N}$, which are in bijection with the set of positive imaginary roots of the corresponding Lie algebra, denoted by $\Phi_{+}^{\rm{im}}$. By [@SZ Proposition 3.1], there exists a bijection between the set of all denominator vectors of cluster monomials and $\mathcal{Q}-\Phi_{+}^{\rm{im}}$, where $\mathcal{Q}:=\mathbb{Z}^2$ is a lattice of rank 2 with a fixed basis of two simple roots $\{\alpha_1,\alpha_2\}$.
Thus the denominator vectors of the cluster monomials and of the $F_n(X_\delta)$ are pairwise distinct. Assume that $S$ is a finite set and $$\sum\limits_{\alpha\in S} a_\alpha X_\alpha=0$$ for $X_\alpha\in\mathcal{B}$ and $a_\alpha\in\ZZ[q^{\pm\frac{1}{2}}]\setminus\{0\}$. Let $V(S)$ denote the set of denominator vectors of the $X_\alpha$ for $\alpha\in S$ and let $\leq$ be the partial order on $V(S)$ inherited from $\mathbb{Z}^2$. There exists $\beta\in S$ such that the denominator vector of $X_\beta$ is maximal in $(V(S),\leq)$. We then deduce that $a_\beta=0$, which is a contradiction. Therefore $\mathcal{B}$ is linearly independent. Recall that an element $Y\in\A_q(1,4)$ is *positive* if the coefficients in its Laurent expansion with respect to each cluster $\{X_m,X_{m+1}\}$ belong to $\NN[q^{\pm\frac{1}{2}}]$. The following result can be deduced from Theorem \[theorem2\]. \[corbasepos\] The elements in the $\ZZ[q^{\pm\frac{1}{2}}]$-bases $\mathcal{B}$, $\mathcal{S}$ and $\mathcal{D}$ of $\A_{q}(1,4)$ are positive. By the definitions of the Chebyshev polynomials $F_{n}(x)$ and $S_{n}(x)$, we only need to prove the positivity of the elements in $\mathcal{B}$. Using the fact that $\sigma_2$ is an automorphism, it suffices to prove the positivity in the clusters $\{X_1,X_2\}$ and $\{X_2,X_3\}$.
Note that $$\begin{aligned} X_0=&X^{(1,-1)}+X^{(0,-1)},\\ X_{-1}=&X^{(3,-4)}+(q^{-\frac{3}{2}}+q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}})X^{(2,-4)} +(q^{-2}+q^{-1}+2+q+q^{2})X^{(1,-4)} \\ &+(q^{-\frac{3}{2}}+q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}})X^{(0,-4)} +X^{(-1,0)}+X^{(-1,-4)},\\ X_{-2}=&X^{(-1,1)}+X^{(2,-3)}+(q^{-1}+1+q)X^{(1,-3)}+(q^{-1}+1+q)X^{(0,-3)}+X^{(-1,-3)},\\ X_\delta=&X^{(-1,-2)}+X^{(-1,2)}+X^{(1,-2)}+(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X^{(0,-2)},\\ F_2(X_\delta)=&X^{(-2,-4)}+(q^{-2}+q^{2})X^{(-2,0)}+(q^{-2}+q^{-1}+2+q+q^{2})X^{(0,-4)}\\ &+(q^{-\frac{3}{2}}+q^{-\frac{1}{2}}+q^{\frac{1}{2}}+q^{\frac{3}{2}})(X^{(-1,-4)}+X^{(1,-4)}+X^{(-1,0)})+X^{(-2,4)} +X^{(2,-4)}.\end{aligned}$$ Hence $X_{-2},X_{-1},X_0,X_1,X_2,X_\delta$ and $F_2(X_\delta)$ are positive elements in $\{X_1,X_2\}$. Suppose that $X_{-n},X_{-n+1},\ldots,X_{-2},X_{-1},X_0,X_1,X_2,\ldots,X_{n-1},X_{n},X_\delta,F_2(X_\delta),\ldots,$ and $F_{n}(X_\delta)$ are positive. By Theorem \[theorem2\], when $n$ is odd, we have $$\begin{aligned} X_1X_{1+n}=&\sum\limits_{1<2k<n}(\sum\limits_{l=1}^{\text{min}(4k,n-2k)}q^{-\frac{1}{2}-k+l})X_{n+1-4k} +\left\{ \begin{aligned} q^{\frac{n}{2}}X^{3}_{1+\frac{n}{3}},{\hskip 1.8cm}&~n\equiv 0~(\rm{mod}~3);~~~~~~~ \\ q^{\frac{n-1}{2}}X_{\lfloor 1+\frac{n}{3}\rfloor}X_{\lceil 1+\frac{n}{3}\rceil},&~\rm{otherwise,} \end{aligned} \right.\end{aligned}$$ $$\begin{aligned} &X_1X_{-n-1}\\=&\sum\limits_{1<2k<n+2}(\sum\limits_{l=1}^{\text{min}(4k,n+2-2k)}q^{\frac{1}{2}+k-l})X_{-n-1+4k}+\left\{ \begin{aligned} q^{-\frac{n+2}{2}}X^{3}_{\frac{1-n}{3}},{\hskip 1.8cm}&~n\equiv 1~(\rm{mod}~3);~~~~~~~ \\ q^{-\frac{n+3}{2}}X_{\lfloor \frac{1-n}{3}\rfloor}X_{\lceil \frac{1-n}{3}\rceil},&~\rm{otherwise,} \end{aligned} \right.\end{aligned}$$ $$X_1X_{2+n}=q^{n+1}X^{2\langle\frac{n+3}{2}\rangle}_{\frac{n+3}{2}}+\sum\limits_{k=1}^{\frac{n-1}{2}} (\sum\limits_{l=1}^{4\text{min}(k,\frac{n+1}{2}-k)}q^{-\frac{1}{2}+l})X_{n+2-2k}+\sum\limits_{k\geq1}c_{\frac{n+1}{2},k}
F_{n+1-2k}(X_\delta)$$ and $$X_{-n-2}X_{1}=q^{n+3}X^{2\langle-\frac{n+1}{2}\rangle}_{-\frac{n+1}{2}}+\sum\limits_{k=1}^{\frac{n+1}{2}} (\sum\limits_{l=1}^{4\text{min}(k,\frac{n+3}{2}-k)}q^{-\frac{1}{2}+l})X_{1-2k}+ \sum\limits_{k\geq1}c_{\frac{n+3}{2},k} F_{n+3-2k}(X_\delta);$$ when $n$ is even, we have $$\begin{aligned} X_1X_{1+n}=&q^{n}X^{2\langle\frac{n}{2}+1\rangle}_{\frac{n}{2}+1}+\sum\limits_{k=1}^{\frac{n-2}{2}} (\sum\limits_{l=1}^{4\text{min}(k,\frac{n}{2}-k)}q^{-\frac{1}{2}+l})X_{n+1-2k}+\sum\limits_{k\geq1}c_{\frac{n}{2},k} F_{n-2k}(X_\delta),\\ X_{-n-1}X_{1}=&q^{n+2}X^{2\langle-\frac{n}{2}\rangle}_{-\frac{n}{2}}+\sum\limits_{k=1}^{\frac{n}{2}} (\sum\limits_{l=1}^{4\text{min}(k,\frac{n}{2}+1-k)}q^{-\frac{1}{2}+l})X_{1-2k}+ \sum\limits_{k\geq1}c_{\frac{n}{2}+1,k} F_{n+2-2k}(X_\delta),\\ X_2X_{2+n}=&q^{\frac{n}{4}}X_{2+\frac{n}{2}}^{\langle 2+\frac{n}{2}\rangle}+\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1} q^{-\frac{n+2}{4}+l})F_{\frac{n}{2}+1-2k}(X_\delta),\\ X_{-n-2}X_{2}=&q^{\frac{n+4}{4}}X_{-\frac{n}{2}}^{\langle-\frac{n}{2}\rangle}+\sum\limits_{k\geq1}(\sum\limits_{l=1}^{2k-1} q^{-\frac{n+6}{4}+l})F_{\frac{n}{2}+3-2k}(X_\delta).\end{aligned}$$ We deduce that $X_{-n-2},X_{-n-1},X_{n+1}$ and $X_{n+2}$ are positive in $\{X_1,X_2\}$. According to Theorem \[theorem2\], we have that $$\begin{aligned} & X_1F_{n+1}(X_\delta)\\=&q^{-(n+1)}X_{-n}^{\langle -n\rangle}+q^{n+1}X_{n+2}^{\langle n+2\rangle}\nonumber+\sum\limits_{k\geq1} (\sum\limits_{l=1}^{k} (q^{-\frac{4l-1}{2}}+q^{-\frac{4l-3}{2}}+q^{\frac{4l-3}{2}}+q^{\frac{4l-1}{2}}))F_{n+1-2k}(X_{\de}).\end{aligned}$$ It follows that $F_{n+1}(X_\delta)$ is positive in $\{X_1,X_2\}$. By induction, each element in $\mathcal{B}$ is positive in $\{X_1,X_2\}$. Similarly, each element in $\mathcal{B}$ is positive in $\{X_2,X_3\}$. The proof is completed. 
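All the identities above specialize at $q=1$ to identities in the commutative cluster algebra of type $(1,4)$, where the exchange relations read $X_{m-1}X_{m+1}=X_{m}^{4}+1$ for even $m$ and $X_{m-1}X_{m+1}=X_{m}+1$ for odd $m$, and $X_2X_\delta=X_0+X_4$. The following short sketch (ours, using only the Python standard library; it assumes the normalization $F_2(x)=x^2-2$, which is consistent with the formula for $X_1X_5$ above at $q=1$) mechanically verifies several base cases of the proof on a few seeds.

```python
from fractions import Fraction as F

def cluster(x1, x2, lo=-2, hi=7):
    """Cluster variables x_m of A(1,4) at q = 1 from the seed (x1, x2), via
    x_{m-1} x_{m+1} = x_m^4 + 1 (m even) and x_{m-1} x_{m+1} = x_m + 1 (m odd)."""
    x = {1: F(x1), 2: F(x2)}
    for m in range(2, hi):                 # forward mutations
        x[m + 1] = (x[m] ** (4 if m % 2 == 0 else 1) + 1) / x[m - 1]
    for m in range(1, lo, -1):             # backward mutations
        x[m - 1] = (x[m] ** (4 if m % 2 == 0 else 1) + 1) / x[m + 1]
    return x

for seed in [(1, 1), (2, 1), (3, 2)]:
    x = cluster(*seed)
    xd = (x[0] + x[4]) / x[2]              # X_2 X_delta = X_0 + X_4 at q = 1
    F2 = xd ** 2 - 2                       # assumed normalization F_2(x) = x^2 - 2
    assert x[1] * x[4] == x[0] + x[2] ** 3                # X_1 X_4 at q = 1
    assert x[2] * x[6] == x[4] ** 2 + xd                  # X_2 X_6 at q = 1
    assert x[1] * x[6] == x[-2] + 3 * x[2] + x[2] * x[3]  # X_1 X_6 at q = 1
    assert x[1] * x[5] == x[3] ** 2 + 4 * x[3] + F2 + 6   # X_1 X_5 at q = 1
```

Since the arithmetic is exact over $\mathbb{Q}$, each passing assertion confirms the $q=1$ specialization of the corresponding quantum identity, though of course not its individual $q$-coefficients.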
A $\ZZ[q^{\pm\frac{1}{2}}]$-basis $\mathcal{C}$ of $\A_q(1,4)$ is called the *canonical* basis of $\A_q(1,4)$ if every positive element of $\A_q(1,4)$ is an $\NN[q^{\pm\frac{1}{2}}]$-linear combination of elements of $\mathcal{C}$. The basis $\mathcal{B}$ is the canonical basis of $\A_q(1,4)$. Let $Y$ be a positive element of $\A_q(1,4)$. Since $\mathcal{B}$ is a $\ZZ[q^{\pm\frac{1}{2}}]$-basis of $\A_q(1,4)$, we have $Y=\sum\limits_{Z\in\mathcal{B}}a_ZZ$ with $a_Z\in\ZZ[q^{\pm\frac{1}{2}}]\setminus\{0\}$. It suffices to prove that each $a_Z$ equals some coefficient in the Laurent expansion of $Y$ with respect to $X_m$ and $X_{m+1}$ for $m\in\ZZ$. First, we consider the coefficient of the cluster monomial $q^{-\frac{1}{2}ab}X_{m}^{a}X_{m+1}^{b}$, where $a,b\in\NN$. Since $\sigma_{2}$ is an automorphism, it suffices to show that the coefficient $a_{X^{(c,d)}}$ of $X^{(c,d)}$ in $Y=\sum\limits_{Z\in\mathcal{B}}a_ZZ$ equals the coefficient of $X^{(c,d)}$ in the Laurent expansion of $Y$ with respect to $\{X_1,X_{2}\}$ (that the coefficient of the cluster monomial $q^{-\frac{1}{2}cd}X_{2}^{c}X_{3}^{d}$ in $Y=\sum\limits_{Z\in\mathcal{B}}a_ZZ$ equals the coefficient of $q^{-\frac{1}{2}cd}X_{2}^{c}X_{3}^{d}$ in the Laurent expansion of $Y$ with respect to $\{X_2,X_3\}$ is proved by the same method). For $m\neq1$, let $$q^{-\frac{1}{2}ab}X_{m}^{a}X_{m+1}^{b}=\sum\limits_{(c,d)\in\ZZ^{2}}M_{cd}(q)X^{(c,d)}$$ be the Laurent expansion of the cluster monomial $q^{-\frac{1}{2}ab}X_{m}^{a}X_{m+1}^{b}$ with respect to $X_1$ and $X_{2}$, where $M_{cd}(q)\in\ZZ[q^{\pm\frac{1}{2}}]$. For $n\geq1$, let $$F_n(X_\delta)=\sum\limits_{(e,f)\in\ZZ^{2}}N_{ef}(q)X^{(e,f)}$$ be the Laurent expansion of $F_n(X_\delta)$ with respect to $X_1$ and $X_2$, where $N_{ef}(q)\in\ZZ[q^{\pm\frac{1}{2}}]$. By Corollary \[corbasepos\], $M_{cd}(q)$, $N_{ef}(q)\in\NN[q^{\pm\frac{1}{2}}]$. We only need to show that $M_{cd}(q)=0$ whenever $c,d\in\NN$ and $N_{ef}(q)=0$ whenever $e,f\in\NN$.
If $M_{cd}(q)\neq0$ for some $c,d\in\NN$, then $M_{cd}(1)>0$, which contradicts the statement in [@SZ Proposition 3.6]. Thus $M_{cd}(q)=0$ for $c,d\geq0$. If $N_{ef}(q)\neq0$ for some $e,f\in\NN$, then $N_{ef}(1)>0$, which contradicts the statement in [@SZ Proposition 5.2(2)]. Hence $N_{ef}(q)=0$ for $e,f\geq0$. It remains to consider the coefficient of $F_n(X_\delta)$ in $Y=\sum\limits_{Z\in\mathcal{B}}a_ZZ$ for $n\geq1$. Without loss of generality, we can assume that the cluster variables $X_m$ occurring in $Y=\sum\limits_{Z\in\mathcal{B}}a_ZZ$ satisfy $m\geq3$, since $\sigma_{2}$ is an automorphism. It suffices to show that the coefficient of the cluster monomial $X^{(n,-2n)}$ ($n\geq1$) in the Laurent expansion of $F_n(X_\delta)$ with respect to the cluster $\{X_1,X_2\}$ is $1$, but the coefficients of $X^{(n,-2n)}$ in the Laurent expansions of $F_k(X_\delta)$ or $q^{-\frac{1}{2}n_1n_2}X_{m}^{n_1}X_{m+1}^{n_2}$ for $k\neq n$ and $m\geq3$ are $0$. Let $R_1(q)$ denote the coefficient of $X^{(n,-2n)}$ in the Laurent expansion of $F_n(X_\delta)$ with respect to $\{X_1,X_2\}$. Note that $R_1(q)\in\NN[q^{\pm\frac{1}{2}}]$ since $F_n(X_\delta)$ is positive. Using the fact that $F_n(X_\delta)$ is bar-invariant and $R_1(1)=1$ by [@SZ Proposition 5.1], it follows that $R_1(q)=1$. Let $R_2(q)\in\NN[q^{\pm\frac{1}{2}}]$ denote the coefficient of $X^{(n,-2n)}$ in the Laurent expansion of $F_k(X_\delta)$ for $k\neq n$ with respect to $\{X_1,X_2\}$. If $R_2(q)\neq0$, then $R_2(1)\neq0$, which contradicts [@SZ Proposition 5.2(1)]. Thus $R_2(q)=0$. Let $R_3(q)\in\NN[q^{\pm\frac{1}{2}}]$ denote the coefficient of $X^{(n,-2n)}$ in the Laurent expansion of $q^{-\frac{1}{2}n_1n_2}X_{m}^{n_1}X_{m+1}^{n_2}$ with respect to $\{X_1,X_2\}$. If $R_3(q)\neq0$, then $R_3(1)\neq0$, which contradicts the statement in [@SZ Corollary 3.7]. Hence $R_3(q)=0$. This finishes the proof.
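The denominator vectors $(n,2n)$ of $F_n(X_\delta)$, on which the linear-independence argument above relies, can be checked directly at $q=1$ by expanding in the initial cluster. The sketch below is ours: Laurent polynomials in $x_1,x_2$ are stored as exponent-to-coefficient dictionaries, and we assume the normalized Chebyshev recursion $F_{n+1}=xF_{n}-F_{n-1}$ with $F_0=2$ and $F_1=x$.

```python
def lmul(p, q):
    """Multiply Laurent polynomials stored as {(i, j): coeff} in x1, x2."""
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            e = (i1 + i2, j1 + j2)
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def lsub(p, q):
    """Subtract Laurent polynomials, dropping zero coefficients."""
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) - c
    return {e: c for e, c in out.items() if c}

# x_delta expanded in the initial cluster {x1, x2} at q = 1
x_delta = {(-1, -2): 1, (-1, 2): 1, (1, -2): 1, (0, -2): 2}

# assumed Chebyshev-type recursion F_{n+1} = x F_n - F_{n-1}, F_0 = 2, F_1 = x
Fs = [{(0, 0): 2}, dict(x_delta)]
for n in range(1, 4):
    Fs.append(lsub(lmul(x_delta, Fs[n]), Fs[n - 1]))

for n in range(1, 5):
    assert min(i for i, j in Fs[n]) == -n        # denominator vector (n, 2n)
    assert min(j for i, j in Fs[n]) == -2 * n
    assert all(c > 0 for c in Fs[n].values())    # positivity at q = 1
```

The computed $F_2(x_\delta)=x_\delta^2-2$ agrees term by term with the $q=1$ specialization of the displayed expansion of $F_2(X_\delta)$ in the positivity proof above.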
triangular bases ================ In [@BZ2014], the exchange matrix and the skew-symmetric matrix are chosen to be $B=\left( \begin{array}{cc} 0 & -b \\ c & 0 \\ \end{array} \right)$ for $b,c>0$ and $\Lambda=\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array} \right)$, respectively. Although our exchange matrix $B$ and skew-symmetric matrix $\Lambda$ differ from those of [@BZ2014] by a sign, Theorem \[theorem2\] remains true after setting $v=q^{-\frac{1}{2}}$ to reconcile the formal variables $q$ and $v$. The following theorem is a version of Lusztig’s Lemma. [@BZ2014 Theorem 1.1].\[Lusztiglem\] Let $(L,\prec)$ be a partially ordered set in which, for every $u\in L$, the lengths of chains with top element $u$ are bounded from above. Let $\A$ be a free $\ZZ[q^{\pm\frac{1}{2}}]$-module with basis $\{E_u~|~u\in L\}$. The bar-involution on $\A$ is the $\ZZ$-linear map $x\mapsto \overline{x}$ such that $\overline{fx}=\overline{f}\overline{x}$ for $f\in\ZZ[q^{\pm\frac{1}{2}}]$ and $x\in\A$, where $\overline{f}(q^{\frac{1}{2}})=f(q^{-\frac{1}{2}})$. If $$\label{conditionlusztig} \bar{E}_{u}-E_u\in \bigoplus\limits_{u^\prime\prec u}\ZZ[q^{\pm\frac{1}{2}}]E_{u^\prime}$$ for $u\in L$, then there exists a unique element $C_u\in\A$ satisfying $$\label{equlusztiglem1} \bar{C}_u=C_u,$$ $$\label{equlusztiglem2} C_u-E_u\in\bigoplus\limits_{u^\prime\in L}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{u^\prime}.$$ More precisely, (\[equlusztiglem2\]) can be replaced by $$\label{equlusztiglem3} C_u-E_u\in\bigoplus\limits_{u^\prime\prec u}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{u^\prime}.$$ Then $\{C_u~|~u\in L\}$ is a $\ZZ[q^{\pm\frac{1}{2}}]$-basis of $\A$. Let $\A$ be the quantum cluster algebra associated with an acyclic quantum seed. By [@BZ2014 Theorem 1.4, Theorem 1.6], the basis $\{C_u~|~u\in L\}$ of $\A$ is uniquely determined by (\[equlusztiglem1\]) and (\[equlusztiglem2\]).
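The heart of the existence part of Lusztig’s Lemma is solving $p-\overline{p}=r$ with $p\in q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]$ for each coefficient $r$, which is forced to satisfy $\overline{r}=-r$ because the bar-map is an involution. The toy computation below is entirely ours: on a two-element chain with $\overline{E}_2=E_2+(v-v^{-1})E_1$, where $v=q^{\frac{1}{2}}$, it produces the unique bar-invariant element $C_2=E_2-v^{-1}E_1$.

```python
def padd(p, q):
    """Add Laurent polynomials in v (dicts: exponent -> integer coefficient)."""
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

def pmul(p, q):
    """Multiply Laurent polynomials in v."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def bar(p):
    """Bar-involution v -> v^{-1} on Z[v, v^{-1}]."""
    return {-e: c for e, c in p.items()}

# bar(E_2) = E_2 + r E_1 with r = v - v^{-1}; applying bar twice forces bar(r) = -r
R = {1: 1, -1: -1}

def bar_elem(x):
    """Bar of a_2 E_2 + a_1 E_1, an element stored as the pair (a_2, a_1)."""
    a2, a1 = x
    return (bar(a2), padd(bar(a1), pmul(bar(a2), R)))

# the unique p in v^{-1} Z[v^{-1}] with p - bar(p) = r: the negative part of r
p = {e: c for e, c in R.items() if e < 0}
C2 = ({0: 1}, p)                      # C_2 = E_2 - v^{-1} E_1
assert bar_elem(C2) == C2             # bar-invariance, condition (\ref{equlusztiglem1})
```

For longer chains the same step is applied downward along the partial order, which is exactly the triangular induction in the proof of the lemma.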
The basis $\{C_u~|~u\in L\}$ is called the *canonical triangular* basis in $\A$ (see [@BZ2014 Section 1] for more details). For $x\in\ZZ$, the function $[x]_+$ is defined by $[x]_+=x$ if $x>0$ and $[x]_+=0$ otherwise. In the rest of this section, we set $L=\ZZ^2$ and $\A=\A_q(1,4)$. For $(a,b),(a^\prime,b^\prime)\in\ZZ^{2}$, recall that the partial order $\prec$ on $\ZZ^{2}$ used in [@BZ2014 (6.16)] is defined by: $$\label{partialorder} (a^\prime,b^\prime)\prec(a,b)\Longleftrightarrow[-a^\prime]_+<[-a]_+,~[-b^{\prime}]_+<[-b]_+.$$ Using the same argument as in [@BZ2014 Section 2], it follows that $\A_q(1,4)$ satisfies the condition (\[conditionlusztig\]) for the partial order $\prec$. The standard monomials in $\A_q(1,4)$ are defined by $$\label{standardmonomial} E_{(a,b)}=q^{-\frac{1}{2}ab}X_{3}^{[-a]_+}X_{1}^{[a]_+}X_{2}^{[b]_+}X_{0}^{[-b]_+}$$ for $a,b\in\ZZ$ ([@BZ2014 Section 1]). As in [@BZ2014], the *crystal lattice* $\A_+\subset\A_q(1,4)$ is defined by $$\A_+=\bigoplus\limits_{a,b\in\ZZ}\ZZ[q^{-\frac{1}{2}}]E_{(a,b)}.$$ Recall that in [@BZ2014], for the quantum cluster algebra $\A_q(2,2)$ of Kronecker type and $a,b\in\ZZ$, Berenstein and Zelevinsky defined the elements $$E^{\prime}_{(a,b)}: =q^{-\frac{1}{2}ab}X_{2}^{[-b]_+}X_{0}^{[b]_+}X_{1}^{[a]_+}X_{-1}^{[-a]_+} \in\A_q(2,2).$$ Similarly, we define the elements $E^{\prime}_{(a,b)}$ and $\mu_1E_{(a,b)}$ in $\A_q(1,4)$ as follows: $$E^{\prime}_{(a,b)}=q^{-\frac{1}{2}ab}X_{2}^{[-b]_+}X_{0}^{[b]_+}X_{1}^{[a]_+}X_{-1}^{[-a]_+}~\text{and}~ \mu_1E_{(a,b)}=q^{-\frac{1}{2}ab}X_{4}^{[-b]_+}X_{2}^{[b]_+}X_{3}^{[a]_+}X_{1}^{[-a]_+}.$$ A new partial order $\preceq$ on $\ZZ^{2}$ is defined by: $$\label{partialorder2} (a^\prime,b^\prime)\preceq (a,b)\Longleftrightarrow[-a^\prime]_+\leq[-a]_+,~[-b^{\prime}]_+\leq[-b]_+.$$ The partial order $\preceq$ is used to prove Lemma \[lem4.1\] and Lemma \[lempsi\], and the partial order $\prec$ is used in Lusztig’s Lemma. \[lem4.1\] For $a,b\in\ZZ$, we have that 1.
$q^{\frac{1}{2}a}E_{(a,b)}X_0-E_{(a,b-1)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a+1,b-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$; 2. $q^{\frac{1}{2}b}X_3E_{(a,b)}-E_{(a-1,b)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a-1,b+4)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$; 3. $q^{\frac{1}{2}b}E_{(a,b)}X_1-E_{(a+1,b)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a+1,b+4)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$; 4. $q^{-\frac{1}{2}a}X_4E_{(a,b)}-E_{(a,b-1)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a-1,b-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$ if $b>0$; 5. $q^{-\frac{1}{2}(a-b)}X_4E_{(a,b)}-E_{(a-1,b-1)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a-1,b+3)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$ if $a>0$ and $b\leq0$; 6. $q^{-\frac{1}{2}(a-b)}X_4E_{(a,b)}-E_{(a-1,b-1)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a,b)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$ if $a\leq0$ and $b\leq0$. 
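Throughout the proof below, membership in the direct sums on the right-hand sides is controlled by the function $[x]_+$ and the order $\preceq$. A minimal Python sketch of these combinatorial gadgets (our own illustration; the function names are not from the text):

```python
def plus(x):
    # [x]_+ = x if x > 0 and 0 otherwise
    return x if x > 0 else 0

def prec(p, t):
    # (a', b') < (a, b)  iff  [-a']_+ < [-a]_+  and  [-b']_+ < [-b]_+
    (a1, b1), (a, b) = p, t
    return plus(-a1) < plus(-a) and plus(-b1) < plus(-b)

def preceq(p, t):
    # (a', b') <= (a, b)  iff  [-a']_+ <= [-a]_+  and  [-b']_+ <= [-b]_+
    (a1, b1), (a, b) = p, t
    return plus(-a1) <= plus(-a) and plus(-b1) <= plus(-b)

# Pairs with nonnegative coordinates sit at the bottom of both orders,
# so every chain with a fixed top element has bounded length, which is
# exactly the hypothesis needed in Lusztig's Lemma.
print(prec((0, 0), (-1, -2)))   # True
print(preceq((1, 5), (-3, 0)))  # True: [-1]_+ = 0 <= 3 and [-5]_+ = 0 <= 0
```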
\(1) (i) If $a\geq0,b>0$, then $$\label{equ4.1.1i} q^{\frac{1}{2}a}E_{(a,b)}X_0-E_{(a,b-1)}=q^{-\frac{1}{2}b}E_{(a+1,b-1)}.$$ \(ii) If $a\geq0,b\leq0$, then $$\label{equ4.1.1ii} q^{\frac{1}{2}a}E_{(a,b)}X_0=q^{-\frac{1}{2}(ab-a)}X_{1}^{a}X_{0}^{-b+1}=E_{(a,b-1)}.$$ \(iii) If $a<0,b\leq0$, then $$\label{equ4.1.1iii} q^{\frac{1}{2}a}E_{(a,b)}X_0=q^{-\frac{1}{2}(ab-a)}X_{3}^{-a}X_{0}^{-b+1}=E_{(a,b-1)}.$$ \(iv) If $a<0,b>0$, then $$\label{equ4.1.1iv} q^{\frac{1}{2}a}E_{(a,b)}X_0-E_{(a,b-1)}=q^{-\frac{1}{2}b}E_{(a+1,b-1)}+q^{-\frac{1}{2}(b-4a)}E_{(a+1,b+3)}.$$ \(2) (i) If $a\leq0,b\geq0$, then $$\label{equ4.1.2i} q^{\frac{1}{2}b}X_3E_{(a,b)}=q^{-\frac{1}{2}(ab-b)}X_{3}^{1-a}X_{2}^{b}=E_{(a-1,b)}.$$ \(ii) If $a\leq0,b<0$, then $$\label{equ4.1.2ii} q^{\frac{1}{2}b}X_3E_{(a,b)}=q^{-\frac{1}{2}(a-1)b}X_{3}^{1-a}X_{0}^{-b}=E_{(a-1,b)}.$$ \(iii) If $a>0,b\geq0$, then $$\label{equ4.1.2iii} q^{\frac{1}{2}b}X_3E_{(a,b)}-E_{(a-1,b)}=q^{-\frac{1}{2}(ab+8a-b-4)}X_{1}^{a-1}X_{2}^{b+4}=q^{-2a}E_{(a-1,b+4)}.$$ \(iv) If $a>0,b=-1$, then $$\label{equ4.1.2iv} q^{-\frac{1}{2}}X_3E_{(a,-1)}-E_{(a-1,-1)}=q^{-2a}E_{(a-1,3)}+q^{-2a-2}E_{(a,3)}.$$ \(v) If $a>0,b=-2$, then $$\begin{aligned} \label{equ4.1.2v} q^{-1}X_3E_{(a,-2)}-E_{(a-1,-2)}=q^{-2a}E_{(a-1,2)}+q^{-2a}(q^{-\frac{5}{2}}+q^{-\frac{3}{2}})E_{(a,2)} +q^{-2a-4}E_{(a+1,2)}.\end{aligned}$$ \(vi) If $a>0,b=-3$, then $$\begin{aligned} \label{equ4.1.2vi} &q^{-\frac{3}{2}}X_3E_{(a,-3)}-E_{(a-1,-3)}\nonumber \\ =&q^{-2a}E_{(a-1,1)}+q^{-2a}(q^{-3}+q^{-2}+q^{-1})E_{(a,1)}+q^{-2a}(q^{-5}+q^{-4}+q^{-3})E_{(a+1,1)} \nonumber\\ & +q^{-2a-6}E_{(a+2,1)}.\end{aligned}$$ \(vii) If $a>0,b\leq-4$, then $$\begin{aligned} \label{equ4.1.2vii} &q^{\frac{1}{2}b}X_3E_{(a,b)}-E_{(a-1,b)} \nonumber\\ =&q^{-2a}E_{(a-1,b+4)} +q^{-\frac{1}{2}(4a-b-3)}(q^{-3}+q^{-2}+q^{-1}+1)E_{(a,b+4)} \nonumber\\ &+q^{-2a+b+2}(q^{-4}+q^{-3}+2q^{-2}+q^{-1}+1)E_{(a+1,b+4)} \nonumber\\ &+q^{-\frac{1}{2}(4a-3b-3)}(q^{-3}+q^{-2}+q^{-1}+1)E_{(a+2,b+4)}+
q^{-2a+2b}E_{(a+3,b+4)}.\end{aligned}$$ \(3) (i) If $a\geq0,b\geq0$, then $$\label{equ4.1.3i} q^{\frac{1}{2}b}E_{(a,b)}X_1=q^{-\frac{1}{2}(a+1)b}X_{1}^{a+1}X_{2}^{b}=E_{(a+1,b)}.$$ \(ii) If $a\geq0,b<0$, then $$\label{equ4.1.3ii} q^{\frac{1}{2}b}E_{(a,b)}X_1=q^{-\frac{1}{2}(a+1)b}X_{1}^{a+1}X_{0}^{-b}=E_{(a+1,b)}.$$ \(iii) If $a<0,b\geq0$, then $$\label{equ4.1.3iii} q^{\frac{1}{2}b}E_{(a,b)}X_1-E_{(a+1,b)}=q^{2a}E_{(a+1,b+4)}.$$ \(iv) If $a=-1,b<0$, then $$q^{\frac{1}{2}b}E_{(-1,b)}X_1-E_{(0,b)}=q^{-2}X_{2}^{4}X_{0}^{-b}=q^{-2}E_{(0,4)}X_{0}^{-b}.$$ When $a=-1,b=-1$, we have that $$\label{equ4.1.3iv1} q^{-\frac{1}{2}}E_{(-1,-1)}X_1-E_{(0,-1)}=q^{-2}E_{(0,3)}+q^{-4}E_{(1,3)}.$$ When $a=-1,b=-2$, we have that $$\label{equ4.1.3iv2} q^{-1}E_{(-1,-2)}X_1-E_{(0,-2)}=q^{-2}E_{(0,2)}+(q^{-\frac{9}{2}}+q^{-\frac{7}{2}})E_{(1,2)}+ q^{-6}E_{(2,2)}$$ When $a=-1,b=-3$, we have that $$\begin{aligned} \label{equ4.1.3iv3} &q^{-\frac{3}{2}}E_{(-1,-3)}X_1-E_{(0,-3)} \nonumber \\ =&q^{-2}E_{(0,1)} +(q^{-5}+q^{-4}+q^{-3})E_{(1,1)}+(q^{-7}+q^{-6}+q^{-5})E_{(2,1)} +q^{-8}E_{(3,1)}.\end{aligned}$$ When $a=-1,b\leq-4$, we have that $$\begin{aligned} \label{equ4.1.3iv4} &q^{\frac{1}{2}b}E_{(-1,b)}X_1-E_{(0,b)}\nonumber\\ =&q^{-2}E_{(0,b+4)} +q^{\frac{1}{2}b}(q^{-\frac{7}{2}}+q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})E_{(1,b+4)} \nonumber\\ &+q^{b}(q^{-4} +q^{-3}+2q^{-2}+q^{-1}+1)E_{(2,b+4)}+ q^{\frac{3}{2}b}(q^{-\frac{7}{2}} +q^{-\frac{5}{2}} +q^{-\frac{3}{2}} +q^{-\frac{1}{2}})E_{(3,b+4)} \nonumber\\ &+q^{2b-2}E_{(4,b+4)}.\end{aligned}$$ \(v) If $a\leq-2,b<0$, then $$q^{\frac{1}{2}b}E_{(a,b)}X_1-E_{(a+1,b)}=q^{-\frac{1}{2}(ab+b+4)}X_{3}^{-a-1}X_{2}^{4}X_{0}^{-b} =q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b}.$$ When $a=-2,b=-1$, we have that $$q^{-\frac{1}{2}}E_{(-2,-1)}X_1-E_{(-1,-1)}=q^{-\frac{5}{2}}X_3X_{2}^{4}X_0=q^{-6}E_{(0,3)} +q^{-8}E_{(0,7)}+q^{-4}E_{(-1,3)}.$$ When $a\leq-2$, by repeatedly using (\[equ4.1.1iii\]) and (\[equ4.1.1iv\]), we know that $q^{2a}E_{(a+1,4+b)}$ 
is a term of $q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b}$. We only need to consider the expansion of $$q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b} -q^{2a}E_{(a+1,4+b)}$$ with respect to the standard monomials $E_{(c,d)}$. Note that $$\begin{aligned} \label{equ4.1.3v} \left\{ \begin{aligned} E_{(c,d)}X_0=q^{-\frac{1}{2}c}E_{(c,d-1)}+q^{-\frac{1}{2}(c+d)}E_{(c+1,d-1)},{\hskip 3.3cm}&~\text{if~}c\geq0,d>0;\\ E_{(c,d)}X_0=q^{-\frac{1}{2}c}E_{(c,d-1)},{\hskip 6.85cm} &~\text{if~}c\geq0,d\leq0;\\ E_{(c,d)}X_0=q^{-\frac{1}{2}c}E_{(c,d-1)},{\hskip 6.85cm}&~\text{if~}c<0,d\leq0;\\ E_{(c,d)}X_0=q^{-\frac{1}{2}c}E_{(c,d-1)}+q^{-\frac{1}{2}(c+d)}E_{(c+1,d-1)} +q^{\frac{1}{2}(3c-d)}E_{(c+1,d+3)},&~\text{if~}c<0,d>0. \end{aligned} \right.\end{aligned}$$ By applying (\[equ4.1.3v\]) repeatedly, we have that $$q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b}-q^{2a}E_{(a+1,4+b)}\in q^{-\frac{1}{2}}\A_+$$ and every term $kq^{-\frac{1}{2}\alpha}E_{(c,d)}$ ($k\in\NN\setminus\{0\}$) in $q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b}-q^{2a}E_{(a+1,4+b)}$ satisfies the conditions that $\alpha\geq -4a>0$, $c\geq a+1$ and $d\geq 4+b$. 
Namely $$q^{\frac{1}{2}b}E_{(a,b)}X_1-E_{(a+1,b)}=q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a+1,b+4)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.$$ \(4) (i) If $a>0,b>0$, then $$\label{equ4.1.4i} q^{-\frac{1}{2}a}X_4E_{(a,b)}-E_{(a,b-1)}=q^{-\frac{1}{2}b}E_{(a-1,b-1)}+q^{-\frac{1}{2}(4a+b)}E_{(a-1,b+3)}.$$ \(ii) If $a\leq0,b>0$, then $$\label{equ4.1.4ii} q^{-\frac{1}{2}a}X_4E_{(a,b)}-E_{(a,b-1)}=q^{-\frac{1}{2}b}E_{(a-1,b-1)}.$$ \(5) (i) If $a>0,b=0$, then $$\label{equ4.1.5i} q^{-\frac{1}{2}a}X_4E_{(a,0)}-E_{(a-1,-1)}=q^{-2a}E_{(a-1,3)}.$$ \(ii) If $a>0,b=-1$, then $$\label{equ4.1.5ii} q^{-\frac{1}{2}(a+1)}X_4E_{(a,-1)}-E_{(a-1,-2)}=q^{-2a}E_{(a-1,2)}+q^{-\frac{1}{2}(4a+3)}E_{(a,2)}.$$ \(iii) If $a>0,b=-2$, then $$\label{equ4.1.5iii} q^{-\frac{1}{2}(a+2)}X_4E_{(a,-2)}-E_{(a-1,-3)}=q^{-2a}E_{(a-1,1)}+q^{-2a}(q^{-2}+q^{-1})E_{(a,1)}+q^{-2a-3}E_{(a+1,1)}.$$ \(iv) If $a>0,b\leq-3$, then $$\begin{aligned} \label{equ4.1.5iv} &q^{-\frac{1}{2}(a-b)}X_4E_{(a,b)}-E_{(a-1,b-1)}\nonumber\\ =&q^{-2a}E_{(a-1,b+3)}+q^{-\frac{1}{2}(4a-b-3)}(q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})E_{(a,b+3)}\nonumber \\ &+q^{-\frac{1}{2}(4a-2b-3)}(q^{-\frac{5}{2}}+q^{-\frac{3}{2}}+q^{-\frac{1}{2}})E_{(a+1,b+3)} +q^{-\frac{1}{2}(4a-3b)}E_{(a+2,b+3)}.\end{aligned}$$ \(6) Through a direct calculation, we have that $$\begin{aligned} &X_4E_{(0,0)}=X_4=E_{(-1,-1)}-q^{-2}E_{(0,3)},\\ &q^{\frac{1}{2}}X_4E_{(-1,0)}=q^{\frac{1}{2}}X_4X_3=E_{(-2,-1)}-q^{-4}E_{(-1,3)},\\ &q^{-\frac{1}{2}}X_4E_{(0,-1)}=q^{-\frac{1}{2}}X_4X_0=E_{(-1,-2)}-q^{-\frac{5}{2}}E_{(0,2)}-q^{-4}E_{(1,2)}.\end{aligned}$$ We will prove the statement by induction on both values $a$ and $b$.
For fixed $a\leq0,b\leq0$, assume that $$q^{-\frac{1}{2}(a-b)}X_4E_{(a,b)}-E_{(a-1,b-1)}=:\Delta\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a,b)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.$$ Note that $$\begin{aligned} &q^{-\frac{1}{2}(a-1-b)}X_4E_{(a-1,b)}=q^{-\frac{1}{2}(1-b)}X_3 q^{-\frac{1}{2}(a-b)} X_4E_{(a,b)} \\ =&q^{-\frac{1}{2}(1-b)}X_3E_{(a-1,b-1)}+q^{-\frac{1}{2}(1-b)}X_3\Delta =E_{(a-2,b-1)}+q^{-\frac{1}{2}(1-b)}X_3\Delta\end{aligned}$$ and $$\begin{aligned} &q^{-\frac{1}{2}(a-b+1)}X_4E_{(a,b-1)}=q^{-\frac{1}{2}(a-b)}X_4E_{(a,b)} q^{-\frac{1}{2}(1-a)}X_0\\ =&q^{-\frac{1}{2}(1-a)}E_{(a-1,b-1)}X_0+q^{-\frac{1}{2}(1-a)}\Delta X_0 =E_{(a-1,b-2)}+q^{-\frac{1}{2}(1-a)}\Delta X_0.\end{aligned}$$ Let $kq^{-\frac{1}{2}\alpha}E_{(c,d)}$ ($k\in\ZZ\setminus\{0\}$) be a term in $\Delta$. Note that $\alpha+1-b+d\geq\alpha+1>0$. By Lemma \[lem4.1\](2), we have that $$\begin{aligned} &q^{-\frac{1}{2}(1-b)}X_3 q^{-\frac{1}{2}\alpha}E_{(c,d)}=q^{-\frac{1}{2}(\alpha+1-b+d)}(q^{\frac{1}{2}d}X_3E_{(c,d)}) \\ \in& q^{-\frac{1}{2}(\alpha+1-b+d)} ( E_{(c-1,d)}+\bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(c-1,d+4)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}) \subseteq \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(c-1,d)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.\end{aligned}$$ It follows that $$q^{-\frac{1}{2}(a-1-b)}X_4E_{(a-1,b)}-E_{(a-2,b-1)} \in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a-1,b)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.$$ Note that $\alpha+c+1-a\geq\alpha+1>0$. 
By Lemma \[lem4.1\](1), we have that $$\begin{aligned} &q^{-\frac{1}{2}\alpha}E_{(c,d)} q^{-\frac{1}{2}(1-a)}X_0=q^{-\frac{1}{2}(\alpha+c+1-a)}(q^{\frac{1}{2}c}E_{(c,d)}X_0)\\ \in& q^{-\frac{1}{2}(\alpha+c+1-a)}(E_{(c,d-1)}+ \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(c+1,d-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})})\subseteq \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(c,d-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.\end{aligned}$$ Thus $$q^{-\frac{1}{2}(a-b+1)}X_4E_{(a,b-1)}-E_{(a-1,b-2)} \in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a,b-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.$$ The proof is completed. \[lemofX3\] Let $a,b\in\ZZ_{<0}$ and $kq^{-\frac{1}{2}\alpha}E_{(c,d)}$ ($k\in\NN\setminus\{0\}$) be a term in $q^{\frac{1}{2}b}E_{(a,b)}X_1-E_{(a+1,b)}$. Then we have that $c\leq a-b+1$ if $d\geq0$ and $c-d\leq a-b+1$ if $d<0$. By (\[equ4.1.3iv1\]), (\[equ4.1.3iv2\]), (\[equ4.1.3iv3\]) and (\[equ4.1.3iv4\]), it follows that $(c,d)\in\{(0,3),(1,3)\}$ if $a=-1$, $b=-1$; $(c,d)\in\{(0,2),(1,2),(2,2)\}$ if $a=-1$, $b=-2$; $(c,d)\in\{(0,1),(1,1),(2,1), (3,1)\}$ if $a=-1$ and $b=-3$; $(c,d)\in \{(0,b+4),(1,b+4),(2,b+4),(3,b+4),(4,b+4)\}$ if $a=-1$ and $b\leq-4$. When $a\leq -2$ and $b<0$, $q^{\frac{1}{2}b}E_{(a,b)}X_1-E_{(a+1,b)}=q^{2a-\frac{1}{2}(a+1)b}E_{(a+1,4)}X_{0}^{-b}$. By (\[equ4.1.3v\]), it follows that $c\in\{a+1,\ldots,a-b+1\}$, and we observe that there exists some nonnegative integer $n\leq -a-1$ such that $$d=4+3n-(-b-n)=4+4n+b\leq b-4a.$$ If $d=4+4n+b\geq0$, then $c\leq (a+1)+n+(-b-n)=a-b+1.$ If $d=4+4n+b<0$, then $c\leq (a+1)+n+(4+3n)=a+5+4n$, and it follows that $c-d\leq a-b+1$. Thus $c\leq a-b+1$ if $d\geq0$ and $c-d\leq a-b+1$ if $d<0$. By using (\[equ4.1.4i\]) and (\[equ4.1.4ii\]), we obtain the following lemma.
\[lem4.3+\] For $a\in\ZZ$ and $b\in\ZZ_{>0}$, every term $q^{-\frac{1}{2}\alpha}E_{(c,d)}$ in $q^{-\frac{1}{2}a}X_4E_{(a,b)}-E_{(a,b-1)}$ satisfies that $c=a-1$ and $d\geq b-1\geq0$. \[lem4.3\] For $a\in\ZZ_{>0}$ (respectively, $a\in\ZZ_{\leq0}$) and $b\in\ZZ_{\leq0}$, every term $kq^{-\frac{1}{2}\alpha}E_{(c,d)}$ ($k\in\ZZ\setminus\{0\}$) in $q^{-\frac{1}{2}(a-b)}X_4E_{(a,b)}-E_{(a-1,b-1)}$ satisfies that $c\geq a-1$ (respectively, $c\geq a$), $c\leq a-b$ if $d\geq0$ and $c-d\leq a-b$ if $d<0$. \(1) When $a>0$, by using the identities (\[equ4.1.5i\]), (\[equ4.1.5ii\]), (\[equ4.1.5iii\]) and (\[equ4.1.5iv\]), we have that $(c,d)=(a-1,3)$ if $b=0$; $(c,d)\in\{(a-1,2),(a,2)\}$ if $b=-1$; $(c,d)\in\{(a-1,1),(a,1),(a+1,1)\}$ if $b=-2$; $(c,d)\in\{(a-1,b+3),(a,b+3),(a+1,b+3),(a+2,b+3)\}$ if $b\leq-3$. It is easy to see that $c\geq a-1$, $c<a-b$ if $d\geq0$ and $c-d< a-b$ if $d<0$. \(2) When $a\leq0$, we will prove the statement by induction. By Lemma \[lem4.1\](6), it follows that $c\geq a$. When $a=0$, note that $$\begin{aligned} &q^{-\frac{1}{2}}X_4E_{(0,-1)}=q^{-\frac{1}{2}}X_4X_0=E_{(-1,-2)}-q^{-\frac{5}{2}}E_{(0,2)}-q^{-4}E_{(1,2)},\\ &q^{-1}X_4E_{(0,-2)}=q^{-1}X_4X_{0}^{2}=E_{(-1,-3)}-q^{-3}E_{(0,1)}-(q^{-5}+q^{-4})E_{(1,1)}-q^{-6}E_{(2,1)},\end{aligned}$$ and $$\begin{aligned} q^{-\frac{1}{2}n}X_4E_{(0,-n)}=&E_{(-1,-n-1)}-q^{-\frac{1}{2}(n+4)}E_{(0,3-n)}-q^{-n}(q^{-3}+q^{-2}+q^{-1})E_{(1,3-n)} \nonumber\\ &-q^{-\frac{3}{2}n}(q^{-3}+q^{-2}+q^{-1})E_{(2,3-n)} -q^{-2n-2}E_{(3,3-n)}\end{aligned}$$ for $n\geq3$. It is easy to see that $c\leq n= a-b$ if $d\geq0$, $c-d\leq a-b$ if $d<0$. Note that $q^{\frac{1}{2}}X_4E_{(-1,0)}=E_{(-2,-1)}-q^{-4}E_{(-1,3)}$. 
When $a=-m\leq-1$ and $b=0$, we have that $$\begin{aligned} &q^{-\frac{1}{2}m}X_4E_{(-m,0)}=q^{-\frac{1}{2}(m-1)}X_{3}^{m-1}(q^{-\frac{1}{2}}X_3X_4) \\ =&q^{-\frac{1}{2}(m-1)}X_{3}^{m-1}E_{(-2,-1)}-q^{-\frac{1}{2}(7+m)}X_{3}^{m-1}E_{(-1,3)} \\ =&E_{(-m-1,-1)}-q^{-2(1+m)}E_{(-m,3)}.\end{aligned}$$ From now on, we assume that $a,b\leq-1$. When $a=b=-1$, we have that $$X_4E_{(-1,-1)}=q^{-1}X_4X_3X_0=E_{(-2,-2)}-q^{-\frac{9}{2}}E_{(-1,2)}-q^{-6}E_{(0,2)}-q^{-8}E_{(0,6)}.$$ When $a=-m,b=-1$, we have that $$q^{-\frac{1-m}{2}}X_4E_{(-m,-1)}=E_{(-m-1,-2)}-q^{-\frac{4m+5}{2}}E_{(-m,2)} -q^{-2m-4}E_{(1-m,2)}-q^{-4m-4}E_{(1-m,6)}.$$ For $a=-m\in\ZZ_{<0}$ and $b=-n\in\ZZ_{<0}$, assume that every term $kq^{-\frac{1}{2}\alpha}E_{(c,d)}$ in $$\nabla:=q^{-\frac{n-m}{2}}X_4E_{(-m,-n)}-E_{(-m-1,-n-1)}$$ satisfies that $c\leq n-m$ if $d\geq0$ and $c-d\leq n-m$ if $d<0$. When $a=-m$ and $b=-n-1$, we have that $$\begin{aligned} &q^{-\frac{1}{2}(n+1-m)}X_4E_{(-m,-n-1)}=q^{-\frac{1}{2}(m+1)}(q^{-\frac{1}{2}(n-m)}X_4E_{(-m,-n)})X_0\\ =&q^{-\frac{1}{2}(m+1)}(E_{(-m-1,-n-1)}+\nabla)X_0=E_{(-m-1,-n-2)}+q^{-\frac{1}{2}(m+1)}\nabla X_0.\end{aligned}$$ Let $k^\prime q^{-\frac{1}{2}\alpha^{\prime}}E_{(c^{\prime},d^{\prime})}$ ($k^\prime\in\ZZ\setminus\{0\}$) be a term in $q^{-\frac{m+1}{2}}(kq^{-\frac{1}{2}\alpha}E_{(c,d)}) X_0$. When $c\geq0,d>0$, it follows that $(c^{\prime},d^{\prime})\in\{(c,d-1),(c+1,d-1)\}$ by (\[equ4.1.1i\]), i.e., $d^{\prime}\geq0$ and $c\leq c^{\prime}\leq c+1\leq n+1-m$. If $d\leq0$ then $(c^{\prime},d^{\prime})=(c,d-1)$ by (\[equ4.1.1ii\]) and (\[equ4.1.1iii\]), i.e., $c^\prime=c$, $d^{\prime}<0$ and $c^{\prime}-d^{\prime}= c-d+1\leq n-m+1$. If $c<0,d>0$, then $(c^{\prime},d^{\prime})\in\{(c,d-1),(c+1,d+3),(c+1,d-1)\}$ by (\[equ4.1.1iv\]), i.e., $d^{\prime}\geq0$ and $c\leq c^{\prime}\leq c+1\leq n-m+1$. The proof is completed. By [@BZ2014 Theorem 3.1, Proposition 4.1], we have the following lemma.
\[lemvarphi\] For each $(a,b)\in\ZZ^2$, we have that $$E^{\prime}_{(a,b)}-E_{\varphi(a,b)}\in q^{-\frac{1}{2}}\A_+,$$ where $\varphi:\ZZ^2\rightarrow\ZZ^2$, defined by $\varphi(a,b)=(a,-4[-a]_+-b)$, is a bijection. Similarly, let $\psi:\ZZ^2\rightarrow\ZZ^2$ be the map defined by $\psi(a,b)=(-a-[-b]_+,b)$. It is easy to see that $\psi$ is a bijection. \[lempsi\] For each $(a,b)\in\ZZ^{2}$, we have that $$\mu_1E_{(a,b)}-E_{\psi(a,b)}\in q^{-\frac{1}{2}}\A_+.$$ For $a_1,a_2\in\ZZ_{\geq0}$, it is clear that $$\mu_1E_{(a_1,a_2)}=q^{-\frac{1}{2}a_1a_2}X_{2}^{a_2}X_{3}^{a_1}=E_{(-a_1,a_2)}\text{~and~} \mu_1E_{(-a_1,a_2)}=q^{-\frac{1}{2}a_1a_2}X_{1}^{a_1}X_{2}^{a_2}=E_{(a_1,a_2)}.$$ We need only consider $\mu_1E_{(a_1,-a_2)}-E_{(-a_1-a_2,-a_2)}$ and $\mu_1E_{(-a_1,-a_2)}-E_{(a_1-a_2,-a_2)}$. \(1) When $a_1=1,a_2=0$, we have that $\mu_1E_{(1,0)}=X_3=E_{(-1,0)}$. When $a_1=0,a_2=1$, $$\mu_1E_{(0,-1)}-E_{(-1,-1)}=X_4-q^{-\frac{1}{2}}X_3X_0=-q^{-2}X_{2}^{3}=-q^{-2}E_{(0,3)}.$$ When $a_1=1,a_2=1$, we have that $$\mu_1E_{(1,-1)}-E_{(-2,-1)}=q^{-\frac{1}{2}}X_3X_4-q^{-1}X_{3}^{2}X_0=-q^{-\frac{11}{2}}X_{2}^{3}X_3=-q^{-4}E_{(-1,3)}.$$ We will prove the statement by induction on both $a_1$ and $a_2$. Assume that $$\begin{aligned} &\mu_1E_{(a_1,-a_2)}-E_{(-a_1-a_2,-a_2)}\\ =&q^{-\frac{1}{2}a_1a_2}X_{3}^{a_1}X_{4}^{a_2}-q^{-\frac{1}{2}(a_1+a_2)a_2}X_{3}^{a_1+a_2}X_{0}^{a_2} =:C_1\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(-a_1-a_2,-a_2)}q^{-\frac{1}{2}} \ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})},\end{aligned}$$ i.e., every term $kq^{-\frac{1}{2}\alpha}E_{(c,d)}$ ($k\in\ZZ\setminus\{0\}$) in $C_1$ satisfies $[-c]_+\leq a_1+a_2$ and $[-d]_+\leq a_2$. \(i) By Lemma \[lem4.1\](2), we have that $$\mu_1E_{(a_1+1,-a_2)}=q^{-\frac{1}{2}a_2}X_3E_{(-a_1-a_2,-a_2)}+q^{-\frac{1}{2} a_2}X_3C_1 =E_{(-a_1-a_2-1,-a_2)}+q^{-\frac{1}{2}a_2}X_3C_1$$ and $q^{-\frac{1}{2}(\alpha+a_2)}X_3E_{(c,d)}=q^{-\frac{1}{2}(\alpha+a_2+d)}(q^{\frac{1}{2}d}X_3E_{(c,d)})$.
Note that $\alpha+a_2+d\geq\alpha>0$. By Lemma \[lem4.1\](2), we have that $q^{-\frac{1}{2}(\alpha+a_2)}X_3E_{(c,d)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(c-1,d)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}$. Thus $$\mu_1E_{(a_1+1,-a_2)}-E_{(-a_1-a_2-1,-a_2)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(-a_1-a_2-1,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}.$$ \(ii) Note that $$\mu_1E_{(a_1,-a_2-1)}=q^{\frac{a_1}{2}}X_4(q^{-\frac{1}{2}a_1a_2}X_{3}^{a_1}X_{4}^{a_2}) =q^{\frac{a_1}{2}}X_4E_{(-a_1-a_2,-a_2)}+q^{\frac{a_1}{2}}X_4C_1.$$ By Lemma \[lem4.1\](6), $$q^{\frac{a_1}{2}}X_4E_{(-a_1-a_2,-a_2)}\in E_{(-a_1-a_2-1,-a_2-1)}+ \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(-a_1-a_2,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}.$$ When $d\geq0$, we have that $q^{-\frac{1}{2}(\alpha-a_1)}X_4E_{(c,d)}=q^{-\frac{1}{2}(\alpha-a_1-c)}(q^{-\frac{1}{2}c}X_4E_{(c,d)})$. When $d<0$, we have that $q^{-\frac{1}{2}(\alpha-a_1)}X_4E_{(c,d)}=q^{-\frac{1}{2}(\alpha-a_1-c+d)}(q^{-\frac{1}{2}(c-d)}X_4E_{(c,d)})$. By Lemma \[lem4.3\], we know that $c\leq -a_1$ if $d\geq0$, and $c-d\leq -a_1$ if $d<0$. Thus $\alpha-a_1-c\geq \alpha>0$ if $d\geq0$, and $\alpha-a_1-c+d\geq \alpha>0$ if $d<0$. By using the induction hypothesis and Lemma \[lem4.1\](4), (5), (6), we have that $$q^{\frac{1}{2}a_1}X_4C_1\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(-a_1-a_2-1,-a_2-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}.$$ Hence $\mu_1E_{(a_1,-a_2-1)}-E_{(-a_1-a_2-1,-a_2-1)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(-a_1-a_2-1,-a_2-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$. \(2) When $a_1=0,a_2>0$, we have that $$\mu_1E_{(0,-a_2)}-E_{(-a_2,-a_2)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq (-a_2,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}$$ by (1).
When $a_1>0,a_2=0$, we have that $\mu_1E_{(-a_1,0)}=X_{1}^{a_1}=E_{(a_1,0)}$. When $a_1=a_2=1$, we have that $\mu_1E_{(-1,-1)}=X_0+q^{-2}X_{2}^{3}=E_{(0,-1)}+q^{-2}E_{(0,3)}$. When $a_1=1,a_2=2$, we have that $\mu_1E_{(-1,-2)}-E_{(-1,-2)}=q^{-4}E_{(-1,2)}-q^{-4}E_{(1,2)}$. When $a_1=2,a_2=1$, we have that $\mu_1E_{(-2,-1)}-E_{(1,-1)}=q^{-4}E_{(1,3)}$. When $a_1=2,a_2=2$, we have that $$\mu_1E_{(-2,-2)}-E_{(0,-2)}=(q^{-6}+q^{-2})E_{(0,2)}+q^{-8}E_{(0,6)} +(q^{-\frac{9}{2}}+q^{-\frac{7}{2}})E_{(1,2)}.$$ When $a_1=3,a_2=1$, we have that $\mu_1E_{(-3,-1)}-E_{(2,-1)}=q^{-6}E_{(2,3)}$. When $a_1=3,a_2=2$, we have that $$\mu_1E_{(-3,-2)}-E_{(1,-2)}=(q^{-8}+q^{-4})E_{(1,2)}+q^{-12}E_{(1,6)}+(q^{-\frac{13}{2}}+q^{-\frac{11}{2}})E_{(2,2)}.$$ We will proceed by induction on both $a_1$ and $a_2$. \(i) When $a_1\geq a_2\geq1$, we assume that $$\begin{aligned} &\mu_1E_{(-a_1,-a_2)}-E_{(a_1-a_2,-a_2)} =q^{-\frac{1}{2}a_1a_2}X_{4}^{a_2}X_{1}^{a_1}- q^{-\frac{1}{2}(a_1-a_2)a_2}X_{0}^{a_2}X_{1}^{a_1-a_2}\\ =:&C_2\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(0,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}\end{aligned}$$ and every term $k_1q^{-\frac{1}{2}\alpha}E_{(c,d)}$ ($k_1\in\ZZ\setminus\{0\}$) in $C_2$ satisfies that $c\geq0$, $c\leq a_1$ if $d\geq0$ and $c-d\leq a_1$ if $d<0$. Note that $$\begin{aligned} &\mu_1E_{(-a_1-1,-a_2)}=q^{-\frac{1}{2}a_2}\mu_1E_{(-a_1,-a_2)}X_1 =q^{-\frac{1}{2}a_2}E_{(a_1-a_2,-a_2)}X_1 +q^{-\frac{1}{2}a_2}C_2X_1 \\ =&E_{(a_1+1-a_2,-a_2)}+q^{-\frac{1}{2}a_2}C_2X_1.\end{aligned}$$ By (\[equ4.1.3i\]) and (\[equ4.1.3ii\]), $q^{-\frac{1}{2}(a_2+\alpha)}E_{(c,d)}X_1=q^{-\frac{1}{2}(a_2+\alpha+d)} E_{(c+1,d)}$. Note that $a_2+\alpha+d\geq\alpha>0$ and $c+1>c\geq0$.
Thus $$q^{-\frac{1}{2}a_2}C_2X_1\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(0,-a_2)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})} \subseteq q^{-\frac{1}{2}}\A_+.$$ For $a_1\geq a_2\geq1$, it follows that $$\mu_1E_{(-a_1-1,-a_2)}- E_{(a_1+1-a_2,-a_2)} \in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(0,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})} \subseteq q^{-\frac{1}{2}}\A_+$$ and every term $k_1^{\prime}q^{-\frac{1}{2}\alpha^{\prime}}E_{(c^{\prime},d^{\prime})}=k_1q^{-\frac{1}{2}(a_2+\alpha+d)}E_{(c+1,d)}$ in $\mu_1E_{(-a_1-1,-a_2)}- E_{(a_1+1-a_2,-a_2)}$ satisfies that $c^\prime=c+1>0$, $c^\prime\leq a_1+1$ if $d^\prime=d\geq0$ and $c^\prime-d^\prime\leq a_1+1$ if $d^\prime=d<0$. \(ii) When $a_2>a_1\geq1$, we assume that $$\begin{aligned} &\mu_1E_{(-a_1,-a_2)}-E_{(a_1-a_2,-a_2)} \\ =&q^{-\frac{1}{2}a_1a_2}X_{4}^{a_2}X_{1}^{a_1}- q^{-\frac{1}{2}(a_2-a_1)a_2}X_{3}^{a_2-a_1}X_{0}^{a_2} =:C_3\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a_1-a_2,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}\end{aligned}$$ and every term $k_2q^{-\frac{1}{2}\beta}E_{(e,f)}$ ($k_2\in\ZZ\setminus\{0\}$) in $C_3$ satisfies that $e\leq a_1$ if $f\geq0$ and $e-f\leq a_1$ if $f<0$. Note that $$\begin{aligned} \mu_1E_{(-a_1-1,-a_2)}=q^{-\frac{1}{2}a_2}\mu_1E_{(-a_1,-a_2)}X_1=q^{-\frac{1}{2}a_2}E_{(a_1-a_2,-a_2)}X_1 +q^{-\frac{1}{2}a_2}C_3X_1.\end{aligned}$$ By Lemma \[lem4.1\](3), we obtain that $$q^{-\frac{1}{2}a_2}E_{(a_1-a_2,-a_2)}X_1\in E_{(a_1+1-a_2,-a_2)}+\bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a_1+1-a_2,4-a_2)}q^{-\frac{1}{2}} \ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.$$ Note that $a_2+\beta+f>0$ since $[-f]_+\leq a_2$ and $\beta>0$.
Then we have that $$\begin{aligned} &q^{-\frac{1}{2}(a_2+\beta)}E_{(e,f)}X_1\in q^{-\frac{1}{2}(a_2+\beta+f)}E_{(e+1,f)} +\bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(e+1,f+4)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}\\ \subseteq& \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(e+1,f)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}.\end{aligned}$$ We obtain that $q^{-\frac{1}{2}a_2}C_3X_1\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a_1+1-a_2,-a_2)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}$. Let $k_{2}^{\prime}q^{-\frac{1}{2}\beta^\prime}E_{(e^\prime,f^\prime)}$ ($k_{2}^{\prime}\in\ZZ\setminus\{0\}$) be a term in $\mu_1E_{(-a_1-1,-a_2)}-E_{(a_1-a_2+1,-a_2)}$. When $k_{2}^{\prime}q^{-\frac{1}{2}\beta^\prime}E_{(e^\prime,f^\prime)}$ is a term in $q^{-\frac{1}{2}a_2}E_{(a_1-a_2,-a_2)}X_1-E_{(a_1+1-a_2,-a_2)}$, we obtain that $e^\prime\leq a_1+1$ if $f^\prime\geq0$ and $e^\prime-f^\prime\leq a_1+1$ if $f^\prime<0$, by Lemma \[lemofX3\]. Now we consider the case that $k_{2}^{\prime}q^{-\frac{1}{2}\beta^\prime}E_{(e^\prime,f^\prime)}$ is a term in $k_2q^{-\frac{1}{2}(a_2+\beta)}E_{(e,f)}X_1$, where $k_2q^{-\frac{1}{2}\beta} E_{(e,f)}$ is a term in $C_3$. Note that $a_2+\beta +f\geq\beta>0$ and $$q^{-\frac{1}{2}(a_2+\beta)}E_{(e,f)}X_1=q^{-\frac{1}{2}(a_2+\beta+f)}(q^{\frac{1}{2}f}E_{(e,f)}X_1).$$ When $e\geq0$, we have $(e^{\prime},f^{\prime})=(e+1,f)$ by (\[equ4.1.3i\]) and (\[equ4.1.3ii\]). When $e<0$ and $f\geq0$, we have $(e^{\prime},f^{\prime})=(e+1,f)$ or $(e+1,f+4)$ by (\[equ4.1.3iii\]). When $e<0$ and $f<0$, we obtain that $e^\prime\leq e-f+1\leq a_1+1$ if $f^\prime\geq0$ and $e^\prime-f^\prime\leq e-f+1\leq a_1+1$ if $f^\prime<0$ by Lemma \[lemofX3\].
Hence $$\mu_1E_{(-a_1-1,-a_2)}-E_{(a_1-a_2+1,-a_2)}\in\bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a_1-a_2+1,-a_2)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}\subseteq q^{-\frac{1}{2}}\A_+$$ for $a_2>a_1\geq1$, and every term $k_{2}^{\prime}q^{-\frac{1}{2}\beta^\prime}E_{(e^\prime,f^\prime)}$ in $\mu_1E_{(-a_1-1,-a_2)}-E_{(a_1+1-a_2,-a_2)}$ satisfies that $e^\prime\leq a_1+1$ if $f^\prime\geq0$, $e^\prime-f^\prime\leq a_1+1$ if $f^\prime<0$. \(iii) When $a_1>a_2\geq1$, we assume that $$\begin{aligned} \mu_1E_{(-a_1,-a_2)}-E_{(a_1-a_2,-a_2)}=:C_4\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(0,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}\subseteq q^{-\frac{1}{2}}\A_+\end{aligned}$$ and every term $k_3q^{-\frac{1}{2}\gamma}E_{(g,h)}$ ($k_3\in\ZZ\setminus\{0\}$) in $C_4$ satisfies that $g\geq a_1-a_2$, $g\leq a_1$ if $h\geq0$ and $g-h\leq a_1$ if $h<0$. We observe that $$\mu_1E_{(-a_1,-a_2-1)}=q^{-\frac{1}{2}a_1}X_4\mu_1E_{(-a_1,-a_2)}=q^{-\frac{1}{2}a_1}X_4E_{(a_1-a_2,-a_2)} +q^{-\frac{1}{2}a_1}X_4C_4$$ and $q^{-\frac{1}{2}a_1}X_4E_{(a_1-a_2,-a_2)}\in E_{(a_1-a_2-1,-a_2-1)}+ \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(0,3-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^{\prime},b^{\prime})}$ by Lemma \[lem4.1\](5). Similar to the proof of Case (2)(ii), according to Lemma \[lem4.3+\] and Lemma \[lem4.3\], we can prove that every term $k^{\prime}_{3}q^{-\frac{1}{2}\gamma^{\prime}}E_{(g^{\prime},h^{\prime})}$ in $\mu_1E_{(-a_1,-a_2-1)}-E_{(a_1-a_2-1,-a_2-1)}$ satisfies the conditions that $g^\prime\geq a_1-a_2-1\geq0$, $g^{\prime}\leq a_1$ if $h^{\prime}\geq0$, $g^{\prime}-h^{\prime}\leq a_1$ if $h^{\prime}<0$ and $$\mu_1E_{(-a_1,-a_2-1)}-E_{(a_1-a_2-1,-a_2-1)}\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(0,-a_2-1)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}\subseteq q^{-\frac{1}{2}}\A_+$$ for $a_1>a_2\geq1$.
\(iv) When $a_2\geq a_1\geq1$, we assume that $$\begin{aligned} &\mu_1E_{(-a_1,-a_2)}-E_{(a_1-a_2,-a_2)}=q^{-\frac{1}{2}a_1a_2}X_{4}^{a_2}X_{1}^{a_1}- q^{-\frac{1}{2}(a_2-a_1)a_2}X_{3}^{a_2-a_1}X_{0}^{a_2}\\ =:&C_5\in \bigoplus\limits_{(a^{\prime},b^{\prime})\preceq(a_1-a_2,-a_2)}q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}] E_{(a^{\prime},b^{\prime})}\subseteq q^{-\frac{1}{2}}\A_+\end{aligned}$$ and every term $k_4q^{-\frac{1}{2}\delta}E_{(i,j)}$ ($k_4\in\ZZ\setminus\{0\}$) in $C_5$ satisfies that $i\leq a_1$ if $j\geq0$ and $i-j\leq a_1$ if $j<0$. Then $$\mu_1E_{(-a_1,-a_2-1)}=q^{-\frac{1}{2}a_1}X_4\mu_1E_{(-a_1,-a_2)}=q^{-\frac{1}{2}a_1}X_4E_{(a_1-a_2,-a_2)} +q^{-\frac{1}{2}a_1}X_4C_5$$ and $q^{-\frac{1}{2}a_1}X_4E_{(a_1-a_2,-a_2)}\in E_{(a_1-a_2-1,-a_2-1)} +\bigoplus\limits_{(a^\prime,b^\prime)\preceq(a_1-a_2,-a_2)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^\prime,b^\prime)}$ by Lemma \[lem4.1\](6). Let $k_{4}^{\prime}q^{-\frac{1}{2}\delta^\prime}E_{(i^\prime,j^\prime)}$ ($k_{4}^{\prime}\in\ZZ\setminus\{0\}$) be a term in $\mu_1E_{(-a_1,-a_2-1)}-E_{(a_1-a_2-1,-a_2-1)}$. Similar to the proof of Case (2)(ii), according to Lemma \[lem4.1\](5), (6), Lemma \[lem4.3+\] and Lemma \[lem4.3\], it follows that $$q^{-\frac{1}{2}a_1}X_4C_5\in\bigoplus\limits_{(a^\prime,b^\prime)\preceq(a_1-a_2-1,-a_2-1)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^\prime,b^\prime)}$$ and every term $k_{4}^{\prime}q^{-\frac{1}{2}\delta^\prime}E_{(i^\prime,j^\prime)}$ in $q^{-\frac{1}{2}a_1}X_4C_5$ satisfies that $i^\prime\leq a_1$ if $j^{\prime}\geq0$ and $i^\prime-j^\prime\leq a_1$ if $j^{\prime}<0$.
In summary, we obtain that $$\mu_1E_{(-a_1,-a_2-1)}-E_{(a_1-a_2-1,-a_2-1)}\in\bigoplus\limits_{(a^\prime,b^\prime)\preceq(a_1-a_2-1,-a_2-1)} q^{-\frac{1}{2}}\ZZ[q^{-\frac{1}{2}}]E_{(a^\prime,b^\prime)}\subseteq q^{-\frac{1}{2}}\A_+$$ and every term $k_{4}^{\prime}q^{-\frac{1}{2}\delta^{\prime}}E_{(i^\prime,j^\prime)}$ in $\mu_1E_{(-a_1,-a_2-1)}-E_{(a_1-a_2-1,-a_2-1)}$ satisfies that $i^\prime\leq a_1$ if $j^{\prime}\geq0$ and $i^\prime-j^\prime\leq a_1$ if $j^{\prime}<0$. This completes the proof of Lemma \[lempsi\]. For $n\in\ZZ$, the notation $\alpha(n)$ is defined by $$\begin{aligned} \langle n\rangle\alpha(n)=\left\{ \begin{aligned} (2-n,2(3-n)),&~\rm{if}~n\geq2; \\ (n,2(n-1)),{\hskip 0.5cm}&~\rm{if}~n\leq1. \end{aligned} \right.\end{aligned}$$ Recall that $C_{(a,b)}$ is the element of the canonical triangular basis for any $a,b\in\ZZ$. \[propstandardmonomial\] For $n\in\ZZ$ and $a_1,a_2\in\ZZ_{\geq0}$, we have $$C_{a_1\alpha(n)+a_2\alpha(n+1)}=q^{-\frac{1}{2}a_1a_2}X_{n}^{a_1}X_{n+1}^{a_2}.$$ Through a direct calculation, we obtain that $\sigma_{-2}(\mu_1E_{(a,b)})=E^{\prime}_{(a,b)}$. Recall that $\varphi$ and $\psi$ are bijective. By Lemma \[lempsi\], $\mu_1E_{\psi(a,b)}-E_{(a,b)}\in q^{-\frac{1}{2}}\A_+$. Thus $E^{\prime}_{\psi(a,b)}-\sigma_{-2}(E_{(a,b)})\in q^{-\frac{1}{2}}\A_+$. By Lemma \[lemvarphi\], we obtain that $$\label{equ4.1} E_{\varphi\psi(a,b)}-\sigma_{-2}(E_{(a,b)})\in q^{-\frac{1}{2}}\A_+.$$ By Lemma \[lempsi\], $E^{\prime}_{(a,b)}-\sigma_{-2}(E_{\psi(a,b)})\in q^{-\frac{1}{2}}\A_+$. It follows that $E_{\varphi(a,b)}-\sigma_{-2}(E_{\psi(a,b)})\in q^{-\frac{1}{2}}\A_+$ and $E_{(a,b)}-\sigma_{-2}(E_{\psi\varphi(a,b)})\in q^{-\frac{1}{2}}\A_+$.
Hence $$\label{equ4.2} \sigma_{2}(E_{(a,b)})-E_{\psi\varphi(a,b)}\in q^{-\frac{1}{2}}\A_+.$$ Using (\[equ4.1\]) and (\[equ4.2\]), it follows that $$\label{equ4.3} \sigma_{-2}(C_{(a,b)})=C_{\varphi\psi(a,b)}=C_{(-a-[-b]_+,-4[a+[-b]_+]_+-b)}$$ and $$\label{equ4.4} \sigma_{2}(C_{(a,b)})=C_{\psi\varphi(a,b)}=C_{(-a-[4[-a]_++b]_+,-4[-a]_+-b)}.$$ According to the definition of the triangular basis, it follows that $$\begin{aligned} \label{equtribasis3} \left\{ \begin{aligned} &C_{(a_1,a_2)}=E_{(a_1,a_2)}=q^{-\frac{1}{2}a_1a_2}X_{1}^{a_1}X_{2}^{a_2}; \\ &C_{(a_1,-a_2)}=E_{(a_1,-a_2)}=q^{-\frac{1}{2}a_1a_2}X_{0}^{a_2}X_{1}^{a_1};\\ &C_{(-a_1,a_2)}=E_{(-a_1,a_2)}=q^{-\frac{1}{2}a_1a_2}X_{2}^{a_2}X_{3}^{a_1} \end{aligned} \right.\end{aligned}$$ for $a_1,a_2\in\ZZ_{\geq0}$. From (\[equ4.3\]), (\[equ4.4\]) and (\[equtribasis3\]), it is deduced that $C_{a_1\alpha(n)+a_2\alpha(n+1)}=q^{-\frac{1}{2}a_1a_2}X_{n}^{a_1}X_{n+1}^{a_2}$ for $a_1,a_2\in\ZZ_{\geq0}$ and $n\in\ZZ$. Now we need only consider $C_{(-n,-2n)}$ for positive integers $n$. \[lemsn\] If $n$ is a positive integer, then $$\begin{aligned} \label{equsn} S_{n}(X_\de) =&q^{-n}X_{n+2}^{\langle n\rangle}X_{0}^{2}-q^{-(n+2)}X_{n+1}^{\langle n+1\rangle}X_1 -q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}}) \sum\limits_{k=1}^{\lfloor \frac{n}{2}\rfloor+1}X_{n+3-2k}^{\langle n+1\rangle}\nonumber \\ &-\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^{2}S_{n-2k}(X_\de).\end{aligned}$$ We will prove the lemma by induction on $n$.
When $n=1$, by [@cds Lemma 3.6], we have that $$X_\de=q^{-1}X_3X_{0}^2-q^{-3}X_{2}^2X_1-q^{-2}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{2}^2.$$ When $n$ is even, by the induction hypothesis, $$\begin{aligned} &S_{n+1}(X_\de)=X_\de S_{n}(X_\de)-S_{n-1}(X_\de) \\ =& q^{-n}X_\de X_{n+2}^2X_{0}^2-q^{-(n+2)}X_\de X_{n+1}X_1 -q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}}) \sum\limits_{k=1}^{\frac{n}{2}+1}X_\de X_{n+3-2k}\\ &-\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2X_\de S_{n-2k}(X_\de) -q^{-(n-1)}X_{n+1}X_{0}^{2}+q^{-(n+1)}X_{n}^2X_1\\ &+q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n}{2}}X_{n+2-2k}^{2} +\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2S_{n-1-2k}(X_\de).\end{aligned}$$ A direct calculation shows that $$\begin{aligned} &q^{-n}X_\de X_{n+2}^2X_{0}^2=q^{1-n}X_{n+1}X_{0}^2+q^{-n-1}X_{n+3}X_{0}^2+q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{0}^2,\\ &q^{-(n+2)}X_\de X_{n+1}X_1=q^{-(n+1)}X_{n}^2X_1+q^{-(n+3)}X_{n+2}^2X_1,\\ &X_\de X_{n+3-2k}=qX_{n+2-2k}+q^{-1}X_{n+4-2k}.\end{aligned}$$ Note that $$q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n}{2}+1} X_{n+2-2k}^2= q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n}{2}} X_{n+2-2k}^2 +q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{0}^{2}.$$ Thus $$\begin{aligned} S_{n+1}(X_\de) =&q^{-(n+1)}X_{n+3}X_{0}^2-q^{-(n+3)}X_{n+2}^2X_1 -q^{-(n+2)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n}{2}+1}X_{n+4-2k}^2 \\ &- \sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2S_{n+1-2k}(X_\de).\end{aligned}$$ When $n$ is odd, by the induction hypothesis, $$\begin{aligned} S_{n+1}(X_\de)=&X_\de S_{n}(X_\de)-S_{n-1}(X_\de) \\ =& q^{-n}X_\de X_{n+2}X_{0}^2-q^{-(n+2)}X_\de X_{n+1}^2X_1 -q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}}) \sum\limits_{k=1}^{\frac{n+1}{2}}X_\de X_{n+3-2k}^2\\ &-\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2X_\de S_{n-2k}(X_\de) -q^{-(n-1)}X_{n+1}^2X_{0}^{2}+q^{-(n+1)}X_{n}X_1\\ 
&+q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n+1}{2}}X_{n+2-2k} +\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2S_{n-1-2k}(X_\de).\end{aligned}$$ We have that $$\begin{aligned} &q^{-n}X_\de X_{n+2}X_{0}^2=q^{-(n-1)}X_{n+1}^2X_{0}^{2} +q^{-(n+1)}X_{n+3}^2X_{0}^{2},\\ &q^{-(n+2)}X_\de X_{n+1}^2X_1=q^{-(n+1)}X_nX_1+q^{-(n+3)}X_{n+2}X_1+q^{-(n+2)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_1,\\ &q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n+1}{2}}X_\de X_{n+3-2k}^2 \\ =&q^{-n}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n+1}{2}}X_{n+2-2k} +q^{-(n+2)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n+1}{2}}X_{n+4-2k} +\frac{n+1}{2}q^{-(n+1)} (q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2\end{aligned}$$ and $$\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2\big(X_\de S_{n-2k}(X_\de)-S_{n-1-2k}(X_\de)\big) =\sum\limits_{k=1}^{\frac{n-1}{2}}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2S_{n+1-2k}(X_\de).$$ Hence we obtain that $$\begin{aligned} S_{n+1}(X_\de)=&q^{-(n+1)}X_{n+3}^2X_{0}^2-q^{-(n+3)}X_{n+2}X_1 -q^{-(n+2)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n+3}{2}}X_{n+4-2k}\\ &-\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^2S_{n+1-2k}(X_\de).\end{aligned}$$ The proof is completed. \[lemsn2\] For $n\in\ZZ_{\geq0}$, we have that $C_{(-n,-2n)}=S_n(X_\delta)$. We will prove the statement by induction on $n$. It is trivial for $n=0$. For $n=1$, by Theorem \[theorem2\], we have that $$E_{(-1,-2)}=X_\de+(q^{-\frac{5}{2}}+q^{-\frac{3}{2}})E_{(0,2)}+q^{-4}E_{(1,2)}.$$ Therefore $C_{(-1,-2)}=X_\de$. Assume that $C_{(-r,-2r)}=S_r(X_\de)$ for $0\leq r\leq n$. 
When $n$ is even, we have that $$\begin{aligned} S_n(X_\de)=&q^{-n}X_{n+2}^{2}X^{2}_0-q^{-(n+2)}X_{n+1}X_1 -q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})\sum\limits_{k=1}^{\frac{n}{2}+1}X_{n+3-2k} \\ &-\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^{2}S_{n-2k}(X_\delta).\end{aligned}$$ By Proposition \[propstandardmonomial\], we know that $X_{n+2}^{2}=C_{2\alpha(n+2)}=C_{(-n,2-2n)}$ and $X_{n+1}=C_{\alpha(n+1)}=C_{(1-n,4-2n)}$. By using Lemma \[lem4.1\](1), (3) and the condition (\[equlusztiglem3\]) in Lusztig’s Lemma, we obtain that $$q^{-n}C_{(-n,2-2n)}X_{0}^2\in q^{-n}E_{(-n,2-2n)}X_{0}^2+q^{-\frac{1}{2}}\A_+$$ and $$q^{-(n+2)}C_{(1-n,4-2n)}X_1\in q^{-(n+2)}E_{(1-n,4-2n)}X_{1}+q^{-\frac{1}{2}}\A_+.$$ For $n\geq 2$, $q^{-n}E_{(-n,2-2n)}X_{0}^{2}=q^{-\frac{n}{2}}(q^{-\frac{n}{2}}E_{(-n,2-2n)}X_0)X_0 =q^{-\frac{n}{2}}E_{(-n,1-2n)}X_0=E_{(-n,-2n)}$ and $q^{-(n+2)}E_{(1-n,4-2n)}X_1\in q^{-3}(E_{(2-n,4-2n)} +q^{-\frac{1}{2}}\A_+)\subseteq q^{-\frac{1}{2}}\A_+$ by (\[equ4.1.1iii\]) and Lemma \[lem4.1\](3). Hence we have that $$q^{-n}X_{n+2}^{2}X^{2}_{0}=q^{-n}C_{(-n,2-2n)}X^{2}_0\in E_{(-n,-2n)}+q^{-\frac{1}{2}}\A_+$$ and $$q^{-(n+2)}X_{n+1}X_1=q^{-(n+2)}C_{(1-n,4-2n)}X_1\in q^{-\frac{1}{2}}\A_+.$$ For $1\leq k\leq \frac{n}{2}+1$, it is clear that $$\begin{aligned} &q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})X_{n+3-2k}\\ =&q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})C_{\alpha(n+3-2k)}\in q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})(E_{(2k-n-1,4k-2n)}+q^{-\frac{1}{2}}\A_+)\subseteq q^{-\frac{1}{2}}\A_+.\end{aligned}$$ By using the induction hypothesis, we have that $q^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^{2}S_{n-2k}(X_\de)\in q^{-\frac{1}{2}}\A_+$ for $1\leq k\leq\frac{n}{2}$. 
When $n$ is even, we have that $$S_{n}(X_\de)\in E_{(-n,-2n)}+q^{-\frac{1}{2}}\A_+.$$ When $n$ is odd, we have that $$\begin{aligned} S_n(X_\de)=&q^{-n}X_{n+2}X^{2}_0-q^{-(n+2)}X_{n+1}^{2}X_1-q^{-(n+1)}(q^{-\frac{1}{2}}+q^{\frac{1}{2}}) \sum\limits_{k=1}^{\frac{n+1}{2}}X_{n+3-2k}^2 \\ &-\sum\limits_{k\geq1}kq^{-2k}(q^{-\frac{1}{2}}+q^{\frac{1}{2}})^{2}S_{n-2k}(X_\de),\end{aligned}$$ $X_{n+2}=C_{\alpha(n+2)}=C_{(-n,2-2n)}$, $X_{n+1}^2=C_{2\alpha(n+1)}=C_{(1-n,4-2n)}$ and $X^{2}_{n+3-2k}=C_{2\alpha(n+3-2k)}=C_{(2k-n-1,4k-2n)}$ for $1\leq k\leq \frac{n+1}{2}$. The rest of the proof is quite similar to that given above for the case of $n$ being even and so is omitted. Since $S_n(X_\de)$ is bar-invariant, we are led to the conclusion that $C_{(-n,-2n)}=S_n(X_\de)$ for $n\in\ZZ_{\geq0}$. Proposition \[propstandardmonomial\] and Lemma \[lemsn2\] imply the following theorem. The basis $\mathcal{S}=\{q^{-\frac{1}{2}ab}X^{a}_{m}X^{b}_{m+1}~|~m\in\ZZ,(a,b)\in\ZZ^{2}_{\geq0}\}\cup \{S_{n}(X_\delta)\}$ is the triangular basis in $\A_{q}(1,4)$. Acknowledgments {#acknowledgments .unnumbered} =============== The first draft of the paper was written during X. Chen’s visit at Nankai University from May 19 to June 18, 2017. He thanks Nankai University for the hospitality and for creating an ideal working environment. We appreciate Prof. Fan Qin’s comments on the relation between the bases obtained in our paper and some other existing bases. [99]{} L. Bai, X. Chen, M. Ding and F. Xu. *A quantum analogue of generalized cluster algebras.* Algebr. Represent. Theory (2017), accepted. https://doi.org/10.1007/s10468-017-9743-7 A. Buan, R. Marsh, M. Reineke, I. Reiten and G. Todorov. *Tilting theory and cluster combinatorics.* Adv. Math. **204** (2006), 572–618. A. Berenstein and A. Zelevinsky. *Quantum cluster algebras*. Adv. Math. **195** (2005), no. 2, 405–455. A. Berenstein and A. Zelevinsky. *Triangular bases in quantum cluster algebras.* Int. Math. Res. Not. IMRN 2014, no. 
**6**, 1651–1688. P. Caldero and F. Chapoton. *Cluster algebras as Hall algebras of quiver representations.* Comment. Math. Helv. **81** (2006), 596–616. X. Chen, M. Ding and J. Sheng. *Bar-invariant bases of the quantum cluster algebra of type $A_{2}^{(2)}$.* Czechoslovak Math. J. **61**(136) (2011), no. 4, 1077–1090. P. Caldero and B. Keller. *From triangulated categories to cluster algebras.* Invent. Math. **172** (2008), 169–211. P. Caldero and B. Keller. *From triangulated categories to cluster algebras II.* Ann. Sci. [É]{}cole Norm. Sup. (4) **39** (2006), no. 6, 983–1009. P. Caldero and A. Zelevinsky. *Laurent expansions in cluster algebras via quiver representations.* Mosc. Math. J. **6** (2006), no. 3, 411–429, 587. B. Davison. *Positivity for quantum cluster algebras.* Ann. of Math. (2) **187** (2018), no. 1, 157–219. B. Davison, D. Maulik, J. Schürmann and B. Szendröi. *Purity for graded potentials and quantum cluster positivity.* Compos. Math. **151** (2015), no. 10, 1913–1944. H.  Derksen, J. Weyman and A. Zelevinsky. *Quivers with potentials and their representations II: applications to cluster algebras.* J. Amer. Math. Soc. **23** (2010), no. 3, 749–790. M. Ding and F. Xu. *Bases of the quantum cluster algebra of the Kronecker quiver.* Acta Math. Sin. (Engl. Ser.) **28** (2012), no. 6, 1169–1178. M. Ding and F. Xu. *A quantum analogue of generic bases for affine cluster algebras.* Sci. China Math. **55** (2012), no. 10, 2045–2066. M. Ding, J. Xiao and F. Xu. *Integral bases of cluster algebras and representations of tame quivers.* Algebr. Represent. Theory. **16** (2013), no. 2, 491–525. S. Fomin and A. Zelevinsky. *Cluster algebras. I. Foundations.* J. Amer. Math. Soc. **15** (2002), no. 2, 497–529. S. Fomin and A. Zelevinsky. *Cluster algebras. II. Finite type classification.* Invent. Math. **154** (2003), no. 1, 63–121. A. Hubery. *Acyclic cluster algebras via Ringel-Hall algebras*, preprint. Y. Kimura and F. Qin. 
*Graded quiver varieties, quantum cluster algebras and dual canonical basis.* Adv. Math. **262** (2014), 261–312. K. Lee, L. Li, D. Rupel and A. Zelevinsky. *Greedy bases in rank 2 quantum cluster algebras.* Proc. Natl. Acad. Sci. USA **111** (2014), no. 27, 9712–9716. Y. Palu. *Cluster characters II: a multiplication formula.* Proc. Lond. Math. Soc. (3) **104** (2012), no. 1, 57–78. F. Qin. *Quantum cluster variables via Serre polynomials. With an appendix by Bernhard Keller.* J. Reine Angew. Math. **668** (2012), 149–190. F. Qin. *Triangular bases in quantum cluster algebras and monoidal categorification conjectures.* Duke Math. J. **166** (2017), no. 12, 2337–2442. F. Qin. *Compare triangular bases of acyclic quantum cluster algebras,* arXiv:1606.05604v2 \[math.QA\]. D. Rupel. *On quantum analogue of the Caldero-Chapoton Formula.* Int. Math. Res. Not. IMRN 2011, no. **14**, 3207–3236. P. Sherman and A. Zelevinsky. *Positivity and canonical bases in rank $2$ cluster algebras of finite and affine types.* Mosc. Math. J. **4** (2004), no. 4, 947–974, 982. J. Xiao and F. Xu. *Green’s formula with $\mathbb{C}^{*}$-action and Caldero-Keller’s formula for cluster algebras.* Representation theory of algebraic groups and quantum groups, 313–348, Progr. Math., **284**, Birkhauser/Springer, New York, 2010. [^1]: Liqian Bai was supported by NPU (No. 3102017OQD033), Ming Ding was supported by NSF of China (No. 11771217) and Specialized Research Fund for the Doctoral Program of Higher Education (No. 20130031120004) and Fan Xu was supported by NSF of China (No. 11471177).
--- abstract: 'Symbolic equations are one of the many representations used in physics. Understanding these representations is important for students because they are how students access knowledge in physics. In this paper I build off of the work by Redish and Kuo [@Redish-15], which described the cultural differences between how math is used in math class and physics class, and the work by Fredlund et al [@Fredlund-14], which described the importance of unpacking representations for physics students. I will describe how differences in the goals of numeric and symbolic problem solving lead to different sets of affordances. In particular, the inability to distinguish variables, knowns, and unknowns in symbolic problem solving is a benefit when describing a generalized physical system. I also present evidence that observed errors when trying to solve symbolic problems are due to students acting on inappropriate cues associated with numeric problem solving.' author: - 'Eugene T. Torigoe' title: Unpacking Symbolic Equations in Introductory Physics --- Introduction ============ There has been a great deal of research about the difficulties students encounter when using math in introductory physics. Many physics instructors find that their students are unable to apply the appropriate mathematical tools even when they have passed the required prerequisite math courses. One tool researchers have used to study this difficulty is the math diagnostic exam. The purpose of these exams is to measure the mathematical skills students possess when they begin introductory physics. The results allow the instructor to tailor his or her lessons to the weaknesses of the students. The results of these exams have been shown to correlate with student success in introductory physics. [@Hudson; @Halloun; @Meltzer] These mathematical difficulties are mirrored in the math education research literature. 
Researchers have studied difficulties students encounter when they make the transition from arithmetic to algebra. [@Kieran-90; @Kieran-92; @Filloy; @Goodson-Espy] While arithmetic focuses mainly on numeric computation, algebra subsumes arithmetic and also incorporates symbolic representation. While this research mostly focuses on elementary and high school students taking algebra courses, this work has also been extended to college students, where similar difficulties were found. [@Trigueros-03; @Clement-82; @Cohen-05] There are many college-age students who have difficulties applying basic algebraic concepts. My prior work has focused on measuring the difficulties students encounter while trying to solve symbolic physics questions in the context of an introductory physics class. Studies of introductory physics students found that they performed far better on questions with numbers than on otherwise analogous questions using only symbols. [@Torigoe-06; @Torigoe-11] These difficulties mirror the types of difficulties seen by students first learning algebra and making the transition from arithmetic to algebra. These results bolster the claim by some physics instructors that many students are not sufficiently prepared mathematically for the tasks required in introductory physics. While the evidence of difficulty is compelling, Redish and Kuo [@Redish-15] argue that the inability of students to transfer from math class to physics class is not the appropriate focus. They argue that while the math in math class may be superficially similar to the math used in physics class, it is actually quite different. These two systems originate from culturally different disciplines with different goals. 
This is because we have a different purpose for the math: to model real physical systems. [@Redish-15]* In physics the underlying goal of the mathematical equations is to represent some aspect of physical reality. They argue that the transfer of mathematical skills learned in math classes into physics class is of limited value as a target for instructors or researchers. > *Even if students have learned the relevant mathematical tools in their math courses, they still need to learn a component of physics expertise not present in math class tying those formal mathematical tools to physical meaning. [@Redish-15]* Because math in physics is used as a representational tool reflecting the properties of a physical system, the way mathematical symbols are interpreted is completely different in physics classes than in math classes. Expert representational systems are designed by experts for the community of experts that use them. Such complex representational systems can be opaque to novices. Fredlund et al [@Fredlund-14] call the process by which representations become generalized and obscured the rationalization process. > *The rationalization process has led to a more generalized representation. However, from a student point of view, using such generalized representations is even more problematic since it calls for an in-depth understanding of how these representations relate to the particular situations at hand [@Fredlund-14]* The process of rationalization has two main mechanisms: nominalization and rank shifting. Fredlund borrows both concepts from the linguistics literature. Nominalization acts to transform verbs into nouns, which increases the flexibility of the language used. For example, the transformation of the statement “kinetic energy is conserved" into the “conservation of kinetic energy" leads to a more general and flexible concept of conservation. Rank shifting transforms a more complex unit of language into a less complex one. 
The simplified notation often makes the meaning less accessible to novices because interpreting the meaning of these rank-shifted representations depends on expert-defined implicit cues. Learning physics occurs within the context of the representations defined by physics experts. According to Fredlund et al, in order for students to gain access to this disciplinary knowledge, they must first be able to appropriately use and interpret these various representations, which they call the appreciation of the disciplinary affordances of the representation. > *I define the disciplinary affordances of a given representation as the inherent potential of that representation to provide access to disciplinary knowledge. Thus, it is these disciplinary affordances that enable certain representations to become legitimate within a discipline such as physics. Physics learning then, involves coming to appreciate the disciplinary affordances of representations. [@Fredlund-12]* Fredlund et al argue that a primary goal of the instructor is the unpacking of these expert systems to novice students. This is a difficult process for experts within the discipline because the function of the representations has been internalized and automated. > *In many cases teachers have become so familiar with the disciplinary representations that they use that they no longer “notice” the learning hurdles involved in interpreting the intended meaning of those representations. [@Fredlund-14]* The rationalization process can be demonstrated by the representations used in math classes. Sfard describes historically common processes, similar to nominalization, in which mathematical processes evolve into mathematical structures/objects. [@Sfard] For example, the square root of a negative number was first conceived of as a type of process, but later conceived as an object, namely an imaginary number. As a nominalized object, the concept has much more flexibility and utility. 
Now it can be acted upon by different processes or combined to form other more complex quantities. When students make the transition from arithmetic to algebra the equations undergo a process similar to rank shifting. In arithmetic all equations are computational equations, which cue the computation of a particular quantity within the equation. In algebra the equations may also contain structural objects that can be used in the same ways as numbers would. This is a complex process because the structural interpretation does not replace the process interpretation but instead becomes a dual interpretation that can switch back and forth depending on the context. Gray and Tall [@Gray-94; @Tall-01] have coined the term procept to describe this phenomenon. > *An elementary procept is the amalgam of three components: a process that produces a mathematical object, and a symbol that represents either the process or the object. [@Gray-94]* The procept is a type of rank shifting because instead of having a complex notation that clearly demarcates the process from the conceptual object, the notation is left ambiguous as a way for mathematicians to flexibly move from one interpretation to the other without the burden of separate notations. > *Instead of having to cope consciously with the duality of concept and process, the good mathematician thinks ambiguously about the symbolism for product and process. I contend that the mathematician simplifies matters by replacing the cognitive complexity of process-concept duality by the notational convenience of process-product ambiguity. [@Gray-94]* In this paper I examine the disciplinary affordances of symbolic physics equations, and I contrast that with the disciplinary affordances of equations using numbers. I argue that problems involving numbers form a representational system with different goals than the representational system for problems involving only symbols. 
While a mapping can be created between numeric and symbolic solutions, the notational systems serve different purposes and therefore have developed different representational affordances. I also adopt the theoretical framework of resources. [@Hammer] Under this framework, the context of the activity and the cues that one perceives activate a web of resources that one brings to bear on a particular activity. These resources can include pieces of factual knowledge as well as epistemological resources about the nature of the task. From this perspective a particular error may be due to the absence of a resource or the failure to activate the appropriate resources. The activation of a resource can act as a cue to activate other resources. And so for an expert, the features of a particular problem very likely activate a whole web of resources appropriate to the problem. I will argue that one aspect of difficulty with symbolic equations is the surface similarity to numeric problems. Further, when novice students see symbolic equations, they activate a web of resources appropriate for solving numeric problems; these resources are inappropriate for the task and fail to sensitize them to the cues for the resources appropriate for symbolic problem solving. The goals of numeric and symbolic problem solving ================================================= The evolution of expert representational systems is driven by the affordance of particular representations to serve the goals of the discipline. In this section I will discuss the differences in the goals between numeric and symbolic problem solving, and speculate on how the affordances of the representations have evolved to suit these goals. The use of numbers in physics is a reflection of the importance of experimental evidence within the discipline. This is the connection between quantitative measurements and mathematical models. 
An equation can be used with particular numeric measurements with units to make a computation that can be directly compared to another measurement of the world. This affords the testing of mathematical models against the world, as well as predictions of future measurements based on a mathematical model. The use of purely symbolic equations in physics serves a different but complementary purpose in the discipline. Symbolic equations demonstrate the underlying relationships between quantities within the mathematical model. They allow the expert to have a deeper understanding of the mathematical model, and its connection to an abstract and generalized physical system. Symbolic equations are not constrained to a particular experimental context, but are representative of a generalized physical system, which has embedded within it a range of possible physical systems. While numeric computations obscure the relations among various variables, symbolic equations display these relationships. Even in cases when a particular variable is canceled out of the equation, it is an indicator that it has no relationship to the other independent variables in that context. One consequence of this goal is that symbolic physics equations must allow flexibility in how the symbols in an equation are to be interpreted. The equations must function as a model of a specific physical context, as when they are used in problem solving, but they must also afford a description of the class of similar contexts, a generalized system, in which the symbols are variable. For example, experts examine limiting cases to compare their conceptual expectations of the physical system with the behavior of the mathematical model when it is stretched to its limits. The differences in these goals lead to different affordances for the two mathematical representations. 
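A limiting-case check of this kind can be sketched in a few lines. The snippet below is an illustration of ours (using the `sympy` library; the paper itself contains no code): it solves the standard constant-acceleration model $v_f = v_0 + at$ for the acceleration and then interrogates the result at its limits.

```python
# A sketch (ours, using sympy) of the limiting-case check that only the
# symbolic form affords.  Solve the constant-acceleration model
# v_f = v_0 + a*t for a, then test the result against physical expectations.
import sympy as sp

v0, vf, t = sp.symbols('v_0 v_f t', positive=True)
a = sp.Symbol('a')

a_expr = sp.solve(sp.Eq(vf, v0 + a * t), a)[0]
print(a_expr)                       # the familiar a = (v_f - v_0)/t

# If the final speed equals the initial speed, the acceleration must vanish:
print(a_expr.subs(vf, v0))          # -> 0
# For a fixed speed change, stretching the time interval drives a to zero:
print(sp.limit(a_expr, t, sp.oo))   # -> 0
```

Neither check is possible once numbers have been plugged in; the relationships between the quantities are only visible in the symbolic form.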
In addition, I will argue that experts in the discipline have different sets of resources and resource activation cues when working on different types of problems. Unpacking equations used during numeric problem solving ======================================================= Equations with numbers are embedded in a context of a specific physical situation. The goal is to incorporate one or more numeric measurements into a mathematical model to predict the numeric value of a target quantity, which can be compared to an actual measurement. With this goal in mind the affordances of the representation must clearly distinguish the structure of the mathematical model, the unknown quantities, and the known quantities. The known quantities could be numeric measurements or physical constants. The structure of the mathematical model is represented as a purely symbolic equation. For example, > $v_{f} = v_{o} + a t$ This particular model for motion applies to systems traveling at a constant acceleration. Numeric measurements and constants with units are plugged into the equation as follows: > $20 m/s = 5 m/s + a (3 s)$ In this equation the numbers clearly represent the known quantities, and the target quantity stands out as the remaining non-numeric symbol (excluding units). The target quantity is isolated, and the known quantities can be combined to give a single numeric result. > $a=5 m/s^{2}$ If this is not the final target quantity, then it is at least clear that it can serve as an intermediate target, in which the quantity can be treated as a known quantity to be plugged into other equations. This system of letters and numbers serves the purpose of calculating a value for a quantity from a mathematical model in the context of a particular physical situation. It clearly demarcates the mathematical model from the specific known and unknown quantities. Numeric problem solving is associated with a sensitivity to particular cues. 
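The plug-and-isolate procedure just described can be made concrete in a minimal sketch (ours, using `sympy`; not part of the paper): after the measurements are plugged in, the lone remaining non-numeric symbol is the cue for which quantity to isolate.

```python
# A minimal sketch (ours, in sympy) of numeric problem solving: plug the
# measurements into v_f = v_0 + a*t and isolate the one symbol left over.
import sympy as sp

a = sp.Symbol('a')                  # the single remaining non-numeric symbol
equation = sp.Eq(20, 5 + a * 3)     # 20 m/s = 5 m/s + a (3 s), units dropped
print(sp.solve(equation, a)[0])     # -> 5, i.e. a = 5 m/s^2
```

The visual cue, one symbol among numbers, does the bookkeeping: there is never any doubt about which quantity is the target.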
These cues trigger the activation of a web of resources to perform the task at hand. In particular, in many cases numeric problems involve an equation that contains a single unknown symbol from the mathematical model that hasn’t been replaced by a number. A single unknown is to be isolated in the equation to determine its value. Unpacking equations used during symbolic problem solving ======================================================== The goal of symbolic equations is to represent the relationships between variables, and the properties of a generalized physical system. In the context of problem solving, this goal leads to a conflict between a representation of the specific physical system under examination, and a generalized form of that system. A problem is situated within a specific physical system, with known quantities that stand in for specific measurements, and unknown quantities which serve as target quantities. But to serve as a mathematical model of a generalized version of the system, the same symbols must also represent variables. While the lack of a notation to distinguish a known, unknown, or variable quantity obscures the meaning for novices, that notational ambiguity is an affordance of the representation for the expert who prefers flexibility. This is essentially the same argument by Gray and Tall for the procept idea. For example, in the equation $v_{f} = v_{o} + a t$ there is no way of distinguishing it as a general form of a mathematical model, a mixture of known and unknown quantities relating to the specific physical system in the problem, or a generalized version of the system. When compared to the notational system for numeric problem solving there are clear differences. First, it is not clear which of the symbols should serve as the target unknown. 
The determination of which symbol should be isolated is dependent on the context of the problem, and perhaps the knowledge of which quantities are easily experimentally measured, and which are not. Second, there is no clear demarcation between the general mathematical model and the quantities relating to the physical system. Further complicating the interpretation of the equation, a single equation can contain multiple symbols that are associated with different objects, intervals of space, or intervals of time. This complication occurs when equations associated with different objects or intervals are combined into a single equation. This is a type of confusion in symbolic problem solving that is not present in numeric problem solving, because only numbers are passed from one equation to the next. When solving symbolic problems one must be aware of the associations of each of the symbols being used. This level of information about the relations of the variables to one another allows the expert to make frequent check-ins with the generalized physical system. The expert can engage in various methods of interrogating the mathematical model with their expectations of the generalized physical system. Inappropriate resource activation: how students view symbolic equations through the lens of numeric problem solving. ==================================================================================================================== There is a growing body of evidence that students are able to solve numeric problems, but not the analogous symbolic version of that problem. [@Torigoe-06; @Torigoe-11; @Kortemeyer; @Brahmia] Clearly the format of the question does not influence the students’ knowledge of physics. I argue that the source of the discrepancy is due to the activation of an inappropriate set of resources for solving symbolic problems. 
I have argued in the preceding sections that the goals, cues, and resources appropriate for numeric problem solving are inappropriate for symbolic problem solving. So errors are bound to occur when a student applies the procedures appropriate for numeric problem solving in an attempt to solve a symbolic problem. Both numeric and symbolic problem solving begin with a symbolic equation that is representative of the mathematical model to be applied. In both numeric and symbolic problems the equations are the same. The next step involves the identification of known quantities, which can include numeric measurements or numeric constants. This is difficult to do in the symbolic equation because the notation does not distinguish known, unknown, and variable quantities. The following rules work for numeric problem solving but do not work for symbolic problem solving.

- Symbolic equations are a general mathematical model

- The remaining non-numeric symbol is the target unknown to be isolated

It would make sense, then, to see errors related to the following:

- Treating all symbolic equations as generalized mathematical models

- Treating (non-numeric) symbols representing known quantities as unknown quantities

The cues for resource activation that students rely on to solve numeric problems fail them. After plugging in the numeric values, they cannot rely on the normal visual cue of the remaining non-numeric symbol to identify the unknown. They may not be able to distinguish the general mathematical model from an equation relating specifically to the problem at hand. When solving for a target unknown, the expression will not be easily identified as a known quantity without first identifying its constituent parts as also known quantities. Evidence of confusion with symbolic problems ============================================ Speak-aloud problem solving interviews with thirteen introductory physics students were performed. 
During the interviews each student was first given the symbolic version of a question to solve while speaking aloud about their method to reach the solution. Whether correct or incorrect, the subjects were asked questions to gauge their understanding of the symbols in the problem. If a subject had difficulties with the symbolic version, they were asked to solve the numeric version of the same question. If the subject was able to solve the numeric version, they were then asked to use their numeric solution to find the correct symbolic expression. The students were never told whether they found the correct or incorrect result. The physics questions used in this study were modified versions of the questions used in the earlier final exam study. [@Torigoe-06] The questions used during the interviews can be seen in Figures \[fig:TandH\] and \[fig:Plane\]. The structure of the questions was the same, but with different surface features. During these interviews it was common to observe students who were unable to correctly solve the symbolic problem, but when given the numeric problem immediately afterward, could find their earlier mistakes and could easily find the correct answer. Confusion of known and unknown symbols -------------------------------------- Many students exhibited difficulties distinguishing known and unknown symbols when working on the symbolic versions. During the interviews students often lost track of the known and unknown quantities. For example, while solving the Tortoise and Hare symbolic problem three students started with the equations $d = v_0t$ and $d = \tfrac{1}{2}a t^2$, and incorrectly eliminated the known quantity ($d$) leaving the two unknowns ($a$ and $t$). As a result they ended up with one equation and two unknowns. All three were surprised when their final result was not one of the answer options. The following quote was from one of these students. 
> ***Subject:** \[Starts with equations $d = v_0t$ and $d = \tfrac{1}{2}a t^2$\] um now solving for $a$, I actually solved for, so $v_0t = \tfrac{1}{2}a t^2$, let’s start over, $vt$, $2vt = at^2$ divided $t^2$ \[on one side of the equation\], divided by $t^2$ \[on the other side of the equation\], equals $a$, cross those guys out, $2v_0/t = a$, and that would give you none of the answers given, which stinks!... \[Answer contains the unknowns $a$ and $t$ because he eliminated the known quantity $d$\]*

These types of errors were not observed in the numeric versions because known and unknown quantities were easily identified by the use of a number or a letter, respectively.

Variable confusion
------------------

In order to apply a general mathematical model to a specific system, quantities representing the system must be plugged into the general equation. While this is easy with numbers, it is much more difficult with purely symbolic equations. There is evidence from the interviews that some students never specified the general mathematical model to match the properties of the system described in the problem. Those students seemed to interpret any purely symbolic equation as a general mathematical model. This can be seen from student errors in the Tortoise and the Hare problem (Figure \[fig:TandH\]). In this problem a Tortoise moving with a constant speed passes a Hare at rest a certain distance from the finish line. The instant the Tortoise passes, the Hare accelerates toward the finish line. Given the speed of the Tortoise and the distance from the finish line, the question asks you to find the minimum acceleration needed for the Hare to catch up to the Tortoise before the finish line. The most common error was to use the equation $v_f^2 = v_i^2 + 2a\Delta x$ to get the incorrect result $a = v^2/(2L)$. This is incorrect because the symbol $v$ is given as the velocity of the Tortoise, but is used as if it were the velocity of the Hare when it reaches the finish line.
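A correct treatment keeps track of which symbols are known: eliminating the known time $t = L/v$ (rather than the known distance) between $L = vt$ and $L = \tfrac{1}{2}at^2$ gives $a_{min} = 2v^2/L$. The following sketch is our illustration, not part of the study materials; the function name and sample values are ours:

```python
def min_acceleration(v, L):
    """Eliminate the known time t = L/v taken by the tortoise,
    then solve L = (1/2) a t^2 for the true unknown a."""
    t = L / v            # time for the tortoise (and hence the hare) to reach the line
    return 2 * L / t**2  # algebraically equal to 2 * v**2 / L

# With this a, the hare covers exactly L in the same time as the tortoise.
v, L = 3.0, 12.0
a = min_acceleration(v, L)
t = L / v
assert abs(0.5 * a * t**2 - L) < 1e-12
```

Plugging numbers into the same two equations, as the students did in the numeric version, performs exactly this elimination while keeping each quantity visibly known; the common symbolic error instead reads the $v$ in $v_f^2 = v_i^2 + 2a\Delta x$ as the Hare's final velocity.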
There were five students who were observed to make this error. Those five students were questioned in an attempt to determine whether they had a consistent interpretation of the symbol. Two of the five said that the symbol represented the velocity of the Tortoise, and showed great flexibility in their interpretations of that symbol. In the following exchange, one of those students switched their interpretation of $v$ in less than a minute.

> *\[Student selected the answer $a_{min} = v^2/(2L)$\]\
> **Interviewer:** OK and what does $v$ represent? \[points to the selected answer\]\
> **Subject:** umm, the veloc, the **constant velocity of the tortoise**\
> **Interviewer:** Alright, so let’s say that instead of this question asking for the minimum acceleration, asked you to find ... um... the vel, the final velocity of the hare, do you think you could write down an equation for the final velocity of the hare when it reaches the finish line?\
> **Subject:** umm, I think I would just, probably rearrange this equation \[referring to her final answer of $a_{min} = v^2/(2L)$\]\
> **Interviewer:** OK\
> **Subject:** Because it, blah, the acceleration does not change, I mean its constantly accelerating, but in this scenario it’s still the minimum acceleration, and distance doesn’t change, so I would just rearrange the equation to find the final velocity.\
> **Interviewer:** And that would be the **final velocity of the hare?**\
> **Subject:** **Right***

These two students did not seem to hold any firm connection between the symbol $v$ and any particular property of the physical system. They treated the equation in an overly general way. There is some evidence from the Airliner question (shown in Figure \[fig:Plane\]) that plugging in numbers acted as a cue to specify the variable and apply the symbol with particular associations. In this problem an airliner on a runway accelerates at a constant rate from rest.
The airliner’s final velocity and the time to reach that velocity are given, and the question asks you to find the distance traveled when the airliner reaches half the final velocity. One of the observed errors for this problem was when students used the symbol $t$, which is defined as the time to reach the final velocity, as the time to reach half the final velocity instead. Two interview subjects made this specific error. These two students started by using the general equation $x = x_i + v_i t + \tfrac{1}{2}at^2$. They made the replacements $x_{i} = 0$, $v_{i} = 0$, $x = L$, and $a = v/t$ to get $L = \tfrac{1}{2}(v/t)t^2$. The students simplified this to the equation $L = vt/2$. Unfortunately, the symbol $t$ is used to represent two different times. The $t$ from the general equation should properly represent the time to reach a speed $v/2$, and the $t$ in the acceleration equation should represent the time to reach a velocity $v$. The students erred when they canceled the $t$’s after combining the two equations. Even though they made this error on the symbolic version, both were able to correctly solve the numeric version. The act of plugging in numbers seemed to act as a cue to specify the meaning of the variable they were replacing. Both attempted the same procedure as they had in the symbolic version, but as they were about to plug in a value for the time into the kinematics equation, they realized that it was necessary to solve for the specific time when the jet airliner reached a speed of 40 m/s (half the final speed), and that it was inappropriate to cancel the $t$’s as they had in the symbolic version.

> ***Subject:** Umm, $x= \tfrac{1}{2}a t^2$, and then $a = v/t$ ... so when I plugged in the $x$ equals, uhhh, $a = v/t$ in my equation $x = \tfrac{1}{2}a t^2$, **I crossed out the times, but \[the t in the acceleration equation\] was for when it was 90 \[s\] and \[the t in the general equation\] is when, we don’t know how long it took**.
So maybe I should...figure out... how long it takes for the plane to get to 40 m/s...\[Subject then correctly solves the numeric version\]\
> **Interviewer:** OK so how confident do you feel about that?\
> **Subject:** Umm, I was pretty confident, but I kind of got sidestepped over what time I should use, so I went to the side and solved for it.*

Another aspect of the numeric problem solving that the students seemed to benefit from was the isolation of symbols with different associations. In this example the inclusion of numbers allowed the students to isolate each meaning of the symbol $t$ from the other definition by the use of separate numeric equations.

Discussion
==========

I use Fredlund et al.’s rationalization framework to make the claim that numeric and symbolic notational systems, while similar, have different affordances within the discipline of physics. The goal of the mathematical representations used during numeric problem solving is to relate a mathematical model to a specific physical system, either in the sense of using the mathematical model to make predictions about the physical system, or in the sense of using the physical system to test the applicability of the mathematical model. The goal of the symbolic mathematical representation is to identify the relationship between variables, as well as the relationship of the equation to a generalized version of the physical system being analyzed. As a result of these different goals, the two representations have developed different affordances, resources, and cues for resource activation. It is very common for experts in physics to prefer to solve problems symbolically, even when numbers are given in the problem. One reason this might be the case is that symbolic equations give more information about the mathematical model of the generalized system.
Symbolic equations allow for a broader analysis of the physical situation, and if required, the model of the generalized system can then be specified with the numeric measurements and constants of the specific system being studied. I also use this distinction to explain observed difficulties that students have while attempting to solve symbolic problems. My claim is that the difficulty students have with symbolic problem solving originates from the application of cues and resources for numeric problem solving to symbolic problems, where such a set of resources leads to confusion and error. Students should not be expected to understand the affordances of these representations on the first day they take physics. The use of math in physics is culturally specific to the discipline of physics. Fredlund emphasizes that access to disciplinary knowledge requires that students understand the affordances of the representations used within the discipline. One of the goals of teaching physics is to unpack the representations which have been rationalized over time. One aspect of this unpacking is to help students appreciate the benefits of each representation’s affordances. To do this, a match must exist between the affordances of the representation and the goals of the instructional activity. Numeric problem solving should be used either to make predictions about a physical system or to test the applicability of a mathematical model using measurements of the physical system. Symbolic problems should be used to find the properties of the generalized physical system, including the relationship between variables, and/or to test the model using limiting cases. In activities where there is a mismatch between the affordances of the representation being used and the result of the activity, it seems inevitable that students will be confused about the utility of the representations.
Physics instructors commonly advise introductory physics students to solve all physics problems symbolically, and then plug in the given values as the last step. If the goal of the activity is the determination of a numeric value, then the students will be justifiably confused by the necessity of having solved the problem symbolically. The students will not see the benefit of such a method unless they fully utilize the affordances of the representation. In the case of symbolic problem solving, the students must be able to use the symbolic expression to gain an understanding of the generalized physical system. Prompts to connect the symbolic equations to the generalized physical system may serve to show the benefits of working symbolically. Traditional textbook problems that use the correct numeric result as the metric for success favor numeric problem solving over symbolic problem solving. Based on the evidence that students have much more difficulty with symbolic versions of questions than with the analogous numeric versions, students are likely to experience success when working with numbers, and failure and frustration when working with symbols. Even if the instructor tells them to work symbolically before plugging in numbers, many students will find that they are much more successful plugging in numbers as soon as they can. This is a major barrier to students learning the benefits of symbolic problem solving. One way of approaching the issue is to change the metric of success in problem solving so that the advantages of symbolic problem solving over numeric problem solving are clear.
Some possibilities include:

- Examining the generalized system using multiple sets of possible measured values

- Examining limiting cases

- Interpreting the relationship between variables

- Interpreting the association between the symbolic equation and the physical system

The chain of association between the physical system and the mathematical equation is an affordance of symbolic equations in physics that is important to emphasize to students. Redish and Kuo suggest that instructors focus on the connection of the equations to the physical system, which differentiates the use of math in physics from its use in math class. Specifically, they suggest that instruction should begin with physical intuition, and then connect physical intuition to the mathematical models.

> *It might well be preferable to ‘‘teach physics standing on your head’’ by beginning with the physical meaning and creating a chain of association to the math, both strengthening the students’ skills of ‘‘seeing physical meaning’’ in equations and helping them develop the epistemological stance that equations in physics should be interpreted physically. [@Redish-15]*

Brahmia et al. [@Brahmia-2] describe the blending and translating between mathematical equations and the physical world as “mathematization”. To promote this type of thinking, they employ invention tasks in which the students are encouraged to create their own mathematical descriptions of carefully chosen physical phenomena. In order for students to appreciate symbolic problem solving, they must understand the affordances of the representation. The representations need to be unpacked so that the students can understand how the physical system is represented by the mathematical model. While the status of a symbol as a variable, known, or unknown may appear to be a hindrance to the use of symbolic equations, it affords the flexibility in the equation to describe a more generalized version of the physical system being studied.
It allows for more correspondences between the physical system and the mathematical model than numeric equations do.

Conclusion
==========

I make the claim that numeric and symbolic problem solving serve different goals within the discipline of physics. These disparate goals lead to different representational affordances. As instructors, it is important that we unpack these representations and demonstrate the value of each representation. Of particular importance is the flexibility of symbolic equations to represent a generalized physical system. Such a representation allows one to more easily connect physical intuition to the mathematical model.

H.T. Hudson and D. Liberman. The combined effect of mathematics skills and formal operational reasoning on student performance in the general physics course. American Journal of Physics, 50(12):1117–1119, 1982.

I.A. Halloun and D. Hestenes. The initial knowledge state of college physics students. American Journal of Physics, 53(11):1043–1054, 1985.

D.E. Meltzer. The relationship between mathematics preparation and conceptual learning gains in physics: A possible “hidden variable” in diagnostic pretest scores. American Journal of Physics, 70(12):1259–1268, 2002.

C. Kieran, “Cognitive processes involved in learning school algebra," in *Mathematics and Cognition: A Research Synthesis by the International Group for the Psychology of Mathematics Education*, edited by P. Nesher and K. Kilpatrick (Cambridge University Press, Cambridge, 1990), pp. 96–112.

C. Kieran, “The learning and teaching of school algebra," in *Handbook of Research on Mathematics Learning and Teaching*, edited by D. Grouws (MacMillan, New York, NY, 1992), pp. 390–419.

E. Filloy and T. Rojano. Solving equations: the transition from arithmetic to algebra. For the Learning of Mathematics, 9(2):19–25, 1989.

C. Goodson-Espy. The roles of reification and reflective abstraction in the development of abstract thought: Transitions from arithmetic to algebra.
Educational Studies in Mathematics, 36(3):219–245, 1998.

M. Trigueros and S. Ursini, “First-year Undergraduates’ Difficulties in Working with Different Uses of Variable," in *CBMS Issues in Mathematics Education Volume 12*, edited by A. Seldon, E. Dubinsky, G. Harel, and F. Hitt (American Mathematical Society, Providence, 2003), pp. 1–28.

J. Clement, “Algebra word problem solutions: Thought processes underlying a common misconception,” J. Res. Math. Educ. [**13**]{}(1), 16–30 (1982).

E. Cohen and S. E. Kanim, “Factors influencing the algebra ‘reversal error’,” Am. J. Phys. [**73**]{}(11), 1072–1078 (2005).

E. Torigoe and G. Gladding, “Same to Us, Different to Them: Numeric Computation versus Symbolic Representation," in 2006 Physics Education Research Conference, edited by L. McCullough et al. (AIP Press, NY, 2007), pp. 153–156.

E. Torigoe and G. Gladding, “Connecting Symbolic Difficulty with Failure in Physics," Amer. J. of Phys., [**79(1)**]{}, 133–140 (2011).

Redish, Edward F., and Eric Kuo. “Language of physics, language of math: Disciplinary culture and dynamic epistemology.” Science and Education 24.5-6 (2014): 561–590.

Fredlund, Tobias, John Airey, and Cedric Linder. “Exploring the role of physics representations: an illustrative example from students sharing knowledge about refraction.” European Journal of Physics 33.3 (2012): 657.

Fredlund, Tobias, et al. “Unpacking physics representations: Towards an appreciation of disciplinary affordance.” Physical Review Special Topics - Physics Education Research 10.2 (2014): 020129.

A. Sfard. On the dual nature of mathematical conceptions: Reflections on processes and objects as different sides of the same coin. Educational Studies in Mathematics, 22(1):1–36, 1991.

E.M. Gray and D.O. Tall. Duality, ambiguity, and flexibility: A “proceptual” view of simple arithmetic. Journal for Research in Mathematics Education, 25(2):116–140, 1994.

D. Tall, E. Gray, M. Bin Ali, L. Crowley, P. DeMarois, M. McGowan, D. Pitta, M. Pinto, M.
Thomas, and Y. Yusof. Symbols and the bifurcation between procedural and conceptual thinking. Canadian Journal of Science, Mathematics and Technology Education, 1(1):82–104, 2001.

D. Hammer, A. Elby, R. Scherr, and E. Redish. Resources, framing and transfer. In J. Mestre, editor, Transfer of Learning from a Modern Multidisciplinary Perspective, pages 89–119. Information Age Publishing, Greenwich, CT, 2005.

G. Kortemeyer. “The Losing Battle Against Plug-and-Chug.” The Physics Teacher 54.1 (2016): 14–17.

S. White Brahmia, A. Boudreax, S.E. Kanim. “Obstacles to Mathematization in Introductory Physics.” arXiv preprint arXiv:1601.01235 \[physics.ed-ph\].

S. White Brahmia, A. Boudreax, S.E. Kanim. “Developing Mathematization with Physics Invention Tasks.” arXiv preprint arXiv:1602.02033 \[physics.ed-ph\].
Quantum field theory with and without conical singularities: Black holes with cosmological constant and the multihorizon scenario

Feng-Li Lin${}^a$

Department of Physics, University of Utah, Salt Lake City, UT 84112-0830, U.S.A.

and

Chopin Soo${}^b$

National Center for Theoretical Sciences, P. O. Box 2-131, Hsinchu, Taiwan 300, Taiwan.

PACS number(s): 4.70.Dy, 4.62.+v, 4.70.-s, 4.20.Gz

Published in Class. Quantum Grav. [**16**]{}, 551-562 (1999)

Boundary conditions and the corresponding states of a quantum field theory depend on how the horizons are taken into account. There is ambiguity as to which method is appropriate because different ways of incorporating the horizons lead to different results. We propose that a natural way of including the horizons is to first consider the Kruskal extension and then define the quantum field theory on the Euclidean section. Boundary conditions emerge naturally as consistency conditions of the Kruskal extension. We carry out the proposal for the explicit case of the Schwarzschild-de Sitter manifold with two horizons. The required period $\beta$ satisfies the interesting condition that it is the lowest common multiple of $2\pi$ divided by the surface gravity of each of the horizons. Restricting the ratio of the surface gravities of the horizons to rational numbers yields finite $\beta$. The example also highlights some of the difficulties of the off-shell approach with conical singularities in the multihorizon scenario; and serves to illustrate the much richer interplay that can occur among horizons, quantum field theory and topology when the cosmological constant is not neglected in black hole processes.

Electronic addresses: ${}^a$ linfl@mail.physics.utah.edu; ${}^b$ cpsoo@phys.nthu.edu.tw

I. Introduction. {#i.-introduction.
.unnumbered} ================

The problem of how quantum field theory with a Schwarzschild-de Sitter (S-dS) base manifold[@de; @Sitter] is defined is interesting from many different angles. Recent high-redshift Type Ia supernovae observations strongly support the presence of a positive cosmological constant [@Perlmutter]. In black hole processes, it is physically relevant to take into account the effects of the cosmological constant, $\lambda$. It inevitably arises as the coefficient of a counterterm for quantized matter fields in background spacetimes. There are various indications that the inclusion of the cosmological constant may affect even the qualitative features of black hole processes. Topologically, the Euclidean S-dS manifold with two conical singularities has Euler number $\chi= 4$ and is not deformable to the pure Schwarzschild solution, which has $\chi =2$, by tuning the cosmological constant. Naive thermodynamic arguments suggest that the pure black hole configuration cannot be obtained as the smooth thermodynamic limit $\lambda \rightarrow 0$ of the S-dS configuration with two horizons, since the size of the outer cosmological horizon becomes infinitely large in that limit and should contribute infinite entropy as the cosmological constant goes to zero, whereas the pure black hole lacks the outer horizon altogether. In recent years, methods have been developed to regularize the contributions of conical manifolds [@Dowker; @Miele]. They allow, for instance, the discussion of the thermodynamics of black holes in the off-shell approach, in which the Hawking temperature for the Schwarzschild black hole is derived from the thermal equilibrium condition given by the extremum of the Euclidean Einstein-Hilbert action [@Fursaev]. The off-shell approach also makes it feasible to decouple the inverse of the temperature, $\beta$, from the Hamiltonian, which depends on the mass of the black hole, by lifting the on-shell restriction $\beta = 8\pi m$.
If conical singularities are allowed, we can consider more complicated scenarios to test if the formalism leads to difficulties which are not encountered in the case of the pure black hole. S-dS with two horizons appears to be a particularly relevant example. In the region between but not including the horizons of S-dS, a single global coordinate patch exists. [*The question is how one takes into account the horizons.*]{} In the method with conical singularities, the horizons of S-dS are to be included as conical singularities. However, an important difference for this multihorizon situation is that there is no straightforward way to define the usual on-shell thermal equilibrium temperature as in the case of the pure black hole[@Hawking] because it is impossible to simultaneously eliminate the conical defects on both horizons by a single choice of periodicity within the formalism with conical singularities. From this perspective, the strategy of allowing for conical singularities therefore seems rather pertinent and also needed for a multihorizon scenario such as the S-dS since it permits adopting a [*single*]{} off-shell periodicity which does not need to coincide with either of the values required to remove conical defects at the horizons. It also seems to imply that an off-shell discussion of thermodynamics is possible despite the apparent unequal intrinsic periodicities of the horizons. However, as we shall see, this is not the only way to resolve the impasse. Actually, the method does not seem to give the correct results even for the pure de Sitter case. There are also questions with regard to the consistency, or at least ambiguity, of quantum field theories defined on manifolds with conical singularities. 
In the one-loop effective action in background spacetimes, there are terms involving the square of the curvatures which are divergent and cannot be removed in the off-shell approach with conical singularities, precisely because in this formulation no single choice of $\beta$ can simultaneously get rid of both conical defects at the horizons. For the pure black hole, one can take the on-shell limit after the computations are done to eliminate the unwanted terms[@Fursaev]. In Ref.[@Hawking], it is suggested that we can consider partitioning the volume into two regions which are in equilibrium with the respective inner and outer horizons. We are then able to do thermodynamics without conical singularities. However, the partition is by no means natural. Moreover, it is very much unlike a patching condition in that the physics depends on how the partition is chosen. Half of the total volume at each natural temperature of the horizons is clearly different from one-third of the volume at one temperature and two-thirds of it at the other. On the other hand, we may even argue that the physical situation of S-dS may correspond more closely to a situation with a temperature gradient and even non-equilibrium physics, since the natural surface temperatures of the horizons are different. It is interesting to note that either extreme can have dramatic implications for black hole processes. If conical singularities are allowed, they may be potentially significant, both as remnants of black hole evaporation and seeds for black hole condensation in a de Sitter universe with conical singularities, and can actually serve to preserve the information of the topological Euler number during these processes. It may be possible for a black hole of the S-dS type to achieve the zero mass limit with two conical singularities and a remaining outer horizon, and still maintain the $\chi = 4$ condition.
Moreover, the remaining outer horizon could be larger than the sum of the initial black hole and cosmological horizons. This could be consistent with information loss without violating topological conservation laws. On the other hand, if conical singularities are to be excluded, then the mere introduction of the cosmological constant, which can also be induced from quantized matter, could lead to non-equilibrium processes with deviations from the blackbody spectrum and its implications for the information loss paradox, due to the presence of two horizons with unequal surface gravity. But neither of these simple extreme scenarios may be entirely correct. The issue of how to define, say, quantum field theory on such a background with multiple horizons has yet to be settled. The imposed boundary conditions and corresponding states of the quantum field theory depend on how the horizons are accounted for. So it is pertinent to ask if there are [*natural*]{} ways to incorporate the horizons. In this paper, we compare the scenarios with and without conical singularities and illustrate some of the difficulties that are present in the former. We return to the Kruskal extension of the pure black hole solution and observe that there is a generalization for S-dS which will [*naturally*]{} incorporate the horizons. The Euclidean quantum field theory is then defined without conical singularities but with patching and consistency conditions which determine the feasible states. When applied to the pure black hole and $S^4$ de Sitter configurations, the proposal yields the correct Gibbons-Hawking temperatures.

II. Conical singularities and QFT in S-dS spacetime. {#ii.-conical-singularities-and-qft-in-s-ds-spacetime. .unnumbered}
====================================================

In finite temperature quantum field theories in flat spacetime, temperature dependence of the effective action is introduced through radiative loop corrections and resummation [@Kapusta].
However, in curved spacetimes there is temperature dependence in the action even at tree level through the periodicity of the Euclideanized metric. In the formulation with conical singularities, [*the horizons are accounted for as conical singularities*]{}[@Fursaev]. The Euclidean Einstein-Hilbert action which includes contributions from conical singularities is $$I_g =-{1\over 16\pi}\int_{M/\Sigma} d^4 x {\sqrt g}(R-2\lambda) -{1\over 16\pi}\int_{\Sigma} d^4x {\sqrt g}R -{1\over 8\pi}\int_{\partial M} d^3x {\sqrt h}(K-K_0).$$ Here, $\Sigma$ denotes the singular set of the horizons due to conical defects, and $K$ is the second fundamental form. The partition function and effective action are defined through the path-integral with $$e^{-I_{eff}(\beta)}=Z(\beta)=\int [{\cal D}g][{\cal D}\phi]e^{-I_g-I_m} .$$ It is assumed that the period of the Euclidean time variable (which does not need to coincide with either of the values required to remove conical defects at the horizons) is $\beta \equiv 1/T$. $I_m$ is the matter action for $\phi$ while $I_g$ is the gravitational action. Thermodynamic information may be extracted from the partition function. For example, Fursaev et al. [@Fursaev] derived the Hawking temperature and Bekenstein-Hawking entropy for the pure Schwarzschild black hole through this formulation from the extremum of the effective action. We are interested in the case of a black hole with a positive cosmological constant, i.e. the S-dS configuration, and the Euclidean region between the two horizons. There are two horizons, with the larger cosmological horizon at $r_+$ and the inner black hole horizon at $r_-$, if we impose the restriction $9m^2 \lambda < 1$ [@de; @Sitter; @Hawking]. The region between the inner black hole and outer cosmological horizons also serves as a natural volume for thermodynamic considerations. For the Euclidean S-dS configuration, there are no boundary terms in Eq.(1).
This is in contradistinction with the pure Schwarzschild case where the boundary term at infinity contributes to the Arnowitt-Deser-Misner mass of the black hole. The Euclidean section of interest has a conical singularity at each horizon and their contributions to the Einstein-Hilbert action are taken into account by the second term in Eq.(1). Specifically, the Euclidean S-dS metric is $$\begin{aligned} ds_{E}^2 = h(r) d\tau^2 + {dr^2 \over h(r)} + r^2 d\Omega^2 , \\ h(r)=1- {2m\over r} - {\lambda r^2 \over 3}. \end{aligned}$$ Here $\tau$ is the periodic coordinate with periodicity equal to $\beta$. As stated earlier, there are conical singularities on the horizons when $T$ is not equal to the individual Gibbons-Hawking temperatures associated with the horizons. To reveal the conical singularities on the horizons, we may choose the local coordinate patches and change variables through $$h(r) = k^2 X^2.$$ The metric becomes $$ds^2_{E} = X^2 d\left({k\tau}\right)^2 + {dX^2 \over ({ h^{'} \over {2k}})^2} + r^2 d\Omega^2.$$ $h^{'}$ denotes the first derivative of $h(r)$ with respect to $r$. Near the horizons at $r_{\pm}$, the topology reduces to ${\cal C}^2 \times \ {\cal S}^2$ if we set $$k_\pm = \frac{1}{2}\left| {h^{'}(r_\pm)} \right|.$$ $r_{\pm}$ are the solutions of $h(r_\pm)= 0$ and are related to $m$ and $\lambda$ through $$r_+ ={\sqrt \frac{4}{\lambda}}\cos(\frac{\xi +4\pi}{3}), \qquad r_- ={\sqrt \frac{4}{\lambda}}\cos(\frac{\xi}{3}), \qquad \cos(\xi)= -3m{\sqrt \lambda} ,$$ with $\xi$ in the range $(\pi,\frac{3\pi}{2}]$ and $0 \leq 9m^2\lambda < 1$ [@de; @Sitter]. Note that $k_\pm$ are the values of the surface gravity[^1] on the horizons, and the Gibbons-Hawking temperatures $T_\pm$ associated with the respective horizons are $$T_\pm={k_\pm \over 2\pi}\label{local temp}.$$ It is thus clear for the manifold defined this way that there are conical defects if $T$ is not equal to $T_\pm$.
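As a numerical sanity check on Eq.(8), the horizon radii and the associated Gibbons-Hawking temperatures can be evaluated directly. This is an illustrative sketch; the function names and the sample values $m=0.5$, $\lambda=0.1$ are ours, not from the paper:

```python
import math

def sds_horizons(m, lam):
    """Horizon radii of Euclidean S-dS from Eq.(8), assuming 0 <= 9 m^2 lam < 1:
    h(r) = 1 - 2m/r - lam r^2/3 vanishes at r_- (black hole) and r_+ (cosmological)."""
    assert 0 <= 9 * m**2 * lam < 1
    # cos(xi) = -3 m sqrt(lam), with xi taken in the branch (pi, 3*pi/2]
    xi = 2 * math.pi - math.acos(-3 * m * math.sqrt(lam))
    r_plus = math.sqrt(4 / lam) * math.cos((xi + 4 * math.pi) / 3)
    r_minus = math.sqrt(4 / lam) * math.cos(xi / 3)
    return r_minus, r_plus

def gibbons_hawking_temps(m, lam):
    """T_pm = k_pm/(2 pi), with surface gravity k_pm = |h'(r_pm)|/2."""
    r_minus, r_plus = sds_horizons(m, lam)
    h_prime = lambda r: 2 * m / r**2 - 2 * lam * r / 3
    return (abs(h_prime(r_minus)) / (4 * math.pi),
            abs(h_prime(r_plus)) / (4 * math.pi))

# Illustrative parameters (ours): 9 m^2 lam = 0.225 < 1.
m, lam = 0.5, 0.1
r_minus, r_plus = sds_horizons(m, lam)
h = lambda r: 1 - 2 * m / r - lam * r**2 / 3
assert abs(h(r_minus)) < 1e-9 and abs(h(r_plus)) < 1e-9  # both roots of h
T_minus, T_plus = gibbons_hawking_temps(m, lam)
assert T_minus > T_plus > 0  # unequal temperatures whenever 9 m^2 lam < 1
```

The last assertion makes the obstruction concrete: away from the degenerate limit $9m^2\lambda = 1$ the two temperatures differ, so no single choice of $T$ equals both $T_\pm$.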
Following Ref.[@Fursaev], the contributions of the conical singularities to the action are $$\begin{aligned} -{1\over 16\pi}\int_{\Sigma} d^4x {\sqrt g}R &=& -{1\over 4}(1-{T_- \over T})A_- +{1\over 4} (1-{T_+ \over T})A_+. \end{aligned}$$ The relative sign difference in the above expression reflects the opposite orientations of the normals at the horizons with respect to $dr$. The conical contributions vanish when the areas of the corresponding horizons $A_{\pm}$ coincide and $T_- = T_+$. The conical contributions are obviously nontrivial otherwise. We also need the contribution from the non-singular set $M/\Sigma$ to compute the total contribution to $I_g$ in Eq.(1). By integrating $r$ from $r_-$ to $r_+$ and $\tau$ from $0$ to $\beta$, the contribution to the Einstein-Hilbert action from the non-singular set is $$\begin{aligned} -{1\over {16\pi}} \int_{M/{\Sigma}} d^4x \sqrt{g}(R-2\lambda)& =& -{\lambda \over {8\pi}} \int_{M/\Sigma} d^4 x \sqrt{g}\cr \nonumber\\ &=&-{{\lambda\beta} \over 6}(r^3_+- r^3_-).\end{aligned}$$ Summing the contributions of Eqs.(10) and (11), we have the tree-level action (with $I_m \equiv 0$) with temperature dependence as $$I_g=-\beta(\lambda r_+^3/3 - m)-\pi (r_-^2 - r_+^2).$$ We may consider extremizing and doing thermodynamics with this action, but it does not seem to yield sensible results in the off-shell approach. For instance, the entropy in this approach with conical singularities is $$S_{Con. sing.}= \beta {\partial I_g \over {\partial \beta}}- I_g = \pi (r^2_- - r^2_+).$$ At this point, it is appropriate to spell out a few subtleties and difficulties associated with the formulation with conical singularities. First of all, there is a subtlety with regard to the correct sign of the action. When the entropy is evaluated using this approach for the pure de Sitter configuration[^2], the result is [*negative*]{}. Explicitly, $$S_{Con.
sing.}= \beta {\partial I_g \over {\partial \beta}}- I_g = -\pi r^2_+.$$ The absolute value is the correct Gibbons-Hawking entropy for the pure de Sitter manifold with cosmological horizon at $r_+$. This is in contradistinction with the pure Schwarzschild case where the action as in Eq.(1) gives the correct sign and magnitude of the Bekenstein-Hawking entropy and also the correct positive energy equal to the Arnowitt-Deser-Misner mass $m$[@Fursaev]. We emphasize that [*on-shell*]{} calculations for the de Sitter solution with $\beta = 2\pi r_+$ give the correct positive result for the entropy because $I_g$ in Eq. (1) [*without conical singularities*]{} leads to $I_g = -S = -\pi r^2_+$[@Euclidean]. We may try to choose the action to be the negative of that in Eq.(1), but that convention will lead to problems with the pure Schwarzschild case. The entropy calculated from the method with conical singularities therefore may or may not coincide with the on-shell value even in situations where there is but a single horizon. Moreover, it can even lead to non-positive values of $S_{Con. sing.}$. Secondly, there are difficulties associated with the formulation with conical singularities if we were to apply it to the QFT of matter fields in curved spacetimes with more than one horizon. Physically, it is important to include matter but from Eq.(2), on integrating out the quantum field $\phi$ in a fixed background metric, the effective action is naively expected to be [@Birrell] $$\begin{aligned} I_{eff}[g_{\mu\nu}, \beta,\lambda]&=& \int_M d^4x {\sqrt g}\{{-1 \over 16\pi G_{ren}} (R- 2\lambda_{ren})\cr \nonumber\\ &+&c_1 R^2+c_2 R^{\mu \nu} R_{\mu \nu}+c_3 R^{\mu \nu \alpha \beta} R_{\mu \nu \alpha \beta}\} + {\rm finite \,\, terms}.\end{aligned}$$ This is for smooth manifolds. Curvature-squared quantum corrections also contribute to the conformal anomaly, which is related to the Hawking radiation, and are thus physically relevant to the thermal feature of spacetime [@Christensen].
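As a numerical sketch (ours, not the original's), one can confirm that the sum of the conical contribution, Eq.(10), and the bulk contribution, Eq.(11), reproduces the tree-level action of Eq.(12) for an arbitrary off-shell $\beta$, and that the entropy of Eq.(13) then follows; the parameter values are arbitrary sample choices:

```python
import math

m, lam, beta = 1.0, 0.01, 100.0          # sample values; beta is off-shell

# horizon radii from the trigonometric root formulas quoted earlier
xi = 2*math.pi - math.acos(-3*m*math.sqrt(lam))
rp = 2/math.sqrt(lam)*math.cos((xi + 4*math.pi)/3)
rm = 2/math.sqrt(lam)*math.cos(xi/3)

# surface gravities k = |h'|/2 and Gibbons-Hawking temperatures
kp = abs(2*m/rp**2 - 2*lam*rp/3)/2
km = abs(2*m/rm**2 - 2*lam*rm/3)/2
Tp, Tm, T = kp/(2*math.pi), km/(2*math.pi), 1/beta

Ap, Am = 4*math.pi*rp**2, 4*math.pi*rm**2
conical = -(1 - Tm/T)*Am/4 + (1 - Tp/T)*Ap/4      # Eq.(10)
bulk = -lam*beta/6*(rp**3 - rm**3)                # Eq.(11)
I_g = -beta*(lam*rp**3/3 - m) - math.pi*(rm**2 - rp**2)   # Eq.(12)
assert abs(conical + bulk - I_g) < 1e-6

# entropy of Eq.(13): r_pm do not depend on beta, so dI_g/dbeta is explicit
S = beta*(-(lam*rp**3/3 - m)) - I_g
assert abs(S - math.pi*(rm**2 - rp**2)) < 1e-6
```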
However, when conical singularities are present, they result in Dirac $\delta$ singularities in the curvatures[@Fursaev]. Components of the curvature tensor have to be defined as distributions and integral characteristics of quadratic and higher powers of the curvature do not have strict meaning. We may assume all the higher-order renormalized coefficients of curvature-squared terms vanish identically to bypass this difficulty i.e. to assume that the only effect of quantized matter is just to renormalize the gravitational constant $G$ (which has been set to unity in our convention) and the cosmological constant $\lambda$. However this is due more to expediency than to compelling physical arguments. As was pointed out in Ref.[@Miele], the trace of the heat kernel operator turns out to be well-defined, and we may compute the QFT contributions from the asymptotic expansion of the trace of the heat kernel operator $$Tr(K) = Tr(\exp(-s\triangle)) = {1\over {4\pi s}^2} (a_0 + s a_1 + s^2 a_2 +...).$$ However it is still necessary to [*assume*]{} potential terms in the Laplacian operator are defined only on $M/\Sigma$ and do not include singular terms. In general, the conical contributions to the coefficients $a_i$ do not vanish. Moreover, for spin 3/2 and spin 2 fields, even when the on-shell value of $\beta$ is taken afterwards, the trace of the heat kernel differs from the trace on smooth manifolds. In the case of the pure black hole where there is but a single horizon, “renormalizations"[^3] can be done and the contributions of curvature-squared terms at equilibrium temperature (at which the conical defect disappears) can be taken into account [@Fursaev]. The crucial difference is that this cannot be done for the relevant multihorizon scenario here because it is impossible to [*simultaneously*]{} eliminate conical defects at both horizons. 
While these hurdles do not conclusively show that there is no sensible way to define QFT in S-dS spacetime via the conical method, it is nevertheless true that different ways of accounting for the horizons lead to different results. There is thus ambiguity as to which of the methods is “appropriate". Some of the methods are covered in a review of on-shell vs. off-shell computations[@Offshell]. The discussion includes the “brick-wall" method with Dirichlet boundary conditions and a cut-off distance from the horizon, the “blunt cone" method where the conical singularities are smoothed away by a deformation parameter, the “volume cut-off" formalism, the method with conical singularities, and on-shell computations for the pure black hole. Actually, the very meaning of “on-shell" for the case of Schwarzschild-de Sitter with two horizons is problematic from the perspective of the conical method, and is so far undefined. This will be pursued in the next section. III. Kruskal extension of the S-dS spacetime. {#iii.-kruskal-extension-of-the-s-ds-spacetime. .unnumbered} ============================================= We emphasize that different ways of accounting for the horizons lead to different results for the effective action. We propose that a more natural way to account for the horizons is to consider the Kruskal extension of the manifold and then define QFT on the Euclidean section. We draw a lesson first from the Kruskal extension of the pure Schwarzschild solution and see why it leads to $\beta = 8\pi m$ naturally. \(a) The pure black hole metric is $$ds^2 = -h(r)dt^2 + h^{-1}(r) dr^2 + r^2 d\Omega^2$$ with $h(r) = {{(r - 2m)} \over r}$.
By defining $u = t- r^* $ and $v= t+ r^*$, with $$\begin{aligned} r^* &=& \int {r\over (r-2m)} dr \cr \nonumber\\ &=& r + 2m\ln (r-2m),\end{aligned}$$ the metric can be transformed to $$ds^2 = -h(r) du dv + r^2 d\Omega^2.$$ The Kruskal extension can be done with coordinates $$u' = -e^{-ku}, \qquad v' = e^{kv}$$ where $k = {1\over 2} {dh/dr}|_{r=2m} = \frac{1}{4m}$ is the surface gravity at the horizon. In terms of Kruskal coordinates, the metric becomes $$\begin{aligned} ds^2&=& -h(r){du \over du'}{ dv \over dv'} du' dv' + r^2 d\Omega^2 \cr \nonumber\\ &=& -{1\over k^2 r}e^{-2kr} du' dv' + r^2 d\Omega^2.\end{aligned}$$ By rewriting $$t' = (u' + v')/ 2 \qquad r' = (v' - u') /2,$$ we have $$ds^2 ={1\over {k^2r}}e^{-2kr}({dr'}^2 -{dt'}^2) + r^2 d\Omega^2$$ and $ {r'}^2 - {t'}^2 = -u' v' = e^{2kr}(r - 2m) $. So the Euclidean section for which the metric is positive-definite can be defined by a Wick rotation of $t$ which makes $t'$ pure imaginary; and hence for $r \geq 2m$ only. Moreover, $\tau = it $ has period $2\pi/ k$ since $$t'= (u' + v')/2= e^{k r^*}\sinh(-ik\tau),$$ and $(u', v')$ defines $\tau$ up to multiples of $2\pi/ k = 8\pi m$. This periodicity for $\tau$ is precisely the condition for the manifold to be free of conical singularity, but [*it emerges naturally as a consistency condition of the Euclidean section of the Kruskal extension.*]{} In the Kruskal extension, the form of the metric is non-singular at the horizon. Euclidean QFT can be constructed for the Kruskal extension with the on-shell restriction of $\beta = 8\pi m$. Note that if we neglect the spherically symmetric $d\Omega^2$-part, only one coordinate patch is required for the Kruskal extension because the topology is $R^2$, and so the topology of the four-manifold is $R^2 \times S^2$. The Euclidean section has Euler number $\chi =2$.
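The algebra behind ${r'}^2 - {t'}^2 = e^{2kr}(r - 2m)$ is easy to confirm numerically; the following sketch (our illustration, with arbitrary sample points $r > 2m$) builds the Kruskal map step by step:

```python
import math

m = 1.0
k = 1/(4*m)                    # surface gravity of the horizon at r = 2m

def kruskal(t, r):
    """Map (t, r) with r > 2m to the Kruskal coordinates (t', r')."""
    rstar = r + 2*m*math.log(r - 2*m)      # tortoise coordinate
    u, v = t - rstar, t + rstar
    up, vp = -math.exp(-k*u), math.exp(k*v)
    return (up + vp)/2, (vp - up)/2        # (t', r')

for t, r in [(0.0, 3.0), (1.5, 2.5), (-2.0, 5.0)]:
    tp, rp = kruskal(t, r)
    lhs = rp**2 - tp**2                    # = -u'v'
    rhs = math.exp(2*k*r)*(r - 2*m)
    assert abs(lhs - rhs) < 1e-8*abs(rhs)
```

Since $4mk = 1$, one has $e^{2kr^*} = e^{2kr}(r-2m)$ exactly, which is what the loop checks.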
For the Schwarzschild-de Sitter metric, the Kruskal extension needs more than one coordinate patch even if we neglect the spherically symmetric part of the metric because there are now two horizons with unequal surface gravity. However it is clear that the Euclidean section of the Kruskal extension of the black hole [*includes the horizon*]{} and also yields [*consistency conditions*]{} on the periodicity of $\tau$. We shall therefore use the Kruskal extension to consistently incorporate both horizons of the Schwarzschild-de Sitter manifold in the Euclideanization, and also to deduce the consistency conditions that are required. In this manner, boundary conditions for quantized matter fields in the Schwarzschild-de Sitter background will emerge naturally from the Euclidean quantum field theory. Moreover, as we shall see, the conditions that arise are rather interesting. (b) The Schwarzschild-de Sitter metric has the form $$ds^2 = -h(r)dt^2 + h^{-1}(r)dr^2 + r^2 (d\theta^2 + \sin^2\theta d\phi^2 ),$$ with $$h(r) = {\lambda \over {3r}}(r_+ - r) (r- r_-)(r+ r_+ + r_-).$$ It is a solution of Einstein’s equations with cosmological constant $\lambda$ if $$3/\lambda = r^2_+ + r_+r_- + r^2_- ,\qquad 6m/\lambda = r_+r_-(r_+ + r_-).$$ The metric is a priori defined for the region between the horizons but there can be a Kruskal extension.[^4] The surface gravities at the horizons are given by $$k_{\pm} = {1\over2 }\left|dh(r)/dr\right|_{r = r_\pm},$$ or $$k_{\pm} = {\lambda\over {6 r_{\pm}}} (r_+ - r_-)(2r_{\pm} + r_{\mp}).$$ The horizons have different values of surface gravity and their ratio satisfies $ 0 < k_+/k_- \equiv \alpha \leq 1$. Similarly, we define $$u \equiv t - r^* \qquad v \equiv t + r^*$$ with $$\begin{aligned} r^*&=& \int h^{-1}(r) dr \cr \nonumber\\ &=& {1\over {2k_-}}\ln(r- r_-) -{1\over {2k_+}}\ln(r_+ - r) + ({1\over {2k_+}} - {1\over {2k_-}})\ln (r + r_+ + r_-).
\nonumber\\\end{aligned}$$ In terms of these coordinates, $$ds^2 = -h(r(u, v))dudv + r^2(u, v) (d\theta^2 + \sin^2\theta d\phi^2 ),$$ with $r$ defined implicitly by $ r^*(r) = (v-u)/2$. We may cover the Kruskal extension by two coordinate patches $(u_{\pm}, v_{\pm})$. $(u_+, v_+)$ is valid for $ r > r_- $, which includes the outer or cosmological horizon but not the inner or black hole horizon; while $(u_-, v_-)$ is valid for $ r < r_+ $, and includes the inner but not the outer horizon. These coordinates are $$u_\pm = \pm e^{\pm k_{\pm} u}, \qquad v_\pm = \mp e^{\mp k _\pm v}.$$ Thus $$\begin{aligned} du_{\pm} dv_{\pm} &=& k^2_\pm e^{\pm k_\pm (u-v)} du dv \cr \nonumber\\ &=& k^2_\pm e^{\mp 2k_\pm r^*}(dt^2 - {dr^*}^2),\end{aligned}$$ and $$\begin{aligned} ds^2 &=& - h(r){du \over du_\pm }{dv \over dv_\pm}du_\pm dv_\pm + r^2 d\Omega^2 \cr \nonumber\\ &=& -h_{\pm} du_\pm dv_\pm + r^2 d\Omega^2 ,\end{aligned}$$ where $$h_- = {\lambda\over 3k^2_- r}(r_+ - r)^{(1+\alpha^{-1})} (r + r_+ + r_-)^{(2-\alpha^{-1})},$$ and $$h_+ = {\lambda\over 3k^2_+ r}(r - r_-)^{(1+\alpha )} (r + r_+ + r_-)^{(2-\alpha)}.$$ $h_-$ is valid for the patch with $0 < r < r_+$ and $h_+$ for that with $r > r_-$. Therefore it is clear that the metric is nonsingular (except at $r=0$) and also nonvanishing in each of the respective coordinate patches. The overlap of the patches occurs in the region $ r_- < r < r_+$ where the coordinates are related by $$u_+ = -e^{(k_+ + k_-)u}u_- , \qquad v_+ = -e^{-(k_+ + k_-)v} v_- .$$ We may also note that by a Wick rotation of $t$, the metric becomes positive definite for $r_- \leq r \leq r_+$ since with $\tau = it$ the metric is $$ds^2 = h_\pm k^2_\pm e^{\mp 2k_\pm r^*}(d\tau^2 + {dr^*}^2) + r^2 d\Omega^2.$$ In terms of Kruskal coordinates $(u_\pm, v_\pm)$, the metric is $$ds^2 = h_\pm |du_\pm|^2 + r^2 d\Omega^2,$$ since $u_\pm$ become complex conjugates of $-v_\pm$ after Euclideanization.
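As a consistency sketch (ours, not part of the original), one can verify numerically that the partial-fraction form of $r^*$ satisfies $dr^*/dr = h^{-1}(r)$ between the horizons, using the explicit surface gravities $k_\pm$ quoted above; the values of $m$ and $\lambda$ are arbitrary sample choices:

```python
import math

m, lam = 1.0, 0.01
xi = 2*math.pi - math.acos(-3*m*math.sqrt(lam))
r_p = 2/math.sqrt(lam)*math.cos((xi + 4*math.pi)/3)
r_m = 2/math.sqrt(lam)*math.cos(xi/3)
k_p = lam*(r_p - r_m)*(2*r_p + r_m)/(6*r_p)   # surface gravities quoted above
k_m = lam*(r_p - r_m)*(2*r_m + r_p)/(6*r_m)

def h(r):
    """Factored form of h(r); equals 1 - 2m/r - lam*r^2/3."""
    return lam/(3*r)*(r_p - r)*(r - r_m)*(r + r_p + r_m)

def drstar_dr(r):
    """Derivative of the partial-fraction tortoise coordinate r*(r)."""
    return (1/(2*k_m*(r - r_m)) + 1/(2*k_p*(r_p - r))
            + (1/(2*k_p) - 1/(2*k_m))/(r + r_p + r_m))

for r in [r_m + 0.5, (r_m + r_p)/2, r_p - 0.5]:
    assert abs(h(r) - (1 - 2*m/r - lam*r**2/3)) < 1e-12
    assert abs(drstar_dr(r) - 1/h(r)) < 1e-6*abs(1/h(r))
```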
To satisfy $|u_\pm|^2 \geq 0$, the Euclidean section is defined only for $ r_- \leq r \leq r_+$; and it can be shown that the horizons at $r_\pm$ correspond to the origins $u_\pm = 0$. Expression (40) shows that the extended Kruskal Riemannian manifold exhibits no singular behaviour at the horizons. In the overlap region with $r_- < r < r_+$, $$\left[\matrix{u_+ \cr v_+}\right] = \left[\matrix{-e^{(k_+ + k_-)(-i\tau - r^*)} & 0\cr 0 &-e^{-(k_+ + k_-)(-i\tau + r^*)} }\right] \left[\matrix{u_- \cr v_-}\right].$$ So the transition function is single-valued only if $\beta$, the period of $\tau$, is an integer multiple of $2\pi/(k_+ + k_-)$. However, there are stronger consistency conditions. Since $$(u_\pm + v _\pm)/2 = e^{\mp k_\pm r^*} \sinh(-ik_\pm \tau),$$ this means that $(u_\pm, v_\pm)$ only define values of $\tau$ up to integer multiples of $2\pi /k_+$ and $2\pi/k_-$ in each patch. But $(u_\pm, v_\pm)$ are also well-defined coordinates in the overlap. Therefore the translation $\tau \rightarrow$ $\tau + \beta $ which leaves [*both*]{} sets of coordinates $(u_\pm, v_\pm)$ invariant must be such that $\beta$ has to be an integer multiple of [*both*]{} $2\pi/k_+$ and $2\pi/k_-$. This means that $\beta$, the period of $\tau$, is therefore the [*lowest common multiple*]{} of $2\pi/k_+$ and $2\pi/k_-$. It is easy to check that this is sufficient (although not necessary) for the transition function to be single-valued; the latter is a weaker condition. There is however an interesting relation: Let $n_\pm$ be relatively prime positive integers such that $\beta \equiv 2\pi n_\pm /k_\pm$.
Thus $ 0 < \alpha = k_+/k_- = n_+/n_- \leq 1$ is rational.[^5] Then $(n_+ + n_-) = {\beta\over {2\pi}}(k_+ + k_-)$ or $$\beta = 2\pi {{(n_+ + n_-)}\over{(k_+ + k_-)}}.$$ By comparing with Eqs.(38) and (40), we see that under $\tau \rightarrow \tau + \beta$, the transition functions of the $u_\pm$ and $v_\pm$ coordinates get multiplied by $\exp[\mp i2\pi(n_+ + n_-)]$; and $(n_+ + n_-)$ is the [*winding number*]{} of the transition function. IV. Topological considerations. {#iv.-topological-considerations. .unnumbered} =============================== The Lorentzian Kruskal extension of S-dS is known to exhibit a multi-sheeted structure with the Penrose diagram showing repeating units [@Euclidean]. Thus there is the question of what one means by the Euclidean section with $r_- \leq r \leq r_+$, and what the actual topology (specifically the Euler number) is. The consistency condition that we uncovered in the previous section is related to these issues. We first compute the Euler number of the Euclidean manifold [*with conical singularities*]{} [@Fursaev]. This is given by $$\chi[M]=\chi[M/\Sigma]+\chi[\Sigma],$$ with the regular contribution $$\chi[M/\Sigma]= {1 \over 32\pi^2}\int_{M/\Sigma} d^4x {\sqrt g} (R^2-4R_{\mu \nu}^2+R_{\mu \nu \alpha \beta}^2),$$ and the contributions from the conical singularities, $$\chi[\Sigma]=\sum_\pm (1-\frac{\beta}{\beta_\pm})\chi_2[\Sigma_\pm].$$ $\beta_\pm = 1/T_\pm = 2\pi/ k_\pm$ are the periods associated with the horizons. For the Euclidean S-dS manifold, the explicit computation is straightforward. The results are $$\begin{aligned} \chi[M/\Sigma]&=& 2\beta({1\over \beta_+} + {1\over \beta_-}), \\ \chi[\Sigma]&=& \chi_2[{S}^2] [2-\beta({1\over \beta_+} + {1\over \beta_-})] \nonumber\\ &=& 4 -2\beta({1\over \beta_+} + {1\over \beta_-})\end{aligned}$$ since $\chi_2[{S}^2] = 2$.
Thus, as expected, $\chi[M]$ for S-dS is [*always*]{} equal to $4$ and is [*independent of $\beta$*]{} if the horizons are incorporated as conical singularities[^6]. In this sense, allowing for conical singularities preserves the topological information, which remains constant under deformations of the parameters $m , \lambda$ and $\beta$; and conical singularities seem rather appealing as seeds for condensation and remnants of black hole evaporation. For instance, the specific relation between $\beta$ and the value of $m$ which extremizes (at fixed $\lambda$) the action of Eq.(12) can be worked out. There are actually critical values of $\beta$ for which $m$ approaches $0^+$, and $m$ becomes larger as $\beta$ is varied away from these values. However, when we wish to consider higher order curvature terms, there are ambiguities and difficulties associated with QFT contributions if horizons are accounted for as conical singularities. In contrast, the Euclidean section of the Kruskal extension gives a different result for the Euler number. Recall that $\beta$ has to be the lowest common multiple of $2\pi/k_+$ and $2\pi/k_-$ to satisfy the consistency condition discussed in the previous section so that the translation $\tau \rightarrow \tau + \beta$ is a symmetry. In the Kruskal extension there are no conical singularities, and Eq.(44) yields $$\chi[M]= 2\beta({1\over \beta_+} + {1\over \beta_-}) = 2(n_+ + n_-).$$ In the last step above, we have substituted the values of $\beta$ from Eq.(43) and $\beta_\pm = 2\pi/k_\pm$. The Euler number is therefore an integer and even. The former is consistent with the Euler number of Riemannian manifolds, and the latter is due to the spherical symmetry as $\chi_2[{S}^2] =2$. As mentioned previously, $(n_+ + n_-)$ is the winding number of the transition function displayed in Eq.(41). Therefore we may also write Eq.(43) as $$\beta = {{\pi \chi}\over{(k_+ + k_-)}}.$$ However the Euler number is not fixed to be exactly 4.
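The lowest-common-multiple condition and the resulting relation $\beta = \pi\chi/(k_+ + k_-)$ of Eq.(50) are easy to sketch numerically; the helper below is our construction and imposes the rationality of $\alpha = k_+/k_-$ by approximating it with a `Fraction`:

```python
import math
from fractions import Fraction

def consistency_beta(kp, km, max_den=1000):
    """Smallest beta that is a common multiple of 2*pi/kp and 2*pi/km,
    assuming alpha = kp/km is rational (approximated by a Fraction)."""
    alpha = Fraction(kp/km).limit_denominator(max_den)
    n_p, n_m = alpha.numerator, alpha.denominator   # relatively prime
    beta = 2*math.pi*n_p/kp                         # = 2*pi*n_m/km
    chi = 2*(n_p + n_m)                             # Euler number, Eq.(49)
    return beta, n_p, n_m, chi

kp, km = 0.2, 0.3        # sample surface gravities with alpha = 2/3
beta, n_p, n_m, chi = consistency_beta(kp, km)
assert (n_p, n_m) == (2, 3) and chi == 10
assert abs(beta - 2*math.pi*n_m/km) < 1e-9          # common multiple of both
assert abs(beta - math.pi*chi/(kp + km)) < 1e-9     # Eq.(50)
```

For irrational $\alpha$ the `Fraction` approximation only gives ever-larger candidates for $\beta$, consistent with the remark in footnote [^5] that $\beta$ grows without bound as the rational approximations improve.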
The reason is that the basic repeating Euclidean unit for which $\tau \rightarrow \tau +\beta $ is a symmetry depends on both $k_+$ and $k_-$. In the approach with conical singularities, the Euler number is divided between the singular and regular parts of the manifold. These two values can be adjusted by changing $\beta$ although their sum is always 4. Computations of other invariants further differentiate between the alternatives. With conical singularities, the four-volume is ${{4\pi \beta} \over 3} (r^3_+ - r^3_-)$ and is a function of $\beta$, which is an independent parameter in that approach, while $\beta$ is not arbitrary for the case with Kruskal extension. In the method with conical singularities, conical contributions to the action given by Eq.(10) also do not vanish in general. Thus even if some invariants can be matched by certain choices of $\beta$, others will not be. Only the limiting case of $k_+ = k_-$ allows for a correspondence between the formalism with conical singularities and the results from using the Kruskal extension, since in this limiting case conical defects on both horizons can be eliminated by a single choice of $\beta$ in the formalism with conical singularities. V. Remarks {#v.-remarks .unnumbered} ========== We have discussed some of the difficulties with QFT contributions in the off-shell approach if the horizons are to be accounted for as conical singularities of the Euclidean section. The problems become more transparent and acute in scenarios with more than one horizon; and we have considered the explicit example of Schwarzschild-de Sitter. A more natural way to incorporate the horizons emerges from considering the Kruskal extension and then constructing the QFT on the Euclidean section.
In this manner no conical singularities are introduced, but the horizons with their unequal surface gravity lead to natural selection rules or consistency conditions on the periodicity of the Euclidean time variable; this suggests that these are the natural boundary conditions that should be imposed upon such a QFT and the quantum states. Moreover, this implies that thermal states with $\beta$ being the lowest common multiple of $2\pi/k_+$ and $2\pi/k_-$ exist. In this approach, $\beta$ can no longer assume arbitrary off-shell values but is completely determined by the stated consistency condition. Since there are no conical singularities, the one-loop effective action on integrating out quantized matter fields will contain the usual terms (and counterterms) without arbitrary $\beta$-dependent contributions and Dirac delta singularities in terms quadratic in the curvatures. Although we have not set up an explicit quantum field theory and completed the calculation of the stress tensor in 4-d, there is support for our conjecture. The existence of quantum states for S-dS whose stress tensor is static has been shown explicitly for the 2-d case of S-dS for which the angular dependence is neglected [@Tadagi]. Our results therefore also offer an understanding of this from the Euclidean approach. For the pure black hole and de Sitter configurations, our proposal is equivalent to the on-shell requirements, but it is interesting to note that the proposal also serves to give meaning to the concept of “on-shell" for the Schwarzschild-de Sitter manifold; and may be generalizable to even more complicated scenarios. The Schwarzschild-de Sitter example also illustrates the much richer interplay among horizons, QFT and topology that can occur when the cosmological constant is [*not neglected*]{} in black hole scenarios. It will be interesting to investigate the stability when back reactions are taken into account.
For instance, since $\beta$ is given by the lowest common multiple condition, it can vary wildly with deformations of $k_\pm$ if there are no further restrictions. However, it is important to note that conservation of the topological Euler number implies that $(n_+ + n_-)$ should be constant. Thus within each topological sector where this number is conserved, $\beta$ varies inversely with $(k_- + k_+)$ (see Eq. (50)). More chaotic behaviour can of course happen in quantum gravity when tunneling between different sectors and also violations of topological conservation laws are allowed. Finally, on possible interesting oscillating behaviour for evaporating black holes with cosmological constant, we feel that it is important to distinguish between the eternal and the evaporating case. The S-dS solution that we have is already a [*four*]{}-manifold. Its mass parameter $m$ does not increase or decrease with the time variable $t$, barring for instance superspace descriptions in quantum gravity where some other degree of freedom is chosen as “time". An evaporating or anti-evaporating black hole with cosmological constant for which the size of the inner horizon increases or decreases with time is a different four-manifold from the eternal case we have considered. Therefore the requirements on the periodicity (if there are any obvious ones following our prescription) are quite clearly different since the Kruskal extension will be that of another four-manifold. Thus our arguments do not necessarily imply that an evaporating black hole has arbitrarily large jumps in its Euclidean period. As for the superspace or quantum gravity context, we are neither able to prove nor disprove possible large jumps in the period. Acknowledgments {#acknowledgments .unnumbered} =============== The research for this work has been supported in part by the Physics Department of Virginia Tech and the Natural Sciences and Engineering Research Council of Canada. We are grateful to L. N.
Chang for discussions during the progress of this work. C.S. would like to thank G. Kunstatter for helpful comments. [99]{} K. Lake and R. C. Roeder, Phys. Rev. [**D15**]{}, 3513 (1977). S. Perlmutter et al., ApJ [**483**]{}, 565 (1997); Nature, [**391**]{}, 51 (1998); B. P. Schmidt et al., astro-ph/9805200; astro-ph/9805201. M. Bordag, K. Kirsten and J. S. Dowker, Commun. Math. Phys. [**182**]{}, 371 (1996); D. V. Fursaev and S. N. Solodukhin, Phys. Rev. [**D52**]{}, 2133 (1995), Phys. Lett. [**B365**]{}, 51 (1996); J. S. Dowker and K. Kirsten, hep-th/9608189, hep-th/9803094. D. V. Fursaev and G. Miele, Nucl. Phys. [**B484**]{}, 697 (1997). D. V. Fursaev, Phys. Rev. [**D51**]{}, 5352 (1995); S. N. Solodukhin, Phys. Rev. [**D51**]{}, 609 (1995). G. W. Gibbons and S. W. Hawking, Phys. Rev. [**D15**]{}, 2738-2751 (1977). J. Kapusta, [*Finite-temperature field theory*]{} (Cambridge University Press, 1989). R. Bousso and S. W. Hawking, Phys. Rev. [**D54**]{}, 6312 (1996). L. Smolin and C. Soo, Nucl. Phys. B[**449**]{}, 289 (1995). See, for instance, [*Euclidean Quantum Gravity*]{}, edited by G. W. Gibbons and S. W. Hawking (World Scientific, Singapore, 1993). See, for instance, [*Quantum fields in curved space*]{} by N. D. Birrell and P. C. W. Davies (Cambridge University Press, 1982). S. M. Christensen and S. A. Fulling, Phys. Rev. [**D15**]{}, 2088 (1977). V. P. Frolov, D. V. Fursaev and A. I. Zelnikov, hep-th/9512184. S. Tadaki and S. Takagi, Prog. Theor. Phys. [**83**]{}, 941 (1990). [^1]: It may be more natural to include a normalization factor in defining the surface gravity (see for instance Ref.[@Bousso]). We thank R. Bousso for drawing our attention to this. However, according to Eqs.(A6)-(A9) of Ref.[@Bousso] the factor cancels in the required periods and therefore none of our conclusions will be affected. [^2]: The $S^4$ de Sitter configuration is interesting from the thermodynamic viewpoint in a number of ways.
In quantum gravity, it may be necessary to have a nonvanishing cosmological constant. The Gibbons-Hawking temperature of the de Sitter solution, which is the configuration with the greatest symmetry and a possible ground state in quantum cosmology, is proportional to $\sqrt \lambda$, and the exact vanishing $\lambda$ limit may be a physically unattainable zero temperature limit in quantum gravity [@smolin]. The de Sitter solution also appears to violate Nernst’s theorem explicitly since its entropy, which is proportional to the area of the horizon and inversely proportional to $\lambda$, does not go to zero with vanishing temperature. [^3]: In the case of quantized matter in a Schwarzschild [*background*]{} of fixed mass, the black hole mass is treated as a macroscopic parameter, so that the partition function takes the form $Z(\beta, G, m)$. [^4]: See, for instance, Ref.[@Tadagi]. [^5]: The metric of Eqs.(35)-(37) is still well-defined for irrational exponents through $a^x = \exp(x\ln a) , a > 0$. Restricting $\alpha = k_+/k_-$ to rational numbers yields finite $\beta$. Rational numbers are also dense in the system of real numbers. If $\alpha$ is irrational, $\beta$ becomes larger and larger with improving approximations of irrationals by rationals. [^6]: For the pure Schwarzschild black hole, similar computations yield $\chi=2$.
Introduction ============ It is worthwhile to examine and probe the deuteron as a target from many viewpoints because of the role it plays as our main source of information on the neutron. This has become of increasing interest in recent years with unexpected results being found for the numerous sum rules in which neutron structure functions enter. These include the results for the Ellis-Jaffe sum rules [@ash89; @bau83], for the Gottfried sum rule [@ama91], and for the Bjorken sum rule [@ade93; @ant93], prompting us to re-examine assumptions made in most analyses of deuteron data to extract neutron information. The deuteron is more than the sum of its proton and neutron parts, and since experiments on free neutrons cannot be done, this nonadditive portion is important to evaluate by any means available. We believe, in fact, that earlier experiments [@aub83; @arn84] make it clear that the deuteron is the simplest nucleus exhibiting the EMC effect which, in turn, must affect the extraction of the neutron structure function that enters in these sum rules [@las91]. In the theoretical part of the present paper, we enlarge upon the work we introduced earlier [@car91] and apply it to some old data and to new, previously unpublished data. We have studied deep inelastic $\nu$ and $\bar \nu$ scattering from various targets, focusing on reactions that produce high momentum backward protons. Backward means relative to the incoming neutrino or antineutrino, and high momentum means relative to kinematic limits upon backward momentum imposed in terms of quantities such as the target mass and the momentum fraction carried by the struck quark.
We concluded, based on deuteron target data [@mat89] obtained at CERN with $\nu$ and $\bar \nu$ beams, that the momentum distribution of the backward protons was consistent with production from a multiquark component (6q) in the deuteron and was difficult to explain if produced via break-up of a two nucleon (2N) quantum mechanical state with a simple conventional wave function. The particular problem with the latter picture, where the neutron and proton substantially maintain their character as nucleons [@fra77], was that there were more large momentum backward protons than expected from typical neutron-proton wave functions. However, in our previous work we were only able to calculate the shape of the backward proton spectrum in the two cases and not the absolute normalization or even the relative normalization of the 6q and 2N contributions. In this paper, we will present normalized calculations for backward proton production in deep inelastic experiments for both the 2N and 6q deuteron components. The 6q calculation includes factors of the fraction of the deuteron that is 6q and of the fragmentation rate of the residue of the 6q cluster into protons. A numerical uncertainty in the latter rate for high momentum protons leads to an uncertainty of roughly a factor of 2 in fixing the fraction of the 6q state in the deuteron; otherwise the calculation is well determined. We denote the probability of finding the 6q configuration in the deuteron as $f$, with $f$ expected to be between 0.01 and 0.07. Possible values for $f$ have been calculated from deuteron wave functions as the probability for the nucleons to overlap, and Sato et al. [@sat86] claim a value for $f$ in the middle of this range for a typical nucleon-nucleon potential. The largest deuteron probability, $f = 0.074$, was used [@yen89] in describing high energy SLAC electron-deuteron data at $x > 1.0$.
Also, studies of the deuteron electromagnetic structure functions have been used to estimate $f$ to be a few percent [@kobushkin]. Here we will find that the 6q contribution in deep inelastic scattering, even with $f$ only one or two percent, can be quite large for energetic backward protons. Throughout, although we are mindful of many possibilities (see e.g., [@manton]), we simplify by speaking of the short range baryon number correlation as either 6q or 2N. Within this limitation, we can further say that for most of the conventional 2N wave functions, the 6q state not only can but also must contribute a major share of the cross section for energetic backward protons. We will show that at 500 MeV/c backward proton momentum the modern nucleon-nucleon potential that comes closest to our data needs at least a 60% additional contribution. On the experimental side, we shall in the next Section discuss a Fermilab neutrino-deuterium 15-ft bubble chamber experiment (E545) which has obtained data (previously unpublished) on backward production of protons. Also, we shall comment on some published experiments which likewise measured high-momentum backward proton events. In Sect. \[three\], we give the theoretical formulae used in the calculation of backward proton spectra to compare with the Fermilab and CERN deuteron break-up cross sections for backward protons. A comparison of the new data with previously published Argonne, Brookhaven, CERN, and Fermilab backward proton production data is also given. The comparison of the predictions given by modern potentials with our data is in Sect. \[four\]. The final Section is devoted to further discussion and presentation of conclusions. Fermilab E545 neutrino-deuterium data {#two} ===================================== Previous analyses of the E545 neutrino-deuterium data have concentrated on either extracting $\nu n$ interaction data, or on the distributions of lepton or hadron variables in $\nu d$ scattering.
A typical analysis would separate the data into even-prong “$\nu n$” and odd-prong “$\nu p$” events, where a visible proton spectator is ignored in the prong count. The observed “$\nu n$” events were assumed to be a sample of $\nu n$ interactions depleted by “rescattering” within the deuteron nucleus. The “rescattered” $\nu n$ events, in turn, would appear in the “$\nu p$” event sample. In the extraction of ratios of $\nu n$ to $\nu p$ cross sections, the fraction of such rescattered events was estimated to be in the 6–12% range. An excess of high momentum proton spectators, compared to standard deuteron wave function predictions, is known to be associated with the “$\nu n$” event sample. When commented on in previous analyses, it would generally be noted that the origin of this excess is unknown, but given that the excess accounts for less than 1% of the “$\nu n$” events, any effect on the overall distributions or measured quantities would be negligible. In the present analysis, we explicitly examine the $\nu d$ target fragments in the E545 data, and compare the distribution of backward protons with a deuteron wave function plus a small six-quark component. Many details of the E545 experiment have been published [@cha83]. The data discussed here were previously presented at an APS meeting [@kaf83], where the emphasis was on establishing the existence of a “rescattering” phenomenon which depleted the observed spectator proton (neutron target) event sample, and on estimating its frequency. The E545 data are from a 320,000 frame exposure of the deuterium-filled Fermilab 15-ft bubble chamber to a wide-band single-horn focused neutrino beam produced by $4.8 \times 10^{18}\ 350$ GeV/c protons incident on a beryllium oxide target. The anti-neutrino component of the beam is $\approx 14$%. The film was scanned twice, and events with two or more charged tracks produced by incident neutral particles in a 15.6 m$^3$ fiducial volume were accepted for analysis.
All charged tracks were digitized and geometrically reconstructed. Topology-dependent weights are applied to the data to compensate for scanning and processing losses and for those events failing geometric reconstruction. The average combined scanning and processing efficiency is 0.80. Cuts are applied to the two-prong events to remove $K^0$ and $\Lambda$ decays and $\gamma$ conversions from the data. A kinematic technique which uses only the measured momenta of the charged particles is used to select a sample of charged-current events. Only events for which $\sum p_L > 5$ GeV/c, where $p_L$ is the component of laboratory momentum in the beam direction and the sum is taken over all charged particles, are included in the analysis. The muon candidate is identified as that negative track in the event with the largest component of momentum transverse to the incident neutrino direction. Those events for which the component of the $\mu^-$ candidate’s momentum transverse to the vector sum of the momenta of the other charged particles in the event is greater than 1.0 GeV/c are accepted as charged-current events. The incident neutrino energy of the selected charged-current events is estimated using transverse momentum balance: $E_\nu = p_L^\mu + p_L^H +|\vec p_T^{\,\mu} +\vec p_T^H| p_L^H/p_T^H$, where the symbols $p^\mu$ and $p^H$ refer to the muon momentum and the vector sum of the charged hadron momenta, respectively. Only events with $E_\nu > 10$ GeV are accepted for analysis. A Monte Carlo simulation indicates that the sample selected according to the above criteria includes 79% of the $\nu d$ charged-current events, with the $\mu^-$ correctly identified in 98% of the cases, and with a 3% contamination due to $\nu d$ neutral-current events and 1% due to $\bar\nu d$ events. The corrected number of $\nu d$ events in the sample is 15,129, with an average neutrino energy $\langle E_\nu \rangle = 50$ GeV.
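For concreteness, the transverse-momentum-balance estimator for $E_\nu$ can be sketched in a few lines; the function name and the toy event below are illustrative only, not part of the E545 analysis software.

```python
import math

def estimate_enu(p_mu, p_had):
    """Estimate E_nu (GeV) from transverse momentum balance.

    p_mu, p_had: (pL, pTx, pTy) for the muon and for the vector sum
    of the charged hadron momenta, in GeV/c.
    """
    pL_mu, mux, muy = p_mu
    pL_h, hx, hy = p_had
    pT_h = math.hypot(hx, hy)                 # hadronic transverse momentum
    pT_miss = math.hypot(mux + hx, muy + hy)  # net transverse imbalance
    # The missing longitudinal momentum is attributed to unmeasured neutrals,
    # assumed to follow the charged-hadron direction: pL_miss = pT_miss * pL_h / pT_h
    return pL_mu + pL_h + pT_miss * pL_h / pT_h

# a transversely balanced toy event: E_nu is just the visible longitudinal momentum
print(estimate_enu((30.0, 1.5, 0.0), (25.0, -1.5, 0.0)))  # -> 55.0
```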
Of these events, 459 have an identified proton with momentum magnitude greater than 160 MeV/c whose direction is backward with respect to the incident neutrino direction. (Significant visibility losses occur for protons with momentum less than 160 MeV/c; hence such protons are not presented.) The identity of the backward protons was verified by re-examining all such tracks on the scan table. The momentum distribution of the backward protons is given in Table \[table\]. These data will be discussed in Sect. \[three\], together with the proton spectrum from a $\nu d$ and $\bar\nu d$ exposure of BEBC by the WA25 collaboration at CERN [@mat89]. The published Fermilab E545 spectator proton spectra from quasi-elastic $\nu d$ scattering ($\nu d \rightarrow \mu^- p p_s$) [@kit83] will also be discussed, together with similar distributions from $\nu d$ bubble chamber experiments at Brookhaven (80-in) [@baker] and Argonne (12-ft) [@bar77].

Theoretical Discussion {#three}
======================

We will give the expressions for the charged-current inclusive cross sections of neutrinos hitting a deuteron and producing a backward proton, $p_B$, a forward lepton, $\ell^-$, and anything else, $X$, $$\nu + d \rightarrow \ell^- + p_B + X,$$ in both the 2N and 6q models. We will see that the shape of the backward proton spectrum is different in the two cases, and will see if the calculations can match the data. The two contributions are not mutually exclusive, and we shall add them incoherently, weighting the 6q contribution by fraction $f$ and the 2N contribution by $(1 - f)$. We expect that $f$ will be on the order of a few percent, but that nonetheless the 6q contribution could be large at large backward proton momenta. The fully differential cross section is differential in $x$, $y$, $\alpha$, and $p_T$, which are the experimentally measurable variables.
These variables are the struck quark momentum fraction, $$x = Q^2/2m_N \nu = Q^2/2m_N (E_\nu - E_\ell);$$ the fractional lepton energy loss, $$y = {E_\nu - E_\ell \over E_\nu };$$ the light front momentum fraction of the backward proton, $$0 \leq \alpha = {E_p + p_z \over m_N } \leq 2$$ (with $p_z$ defined positive for backward protons); and the transverse momentum of the proton relative to the direction of the incident neutrino, $p_T$. For the 2N model, using quark distribution functions appropriate to describe striking the neutron, with the argument changed from $x$ to $\xi= x/(2 - \alpha)$ because the neutron is moving, and having the proton emerge with probability given in terms of the deuteron wave function, we find the cross section $$\begin{aligned} {{d\sigma _{2N}} \over {dx\,dy\,d\alpha\,d^2p_T}} &=& \sigma_0 \times \\ \nonumber &\times& \left( {D_n(\xi)+S_n(\xi)+(1-y)^2 \bar U_n(\xi)} \right) \\ \nonumber &\times& {(2-\alpha ) \over \gamma} \left| {\psi (\alpha,p_T)} \right|^2 ,\end{aligned}$$ where $D_n(\xi)$ is $\xi$ times the distribution function of down quarks in the neutron ($n$), etc., $\gamma = E_p/m_N$, $\sigma_0$ is the point Fermi weak interaction (with strength $G_F$) cross section, $$\sigma_0 \equiv {{2G_F^2 m_N E_\nu}\over \pi } ,$$ and $\psi$ is the wave function of the deuteron normalized by $$\int d\alpha \, d^2 p_T \ \left| {\psi (\alpha,p_T)} \right|^2 =1.$$ The corresponding cross section for the 6q component of the target can be written in terms of the probability distribution of a quark in the 6q cluster and in terms of the probability, $D_{p/5q}$, for the residuum of the 6q state to fragment into the proton. This time, since the deuteron or 6q cluster is stationary in the lab, $x$ is directly—in the scaling limit—the momentum fraction of the struck quark.
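The kinematic variables above translate directly into code; a minimal sketch (the variable names are ours, and the nucleon mass value is a standard one, not taken from the paper):

```python
import math

M_N = 0.93827  # nucleon (proton) mass in GeV, standard value (our input)

def alpha_of(p, cos_theta_b):
    """Light-front momentum fraction alpha = (E_p + p_z)/m_N.

    p: proton momentum magnitude in GeV/c; cos_theta_b: cosine of the angle
    w.r.t. the *backward* direction, so p_z = p*cos_theta_b is positive
    for backward-going protons.
    """
    E_p = math.hypot(M_N, p)  # sqrt(m_N^2 + p^2)
    return (E_p + p * cos_theta_b) / M_N

def xi_of(x, alpha):
    """Rescaled quark momentum fraction xi = x/(2 - alpha) for the moving neutron."""
    return x / (2.0 - alpha)

# a 500 MeV/c proton emitted straight backward carries alpha ~ 1.67
a = alpha_of(0.5, 1.0)
print(round(a, 3), round(xi_of(0.2, a), 3))
```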
We have $${{d\sigma _{6q}} \over {dx\,dy\,d\alpha \,d^2p_T}} = \sigma_0 D_6(x) \cdot {1 \over {2-x}} D_{p/5q}(z,p_T) ,$$ where we have included just $D_6$, the down quark distribution for the 6q cluster times $x$, on the grounds that we will need the literal 5q residuum (which comes from the 6q Fock component of the nominal 6q cluster) to get the highest momenta backward protons. The first argument of the fragmentation function is the light front momentum fraction of the proton relative to the five quark residuum, or $$z = {\alpha \over 2-x}.$$

  ------------------------ ------------------
  Momentum range (MeV/c)   Number of events
  160-200                  187
  200-240                  100
  240-280                  61
  280-320                  37
  320-360                  30
  360-400                  14
  400-440                  10
  440-480                  10
  480-520                  9
  520-560                  1
  560-600                  0
  ------------------------ ------------------

  : Momentum distribution of backward protons in 15,129 $\nu d$ charged-current events from Fermilab experiment E545.[]{data-label="table"}

Presently reported data on the backward proton momentum spectrum uses protons gathered from the entire backward hemisphere. Hence, we too will integrate over the backward hemisphere, to allow direct comparison to experiment. We will integrate over $x$ and $y$ also. The term with explicit $y$ dependence gives a small contribution.
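The 6q ingredients are simple enough to script; this sketch uses the fragmentation variable $z=\alpha/(2-x)$ defined above together with the Lassila-Sukhatme model “B” form for $D_6$ quoted later in the text (names and the sample values are ours):

```python
import math

def D6(x):
    """Model 'B': x times the down-quark distribution of the 6q cluster."""
    return 3.0 * 1.85 * math.sqrt(x / 2.0) * (1.0 - x / 2.0) ** 10

def z_of(alpha, x):
    """Light-front momentum fraction of the proton relative to the 5q residuum."""
    return alpha / (2.0 - x)

# D6 is concentrated at small x (it peaks near x ~ 0.1), so for backward protons
# with alpha near its kinematic maximum, z stays close to alpha/2
print(round(D6(0.1), 3), round(z_of(1.6, 0.1), 3))
```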
Then, $$\begin{aligned} {{E_p \over p^2} {d\sigma_{2N}\over dp}} &=& \int_{bkwd} d\Omega\, dx\ E_p {{d\sigma _{2N}} \over {d^3p} \, dx} \nonumber \\ &=& \sigma_0 {\bar \xi_{\nu n}} \int_{bkwd} d\Omega \, dx \ \gamma^{-1}\alpha (2-\alpha )^2 \left| {\,\psi \,} \right|^2\end{aligned}$$ where $$\bar \xi_{\nu n} = \int_0^1 d\xi \, \left(D_n(\xi)+S_n(\xi)+{1\over 3} \bar U_n(\xi) \right)$$ and $$\begin{aligned} {{E_p \over p^2} {d\sigma_{6q}\over dp}} &=& \int_{bkwd} d\Omega\, dx \ E_p{{d\sigma _{6q}} \over {d^3p} \, dx} \nonumber \\ &=& \int_{bkwd} d\Omega \, dx \ \alpha D_6(x) D_{p/5q}(z,p_T),\end{aligned}$$ where $p = |\vec p \,|$ is the backward proton momentum (and we used $\alpha d\sigma /d\alpha\,d^2p_T = Ed \sigma /d^3p$). What we want is the weighted sum of the 2N and 6q contributions, $${d\sigma \over dp} = (1-f) {d\sigma_{2N} \over dp} + f {d\sigma_{6q} \over dp} .$$ The plotted curves are based on the above formulas plus some choices for the deuteron wave function, the quark distribution functions, and the fragmentation function of the 5q residuum. The wave function will be a light front wave function. (A simple use of a non-relativistic wave function conflicts with the kinematic bound on maximum backward proton momentum.) 
It is related to non-relativistic wave functions by $$\left| \psi(\alpha, p_T) \right|^2 = \left| \psi_{LF}(\alpha, p_T) \right|^2 = {E_k\over \alpha (2-\alpha)} \left| \psi_{NR}(k_z, k_T) \right|^2$$ where the arguments of the non-relativistic wave function are obtained from $$k_T = p_T$$ and $$\alpha = {\sqrt{m_N^2 + \vec k^2}+ k_z \over \sqrt{m_N^2 + \vec k^2} }.$$ The normalization is $$\int d^3k\, \left| \psi_{NR}(k_z, k_T) \right|^2 =1$$ and the factor above comes from the Jacobian in $$d^3k = {E_k\over \alpha (2-\alpha)} d\alpha \, d^2p_T .$$ We use several different deuteron wave functions, but start with a Hulthén wave function, which is still in common use [@tenner; @kit83; @baker], $$\psi_{NR}(\vec k) \propto {1\over {\vec k}^2 + (45.6 {\rm\ MeV})^2} -{1\over {\vec k}^2 + (260 {\rm\ MeV})^2}.$$ The quark distributions for the nucleon are the set CTEQ1L [@cteq93]. (Some old and simple quark distributions [@ch83] give results about the same.) For the 6q cluster, we use the Lassila-Sukhatme model “B” quark distributions [@ls88]. These distributions are based on quark counting rules and physical logic and describe the EMC data. Models “A” and “C” are not very different for the present purposes and are omitted from the figures mainly to avoid clutter. Model “B” has $$D_6(x) = 3 \times 1.85 \sqrt{x\over 2} \left( 1 - {x\over 2} \right)^{10}.$$ The fragmentation function for the 5q residuum is taken in a factorized form, $$D_{p/5q}(z,p_T) = { {(N+4)! \over N! \, 3!}} \ z^N (1-z)^3 \cdot {2\over \pi \lambda^2} \left( 1+{p_T^2 \over \lambda^2} \right)^{-3} ,$$ where $\lambda = 310$ MeV. The spectrum of protons for $z \rightarrow 1$ is given by the counting rules. For this limit, the two quarks not in the proton must give their momentum to the three that are, and this leads to the factor $(1-z)^3$. Then, barring effects external to the $5q$ residuum, the proton should have 3/5 of the residuum’s momentum and this requires $N=5$. 
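As a sanity check on the formulas above, the counting-rule prefactor $(N+4)!/(N!\,3!)$ and the $p_T$ factor of the fragmentation function can be verified numerically to integrate to unity; the midpoint integrator and the $p_T^2$ cutoff are ours, used only for this check:

```python
import math

def frag_z(z, N=5):
    """z-dependence of D_{p/5q} with its counting-rule normalization."""
    norm = math.factorial(N + 4) / (math.factorial(N) * math.factorial(3))
    return norm * z**N * (1.0 - z) ** 3

def frag_pT2(pT2, lam=0.310):
    """p_T factor rewritten as a density in pT^2, using d^2p_T = pi d(pT^2)."""
    return (2.0 / lam**2) * (1.0 + pT2 / lam**2) ** -3

def midpoint(f, a, b, n=20000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

Iz = midpoint(frag_z, 0.0, 1.0)                # exactly 1 analytically
IpT = midpoint(frag_pT2, 0.0, 100 * 0.310**2)  # tends to 1 as the cutoff grows
print(round(Iz, 4), round(IpT, 4))
```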
There is, however, some pull from the struck quark which could increase the probability of protons going in the forward direction. This can be accommodated in the above fragmentation function by reducing $N$, and we shall quote results for both $N=3$ and $N=5$. Lower values of $N$ increase the cited values of $f$. Fig. \[han53\] shows the comparison of the E545 data and WA25 data [@mat89] with the sum of the 6q and 2N contributions, using $N=5$ and $f = 2\%$, or equivalently $N=3$ and $f = 4\%$. The E545 data are absolutely normalized, $d\sigma / dp = (\sigma_{tot}^{CC}(\nu d) / N_{tot}) N_{bin}/ \Delta p$, where $\Delta p$ is the bin width and $\sigma_{tot}^{CC}(\nu d)$ is obtained from [@kit82a] and evaluated at the 50 GeV average $E_\nu$ in the E545 experiment. The WA25 BEBC data are scaled to agree with the E545 data at the lower momenta. The match between the calculation and the data is quite good. A 2N contribution alone, with this wave function, could not match the data. The 6q contribution, though it is overall only 2–4% of the normalization of the deuteron state, contributes the major share of the cross section for energetic backward protons. The crossover momentum is about 300 MeV, and above this momentum a larger and larger majority of the protons come from the 6q cluster. To further elaborate this point, we note that at 500 MeV ($c=1$) backward proton momentum, the Hulthén contribution needs an 800% additional contribution to be in agreement with the new data. Backward proton data from neutrino scattering are also available from Argonne [@bar77], Brookhaven [@baker], and again Fermilab E545 [@kit83] for the “quasi-elastic” reaction, $$\nu + d \rightarrow \mu^- + p + p_s ,$$ where $p_s$ is a label for the “slow protons” or “spectators.” Fig. \[all53\] shows the backward proton spectra from the three sets of “quasi-elastic” data, together with the spectra from the inelastic data of E545 and the WA25 data.
There is reasonable consistency within errors among all the data sets for the proton spectrum, despite great differences in the incoming neutrino energy. One may think that the material struck by the incoming probe goes forward and that any backwardly emerging hadrons have spectra governed only by the distribution of constituents in the target. The consistency among the data sets for backward protons supports this view. To repeat the main point of this section, a calculation modeling the deuteron as 2N plus a small amount of 6q is able to match the data out to about 720 MeV backward hemisphere proton momentum, again within experimental uncertainty.

Use of other wave functions {#four}
===========================

The two-nucleon wave function that we have used in the above discussion is fairly simple, and one may inquire what happens if a more realistic wave function is used. There are many wave functions derived from nucleon-nucleon potentials that are fit to nucleon-nucleon scattering data, and sometimes also to electron-deuteron scattering data. The assumption is made that only nucleon-nucleon, or sometimes only baryon-baryon, degrees of freedom are needed. If there are other degrees of freedom present—and that is a crucial question we are trying to address—they are ignored. But if they exist, their effects are present in nature, and they must be included in the fitted baryon-baryon potential. And if one calculates deuteron structure from such a potential, who is to be sure if small effects in the (high momentum) tail of the wave function are really due to the two nucleons, or due to the fitted wave function trying to emulate another degree of freedom? (We should quote the Paris group’s remark that “there is no compelling theoretical reason to believe the validity of our potential in the region $r \leq 0.8$ fm.
since the short range (SR) part of the interaction is related to exchange of heavier systems and/or to effects of subhadronic constituents such as quarks, gluons, etc.” [@paris80] There apparently would be no physical significance in the invention of a nucleon-nucleon potential model giving enhanced high momentum components by modifying this $r \leq 0.8$ fm region. But the important point is: it is precisely this region that our new data probe.)

Thus, if one speaks of more realistic wave functions in the present context, one may object to the phrase “more realistic.” We shall however take the Paris and Bonn wave functions [@paris81; @bonn87] as representative of more sophisticated wave functions and see what happens when we use them. A first plot, Fig. \[compare\], shows the Paris, Bonn, and Hulthén wave functions in momentum space. They are essentially the same at low momenta, but at several hundred MeV, thanks mainly to the shape and 5.77% size of the D-state, the Paris wave function is considerably larger. Among other wave functions, both the Reid wave function [@reid] and the relatively new wave function of Van Orden, Devine, and Gross [@cebaf95] are rather close to the Paris wave function. Fig. \[all53paris\]a shows what one of the Bonn wave functions, the energy-independent OBEPQ [@bonn87], produces for the backward proton spectrum. Though a more complete and realistic wave function, the results are not strikingly different from those from the Hulthén wave function. Fig. \[all53paris\]b shows a corresponding plot for the Paris wave function. All these potential models of the nucleon-nucleon interaction give representations of our new data which are well below the data for the high momentum backward protons.
At a momentum of 500 MeV ($c=1$), a considerable contribution must be added to each to bring them close to the data being presented: for the Hulthén, as noted above, the additional amount needed is 800%; for the Bonn model an increase of 350% is needed (see Fig. \[all53paris\]a); and for the Paris model, an additional 62% is needed to bring theory and experiment into agreement. The logarithmic scale for $d\sigma / dp$ in these figures misleads the eye. But, in Fig. \[all53paris\]b, it is clear that the Paris curve is below the majority of the data error bars. To make a more compelling statement, we give a simple statistical comparison of the Paris curves with our data in the momentum range $0.2 < p < 0.5$ GeV. For each data point, the squared difference between the center of the data point and the dashed Paris or solid Paris + 6q curve, divided by the squared error bar, is calculated. The result is a value of $\chi^2$ per data point of 0.8 for the solid curve and 3 for the dashed curve, corresponding to confidence levels of 0.5 and 0.003, respectively, where the Particle Data Group’s graphs are used as the most universal particle physics convention.

Discussion and Conclusions
==========================

We have seen that a small amount of 6q cluster in a deuteron can explain the backward proton data. It is still possible that some 2N wave function with an increased amount of probability at high momenta could also explain the data. Therefore it is of interest to find other signatures that could signal the presence of the 2N or 6q states. We will mention two possibilities, one if a polarized deuteron target is available, another if there is enough data to bin in both $x$ and $p$, and then conclude. If a polarized target is available, then the 2N model leads to characteristic variations of the backward proton angular distribution.
At low momentum, the wave function is mostly S-wave and the backward distribution is angle independent regardless of the deuteron’s polarization. At a momentum where the D-state dominates (about 400 MeV for the Paris wave function, as in Fig. \[compare\]), a polarized deuteron has a non-isotropic spatial wave function. Fig. \[angulardistribution\] shows the angular distribution of backward protons from the D-state for both a longitudinally polarized deuteron (i.e., $J_z = 0$ with quantization direction along the incoming current) and for the average of the two transverse polarizations. (The latter would be for either transverse polarization in the analogous electromagnetic case.) If we can bin data in both $x$ and $p$, then we can define a “two-nucleon test ratio.” This is simply the ratio of the observed differential cross section for backward proton scattering to the cross section for scattering off a neutron with an appropriate momentum shift. The latter is intended to be just what would be expected for the 2N model, with the wave function factor removed. Explicitly, $${R_1} = {\sigma_{meas}(x,y,\alpha,p_T)\over \sigma_{nX}(x,y,\alpha,p_T)} ,$$ where the denominator is $$\begin{aligned} \sigma_{nX} &=& {d\sigma \over dx\, dy}(\nu n \rightarrow \mu^- X) \nonumber \\ &=& K (2-\alpha) \left[ D_n(\xi)+S_n(\xi)+(1-y)^2 \bar U_n(\xi) \right].\end{aligned}$$ If the 2N model is correct, then $$R_1 = |\psi(\alpha,p_T)|^2 .$$ Thus, we can test for the 2N model by plotting $R_1$ vs. $x$ at fixed $\alpha$ and $p_T$, or at fixed $p$. If the 2N model is right, such a plot would produce just a simple horizontal line. The crucial question is how different a result a 6q cluster would give. We have elaborated on this question in Ref. [@cl95] for the electromagnetic case. If the 6q cluster dominates at some fixed backward proton momentum, it gives a curve for $R_1$ vs. $x$ that varies by a factor of roughly two from peak to valley.
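The flat-line test for $R_1$ is easy to automate; the binned numbers below are invented purely to illustrate the diagnostic, not measured values:

```python
def is_flat(ratios, tol=0.10):
    """2N expectation: R1 is independent of x at fixed (alpha, pT)."""
    mean = sum(ratios) / len(ratios)
    return all(abs(r - mean) / mean < tol for r in ratios)

# hypothetical R1 values binned in x at fixed (alpha, pT):
flat_2n = [0.98, 1.02, 1.00, 0.99]    # consistent with the 2N horizontal line
peaked_6q = [0.70, 1.40, 1.30, 0.75]  # ~factor-2 peak-to-valley, 6q-like
print(is_flat(flat_2n), is_flat(peaked_6q))  # -> True False
```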
It should be easily distinguishable from the 2N expectation. In conclusion, we have studied the production of backward protons in neutrino- and antineutrino-deuteron scattering, and compared the existing data to one model. The backward proton data from the E545 Fermilab experiment shown in this paper has not previously been published, although it has appeared in talks [@kaf83]. The model we have considered is an incoherent sum of contributions from 2N and 6q components of the deuteron. None of the 2N models that we have looked at has by itself a large enough high momentum tail to explain the backward proton data above about 300 MeV/$c$. If we add a 6q component, we get straightforwardly a good match to the shape of the backward proton spectrum at high momentum. If the probability of the 6q cluster is one to a few percent in the deuteron, then the 6q contribution accounts well for the observed normalization of the data at high momentum, while adding negligibly to the 2N contribution below about 250 MeV/$c$ [@italians]. We consider this a good indication that 6q configurations exist in the deuteron and can be observed in certain circumstances. It is however not ironclad proof since in principle it may be possible that some 2N wave function with an enhanced high momentum tail could also explain all the backward proton data. But one should realize that a wave function gotten from a potential that is fit to data, including low energy nucleon-nucleon scattering data, is matching a Nature that may contain 6q cluster effects and must mock them up somehow in the context of its own degrees of freedom. This means that a good fit to the data with just a 2N wave function is not in its own turn ironclad proof against a 6q cluster. Hence we have added suggestions of further tests that may eventually argue directly against the 2N models. 
Acknowledgments {#acknowledgments .unnumbered} =============== We express our appreciation to the Fermilab E545 collaboration for providing their unpublished $\nu d$ backward proton spectrum. CEC thanks the NSF for support under Grant PHY-9600415, and also O. Benhar, S. Liuti, and V. Nikolaev for useful comments. CEC and KEL both thank Fermilab for its hospitality while part of this work was done. [99]{} J. Ashman [*et al.*]{}, Nucl. Phys. B [**328**]{}, 1 (1989); Phys. Lett. B [**206**]{}, 364 (1988). G. Baum [*et al.*]{}, Phys. Rev. Lett. [**51**]{}, 1135 (1983); M.J. Alguard [*et al.*]{}, Phys. Rev. Lett. [**37**]{}, 1261 (1978); see also G. Igo and V. W. Hughes, Proceedings of the Vancouver Meeting-Particles and Fields ’91, Eds. D. Axen, D. Bryman, and M. Comyn (World Scientific, Singapore, 1992) p. 593. P. Amaudruz [*et al.*]{}, Phys. Rev. Lett. [**66**]{}, 2712 (1991). B. Adeva [*et al.*]{}, Phys. Lett. B [**302**]{}, 533 (1993). P.L. Anthony [*et al.*]{}, Phys. Rev. Lett. [**71**]{}, 959 (1993). J.J. Aubert [*et al.*]{}, Phys. Lett. [**123B**]{}, 275 (1983). R. Arnold [*et al.*]{}, Phys. Rev. Lett. [**52**]{}, 727 (1984). K. E. Lassila, C. E. Carlson, A. Petridis, and U. P. Sukhatme, in Proceedings of XIV International Kazimierz Conference, Warsaw, 1991, ed. by Z. Ajduk, S. Pokorski, and A. Wróblewski (World Scientific, Singapore, 1992) p. 579. C.E. Carlson, K.E. Lassila, and U.P. Sukhatme, Phys. Lett. B [**263**]{}, 277 (1991). Quoted in E. Matsinos [*et al.*]{}, Z. Phys. C [**44**]{}, 79 (1989). L.L. Frankfurt and M.I. Strikman, Phys. Lett. B [**69**]{}, 93 (1977). M. Sato, S. Coon, H. Pirner, and J. Vary, Phys. Rev. C [**33**]{}, 1062 (1986). G. Yen and J. Vary, Phys. Rev. C [**40**]{}, R16 (1989). A. Kobushkin, Yad. Fiz. [**28**]{}, 495 (1979) \[Translation, Sov. J. Nucl. Phys. [**28**]{}, 252 (1978)\]. N. S. Manton, Phys. Rev. Lett [**60**]{}, 1916 (1988); E. Braaten and L. Carson, Phys. Rev. D [**38**]{}, 3525 (1988); R. A. Leese, N. S. Manton, and B. 
J. Schroers, Report “Attractive Channel Skyrmions and the Deuteron,” DTP 94-47, NI 94037, hep-ph/9502405 (1995); see, e.g., J. B. Cole [*et al.*]{}, Phys. Rev. D [**37**]{}, 1105 (1988) and references therein. T. Kafka [*et al.*]{}, Bull. Am. Phys. Soc. [**28**]{}, 756 (1983) (E545 collaboration). T. Kitagaki [*et al.*]{}, Phys. Rev. D [**28**]{}, 436 (1983). N. J. Baker [*et al.*]{}, Phys. Rev. D [**23**]{}, 2499 (1981); T. Kitagaki [*et al.*]{}, Phys. Rev. D [**42**]{}, 1331 (1990). S.J. Barish [*et al.*]{}, Phys. Rev. D [**16**]{}, 3103 (1977). A. G. Tenner, Deuteron properties studied in neutrino and antineutrino interactions, NIKHEF-H/86-7; L. Hulthén, Ark. Mat. Fys. [**28**]{}, 5 (1942). J. Botts [*et al.*]{}, Phys. Lett. B [**304**]{}, 159 (1993). C.E. Carlson and T.J. Havens, Phys. Rev. Lett. [**51**]{}, 261 (1983). K.E. Lassila and U.P. Sukhatme, Phys. Lett. B [**209**]{}, 343 (1988). T. Kitagaki [*et al.*]{}, Phys. Rev. Lett. [**49**]{}, 28 (1982). M. Lacombe [*et al.*]{}, Phys. Rev. C [**21**]{}, 861 (1980). M. Lacombe [*et al.*]{}, Phys. Lett. B [**101**]{}, 139 (1981). R. Machleidt [*et al.*]{}, Phys. Rep. [**149**]{}, 1 (1987). R.V. Reid, Ann. Phys. (N. Y.) [**50**]{}, 411 (1968). J. W. Van Orden, N. Devine, and F. Gross, Phys. Rev. Lett. [**75**]{}, 4369 (1995). C. E. Carlson and K. E. Lassila, Phys. Rev. C [**51**]{}, 364 (1995). There may also be effects for fast forward protons; see S. Simula, Few Body Syst. Supp. [**9**]{}, 466 (1995) and C. Ciofi degli Atti and S. Simula, Phys. Lett. B [**325**]{}, 276 (1994).
---
author:
- '[^1]'
bibliography:
- 'pmandrik\_bib\_file.bib'
title: Constraints on anomalous couplings of the Higgs boson from pair production searches
---

Introduction {#intro}
============

The discovery of the Higgs boson by the Large Hadron Collider (LHC) experiments [@Aad:2012tfa; @Chatrchyan:2012xdj] has opened up a new area of direct searches for physics beyond the Standard Model (BSM) using the Higgs boson as a probe [@deFlorian:2016spz; @Hou:2017vvp; @Zhang:2018nmy; @Ilyushin:2019mkp; @Capozi:2019xsi; @Cao:2015oaa]. An important test of the Standard Model (SM) is the measurement of Higgs boson pair production. In particular, many BSM models predict the existence of heavy particles that can couple to a pair of Higgs bosons [@deFlorian:2016spz; @Nakamura:2017irk; @Englert:2019eyl; @Robens:2019kga; @Tang:2012pv]. These particles could appear as a resonant contribution to the invariant mass of the $HH$ system, or they may contribute to Higgs boson pair production through virtual processes and lead to cross sections for Higgs boson pair production that are significantly greater than the SM prediction. In this article we use recent results from LHC searches in different final states [@Aaboud:2018ftw; @Aaboud:2018ewm; @Aaboud:2018knk] in order to put limits on anomalous interactions of the Higgs boson. The study is based on Monte Carlo (MC) simulation of the related processes, which allows us to take into account differences between SM and BSM Higgs boson pair production and to incorporate the detector effects and reconstruction efficiencies. The rest of this paper is organized as follows. In Section \[mc\] we describe the theoretical model and MC simulation. In Section \[selection\] the event selection and systematic uncertainties are presented. Finally, the main results are summarized and discussed in Section \[results\].
Event generation {#mc}
================

While the BSM physics in Higgs interactions may arise from different sources, the effective field theory (EFT) approach [@Weinberg:1978kz; @Buchmuller:1985jz; @Arzt:1994gp] is used to parameterize observable effects. In this article we use the following effective Lagrangian (with up to dimension-six operators) [@deFlorian:2016spz; @Carvalho:2015ttv] to describe Higgs boson pair production: $$\label{eq_lagrangian} \mathcal{L}_{BSM} = \kappa_{\lambda} \lambda^{SM}_{HHH} v H^3 - \frac{m_t}{v}(\kappa_t H + \frac{c_2}{v} H^2)(\bar{t_L}t_R + h.c.) + \frac{1}{4} \frac{\alpha_S}{3 \pi v}(c_g H - \frac{c_{2g}}{2 v}H^2)G^{\mu \nu}G_{\mu \nu}$$ where $v$ is the vacuum expectation value of the Higgs field; $\kappa_{\lambda} = \lambda_{HHH} / \lambda_{HHH}^{SM}$ measures the deviation of the Higgs boson trilinear coupling from its SM expectation $\lambda_{HHH}^{SM} = m^2_{H} / 2v^2$; $\kappa_{t} = y_{t} / y_{t}^{SM}$ measures the deviation of the top quark Yukawa coupling from its SM expectation $y_{t}^{SM} = \sqrt{2}\, m_{t} / v$; $c_{2}$ is the coupling between two Higgs bosons and two top quarks; $c_{g}$ is the coupling between one Higgs boson and two gluons; and $c_{2g}$ is the coupling between two Higgs bosons and two gluons. As base events, we use the 12 benchmarks defined in [@Carvalho:2015ttv; @Carvalho:2016rys] and for each of them we simulate $500 \cdot 10^3$ events. The use of a manageably small set of benchmark points allows one to represent the volume of the unexplored parameter space. The list of benchmark hypotheses is provided in Table \[table\_fcnc\].
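Numerically, the SM reference couplings entering the $\kappa$ definitions are easy to evaluate; the input masses below are standard PDG-like values assumed by us, not taken from this article:

```python
import math

v = 246.22    # Higgs vacuum expectation value, GeV (assumed input)
m_H = 125.10  # Higgs boson mass, GeV (assumed input)
m_t = 172.76  # top quark mass, GeV (assumed input)

lambda_sm = m_H**2 / (2.0 * v**2)   # SM trilinear coupling lambda_HHH^SM
y_t_sm = math.sqrt(2.0) * m_t / v   # SM top Yukawa coupling y_t^SM

print(round(lambda_sm, 3), round(y_t_sm, 3))  # -> 0.129 0.992
```

With these inputs the top Yukawa comes out close to unity, which is why $\kappa_t$ is a natural normalization for BSM deviations.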
                     1      2      3      4      5      6      7      8      9      10     11     12
  ------------------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
  $\kappa_\lambda$   7.5    1.0    1.0    -3.5   1.0    2.4    5.0    15.0   1.0    10.0   2.4    15.0
  $\kappa_t$         1.0    1.0    1.0    1.5    1.0    1.0    1.0    1.0    1.0    1.5    1.0    1.0
  $c_{2}$            -1.0   0.5    -1.5   -3.0   0.0    0.0    0.0    0.0    1.0    -1.0   0.0    1.0
  $c_{g}$            0.0    -0.8   0.0    0.0    0.8    0.2    0.2    -1.0   -0.6   0.0    1.0    0.0
  $c_{2g}$           0.0    0.6    -0.8   0.0    -1.0   -0.2   -0.2   1.0    0.6    0.0    -1.0   0.0

The FeynRules [@Alloul:2013bka] implementation of the Lagrangian (\[eq\_lagrangian\]) is interfaced with the [<span style="font-variant:small-caps;">MG5\_</span>]{}a[<span style="font-variant:small-caps;">MC@NLO</span>]{} 2.4.2 [@Alwall:2014hca] package using the UFO module [@Degrande:2011ua]. All generated events are processed with [<span style="font-variant:small-caps;">Pythia</span>]{} 8.230 [@Sjostrand:2014zea] for showering, hadronization and the underlying event description. The [<span style="font-variant:small-caps;">NNPDF3.0</span>]{} [@Ball:2014uwa] PDF sets are used. The detector simulation is performed with the fast simulation tool [<span style="font-variant:small-caps;">Delphes</span>]{} 3.4.2 [@deFavereau2014] using the corresponding detector parameterization cards. No additional pileup interactions are added to the simulation.

Event selection and systematic uncertainties {#selection}
============================================

The event selections from the $HH \rightarrow \gamma \gamma bb$ [@Aaboud:2018ftw], $HH \rightarrow \gamma \gamma WW$ [@Aaboud:2018ewm] and $HH \rightarrow bbbb$ [@Aaboud:2018knk] searches are reproduced in order to accurately estimate the efficiency for the BSM benchmark hypotheses. In all analyses we use anti-$k_T$ jets reconstructed with a radius parameter $R = 0.4$.
$HH \rightarrow \gamma \gamma WW$ channel
-------------------------------------------

For the search in the $\gamma \gamma \ell \nu j j$ final state the following pre-selected objects are used: photons with $p_T > 25$ GeV, $|\eta| < 2.37$ (excluding $1.37 < |\eta| < 1.52$), $I_{rel} < 0.05$; electrons with $p_T > 10$ GeV, $|\eta| < 2.37$ (excluding $1.37 < |\eta| < 1.52$), $I_{rel} < 0.05$; muons with $p_T > 10$ GeV, $|\eta| < 2.7$, $I_{rel} < 0.05$; and jets with $p_T > 25$ GeV, $|\eta| < 2.5$. Events are selected using a diphoton trigger, which requires two photon candidates, one with transverse energy $E_T > 35$ GeV and the second with $E_T > 25$ GeV. An overlap removal procedure is performed in the following order: electrons with $\Delta R(e, \gamma) < 0.4$ are removed; jets with $\Delta R(jet, \gamma) < 0.4$ or $\Delta R(jet, e) < 0.2$ are removed; electrons with $\Delta R(e, jet) < 0.4$ are removed; muons with $\Delta R(\mu, \gamma) < 0.4$ or $\Delta R(\mu, jet) < 0.4$ are removed. The events are required to contain at least two jets, no b-tagged jets, and at least one charged lepton ($e$ or $\mu$). The transverse momentum of the diphoton system of the leading and sub-leading $E_T$ photons is required to be larger than 100 GeV and the mass of the diphoton system is required to satisfy $121.7 \text{ GeV} < m_{\gamma \gamma} < 128.5$ GeV. The leading (sub-leading) photon candidate is required to satisfy $E_T /m_{\gamma \gamma} > 0.35$ (0.25).

$HH \rightarrow \gamma \gamma bb$ channel
-------------------------------------------

Object pre-selections and the trigger are the same as in the $HH \rightarrow \gamma \gamma WW$ search. In addition, any jets that are within $\Delta R = 0.4$ of an isolated photon candidate or within $\Delta R = 0.2$ of an isolated electron candidate are discarded. The events are required to contain exactly two b-tagged jets, with $p_T > 100$ GeV for the leading jet and $p_T > 30$ GeV for the second jet, and at least two photons.
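The $\Delta R$-based overlap removal used in these selections can be sketched generically; the object representation (plain `(eta, phi)` tuples) and the function names are ours:

```python
import math

def delta_r(a, b):
    """Delta R between two objects given as (eta, phi)."""
    deta = a[0] - b[0]
    dphi = (a[1] - b[1] + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return math.hypot(deta, dphi)

def remove_overlaps(objs, refs, dr_cut):
    """Drop objects lying within dr_cut of any reference object."""
    return [o for o in objs if all(delta_r(o, r) >= dr_cut for r in refs)]

photons = [(0.5, 0.1), (1.8, 2.0)]
electrons = [(0.52, 0.12), (-1.0, 0.0)]  # the first electron overlaps a photon
electrons = remove_overlaps(electrons, photons, 0.4)
print(electrons)  # -> [(-1.0, 0.0)]
```

Applying the steps in the stated order (electrons vs. photons, jets vs. photons/electrons, and so on) just means repeated calls to `remove_overlaps` with the appropriate reference collections and cuts.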
The leading (sub-leading) photon candidate is required to satisfy $E_T /m_{\gamma \gamma} > 0.35$ (0.25). The dijet invariant mass is required to be within the mass window $90$ GeV $< m_{jj} < 140$ GeV and the diphoton invariant mass is required to fall within $105$ GeV $< m_{\gamma \gamma} < 160$ GeV.

$HH \rightarrow bbbb$ channel
-------------------------------

The trigger effects are taken into account by requiring the events to feature either one $b$-tagged jet with $p_T > 225$ GeV, or two $b$-tagged jets with $p_T > 55$ GeV. The events are required to contain at least four b-tagged jets with $p_T > 40$ GeV and $|\eta| < 2.5$. The b-tagged jets are paired to construct two Higgs boson candidates. The leading (sub-leading) Higgs boson candidate should have $\Delta R_{jj, lead} \cdot m_{4j} \in \big[ 360 \text{ GeV} - 0.5 \cdot m_{4j}, 653 \text{ GeV} + 0.475 \cdot m_{4j} \big]$ ($\Delta R_{jj, subl} \cdot m_{4j} \in \big[ 235 \text{ GeV}, 875 \text{ GeV} + 0.35 \cdot m_{4j} \big]$) if $m_{4j} < 1250$ GeV and $\Delta R_{jj, lead} \in [0, 1]$ ($\Delta R_{jj, subl} \in [0, 1]$) otherwise. The leading Higgs boson candidate is defined as the candidate with the highest scalar sum of jet $p_T$. If more than two Higgs boson candidates satisfy these requirements, the pairing with the smallest value of the mass imbalance $D_{HH}$ is chosen: $$D_{HH} = \frac{|110 \cdot m_{2j}^{lead} - 120 \cdot m_{2j}^{subl}|}{\sqrt{110^2 + 120^2}}$$ The selected event should contain a leading Higgs boson candidate with $p_T^{lead} > 0.5 m_{4j} - 103$ GeV and a sub-leading Higgs boson candidate with $p_T^{subl} > 0.33 m_{4j} - 73$ GeV. The requirement $|\Delta \eta| < 1.5$ is placed on the pseudorapidity difference between the two Higgs boson candidates.
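The pairing step can be sketched directly from the $D_{HH}$ formula above; the dijet masses fed in below are hypothetical inputs, since the real selection computes them from jet four-vectors.

```python
import math

def d_hh(m_lead, m_subl):
    """Mass imbalance D_HH of the formula above; dijet masses in GeV."""
    return abs(110.0 * m_lead - 120.0 * m_subl) / math.sqrt(110.0**2 + 120.0**2)

def best_pairing(candidate_pairings):
    """candidate_pairings: list of (m_2j_lead, m_2j_subl) tuples, one per
    possible 2+2 grouping of the four b-tagged jets.  Returns the pairing
    minimizing D_HH."""
    return min(candidate_pairings, key=lambda p: d_hh(*p))
```

By construction $D_{HH}$ vanishes exactly on the line through the target masses, e.g. at $(m_{2j}^{lead}, m_{2j}^{subl}) = (120, 110)$ GeV.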
A further requirement on the Higgs boson candidate masses is applied: $$\Big( \frac{ m_{2j}^{lead} - 120 \text{ GeV} }{ 0.1 m_{2j}^{lead} } \Big)^2 + \Big( \frac{ m_{2j}^{subl} - 110 \text{ GeV} }{ 0.1 m_{2j}^{subl} } \Big)^2 < 2.56$$ Finally, all possible hadronically decaying top-quark candidates are built from combinations of three jets, of which one must be a constituent of a Higgs boson candidate and the others must have $p_T > 10$ GeV and $|\eta| < 2.5$. An event is vetoed in the final selection if it contains a top-quark candidate with: $$\Big( \frac{ m_{W} - 80 \text{ GeV} }{ 0.1 m_{W} } \Big)^2 + \Big( \frac{ m_{t} - 173 \text{ GeV} }{ 0.1 m_{t} } \Big)^2 < 2.25$$

Selection efficiencies
------------------------

  Benchmark                                                         $HH \rightarrow \gamma \gamma WW$   $HH \rightarrow \gamma \gamma bb$   $HH \rightarrow bbbb$
  ----------------------------------------------------------------- ----------------------------------- ----------------------------------- -----------------------
  SM (Geant4) [@Aaboud:2018ftw; @Aaboud:2018ewm; @Aaboud:2018knk]   8.5                                 5.8                                 1.6
  SM (Delphes)                                                      6.5 $\pm$ 1.7                       5.5 $\pm$ 1.5                       1.3 $\pm$ 0.4
  1                                                                 6.3 $\pm$ 1.8                       4.3 $\pm$ 1.3                       1.1 $\pm$ 0.3
  2                                                                 4.9 $\pm$ 1.5                       8.4 $\pm$ 2.7                       1.5 $\pm$ 0.5
  3                                                                 5.6 $\pm$ 1.6                       6.5 $\pm$ 1.9                       1.3 $\pm$ 0.4
  4                                                                 5.7 $\pm$ 1.6                       4.9 $\pm$ 1.4                       1.1 $\pm$ 0.3
  5                                                                 5.3 $\pm$ 1.5                       5.6 $\pm$ 1.7                       1.3 $\pm$ 0.4
  6                                                                 5.5 $\pm$ 1.5                       5.5 $\pm$ 1.6                       1.1 $\pm$ 0.3
  7                                                                 5.5 $\pm$ 1.5                       4.4 $\pm$ 1.3                       1.1 $\pm$ 0.3
  8                                                                 5.2 $\pm$ 1.5                       5.3 $\pm$ 1.6                       1.1 $\pm$ 0.3
  9                                                                 7.0 $\pm$ 2.1                       10.0 $\pm$ 3.1                      1.9 $\pm$ 0.6
  10                                                                5.7 $\pm$ 1.6                       4.5 $\pm$ 1.3                       1.1 $\pm$ 0.3
  11                                                                6.5 $\pm$ 1.8                       4.3 $\pm$ 1.3                       1.2 $\pm$ 0.3
  12                                                                5.4 $\pm$ 1.5                       4.9 $\pm$ 1.4                       1.1 $\pm$ 0.3

The systematic uncertainty in the photon identification and isolation amounts to 3% of the total signal yield [@Aaboud:2018yqu], the uncertainty in the integrated luminosity is 2.1%, and the uncertainty in the trigger efficiency is 0.4% for the $HH \rightarrow \gamma \gamma WW$ and $HH \rightarrow \gamma \gamma bb$ analyses and 2.5% for the $HH \rightarrow bbbb$ search.
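The two quadratic mass requirements above (the Higgs-candidate mass window and the top-quark veto) can be sketched as simple predicates; the function names are hypothetical and all masses are in GeV.

```python
def mass_window_metric(m_lead, m_subl):
    """Left-hand side of the Higgs-candidate mass requirement (< 2.56 passes)."""
    return (((m_lead - 120.0) / (0.1 * m_lead)) ** 2
            + ((m_subl - 110.0) / (0.1 * m_subl)) ** 2)

def top_veto_metric(m_w, m_t):
    """Left-hand side of the top-quark veto requirement (< 2.25 vetoes the event)."""
    return (((m_w - 80.0) / (0.1 * m_w)) ** 2
            + ((m_t - 173.0) / (0.1 * m_t)) ** 2)

def passes_bbbb_mass_cuts(m_lead, m_subl, top_candidates):
    """top_candidates: list of (m_W, m_t) pairs for all three-jet combinations."""
    in_window = mass_window_metric(m_lead, m_subl) < 2.56
    vetoed = any(top_veto_metric(mw, mt) < 2.25 for mw, mt in top_candidates)
    return in_window and not vetoed
```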
In all analyses the theoretical uncertainty from the renormalization and factorization scales is determined by varying these scales between 0.5 and 2 times their nominal value while keeping their ratio between 0.5 and 2 [@deFlorian:2016spz]. The PDF uncertainty is determined by taking the root mean square of the variation when using different replicas of the default PDF set [@Butterworth:2015oua]. The impact on the total signal yield of the jet energy scale systematic uncertainty is estimated following the prescription in [@Aad:2014bia].

  ------------------------------------- ----------------------------------- ----------------------------------- -----------------------
  Source of systematic uncertainty      $HH \rightarrow \gamma \gamma WW$   $HH \rightarrow \gamma \gamma bb$   $HH \rightarrow bbbb$
  Luminosity
  Trigger efficiency
  Photon identification and isolation                                                                           -
  Jet energy scale
  Jet energy resolution
  b-tagging
  PDF
  QCD scale
  ------------------------------------- ----------------------------------- ----------------------------------- -----------------------

Finally, the selection efficiencies for different benchmark points obtained from the MC simulation are given in Table \[table\_eff\]. The uncertainties are combined using summation in quadrature. In the following statistical analysis the correlations between uncertainties for signal and background are neglected.

Results and conclusions {#results}
=======================

Bayesian inference is used to derive the posterior probability based on the number of selected events, where the expected number of signal events is taken from our modeling and the observed number of events and the expected number of background events with uncertainties are taken from the corresponding experimental results [@Aaboud:2018ftw; @Aaboud:2018ewm; @Aaboud:2018knk]. The exclusion limits at 95% C.L. on the $HH$ production cross section times branching fractions for different benchmark models are given in Table \[table\_res\].
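A minimal sketch of such a Bayesian counting-experiment limit, assuming a flat prior on the signal yield and ignoring nuisance parameters (the actual analyses fold in the systematic uncertainties discussed above):

```python
import math

def bayesian_upper_limit(n_obs, b, cl=0.95, s_max=50.0, steps=200000):
    """Upper limit at credibility level cl on the signal yield s for a
    Poisson counting experiment with expected background b and a flat
    prior on s >= 0.  Posterior: p(s | n_obs) ~ (s+b)^n_obs * exp(-(s+b))."""
    ds = s_max / steps
    grid = [i * ds for i in range(steps + 1)]
    post = [(s + b) ** n_obs * math.exp(-(s + b)) for s in grid]
    total = sum(post)
    acc = 0.0
    for s, p in zip(grid, post):
        acc += p
        if acc >= cl * total:
            return s
    return s_max
```

For the zero-observed, zero-background case this reproduces the textbook value $-\ln(0.05) \approx 3.0$ expected signal events.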
The limits on $\sigma_{pp \rightarrow HH \rightarrow \gamma \gamma WW}$ are obtained for the first time. The limits on $\sigma_{pp \rightarrow HH \rightarrow \gamma \gamma bb}$ and $\sigma_{pp \rightarrow HH \rightarrow bbbb}$ are comparable with the results from [@Sirunyan:2018tki; @Sirunyan:2018qca] and [@Sirunyan:2018iwt], respectively. In order to put limits on the $HH$ production cross section based on the combination of the analyses we use $B(H \rightarrow bb) = 0.5824$, $B(H \rightarrow \gamma \gamma) = 2.27 \times 10^{-3}$ and $B(H \rightarrow WW) = 0.2137$, with additional theoretical uncertainties of $0.65\%$, $1.73\%$ and $1\%$, respectively [@deFlorian:2016spz]. The obtained limits on $\sigma_{pp \rightarrow HH}$ can also complement the results of the combination analysis shown in the supplemental material of [@CMS-PAS-HIG-17-030]. Future improvements can be obtained from combinations with other ATLAS $HH$ searches. As the next step, the resulting exclusion bounds could be used to constrain BSM models, mapped to the EFT, as described in [@Carvalho:2017vnu].

  Benchmark   $\sigma_{pp \rightarrow HH \rightarrow \gamma \gamma WW}$   $\sigma_{pp \rightarrow HH \rightarrow \gamma \gamma bb}$   $\sigma_{pp \rightarrow HH \rightarrow bbbb}$   $\sigma_{pp \rightarrow HH}$
  ----------- ----------------------------------------------------------- ----------------------------------------------------------- ----------------------------------------------- ------------------------------
  1           14.6                                                        2.4                                                         1437.0                                          1372.6
  2           18.6                                                        1.2                                                         1101.9                                          1149.1
  3           16.5                                                        1.6                                                         1252.5                                          1094.7
  4           16.1                                                        2.2                                                         1491.1                                          1316.7
  5           17.3                                                        1.9                                                         1295.3                                          1248.7
  6           16.9                                                        1.9                                                         1462.5                                          1223.9
  7           16.6                                                        2.4                                                         1442.8                                          1365.1
  8           17.7                                                        2.0                                                         1484.8                                          1247.8
  9           13.1                                                        1.0                                                         868.9                                           866.1
  10          16.0                                                        2.4                                                         1481.2                                          1373.7
  11          14.1                                                        2.4                                                         1390.7                                          1366.5
  12          17.1                                                        2.2                                                         1510.9                                          1327.0

Acknowledgments
===============

I would like to thank S. Slabospitskii and V. Kachanov for useful discussions.
--- abstract: 'In this work we solve the packing problem for complete $(n,3)$-arcs in $PG(2,16)$, determining that the maximum size is $28$ and the minimum size is $15$. We also performed a partial classification of the complete $(n,3)$-arcs in $PG(2,16)$ of extremal size.' author: - | D. Bartoli, S. Marcugini and F. Pambianco\ Dipartimento di Matematica e Informatica,\ Università degli Studi di Perugia,\ Via Vanvitelli 1, 06123 Perugia Italy\ e-mail: {daniele.bartoli, gino, fernanda}@dmi.unipg.it title: | The maximum and the minimum size\ of complete $(n,3)$-arcs in $PG(2,16)$ ---

Introduction
============

In the projective plane $PG(2,q)$ over the finite field $GF(q)$ an $(n,r)$-arc is a set of $n$ points such that no $r + 1$ points are collinear and some $r$ points are collinear. An $(n,r)$-arc is called *complete* if it is not contained in an $(n+1,r)$-arc of the same projective plane. An $(n,2)$-arc is called an $n$-arc. For a more detailed introduction to $(n,r)$-arcs and in particular $(n,3)$-arcs see [@H1988], [@HS2000], [@3ArchiMPG2_11], [@3ArchiMPG2_13]. The largest size of $(n,r)$-arcs of $PG(2,q)$ is denoted by $m_{r}(2,q)$. In particular $m_{3}(2,q)\leq 2q+1$ for $q\geq 4$ (see [@Thas1975]). In [@HS2000] bounds for $m_{r}(2,q)$ and the relationship between the theory of complete $(n,r)$-arcs, coding theory and mathematical statistics are given.

Arcs and $(n,3)$-arcs in $PG(2,q)$ correspond, respectively, to MDS and NMDS codes of dimension $3$. These types of linear codes are the best in terms of minimum distance among the linear codes with the same length and dimension. In general $(n,k)$-arcs in $PG(2,q)$ correspond to linear codes with Singleton defect equal to $k-2$.

Results
=======

In this work we establish the maximum and the minimum size of complete $(n,3)$-arcs in $PG(2,16)$. To do this we performed a computer-based search using some ideas similar to those presented in [@3ArchiPG2_7], [@3ArchiMPG2_11], [@3ArchiMPG2_13] and [@Cook2011].
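The defining condition — no $r+1$ of the chosen points collinear — is easy to check by brute force in a small plane. The sketch below works in $PG(2,7)$ for simplicity (the paper's searches are in $PG(2,16)$, which needs $GF(16)$ arithmetic) and verifies that the conic $\{(1,t,t^2) : t \in GF(7)\} \cup \{(0,0,1)\}$ is an $8$-arc, i.e. no line meets it in more than two points. The normalized-triple representation of points and lines is a standard choice, not taken from the paper.

```python
Q = 7  # work in PG(2,7) for illustration

def projective_points(q):
    """One normalized representative per projective point:
    first nonzero coordinate equal to 1.  There are q^2 + q + 1 of them."""
    pts = [(0, 0, 1), (0, 1, 0)] + [(0, 1, c) for c in range(1, q)]
    pts += [(1, b, c) for b in range(q) for c in range(q)]
    return pts

def max_collinear(points, q):
    """Largest number of the given points on any line a*x + b*y + c*z = 0 (mod q).
    By duality, lines are parameterized by the same normalized triples."""
    lines = projective_points(q)
    return max(sum((a * x + b * y + c * z) % q == 0 for x, y, z in points)
               for a, b, c in lines)

# The conic x*z = y^2: a classical (q+1)-arc
conic = [(1, t, (t * t) % Q) for t in range(Q)] + [(0, 0, 1)]
```

Since `max_collinear(conic, Q)` is 2, the conic is an arc; the same incidence count with threshold 3 is the $(n,3)$-arc condition.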
The maximum size of complete $(n,3)$-arcs in $PG(2,16)$ is $28$. We performed an exhaustive search of $(n,3)$-arcs in $PG(2,16)$ of size greater than $28$ and we found no examples. Moreover we have an example of a complete $(28,3)$-arc (see [@BFMP2006]), obtained as a union of orbits of some subgroup of $P\Gamma L(3,16)$; the size is equal to the one given in [@BKW2005]. The classification of complete $(28,3)$-arcs is in progress.

The minimum size of complete $(n,3)$-arcs in $PG(2,16)$ is $15$. We performed an exhaustive search of $(n,3)$-arcs in $PG(2,16)$ of size less than $15$ and we found no examples. We also proved that a complete $(15,3)$-arc contains a $(k,2)$-arc, with $8\leq k \leq 9$. As a result of the search for complete $(15,3)$-arcs we obtained only the example presented in Table \[15arco\], containing a $(9,2)$-arc. There exists a unique complete $(15,3)$-arc in $PG(2,16)$.

We denote $GF(16)=\{0,1=\alpha ^{0},2=\alpha ^{1},\ldots ,15=\alpha ^{14}\}$, where $\alpha$ is a primitive element such that $\alpha^4 + \alpha^3 + 1=0$. The columns $\ell_{i}$ indicate the number of $i$-secants of the $(n,3)$-arc and $G$ indicates the description of the stabilizer in $P\Gamma L(3,16)$ (see [@librogruppi]).

Table \[15arco\]: the complete $(15,3)$-arc. Its points are the columns of

    1 0 0 1 1  1 1 1  1  1 1 1  1  1  1
    0 1 0 1 0  1 2 2  4  9 9 11 11 13 13
    0 0 1 1 11 8 5 10 10 2 8 2  11 1  12

with $\ell_{0}=92$, $\ell_{1}=138$, $\ell_{2}=12$, $\ell_{3}=31$ and $G=\mathcal{S}_3$.

[99]{} J. Bierbrauer, G. Faina, S. Marcugini and F. Pambianco, *On the structure of the $(n,r)$-arcs in $PG(2,q)$*, Proceedings of the Tenth International Workshop on Algebraic and Combinatorial Coding Theory, Zvenigorod, Russia, 3-9 September 2006, 19-23. M. Braun, A. Kohnert and A.
Wassermann, *Construction of linear codes with prescribed distance*, OC05, the Fourth International Workshop on Optimal Codes and Related Topics, Pamporovo, Bulgaria (2005), 59-63. G. R. Cook, *Arcs in a Finite Projective Plane*, PhD Thesis, available online at http://sro.sussex.ac.uk/ J. W. P. Hirschfeld, *Projective Geometries over Finite Fields*, second edition, Oxford University Press, Oxford, (1998). J. W. P. Hirschfeld and L. Storme, *The packing problem in statistics, coding theory, and finite projective spaces: update 2001*, in: Finite Geometries, Proceedings of the Fourth Isle of Thorns Conference, A. Blokhuis, J. W. P. Hirschfeld, D. Jungnickel and J. A. Thas, Eds., Developments in Mathematics 3, Kluwer Academic Publishers, Boston, (2000), 201-246. S. Marcugini, A. Milani and F. Pambianco, *Classification of the $(n,3)$-arcs in $PG(2,7)$*, Journal of Geometry [**80**]{} (2004), 179-184. S. Marcugini, A. Milani and F. Pambianco, *Maximal $(n, 3)$-arcs in $PG(2, 11)$*, Proceedings of Discrete Mathematics (1999), 421-426. S. Marcugini, A. Milani and F. Pambianco, *Maximal $(n, 3)$-arcs in $PG(2, 13)$*, Proceedings of Discrete Mathematics (2005), 139-145. J. Thas, *Some results concerning $((q+1)(n-1),n)$-arcs*, J. Combin. Theory Ser. A [**19**]{} (1975), 228-232. A. D. Thomas and G. V. Wood, *Group Tables*, Orpington, U.K.: Shiva Mathematics Series 2, (1980).
--- abstract: 'We consider constraints on the isoscalar S-wave $\pi$-N scattering length $a^+$ from $\pi$-deuteron scattering, to third order in small momenta and pion masses in chiral perturbation theory. To this order, the $\pi$-deuteron scattering length is determined by $a^+$ together with three-body corrections that involve no undetermined parameters. We extract a novel value for a combination of dimension two low–energy constants which is in agreement with previous determinations.' ---

[KFA-IKP(TH)-1997-16]{} [DOE/ER/40762-127]{}

[**[The isoscalar S-wave $\pi$-N scattering length $a^+$\ from $\pi$-deuteron scattering]{}**]{}

[*E-mail address: sbeane@fermi.umd.edu*]{}\
[*Université Louis Pasteur, F-67037 Strasbourg Cedex 2, France*]{}\
[*E-mail address: bernard@sbghp4.in2p3.fr*]{}\
[*E-mail address: lee@anlphy.phy.anl.gov*]{}\
[*E-mail address: Ulf-G.Meissner@fz-juelich.de*]{}\

PACS nos.: 13.75.Gx, 12.39.Fe

Chiral perturbation theory allows one to relate distinct scattering processes in a systematic manner. Recently, a methodology has been developed which relates scattering processes involving a single nucleon to nuclear scattering processes [@wein1]. For instance, one can relate $\pi$-N scattering to $\pi$-nucleus scattering. The non-perturbative effects responsible for nuclear binding are accounted for using phenomenological nuclear wavefunctions. Although this clearly introduces an inevitable model dependence, one can compute matrix elements using a variety of wavefunctions in order to ascertain the theoretical error induced by the off-shell behavior of different wavefunctions.
Weinberg showed that to third order ($O({q^3})$, where $q$ denotes a small momentum or a pion mass) in chiral perturbation theory the $\pi$-d scattering length is given by [@wein1] $$a_{\pi d}=\frac{(1+\mu)}{(1+\mu /2)}(a_{\pi n} + a_{\pi p})+{a^{(1b)}}+ {a^{(1c,1d)}},$$ where $\mu\equiv{M_\pi}/m$ is the ratio of the pion and the nucleon mass. The various diagrammatic contributions to $a_{\pi d}$ are illustrated in figure 1. The three-body corrections are (in momentum space): $${a^{(1b)}}= - \frac{{M_\pi^2}}{32{\pi^4}{f_\pi^4}{(1+\mu /2)}} \langle\frac{1}{{\vec q}^{\,2}}\rangle_{\sl wf}$$ $${a^{(1c,1d)}}=\frac{{g_A^2}{M_\pi^2}} {128{\pi^4}{f_\pi^4}{(1+\mu /2)}} \langle\frac{{\vec q}\cdot{{\vec\sigma}_1}{\vec q}\cdot{{\vec\sigma}_2}} {({\vec q}^{\,2}+{M_\pi^2})^2}\rangle_{\sl wf}.$$ $\langle\vartheta\rangle_{\sl wf}$ indicates that $\vartheta$ is sandwiched between deuteron wavefunctions. These matrix elements have been evaluated using a cornucopia of wavefunctions; results are in table 1. Clearly ${a^{(1b)}}$ dominates the three-body corrections. This is the result of the shorter range nature of $a^{(1c,1d)}$ as can be seen from the r–space expressions of Eqs.(2) and (3). It is important to stress that the dominant three–body correction turns out to be quite independent of the wavefunction used. This implies that the chiral perturbation theory approach, which relies on the dominance of the pion–exchange, is useful in this context. The $\pi$-N scattering lengths have the decomposition $$a_{\pi n} + a_{\pi p}=2 a^{+}=2(a_{1}+2a_{3})/3,$$ where $a^+$ is the isoscalar S-wave scattering length, and $a_{1}$ and $a_{3}$ are the isospin $1/2$ and $3/2$ contributions, respectively. Weinberg took $a^{+}$ from experimental data and argued that ${a^{(1b)}}$, which dominates the three-body corrections, should be accounted for with corrections to the vertices, which he estimated using a simple model [@eric]. 
He then found a result for $a_{\pi d}$ in agreement with the then current experimental value [@pid]. Since Weinberg’s paper, there is new experimental information about both the $\pi$-N and $\pi$-d scattering lengths that is at variance with the old data [@chat][@sigg]. Moreover, since Eq.(1) is a perfectly sensible expression to $O(q^3)$ in chiral perturbation theory, we choose to take it seriously by using realistic deuteron wavefunctions to evaluate both Eq.(2) and Eq.(3) in order to see what it reveals. We can express Eq.(1) as $$a^{+}=\frac{(1+{\mu /2})}{2(1+\mu )}\biggl\lbrace a_{\pi d}- ({a^{(1b)}}+ {a^{(1c,1d)}})\biggr\rbrace ,$$ and use experimental information about $\pi$-d scattering to predict $a^+$; the recent PSI-ETHZ pionic deuterium measurement [@chat] gives $$a_{\pi d}=-0.0264 \pm 0.0011\,{M_\pi^{-1}}.$$ For the three-body corrections, we can safely ignore $a^{(1c,1d)}$ and take the average of the $a^{(1b)}$ values in table 1: $$a^{(1b)}=-0.02 \,{M_\pi^{-1}}.$$ We then find $$a^{+}=-(3.0 \pm 0.5)\,\cdot\,{10^{-3}}{M_\pi^{-1}},$$ which is not consistent with the Karlsruhe-Helsinki value [@koch], $$a^{+}=-(8.3 \pm 3.8)\,\cdot\,{10^{-3}}{M_\pi^{-1}},$$ or the new PSI-ETHZ value deduced from the strong interaction shifts in pionic hydrogen and deuterium, which is small and positive [@sigg]:[^1] $$a^{+}=(0...5)\,\cdot\,{10^{-3}}{M_\pi^{-1}}.$$ The result Eq.(8) agrees, however, with the value obtained in the SM95 partial–wave analysis, $a^{+}=-3.0\,\cdot\, {10^{-3}}{M_\pi^{-1}}$[@vpi]. 
Given the ambiguous experimental situation regarding $a^{+}$, it seems most profitable to turn our formula around and use the $\pi$-d scattering data and three-body corrections to constrain undetermined parameters that appear in $a^{+}$, which has been calculated to $O({q^3})$ in chiral perturbation theory [@bkm1]: $$4\pi(1+\mu )a^{+}= \frac{M_\pi^2}{F_\pi^2}\biggl(-4c_1 +2c_2 -\frac{g_A^2}{4m}+2c_3 \biggr) +\frac{3{g_A^2}{M_\pi^3}}{64\pi{F_\pi^4}}.$$ It should be stressed, however, that to this order there appear large cancellations between the individual terms [@bkm1] which lead one to suspect that a calculation at $O({q^4})$ should be performed to obtain a more precise prediction for this anomalously small observable. This, however, goes beyond the scope of this manuscript. The sole undetermined parameter entering the $O({q^3})$ computation of $a_{\pi d}$ is therefore a combination of $c_1$, $c_2$ and $c_3$: $$\Delta\equiv {-4c_1 +2(c_2 +c_3)}$$ where we can now write $$a_{\pi d}=\frac{1}{2\pi (1+\mu /2)}\biggl\lbrace \frac{M_\pi^2}{F_\pi^2}(\Delta -\frac{g_A^2}{4m})+ \frac{3{g_A^2}{M_\pi^3}}{64\pi{F_\pi^4}}\biggr\rbrace + {a^{(1b)}}+ {a^{(1c,1d)}},$$ and solve for $\Delta$: $$\Delta = \frac{2\pi{F_\pi^2}}{M_\pi^2}(1+\mu /2) \lbrace a_{\pi d}-({a^{(1b)}}+{a^{(1c,1d)})}\rbrace +\frac{g_A^2}{4m}\bigl(1-\frac{3m{M_\pi}}{16\pi{F_\pi^2}}\bigr)$$ in order to constrain $\Delta$ using Eqs.(2), (3) and (6). We find $$\Delta =-(0.10\pm 0.03)\, {\rm GeV}^{-1},$$ where we have taken into account the error in the determination of $a_{\pi d}$. In table 2 we give values of the relevant $c_i$’s obtained from a realistic fit to low-energy pion-nucleon scattering data and subthreshold parameters [@bkm2]. Central values lead to $\sigma (0)=47.6\,$MeV and $a^+ =-4.7\cdot 10^{-3}{M_\pi^{-1}}$. These values of the $c_i$’s give the conservative determination: $$\Delta =-(0.18\pm 0.75)\, {\rm GeV}^{-1}.$$ Also shown in table 2 are values of $c_i$’s deduced from resonance saturation. 
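As a numerical cross-check, the expression for $\Delta$ above can be evaluated with the quoted inputs ($F_\pi = 92.4$ MeV, $g_A = 1.32$, $M_\pi = 139.6$ MeV, the pionic-deuterium value of $a_{\pi d}$, and the three-body corrections of Table 1). This is an independent back-of-the-envelope evaluation, not the authors' code; the average nucleon mass is an assumed input, and the result lands within the quoted band $\Delta = -(0.10\pm 0.03)$ GeV$^{-1}$ only up to such rounding choices.

```python
import math

# Inputs in MeV (natural units, hbar = c = 1); m_N is an assumed average nucleon mass
F_pi, M_pi, m_N, g_A = 92.4, 139.6, 938.9, 1.32
mu = M_pi / m_N

# Scattering lengths in units of M_pi^-1
a_pid  = -0.0264    # pionic-deuterium measurement
a_1b   = -0.02      # average of the a^(1b) column of Table 1
a_1c1d = -0.0007    # representative a^(1c,1d) value from Table 1

# First term: (2 pi F^2 / M^2)(1 + mu/2){a_pid - (a^(1b)+a^(1c,1d))}, in MeV^-1
term1 = (2 * math.pi * F_pi**2 / M_pi**2) * (1 + mu / 2) \
        * (a_pid - (a_1b + a_1c1d)) / M_pi
# Second term: (g_A^2 / 4m)(1 - 3 m M_pi / (16 pi F_pi^2))
term2 = (g_A**2 / (4 * m_N)) * (1 - 3 * m_N * M_pi / (16 * math.pi * F_pi**2))

delta_GeV = (term1 + term2) * 1000.0   # convert MeV^-1 -> GeV^-1
```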
It is worth mentioning that an independent fit to pion-nucleon scattering including also low–energy constants related to dimension three operators finds results consistent with the fit values of table 2 [@moj]. To summarize, we have shown that the recent precise data on the $\pi$–deuteron scattering length can be used to constrain a combination of dimension two low–energy constants of the chiral effective pion–nucleon Lagrangian. This determination gives a result in agreement with previous determinations that use independent input [@bkm2][@moj]. Therefore, a consistent picture of nucleon chiral perturbation theory is emerging. Next, these calculations should be carried out one order further which would allow one to [*precisely*]{} deduce the isoscalar S–wave $\pi$-N scattering length from the accurately measured $\pi$-d scattering length. Work along these lines is in progress. [**Acknowledgments:**]{} VB and UGM are grateful to the Nuclear Theory Group at Argonne National Laboratory for hospitality while part of this work was completed. We thank R. Workman for a useful communication. This research was supported in part by the U. S. Department of Energy, Nuclear Physics Division (grants DE-FG02-93ER-40762 (SRB), W-31-109-ENG-38 (TSHL) ), by NATO Collaborative Research Grant 950607 (VB, TSHL, UGM) and by the Deutsche Forschungsgemeinschaft (grant ME 864/11-1 (UGM)).

  ${\sl wf}$   $a^{(1b)}$   $a^{(1c,1d)}$
  ------------ ------------ ---------------
  Bonn         $-0.02021$   $-0.0005754$
  ANL-V18      $-0.01960$   $-0.0007919$
  Reid-SC      $-0.01941$   $-0.0008499$
  SSC          $-0.01920$   $-0.0006987$

Table 1: Three-body corrections for various deuteron wavefunctions in units of $M_\pi^{-1}$. We use ${F_\pi}=92.4\,$MeV, ${g_A}=1.32$ and ${M_{\pi^+}}=139.6\,$MeV.
  $i$        $c_i$              $c_i^{\rm Res}$ cv   $c_i^{\rm Res}$ ranges
  ---------- ------------------ -------------------- ------------------------
  1          $-0.93 \pm 0.10$   $-0.9^*$             –
  2          $3.34 \pm 0.20$    $3.9$                $2 \ldots 4$
  3          $-5.29 \pm 0.25$   $-5.3$               $-4.5 \ldots -5.3$
  $\Delta$   $-0.18 \pm 0.75$   $0.8$                $-3.0 \ldots +2.6$

Table 2: Values of the LECs $c_i$ in GeV$^{-1}$ for $i=1,\ldots,3$. Also given are the central values (cv) and the ranges for the $c_i$ from resonance exchange. The $^*$ denotes an input quantity. This table is adopted from [@bkm2].

[99]{} S. Weinberg, Phys. Lett. B295, 114 (1992). T. Ericson and W. Weise, “Pions and Nuclei” (Clarendon Press, Oxford, 1988). E. Bovet et al., Phys. Lett. B153, 231 (1985). D. Chatellard et al., Phys. Rev. Lett. 74, 4157 (1995). D. Sigg et al., Phys. Rev. Lett. 75, 3245 (1995);\ D. Sigg et al., Nucl. Phys. A609, 269 (1996), (E) A617, 526 (1997). R. Koch, Nucl. Phys. A448, 707 (1986). R.A. Arndt et al., Phys. Rev. C52, 2120 (1995). V. Bernard, N. Kaiser and Ulf-G. Mei[ß]{}ner, Phys. Lett. B309, 421 (1993). V. Bernard, N. Kaiser and Ulf-G. Mei[ß]{}ner, Nucl. Phys. A615, 483 (1997). M. Mojzis, [hep-ph/9704415]{}, Z. Phys. C, in print. R. Machleidt, Adv. Nucl. Phys. 19, 189 (1989). R.B. Wiringa, V.G. Stoks and R. Schiavilla, Phys. Rev. C51, 38 (1995). R.V. Reid, Ann. Phys. (NY) 50, 411 (1968). R. de Tourreil and D.W. Sprung, Nucl. Phys. A210, 193 (1973). [^1]: Note that this result might still change a bit since a more sophisticated treatment of the Doppler broadening of the width of the hydrogen level has to be performed. Also, the PSI–ETHZ group did not yet quote a value for $a^+$. We rather used their figure combining the H and d results to get the band given.
--- author: - Pepijn van der Laan and Ieke Moerdijk date: 2nd April 2003 title: The bitensor algebra through operads ---

Introduction
============

Throughout this note we will work in the category of vector spaces over a field $k$. At some point we need to restrict to 1-reduced (non-$\Sigma$) cooperads. A 1-reduced (non-$\Sigma$) cooperad is a (non-$\Sigma$) cooperad $C$ such that $C(1) = k$ and $C(0) = 0$. For a collection $C$ we will extensively use the grading on the total space $\bigoplus_nC(n)$ defined by $\left(\bigoplus_nC(n)\right)^m =C(m+1)$. Throughout this note we denote by $T$ the unital free associative algebra functor, and by $T^+$ the non-unital free associative algebra functor. Similarly, $S$ (resp. $S^+$) will denote the free unital (resp. non-unital) commutative algebra functor. On pointed vector spaces (i.e. vector spaces $V$ together with a non-zero linear map $u:k\longrightarrow V$), we define the pointed tensor algebra $T_*V$ as the quotient of $TV$ by the ideal generated by $(1-u(1))$. Similarly, we define $S_*V$ as the quotient of $SV$ by the ideal generated by $(1-u(1))$. Let $B$ be a bialgebra with comultiplication $\Delta$. To $B$ we associate the opposite bialgebra $B^{\text{op}}$ with comultiplication $\Delta^{\text{op}} = s \circ\Delta$, where $s$ is the symmetry of the tensor product. We will use that $B$ is a Hopf algebra iff $B^{\text{op}}$ is a Hopf algebra.

Constructions
=============

Let $C$ be a non-$\Sigma$ cooperad with cocomposition given by maps $$\gamma^*:C(n) \longrightarrow \bigoplus_{k,n_1+\ldots+n_k = n}C(k) \otimes (C(n_1)\otimes\ldots\otimes C(n_k)).$$ Define the graded bialgebra $B_C$ to be the tensor algebra $T(\bigoplus_nC(n))$ on the total space of $C$ as a graded algebra (w.r.t. the usual grading of the total space).
Use the natural inclusions $$i_1:C(k)\longrightarrow T(\bigoplus_mC(m)), \quad\text{and}\quad i_2:C(n_1)\otimes\ldots\otimes C(n_k)\longrightarrow T(\bigoplus_mC(m))$$ to define the coproduct on generators as $$\Delta = (i_1\otimes i_2)\circ\gamma^*$$ and extend it as an algebra morphism. Define a counit $\varepsilon$ such that $\varepsilon$ vanishes on generators of degree $\neq 0$, and $\varepsilon|_{C(1)} = \varepsilon_C$, the coidentity of $C$. Again extend this as an algebra morphism. Coassociativity and counitality are immediate from the corresponding properties for $C$. Let $C$ be a cooperad (with $S_n$-action on $C(n)$). The bialgebra structure defined above descends to the symmetric algebra $S(\bigoplus_nC(n)_{S_n})$ on the total space of coinvariants of $C$. This bialgebra is denoted $\bar B_C$. To see that this bialgebra is well-defined, just note that for a cooperad we can write the cocomposition as $$\gamma^*:C(n) \longrightarrow \bigoplus_k(\bigoplus_{n_1+\ldots+n_k = n}C(k) \otimes (C(n_1)\otimes\ldots\otimes C(n_k)))^{S_k}.$$ Let $C$ be a 1-reduced non-$\Sigma$ cooperad. Define the Hopf algebra $H_C$ to be $T_*(\bigoplus_nC(n))$ as an algebra with respect to the basepoint given by the inclusion of $C(1)=k$. The coalgebra structure is induced by the coalgebra structure on $B_C$. Since $H_C$ is a connected graded bialgebra, the existence of an antipode is assured. If $C$ is a 1-reduced cooperad, the Hopf algebra structure descends to the pointed symmetric algebra $S_*(\bigoplus_nC(n)^{S_n})$ on the total space of invariants of $C$. This Hopf algebra is denoted $\bar H_C$. Let $B$ be a bialgebra. Let $C_B$ be the non-$\Sigma$ cooperad $C_B(n)= B^{\otimes n}$ (for $n\geq 1$), with the cocomposition $\gamma^*$ defined on summands by the diagram below.
$$\xymatrix{ B^{\otimes n} \ar@{.>}[r]^{\gamma^*\qquad}\ar[d]_{\Delta} & B^{\otimes k} \otimes (B^{\otimes n_1}\otimes\ldots \otimes B^{\otimes n_k})\\ B^{\otimes n}\otimes B^{\otimes n} \ar@{=}[r] &(B^{\otimes n_1}\otimes\ldots \otimes B^{\otimes n_k}) \otimes (B^{\otimes n_1}\otimes\ldots \otimes B^{\otimes n_k}), \ar[u]_{(\mu_1\otimes\ldots\otimes \mu_k)\otimes \id} }$$ where $\Delta$ is the coproduct of $B^{\otimes n}$, and $\mu_i:B^{\otimes n_i}\longrightarrow B$ is the multiplication of the algebra $B$. In Sweedler’s notation one can write the cocomposition $\gamma^*$ of $C_B$ on a generator $(x^1,\ldots,x^n)\in C_B(n)$ as $$\begin{split} \gamma^*&(x^1,\ldots,x^n) =\\ & \sum\sum(x^1_{(1)}\star\ldots\star x^{n_1}_{(1)},\ldots,x^{n-n_k+1}_{(1)}\star\ldots\star x^{n}_{(1)}) \otimes ((x^1_{(2)},\ldots,x^{n_1}_{(2)})\otimes \ldots\otimes(x^{n-n_k+1}_{(2)},\ldots,x^n_{(2)})), \end{split}$$ where the first sum is over all $k$ and all partitions $n = n_1+ \ldots +n _k$, and the second sum is the sum of the Sweedler notation, and where $\star$ denotes the product of $B$. Note that the unit of $B$ makes $C_B$ a coaugmented non-$\Sigma$ cooperad. If the bialgebra $B$ is commutative, then $C_B$ is a cooperad. The Bitensor Algebra ==================== Recall the bialgebras $T(T^+(B))$ and $S(S^+(B))$ (Brouder [@B]). Comparing the equation for $\gamma^*$ in Sweedler’s notation with Brouder’s formulas yields the following result. \[Pp:Brouder\] Let $B$ be a bialgebra. The bialgebra $T(T^+(B))$ is isomorphic to the opposite bialgebra of $B_{C_B}$. If $B$ is commutative, the bialgebra $S(S^+(B))$ is isomorphic to the opposite bialgebra of $\bar B_{C_B}$. Let $C$ be a coaugmented cooperad. Then the collection $C^{>1}$ defined by $C^{>1}(1)= k$ and $C^{>1}(n) = C(n)$ for $n>1$ is a 1-reduced cooperad with cocomposition induced by cocomposition in $C$. The Pinter Hopf algebra associated to $T(T^+(B))$ (cf. 
[@B] for terminology) is isomorphic to the opposite Hopf algebra of $H_{C^{>1}_B}$, and for $B$ commutative the Pinter Hopf algebra associated to $S(S^+(B))$ is isomorphic to the opposite Hopf algebra of $\bar H_{C^{>1}_B}$. [FvdL]{} C. Berger and I. Moerdijk - Axiomatic homotopy theory for operads. Preprint `math.AT/0206094`, 2002. C. Brouder and W. Schmitt - Quantum groups and quantum field theory III. Renormalisation. Preprint `arXiv:hep-th/0210097`, 2002. A. Frabetti and P.P.I. van der Laan - Groups and Hopf algebras from Operads. Work in progress. P.P.I. van der Laan - Ph. D. Thesis, 2003. <span style="font-variant:small-caps;">Pepijn van der Laan</span> (vdlaan@math.uu.nl) and <span style="font-variant:small-caps;">Ieke Moerdijk</span> (moerdijk@math.uu.nl)\ Mathematisch Instituut, Universiteit Utrecht, P.O.Box 80.010, 3508TA Utrecht, The Netherlands
--- abstract: 'We electron-dope single crystal samples of $\rm SrTiO_3$ by exposing them to Ar$^+$ irradiation and observe carrier mobility similar in its magnitude and temperature dependence to the carrier mobility in other electron-doped $\rm SrTiO_3$ systems. We find that some transport properties are time-dependent. In particular, the sheet resistance increases with time at a temperature-dependent rate, suggesting an activation barrier on the order of 1 eV. We attribute the relaxation effects to diffusion of oxygen vacancies - a process with energy barrier similar to the observed activation energy.' author: - Moty Schultz - Lior Klein title: 'Relaxation of transport properties in electron doped ${\rm SrTiO_3}$' --- Perovskites attract considerable interest for their wide range of intriguing properties, including colossal magnetoresistance in manganites [@colossal1; @colossal2], high-$T_C$ superconductivity in cuprates [@cuprates], ferroelectricity in titanates [@ferroelectricity], and itinerant magnetism (ferromagnetism and antiferromagnetism) in ruthenates [@AFM; @FM]. In addition to their individual intriguing properties, for applications, it is particularly appealing that perovskites-based heteroepitaxial structures can be grown epitaxially, commonly on $\rm SrTiO_3$, thus enabling a wide spectrum of new functionalities which may form the basis for future oxide electronics. In addition to serving as a substrate for perovskite films, $\rm SrTiO_3$ may be used to produce high mobility conductors that would be useful in future oxide electronics. A familiar way to obtain a $\rm SrTiO_3$ - based high mobility conductor is by electron doping [@electron; @mobility; @elec; @doping]. Recently it has been demonstrated that ${\rm SrTiO_3-LaAlO_3}$ heterostructures prepared in a particular way also yield high mobility conductivity. 
Some groups attributed this phenomenon to the formation of a quasi two dimensional electron gas at the ${\rm SrTiO_3-LaAlO_3}$ interface due to polarity discontinuity [@polarity_dis; @high; @mobility], while others argue that it is related to the formation of oxygen vacancies [@origin; @perspectives; @the; @role; @origin; @unusual]. Both methods yield high mobilities on the order of $\rm{ 10,000 \ cm^2V^{-1}s^{-1}}$ at 4.2 K [@origin; @perspectives; @electron; @mobility; @elec; @doping; @polarity_dis; @high; @mobility; @the; @role; @origin; @unusual], suggesting ${\rm SrTiO_3}$ may be an important component in oxide-based electronic devices. Some possibilities for such use have been demonstrated already in its use as a gate [@mott; @transition] and a channel [@sto; @based; @fet] in field effect transistors. Electron doping ${\rm SrTiO_3}$ is commonly achieved by creating oxygen vacancies which transform ${\rm SrTiO_3}$ into ${\rm SrTiO_{3-\delta}}$. Oxygen vacancies may be induced in various ways including high-temperature annealing in oxygen reduced pressure, and Ar$^+$-irradiation [@electronic; @transport; @elec; @doping; @localized; @metallic; @electron; @mobility; @surface]. Ar$^+$ irradiation is also the method that we use to electron-dope our samples. For any future applications of electron-doped ${\rm SrTiO_3}$, it is important to elucidate the stability of its electrical properties over time. For this reason, in this report we focus on relaxation effects of electrical transport in electron-doped ${\rm SrTiO_3}$. We have irradiated single crystal samples of SrTiO3 with Ar$^+$ and explored the changes in the sheet resistance, mobility and magnetoresistance (MR). We find that while the sheet resistance changes with time (although qualitatively it remains unchanged), the mobility and the MR are time-independent. 
Our analysis indicates that the activation energy for the observed relaxation is about 1 eV, which is the energy scale observed for diffusion of oxygen vacancies. This suggests that diffusion of oxygen vacancies is responsible for the observed relaxation effects. ![(a) Sheet resistance ($R_\Box$) of Ar$^+$ irradiated ${\rm SrTiO_3}$ as a function of temperature after 30 (squares) and 90 (circles) seconds of irradiation. Open and full symbols are used for data taken shortly after irradiation and several days after irradiation, respectively. Inset: $R_\Box$ after 90 seconds of irradiation as a function of $R_\Box$ after 60 seconds of irradiation ($\circ$). $R_\Box$ after 90 seconds of irradiation and several days of waiting, as a function of $R_\Box$ measured shortly after the irradiation (+). The lines are fits to a linear function. (b) $R_\Box$ as a function of time at six different temperatures. Inset: rate of change of $R_\Box$ as a function of temperature. The line is a fit to $\alpha e^{-\frac{E}{kT}}$[]{data-label="RvsT"}](RvsT3.eps) Our samples are commercially available [@company] one sided polished ${\rm SrTiO_3}$ crystals $(5 \times 5 \times 0.5 \ mm^3)$. The ${\rm SrTiO_3}$ samples were irradiated with Ar$^+$ ions accelerated through 4 kV, and the beam’s flux was about $10^{15}$ ions per second per ${\rm cm^2}$. The estimated penetration depth of the ions, $L$, in ${\AA}$ is given by the empirical formula [@highly; @conductive; @depth1] $L=1.1\frac{E^{2/3}W}{\rho(Z^{1/4}_i+Z^{1/4}_t)^2}$ where $E$ is the energy in eV, $W$ is the atomic weight of the target in atomic mass units, $\rho$ is the target density, and $Z_i$, $Z_t$ are the atomic numbers of the ions and the target, respectively (since ${\rm SrTiO_3}$ is a compound, we use for the target the weighted averages of the atomic weights and numbers). In our case $L\approx120\ {\AA}$; therefore, we expect the thickness of the conducting layer to be on this order.
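As a quick numerical check of the empirical penetration-depth formula above, the following sketch reproduces $L\approx120\ {\AA}$. The weighted-average atomic weight and number of ${\rm SrTiO_3}$ and its density are values we supply for illustration; only the 4 kV ion energy and the formula itself come from the text.

```python
# L = 1.1 * E^(2/3) * W / (rho * (Zi^(1/4) + Zt^(1/4))^2), lengths in Angstrom
E = 4000.0    # Ar+ energy in eV (4 kV acceleration, from the text)
W = 36.7      # weighted-average atomic weight of SrTiO3 in amu (assumed value)
rho = 5.11    # SrTiO3 density in g/cm^3 (assumed value)
Z_i = 18.0    # atomic number of Ar
Z_t = 16.8    # weighted-average atomic number of SrTiO3 (assumed value)

L = 1.1 * E**(2.0 / 3.0) * W / (rho * (Z_i**0.25 + Z_t**0.25)**2)
print(f"L = {L:.0f} Angstrom")  # close to the ~120 Angstrom quoted in the text
```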
The samples become conducting when the irradiation time exceeds 30 sec, and no more changes in conductivity are observed after several minutes of irradiation. To irradiate specific parts of a substrate in shapes that allow resistivity and Hall measurements, we use conventional photolithography that leaves 1 micron-thick photoresist on the samples except for windows in the desired shapes. Figure \[RvsT\]a shows the sheet resistance of Ar$^+$-irradiated ${\rm SrTiO_3}$, determined with four-point measurements. At low temperatures a quadratic behavior is observed for the sheet resistance ($R_\Box$), $R_\Box=a+bT^2$, typical for electron-electron interactions (except for samples with very high sheet resistance that exhibit resistivity minima). We note that the residual resistivity ratio (RRR) in our samples exceeds in some cases 500. Similar and even higher values of RRR have been reported for electron-doped ${\rm SrTiO_3}$ and ${\rm SrTiO_3-LaAlO_3}$ heterostructures [@origin; @perspectives]. The sheet resistance decreases significantly with irradiation until saturation is obtained. As we can see from the inset of Figure $\ref{RvsT}$a, there is a linear relation between sheet resistances measured after different doses of irradiation (except for the range of temperatures with resistivity minima, if they exist). This indicates that in this range of doping there is no qualitative change in the resistivity. Time-dependent measurements (Figure $\ref{RvsT}$b) show that the sheet resistance of the irradiated sample changes with time at a temperature-dependent rate. Similar to the relation between sheet resistances with different doses of irradiation, we find a linear relation also between sheet resistances measured after different waiting times (see inset of Figure $\ref{RvsT}$a); namely, there is no qualitative change in the resistivity behavior.
To extract the relevant energy scale for the relaxation in the sheet resistance, we explore the temperature dependence of the relaxation rate. As seen in the inset of Figure $\ref{RvsT}$b, this rate is well fitted by an Arrhenius law, $\alpha e^{-E/kT}$, with $E\approx0.97$ eV. This activation energy is practically identical to the activation energy of oxygen vacancies in $\rm SrTiO_3$ [@diffusion1] and $\rm Ba_{0.5}Sr_{0.5}TiO_3$ [@diffusion2]. Hence, the observed relaxation is very likely due to diffusion of oxygen vacancies. While we do not address here the change in the conducting regions due to this diffusion, we note that irradiated regions which are a few microns apart remain electrically disconnected. In the following we explore how this diffusion affects other transport properties. ![Mobility and sheet resistance ($R_\Box$) in an Ar$^+$ irradiated ${\rm SrTiO_3}$ as a function of temperature after 60 (circles) and after 90 (squares) seconds of irradiation.[]{data-label="mobility"}](mobility.eps) ![Magnetoresistance of Ar$^+$ irradiated ${\rm SrTiO_3}$ at 50K (diamonds), 80K (squares) and 100K (circles) measured after two different relaxation times (full and empty symbols). Inset: Scaling of MR data (with a particular relaxation time) according to Kohler’s rule.[]{data-label="MRvsHplot5"}](MRvsHplot5.eps) Figure \[mobility\] shows the mobility and sheet resistance of one of our irradiated samples after 60 and 90 seconds of irradiation. In contrast to the resistance, the change of the mobility with irradiation dose and relaxation time is hardly detectable. The observed mobility is consistent in magnitude and temperature dependence with previous reports [@origin; @perspectives]. Figure \[MRvsHplot5\] shows the MR at various temperatures measured after two different relaxation times.
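The activation-energy extraction described above amounts to an Arrhenius analysis of the relaxation rate: $E$ follows from the slope of $\ln(\mathrm{rate})$ versus $1/T$. A minimal sketch, using synthetic rates generated from the quoted $E\approx0.97$ eV and an assumed prefactor (both temperatures and the prefactor are illustrative, not measured values):

```python
import math

k_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(T1, r1, T2, r2):
    # For r(T) = alpha * exp(-E / (k_B * T)),
    # E = -k_B * (ln r1 - ln r2) / (1/T1 - 1/T2)
    return -k_B * (math.log(r1) - math.log(r2)) / (1.0 / T1 - 1.0 / T2)

# Synthetic rates with E = 0.97 eV and an assumed prefactor alpha:
E_true, alpha = 0.97, 1.0e8
T1, T2 = 280.0, 320.0
r1 = alpha * math.exp(-E_true / (k_B * T1))
r2 = alpha * math.exp(-E_true / (k_B * T2))
E_est = activation_energy(T1, r1, T2, r2)
print(f"E = {E_est:.2f} eV")
```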
Similar to the mobility, the magnetoresistance $\Delta\rho/\rho$ does not change with relaxation time or irradiation dose (not shown here) despite the significant change in resistance. The inset of Figure \[MRvsHplot5\] shows that the MR data at temperatures higher than 50K obey Kohler’s rule [@kohler]; namely, $\Delta\rho/\rho$ scales with $H/\rho$, implying it is a function of $H\tau$ alone (where $H$ is the magnetic field and $\tau$ is the scattering time). That the MR does not change between different relaxation times or different irradiation times suggests that the scattering time is practically unchanged. In passing, we also note that the MR below 50K does not obey Kohler’s rule and the angular dependence is different, indicating that the mechanism of the MR at low temperatures is not the same as above 50K. The linear relation between sheet resistances with different doses of irradiation and the linear relation between sheet resistances measured after different waiting times indicate that the qualitative behavior of the resistivity does not change in a detectable way. Together with the fact that the mobility and the scattering time do not change when the sheet resistance changes, this suggests that the diffusion of the oxygen vacancies decreases the number of charge carriers while hardly affecting the scattering rate of the remaining charge carriers. It remains to be checked how this observation is correlated with time-dependent variation in the thickness of the conducting layer and the spatial variation of charge carrier density within this layer. Answers to these questions are important for the potential use of electron-doped $\rm SrTiO_3$ in future oxide electronics. L.K. acknowledges support by the Israel Science Foundation founded by the Israel Academy of Sciences and Humanities. [999]{} S. Jin, T. H. Tiefel, M. McCormack, R. A. Fastnacht, R. Ramesh, and L. H. Chen, Science **264**, 413 (1994). S. Jin, T. H. Tiefel, M. McCormack, H. M. O’Bryan, L. H.
Chen, R. Ramesh, and D. Schurig, Appl. Phys. Lett. **67**, 557 (1995). J. G. Bednorz, and K. A. M$\ddot{u}$ller, Z. Phys. B **64**, 189 (1986). J. G. Bednorz, and K. A. M$\ddot{u}$ller, Phys. Rev. Lett. **52**, 2289 (1984). A. Callaghan, C. W. Moeller, and R. Ward, Inorg. Chem. **5**, 1572 (1966). M. Braden, G. Andr$\acute{e}$, S. Nakatsuji, and Y. Maeno, Phys. Rev B **58**, 847 (1998). O. N. Tufte and P. W. Chapman, Phys. Rev. **155**, 796 (1967). H. P. R. Frederikse and A. W. Hosler, Phys. Rev. **161**, 822 (1967). H. Y. Hwang, A. Ohtomo, N. Nakagawa, D. A. Muller, and J. L. Grazul, Physica E **22**, 712 (2004). A. Ohtomo and H. Y. Hwang, Nature **427**, 423 (2004). G. Herranz, M. Basletic, M. Bibes, C. Carretero, E. Tafra, E. Jacquet, K. Bouzehouane, C. Deranlot, A. Hamzic, J. -M. Broto, A. Barthélémy, and A. Fert, Phys. Rev. Lett. **98**, 216803 (2007). A. S. Kalabukhov, R. Gunnarsson, J. Börjesson, E. Olsson, T. Claeson, and D. Wingler, Phys. Rev B **75**, 121404(R) (2007). W. Siemons, G. Koster, H. Yamamoto, W. A. Harrison, T. H. Geballe, D. H. A. Blank, and M. R. Beasley, Phys. Rev. Lett. **98**, 196802 (2007). D. M. Newns, J. A. Misewich, C. C. Tsuei, A. Gupta, B. A. Scott, and A. Schrott, Appl. Phys. Lett. **73**, 780 (1998). I. Pallecchi, G. Grassano, D. Marré, L. Pellegrino, M. Putti, and A. S. Siri, Appl. Phys. Lett. **78**, 2244 (2001). K. Ueno, I. H. Inoue, H. Akoh, M. Kawasaki, Y. Tokura, and H. Takagi, Appl. Phys. Lett. **83**, 1755 (2003). K. Szot, W. Speier, R. Carius, U. Zastrow, and W. Beyer, Phys. Rev. Lett. **88**, 075508 (2002). H. P. R. Frederikse, W. R. Thurber, and W. R. Hosler, Phys. Rev. **134**, A442 (1964). V. E. Henrich, G. Dresselhaus, and H. J. Zeiger, Phys. Rev. B 17, 4908 (1978). TBL-Kelpin company. D. W. Reagor and V. Y. Butko, Nature Materials **4**, 593 (2005). Harper, J. M. E.,Cuomo, J. J. and Kaufman, H. R., J. Vac. Sci. Technol, **21**, 737-756 (1982). X. D. Zhu, Y. Y. Fei, H. B. Lu, and G. Z. Yang, Appl. Phys. Lett. 
**87**, 051903-1 (2005). S. Zafer, R. E. Jones, B. Jiang, B. White, P. Chu, D. Taylor, and S. Gillespie, Appl. Phys. Lett. **73**, 175 (1998). J. M. Ziman, Principles of the Theory of Solids, Cambridge University Press, Cambridge, 1972, pp. 250-254.
--- abstract: 'The Ni ion in LaNiO$_2$ has the same formal ionic configuration $3d^9$ as does Cu in isostructural CaCuO$_2$, but it is reported to be nonmagnetic and probably metallic whereas CaCuO$_2$ is a magnetic insulator. From [*ab initio*]{} calculations we trace its individualistic behavior to (1) reduced $3d-2p$ mixing due to an increase of the separation of site energies ($\varepsilon_d - \varepsilon_p$) of at least 2 eV, and (2) important Ni $3d(3z^2-r^2)$ mixing with La $5d(3z^2-r^2)$ states that leads to Fermi surface pockets of La $5d$ character that hole-dope the Ni $3d$ band. Correlation effects do not appear to be large in LaNiO$_2$. However, [*ad hoc*]{} increase of the intraatomic repulsion on the Ni site (using the LDA+U method) is found to lead to a novel correlated state: (i) the transition metal $d(x^2-y^2)$ and $d(3z^2-r^2)$ states undergo consecutive Mott transitions, (ii) their moments are [*antialigned*]{} leading (ideally) to a “singlet" ion in which there are two polarized orbitals, and (iii) mixing of the upper Hubbard $3d(3z^2-r^2)$ band with the La $5d(xy)$ states leaves considerable transition metal $3d$ character in a band pinned to the Fermi level. The magnetic configuration is more indicative of a Ni$^{2+}$ ion in this limit, although the actual charge changes little with $U$.' author: - 'K.-W. Lee and W. E. Pickett' title: 'Infinite Layer LaNiO$_2$: Ni$^{1+}$ is not Cu$^{2+}$' --- Introduction ============ The perovskite oxide LaNiO$_3$, purportedly an example of a correlated metallic Ni$^{3+}$ system, has been investigated over some decades by a few groups[@goodenough; @sreedhar; @gayathri] for possible exotic behavior. The oxygen-poor lanthanum nickelate LaNiO$_x$ has also attracted attention, because of characteristic changes of its electronic and magnetic properties as the oxygens are removed. 
It is metallic at $2.75 < x < 3$, but semiconducting for $2.50 < x < 2.65$.[@moriga] For $x=2.6$, it shows ferromagnetic ordering with 1.7 $\mu_B$/Ni below 230 K [@moriga] and the magnetic behavior of the $x=2.7$ material has been interpreted in terms of a model of ferromagnetic clusters.[@okajima] At $x=2.5$, where formally the Ni is divalent, a perovskite-type compound La$_2$Ni$_2$O$_5$ forms in which NiO$_6$ octahedra lie along $\it{c}$-axis directed chains and NiO$_4$ square-planar units alternate in the $\it{a-b}$ plane. This compound shows antiferromagnetic ordering of the NiO$_6$ units along the $\it{c}$ axis but no magnetic ordering of the NiO$_4$ units.[@alonso] Since LaNiO$_2$ with formally monovalent Ni ions was synthesized by Crespin [*et al.*]{},[@crespin; @levitz] it has attracted interest[@choisnet; @anisimov; @lope] because it is isostructural to CaCuO$_2$,[@siegrist] the parent “infinite layer" material of high T$_c$ superconductors, and like CaCuO$_2$ has a formal $d^9$ ion amongst closed ionic shells. However, it is difficult to synthesize and was not revisited experimentally until recently by Hayward $\it{et~al.}$, who produced it as the major phase by oxygen deintercalation from LaNiO$_3$.[@hayward1] Their materials consist of two phases, the majority being the infinite-layer (NiO$_2$-La-NiO$_2$) structure and the minority being a disordered derivative phase. Magnetization and neutron powder diffraction reveal no long-range magnetic order in their materials. Its paramagnetic susceptibility has been fit by a Curie-Weiss form in the $150 < T/K < 300$ range with S=$\frac{1}{2}$ and Weiss constant $\theta$ = -257 K, but its low-$T$ behavior deviates strongly from this form. More recently, this same group has produced the isostructural and isovalent nickelate NdNiO$_2$.[@hayward2] One of the most striking features of LaNiO$_2$ is that it potentially provides a structurally simple example of a [*monovalent open shell transition metal d$^9$ ion*]{}.
Except for the divalent Cu$^{2+}$ ion, the d$^9$ configuration is practically nonexistent in ionic solids. In particular, the formal similarity of Ni$^{1+}$ and Cu$^{2+}$ suggests that Ni$^{1+}$ compounds might provide a “platform" for additional high temperature superconductors. It is these and related questions that we address here. In this paper we present results of theoretical studies of the electronic and magnetic structures of LaNiO$_2$, and compare with the case of CaCuO$_2$ (or isovalent Ca$_{1-x}$Sr$_x$CuO$_2$) which is well characterized. A central question in transition metal oxides is the role of correlation effects, which are certainly not known [*a priori*]{} in LaNiO$_2$. We look at results both from the local density approximation (LDA) and its magnetic generalization, and then apply also the LDA+U correlated electron band theory that accounts in a self-consistent mean-field way for Hubbard-like intraatomic repulsion characterized by the strength U. Our results reveal very different behavior between LaNiO$_2$ and CaCuO$_2$, in spite of the structural and formal $d^9$ charge similarities. The differences can be traced to (1) the difference in the $3d$ site energy of Ni and Cu relative to that of the O $2p$ states, (2) the ionic charge difference between Ca$^{2+}$ and La$^{3+}$ and associated Madelung potential shifts, and (3) the participation of cation $5d$ states in LaNiO$_2$. We also discuss briefly our discovery of anomalous behavior in the transition metal $3d^9$ ion as described by LDA+U at large U. Although well beyond the physical range of U for LaNiO$_2$, we find that LDA+U produces what might be characterized as a $d^8$ “singlet" ion in which the internal configuration is one $d(x^2-y^2)$ hole with spin up and one $d(3z^2-r^2)$ hole with spin down, corresponding to an extreme spin-density anisotropy on the transition metal ion but (nearly) vanishing net moment.
Structure and calculation ========================= In the samples of LaNiO$_2$ synthesized and reported by Hayward $\it{et}$ $\it{al.}$, there exist two phases with space group $P4/\it{mmm}$ (No. 123) but different site symmetry.[@hayward1] We focus on the majority infinite-layer phase, which is isostructural with CaCuO$_2$.[@siegrist] In the crystal structure shown in Fig. 1, Ni ions are at the corners of the square and La ions lie at the center of the unit cell. The Ni-O bond length is 1.979 $\AA$, about 2% longer than that of Cu-O in CaCuO$_2$ (1.93 $\AA$). We used the lattice constants $a=3.87093\ \AA$, $c=3.3745\ \AA$,[@hayward1] with a ($\sqrt{2}\times\sqrt{2}$) supercell space group $I4/\it{mmm}$ (No. 139) for AFM calculations. The calculations were carried out with the full-potential nonorthogonal local-orbital (FPLO) method[@klaus] and a regular mesh containing 196 ${\bf k}$ points in the irreducible wedge of the Brillouin zone. Valence orbitals for the basis set were La $3s3p3d4s4p4d5s5p6s6p5d4f$, Ni $3s3p4s4p3d$, O $2s2p3s3p3d$. As frequently done when studying transition metal oxides, we have tried both of the popular forms of the functional[@u1; @u2] of the LDA+U method[@AZA] with a wide range of on-site Coulomb interaction U from 1 to 8 eV, while the intra-atomic exchange integral J=1 eV was left unchanged. For CaCuO$_2$, we used the same conditions as the previous calculation done by Eschrig $\it{et}$ $\it{al}.$ using FPLO.[@eschrig] Results ======= LDA Results ----------- We present first the LDA results. The paramagnetic (PM) band structure, with its energy scale relative to the Fermi energy $E_F$, is given in Fig. 2. A complex of La $4f$ bands is located at +2.5 eV with bandwidth less than 1 eV. The O $\it{2p}$ bands extend from about -8 eV to -3.2 eV. The Ni $\it{3d}$ bands are distributed from -3 eV to 2 eV, with the localized $t_{2g}$ complex near -1.5 eV, while the broad La $\it{5d}$ states range from -0.2 eV to 8 eV.
Unlike in PM CaCuO$_2$, there are two bands crossing E$_F$. One is like the canonical $d(x^2-y^2)$ derived band in the cuprates, rather broad due to the strong $\it{dp\sigma}$ antibonding interaction with oxygen $p_x, p_y$ states and enclosing holes centered at the M point. The other band, lying at -0.2 eV at $\Gamma$ and also having its maximum at the M=($\frac{\pi}{a},\frac{\pi}{a},0)$ point, is a mixture of La $\it{5d}(3z^2-r^2)$ states and some Ni $\it{3d}(3z^2 -r^2)$ character. Already this band indicates the importance of Ni $3d$ - La $5d$ band mixing.

[cccc]{}
parameters & LaNiO$_2$ & CaCuO$_2$ & $|\it{Ratio}|$ ($\%$)\
$\varepsilon_{0}$ & 93 & -200 & \
$t(100)$ & 381 & 534 & 71\
$t(110)$ & -81 & -84 & 96\
$t(001)$ & 58 & 83 & 70\
$t(101)$ & 0 & -2 & 0\
$t(111)$ & -14 & -19 & 74\
\[table1\]

Using a simple one-band tight binding model $$\begin{aligned} \varepsilon_k= \varepsilon_{\circ} - \sum_{R} t_R~ e^{i \vec k \cdot \vec R},\nonumber\end{aligned}$$ the Ni $3d(x^2-y^2)$ band shown in Fig. 3 can be reproduced with a few hopping amplitudes, though more than might have been anticipated. The site energy is $\varepsilon_{\circ}=93$ meV, slightly above the Fermi level, and the hopping integrals (in meV) are $t(100)=381$, $t(110)=-81$, $t(001)=58$ and $t(111)=-14$. There is no hopping along the (101) direction. As anticipated from the cuprates, the largest hopping is via $t(100)$. However, to correctly describe the $k_z$ dispersion from X-R ([*i.e.*]{} along $\pi/a,0,k_z$) together with the [*lack of dispersion*]{} from $\Gamma$ - Z ($0,0,k_z$) and also M-A ($\pi/a,\pi/a,k_z$), the third neighbor hopping term $t(111)$ must be included. The comparison of the single band tight binding parameters with those of CaCuO$_2$ is given in Table I. It should be noted that the state in mind is an $x^2 - y^2$ symmetry state that is orthogonal to those on neighboring Ni/Cu ions, [i.e.]{} an $x^2 - y^2$ symmetry Wannier orbital.
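The dispersion statements above (near-flat bands along $\Gamma$-Z and M-A, strong $k_z$ dispersion along X-R) can be checked directly from the Table I amplitudes. The sketch below uses our own convention for the neighbor-star sums, with $k_x a$, $k_y a$, $k_z c$ as the arguments; it is an illustration, not the authors' fitting code.

```python
import math

# One-band tight-binding dispersion on a tetragonal lattice with the fitted
# LaNiO2 amplitudes (meV), summed over the neighbor stars
# (100), (110), (001), (101), (111).
eps0 = 93.0
t100, t110, t001, t101, t111 = 381.0, -81.0, 58.0, 0.0, -14.0

def eps_k(kx, ky, kz):
    cx, cy, cz = math.cos(kx), math.cos(ky), math.cos(kz)
    return (eps0
            - 2 * t100 * (cx + cy)
            - 4 * t110 * cx * cy
            - 2 * t001 * cz
            - 4 * t101 * (cx + cy) * cz
            - 8 * t111 * cx * cy * cz)

pi = math.pi
flat_GZ = abs(eps_k(0, 0, 0) - eps_k(0, 0, pi))      # Gamma-Z: a few meV
flat_MA = abs(eps_k(pi, pi, 0) - eps_k(pi, pi, pi))  # M-A: a few meV
disp_XR = abs(eps_k(pi, 0, 0) - eps_k(pi, 0, pi))    # X-R: hundreds of meV
print(flat_GZ, flat_MA, disp_XR)
```

With these numbers the $t(001)$ and $t(111)$ contributions nearly cancel along $\Gamma$-Z and M-A but add along X-R, which is exactly why the $t(111)$ term is needed in the fit.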
In Ni, the on-site energy is 0.3 eV above what it is in CaCuO$_2$, lying above E$_F$ rather than below. This difference is partially due to the different Madelung potentials in the two differently-charged compounds, but it also reflects some intrinsic hole-doping in the nickelate that leads to a lower Fermi level. The largest hopping amplitude (the conventional [*t*]{}) is 71% of its value in the cuprate, while the second ([*t’*]{}) is essentially the same. The $t(001) \equiv t_z$ is also 70% of its value in the cuprate, while the other amplitudes are almost unchanged. The LDA Fermi surfaces are shown in Fig. 4. As for the cuprates, the Fermi surface is dominated by the M-centered hole barrel. In this system neighboring barrels touch at R=($\pi/a,0,\pi/c$) because the saddle point at R happens to lie at E$_F$. The Fermi surfaces also include two spheres containing electrons. The sphere at $\Gamma$, with mixed Ni and La $d(3z^2 -r^2)$ character, contains about $0.02$ electrons. The A-centered sphere is mainly Ni $d(zx)$ in character and contains approximately 0.07 electrons per Ni. The barrel, whose radius of $0.8~\pi/a$ in the (1,1,$k_z$) direction is almost independent of $k_z$ but which varies along (1,0,$k_z$), possesses about 1.1 holes, accounting for the total of the $1.0$ hole that is required by Luttinger’s theorem and also fits the formal Ni$^{1+}$ valence (which, since this is a metal with mixing of La as well as O states, is not very relevant). To investigate magnetic tendencies, attempts were made to find both ferromagnetic (FM) and antiferromagnetic (AFM) states. A stable $\sqrt{2}\times\sqrt{2}$ AFM state was obtained, with spin moment 0.53 $\mu_B$ per Ni. This state has lower energy by $6$ meV/Ni than the PM state. Just as for the paramagnetic case, the AFM state has entangled bands of La $\it{5d}$, Ni $\it{3d}$ and O $\it{2p}$ character near the Fermi energy.
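The Luttinger bookkeeping quoted above is simple enough to verify directly: the barrel holes minus the two electron spheres give back the one hole per Ni of the formal $d^9$ count.

```python
# Fermi-surface carrier counts per Ni, from the text:
holes_barrel = 1.1       # M-centered hole barrel
electrons_gamma = 0.02   # Gamma-centered electron sphere
electrons_A = 0.07       # A-centered electron sphere

net_holes = holes_barrel - electrons_gamma - electrons_A
print(f"net holes per Ni = {net_holes:.2f}")  # ~1.0, the formal Ni1+ count
```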
In contrast to the unpolarized case (and CaCuO$_2$), with AFM order the large electron pocket has primarily La $5d(xy)$ character and the slightly occupied electron pocket at $\Gamma$ has a combination of La $5d(3z^2-r^2)$ and Ni $3d(3z^2-r^2)$ character. Attempts to obtain a FM solution always led to a vanishing moment. Consideration of Correlation with LDA+U --------------------------------------- As noted in the introduction, no magnetic order has been observed in LaNiO$_2$, either by magnetization or by neutron scattering. Although the local density approximation often does quite well in predicting magnetic moments, for weakly or nearly magnetic systems renormalization by spin fluctuations becomes important[@moriya; @mazin; @mazin2] and such effects are not included in the local density approximation. There is also the question of the strength of correlation effects due to an intra-atomic repulsion $U$ on the Ni site. Analogy to CaCuO$_2$ (same formal $d^9$ configuration, neighboring ion in the periodic table), which is a strong antiferromagnetic insulator, suggests that effects due to $U$ might have some importance. Here we apply the LDA+U “correlated band theory” method to assess effects of intra-atomic repulsion and compare with observed behavior. In the following subsection we compare and contrast with CaCuO$_2$. Upon increasing $U$ from zero in the antiferromagnetically ordered phase, the spin magnetic moment of Ni increases from the LDA value of 0.53 $\mu_B$ to a maximum of 0.8 $\mu_{B}$ at $U=3$ eV. Surprisingly, for $U >$ 4 eV the moment steadily decreases and by $U= 8$ eV it has [*dropped*]{} to 0.2 $\mu_{B}$/Ni, which is less than half of its LDA value. We emphasize that this behavior is unrelated to the observed behavior of LaNiO$_2$ (which may need little or no additional correlation beyond LDA). 
However, this unprecedented response of the transition metal ion to the imposition of a large $U$ gives new insight into a feature of the LDA+U method that has not been observed previously. This “quenching” of the local moment with increasing $U$ results from behavior of the Ni $3d(3z^2-r^2)$ states that is analogous to that of the $3d(x^2-y^2)$ states, but with the direction of spin inverted (and with additional complications). As usual for a $d^9$ ion in this environment, the majority $3d(x^2-y^2)$ state of Ni is fully occupied even at $U=2$ eV, while the minority state is completely unoccupied at $U=3$ eV, where the moment is maximum and the system is essentially Ni$^{1+}$ S=$\frac{1}{2}$. One can characterize this situation as a Mott insulating $3d(x^2-y^2)$ orbital, as in the undoped cuprates. At $U$ = 3 eV, the density of states has a quasi one-dimensional van Hove singularity due to a flat band just below (bordering) the Fermi energy, as can be seen in the $3d$ DOS shown in Fig. \[2Mott\]. Upon increasing $U$ to 4 eV, rather than reinforcing the $S=\frac{1}{2}$ configuration of Ni and thereby forcing the La and O ions to cope with electron/hole doping, the Ni $d(3z^2-r^2)$ states begin to polarize. The charge on the Ni ion drops somewhat, moving it in the Ni$^{1+}$ $\rightarrow$ Ni$^{2+}$ direction, with the charge going into the La $5d$ – O $2p$ states. Idealizing a bit, one might characterize the movement of (unoccupied) [*majority*]{} character of $3d(3z^2-r^2)$ well above E$_F$ as a Mott transition of these orbitals, which is not only [*distinct from*]{} that of the $3d(x^2-y^2)$ states, but is [*oppositely*]{} directed, leading to an on-site “singlet” type of cancellation. This movement of states with increasing $U$ has been emphasized in Fig. \[2Mott\] for easier visualization. The resulting spin density on the transition metal ion at $U$ = 8 eV is pictured in Fig. \[spindens\].
There is strong polarization in all directions from the core except at the positions of nodes. The polarization is strongly positive (majority) in the lobes of the $3d(x^2-y^2)$ orbital, and just as strongly negative (minority spin) in the lobes of the $3d(3z^2-r^2)$ orbital. The net moment is (nearly) vanishing, but this results from a singlet combination (as nearly as it can be represented within a classical spin picture) of spin-half up in one orbital and spin-half down in another orbital that violates Hund’s first rule. The magnetization density is large throughout the ion, but integrates to (nearly) zero. This behavior is however more complicated than a Mott splitting of occupied and unoccupied states, as can be seen from the substantial Ni $3d$ character that remains, even for $U$ = 8 eV, in a band straddling E$_F$ while the rest of the weight moves to $\sim$4 eV. In both of these bands there is strong mixing with La $5d(xy)$ states. What happens is that as the “upper Hubbard $3d(3z^2-r^2)$ band” rises with increasing $U$, it progressively mixes more strongly with the La $5d(xy)$ states, forming a bonding band and an antibonding band. While the antibonding combination continues to move upward with increasing $U$, the bonding combination forms a half-filled band which remains at E$_F$. Thus we have found that for the Ni$^{1+}$ ion in this environment, increasing $U$ (well beyond what is physically plausible for LaNiO$_2$) results in $S=\frac{1}{2}$ Ni$^{+1}$ being converted into a nominal Ni$^{+2}$ ion (the actual charge changes little, however) in which the two holes are coupled into an intraatomic $S=0$ singlet. This behavior involves yet a new kind of correlation between the $\it{3d}(3z^2-r^2)$ states and the $\it{3d}(x^2-y^2)$ states, but one which is due to (driven by) the local environment. This behavior is quite different from the results for $U$=8 eV reported by Anisimov, Bukhvalov and Rice[@anisimov] using the Stuttgart TBLMTO-47 code.
They obtained an AFM insulating solution analogous to that obtained for CaCuO$_2$,[@eschrig] with a single hole in the $3d$ shell occupying the $3d(x^2-y^2)$ orbital that antibonds with the neighboring oxygen $2p_{\sigma}$ orbital. The reason for this different result is not known, but it is now well established that multiple solutions to the LDA+U equations often exist.[@shick1; @shick2] Comparison with CaCuO$_2$ and Discussion ====================================== Although Ni$^{+1}$ is isoelectronic to Cu$^{+2}$, both the observed and the calculated behavior of LaNiO$_2$ are very different from CaCuO$_2$. In contrast to CaCuO$_2$, LaNiO$_2$ is (apparently) metallic, with no experimental evidence of magnetic ordering. The differing electronic and magnetic properties mainly arise from two factors. First, the Ca $\it{3d}$ bands, lying between 4 eV and 9 eV, are very differently distributed from the broader and lower La $\it{5d}$ bands, which lie between -0.2 eV and 8 eV. Secondly, in CaCuO$_2$, O $\it{2p}$ states extend to the Fermi level and overlap strongly with Cu $\it{3d}$ states, the difference of the two band centers being less than 1 eV, as can be seen in Fig. \[pjdos\]. Thus, there is a strong $2p-3d$ hybridization that has been heavily discussed in high T$_c$ materials. In LaNiO$_2$, however, Ni $\it{3d}$ states lie just below the Fermi level, with O $\it{2p}$ states located $3-4$ eV below the center of the Ni bands. Therefore, p-d hybridization, which plays a crucial role in the electronic structure and superconductivity of CaCuO$_2$, becomes much weaker. Summary ======= Aside from the formal similarity to CaCuO$_2$, the interest in LaNiO$_2$ lies in the occurrence of the unusual monovalent Ni ion. As we have found, and in apparent agreement with experiment, this compound is a metal, and the “charge state” of a transition metal atom in a metal usually has much less significance than it does in an insulator.
It may be because the compound is metallic that it is stable, but in this study we are not addressing energetics and stability questions. Hayward $\it{et}$ $\it{al}.$[@hayward1] had already suggested that the experimental findings could arise from reduced covalency between the Ni $3d$ and O $2p$ orbitals, and the 30% smaller value of the hopping amplitude $t$ indeed reflects the smaller covalency, as does the increased separation between the Ni $3d$ and O $2p$ bands. It is something of an enigma that in CaCuO$_2$ and other cuprates, LDA calculations fail to give the observed antiferromagnetic states, while in LaNiO$_2$ LDA predicts a weak antiferromagnetic state when no magnetism is observed. In the cuprates the cause is known and is treated in a reasonable way by application of the LDA+U method. In this nickelate, application of the LDA+U method does not seem to be warranted (although novel behavior occurs if it is used). Rather, the prediction of weak magnetism adds this compound to the small but growing number of systems (ZrZn$_2$,[@zrzn2] Sc$_3$In,[@sc3in] and Ni$_3$Ga,[@mazin2] for example) in which the tendency toward magnetism is overestimated by the local density approximation. It appears that this tendency can be corrected by accounting for magnetic fluctuations.[@moriya; @mazin2] Acknowledgment ============== We acknowledge useful communication with M. Hayward during the course of this research, and discussions with J. Kuneš and P. Novak about the behavior of the LDA+U method. This work was supported by National Science Foundation Grant DMR-0114818. [10]{} J. B. Goodenough and P. M. Raccah, J. Appl. Phys. [**[36]{}**]{}, 1031 (1965). K. Sreedhar, J. M. Honig, M. Darwin, M. McElfresh, P. M. Shand, J. Xu, B. C. Crooker and J. Spalek, Phys. Rev. B [**46**]{}, 6382 (1992). N. Gayathri, A. K. Raychaudhuri, X. Q. Xu, J. L. Peng and R. L. Greene, J. Phys.: Condens. Matt. [**10**]{}, 1323 (1998). T. Moriga, O. Usaka, I. Nakabayashi, T. Kinouchi, S.
Kikkawa and F. Kanamaru, Solid State Ionics [**79**]{},252 (1995). Y. Okajima, K. Kohn and K. Siratori, J. Mag. Mag. Mat. [**140-144**]{}, 2149 (1995). J. A. Alonso, M. J. Mart$\acute{i}$nez-Lope, J. L. Garc$\acute{i}$a-Mu$\tilde{n}$oz and M. T. Fern$\acute{a}$ndez-D$\acute{i}$az, J. Phys.: Condens. Matt. [**9**]{}, 6417 (1997). M. Crespin, P. Levitz and L. Gatineau, J. Chem. Soc., Faraday Trans. 2 [**79**]{}, 1181 (1983). P. Levitz, M. Crespin and L. Gatineau, J. Chem. Soc., Faraday Trans. 2 [**79**]{}, 1195 (1983). J. Choisnet, R. A. Evarestov, I. I. Tupitsyn and V. A. Veryazov, J. Phys. Chem. Solids [**57**]{}, 1839 (1996). V. I. Anisimov, D. Bukhvalov and T. M. Rice, Phys. Rev. B [**59**]{}, 7901 (1999). M. J. Mart$\acute{i}$nez-Lope, M. T. Casais and J. A. Alonso, J. Alloys Comp. [**275-277**]{}, 109 (1998). T. Siegrist, S. M. Zahurak, D. W. Murphy and R. S. Roth, Nature [**334**]{}, 231 (1988). M. A. Hayward, M. A. Green, M. J. Rosseinsky and J. Sloan, J. Am. Chem. Soc. [**121**]{}, 8843 (1999). M. A. Hayward and M. J. Rosseinsky, Solid State Sciences [**5**]{}, 839 (2003). K. Koepernik and H. Eschrig, Phys. Rev. B [**59**]{}, 1743 (1999). M. T. Czyzyk and G. A. Sawatzky, Phys. Rev. B [**49**]{}, 14211 (1994). V. I. Anisimov, I. V. Solovyev, M. A. Korotin, M. T. Czyzyk and G. A. Sawatzky, Phys. Rev. B [**48**]{}, 16929 (1993). V. I. Anisimov, J. Zaanen and O. K. Andersen, Phys. Rev. B [**44**]{}, 943 (1991). H. Eschrig, K. Koepernik and I. Chaplygin, J. Solid State Chem. [**176**]{}, 482 (2003). T. Moriya, [*Spin Fluctuations in Itinerant Electron Magnetism*]{} (Berlin, Springer, 1985). I. I. Mazin, D. J. Singh and A. Aguayo, cond-mat/0401563. A. Aguayo, I. I. Mazin and D. J. Singh, Phys. Rev. Lett. [**92**]{}, 147201 (2004). A. B. Shick, W. E. Pickett and A. I. Liechtenstein, J. Elect. Spectrosc. & Rel. Phenom. [**114-116**]{}, 753 (2001). A. B. Shick, V. Janis, V. Drchal and W. E. Pickett, Phys. Rev. B (2004, in press). D. J. Singh and I. I. Mazin, Phys. 
Rev. B [**69**]{}, 020402 (2004). A. Aguayo and D. J. Singh, Phys. Rev. B [**66**]{}, 020401 (2002).
--- abstract: 'A self-map $T$ of a $\nu$-generalized metric space $(X,d\,)$ is said to be a Ćirić-Matkowski contraction if $d(Tx,Ty)<d(x,y)$, for $x\neq y$, and, for every ${\epsilon}>0$, there is ${\delta}>0$ such that $d(x,y)<{\delta}+{\epsilon}$ implies $d(Tx,Ty)\leq {\epsilon}$. In this paper, fixed point theorems for this kind of contractions of $\nu$-generalized metric spaces are presented. Then, by replacing the distance function $d(x,y)$ with functions of the form $m(x,y)=d(x,y)+{\gamma}\bigl(d(x,Tx)+d(y,Ty)\bigr)$, where ${\gamma}>0$, results analogous to those due to P. D. Proinov (Fixed point theorems in metric spaces, Nonlinear Anal. 64 (2006) 546–557) are obtained.' author: - Mortaza Abtahi --- Introduction {#sec:intro} ============ Throughout the paper, the set of integers is denoted by ${\mathbb{Z}}$, the set of nonnegative integers is denoted by ${\mathbb{Z}}^+$, and the set of positive integers is denoted by ${\mathbb{N}}$. Fixed point theory in metric spaces has many applications. It is natural that there have been several attempts to extend it to more general settings. One of these generalizations was introduced by Branciari in 2000, where the triangle inequality was replaced by a so-called *quadrilateral inequality.* He introduced the concept of $\nu$-generalized metric spaces as follows; see also [@Alamri-Suzuki-Khan; @Kadelburg-Radenovic-1; @Kirk-Shahzad; @Suzuki-Alamri-Khan]. \[dfn:nu-generalized-ms\] Let $X$ be a nonvoid set and $d:X\times X\to[0,\infty)$ be a function. Let $\nu\in{\mathbb{N}}$. Then $(X,d\,)$ is called a *$\nu$-generalized metric space* if the following hold: 1. $d(x,y)=0$ if and only if $x=y$, for every $x,y\in X$; 2. $d(x,y)=d(y,x)$, for every $x,y\in X$; 3. \[item:nu-angle-inequality\] $d(x,y) \leq d(x,u_1)+d(u_1,u_2)+\dotsb+d(u_\nu,y)$, for every set $\{x,u_1,\dotsc,u_\nu,y\}$ of $\nu+2$ elements of $X$ that are all different. Obviously, $(X,d\,)$ is a metric space if and only if it is a $1$-generalized metric space.
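For small finite examples, the three axioms above can be checked by brute force. The sketch below (in Python; the function name is mine, not from the paper) tests a candidate distance function against Definition \[dfn:nu-generalized-ms\]. Note that any ordinary metric passes for every $\nu$, since the polygon inequality over $\nu+2$ distinct points follows from repeated applications of the triangle inequality.

```python
from itertools import permutations

def is_nu_generalized(points, d, nu):
    """Brute-force check of axioms (1)-(3) of a nu-generalized
    metric space on a finite point set."""
    # (1) d(x, y) = 0 iff x = y, and (2) symmetry
    for x in points:
        if d(x, x) != 0:
            return False
    for x in points:
        for y in points:
            if x != y and (d(x, y) == 0 or d(x, y) != d(y, x)):
                return False
    # (3) polygon inequality over nu + 2 pairwise distinct points
    for tup in permutations(points, nu + 2):
        x, *mid, y = tup
        chain = sum(d(a, b) for a, b in zip((x, *mid), (*mid, y)))
        if d(x, y) > chain + 1e-12:
            return False
    return True
```

For instance, the ordinary metric $d(x,y)=|x-y|$ on a handful of integers is a $\nu$-generalized metric for $\nu=1,2,\dotsc$, while the zero function fails axiom (1).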
In [@Alamri-Suzuki-Khan], the completeness of $\nu$-generalized metric spaces is discussed. In [@Suzuki], it is shown that not every generalized metric space has a compatible topology. Let $(X,d\,)$ be a $\nu$-generalized metric space. Let $k\in{\mathbb{N}}$. A sequence $\{x_n\}$ in $X$ is said to be *$k$-Cauchy* if $$\label{eqn:k-Cauchy} \lim_{n\to\infty} \sup{\{d(x_n,x_{n+1+mk}):m\in {\mathbb{Z}}^+\}}=0.$$ The sequence $\{x_n\}$ is said to be *Cauchy* if it is $1$-Cauchy. The concept of Cauchy sequences in $\nu$-generalized metric spaces is studied in [@Alamri-Suzuki-Khan; @Suzuki-Alamri-Khan]; see also [@Branciari]. \[prop:nu-Cauchy-is-Cauchy\] Let $(X,d\,)$ be a $\nu$-generalized metric space and let $\{x_n\}$ be a sequence in $X$ such that $x_n\ (n\in{\mathbb{N}})$ are all different. Suppose $\{x_n\}$ is $\nu$-Cauchy. If $\nu$ is odd, or if $\nu$ is even and $d(x_n,x_{n+2})\to0$, then $\{x_n\}$ is Cauchy. A sequence $\{x_n\}$ in a $\nu$-generalized metric space $(X,d\,)$ is said to *converge* to $x$ if $d(x,x_n)\to0$ as $n\to\infty$. The sequence $\{x_n\}$ is said to *converge to $x$ in the strong sense* if $\{x_n\}$ is Cauchy and $\{x_n\}$ converges to $x$. The space $X$ is said to be *complete* if every Cauchy sequence in $X$ converges. \[prop:d-is-continuous\] Let $\{x_n\}$ and $\{y_n\}$ be sequences in $X$ that converge to $x$ and $y$ in the strong sense, respectively. Then $$d(x,y) = \lim_{n\to\infty} d(x_n,y_n).$$ Branciari, in [@Branciari], proved a generalization of the Banach contraction principle. As it is mentioned in [@Alamri-Suzuki-Khan], his proof is not correct because a $\nu$-generalized metric space does not necessarily have a compatible topology; see [@Kadelburg-Radenovic-2], [@Samet; @Sarma; @Suzuki] and [@Turinici]. A proof of the Banach contraction principle, as well as proofs of Kannan’s and Ćirić’s fixed point theorems, in $\nu$-generalized metric spaces, can be found in [@Suzuki-Alamri-Khan].
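To make the $k$-Cauchy condition concrete, here is a small numerical illustration of my own (not from the paper), with $X={\mathbb{R}}$ and the ordinary metric $d(x,y)=|x-y|$, so $\nu=1$ suffices but any $k$ can be tested. The supremum over $m$ is approximated by truncating at a finite bound.

```python
def k_cauchy_sup(x, n, k, m_max):
    """Approximate sup over m <= m_max of d(x_n, x_{n+1+m*k}),
    with d(a, b) = |a - b| on the real line."""
    return max(abs(x(n) - x(n + 1 + m * k)) for m in range(m_max + 1))

# x_n = 2^{-n} is Cauchy in (R, |.|); the truncated suprema below
# decrease to 0, illustrating that the sequence is 3-Cauchy
# (and nu = 3 being odd, Cauchy as well, by the proposition above).
sups = [k_cauchy_sup(lambda j: 2.0 ** (-j), n, 3, 200) for n in range(20)]
```

Here $\sup_m d(x_n,x_{n+1+3m}) = 2^{-n}(1-2^{-(1+3m)})$ is essentially $2^{-n}$, so the sequence of suprema is strictly decreasing to $0$.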
Let $X$ be a complete $\nu$-generalized metric space, and let $T$ be a self-map of $X$. For every $x,y\in X$, let $$m(x,y) = \max{\{d(x,y),d(x,Tx),d(y,Ty),d(x,Ty),d(y,Tx)\}}.$$ Assume there exists $r\in[0,1)$ such that $d(Tx,Ty) \leq r m(x,y)$, for all $x,y\in X$. Then $T$ has a unique fixed point $z$ and, moreover, for any $x\in X$, the Picard iterates $T^n x$ $(n\in{\mathbb{N}})$ converge to $z$ in the strong sense. The paper is organized as follows. In section \[sec:pre\], we study Cauchy sequences in $\nu$-generalized metric spaces. We present a necessary and sufficient condition for a sequence to be Cauchy. Next, in section \[sec:fixed-point-theorems\], we give new fixed point theorems in $\nu$-generalized metric spaces. These results are generalizations to $\nu$-generalized metric spaces of theorems of Meir and Keeler [@Meir-Keeler-1969], Ćirić [@Ciric-1981] and Matkowski [@Kuczma Theorem 1.5.1], and Proinov [@Proinov-2006]. Results on Cauchy Sequences {#sec:pre} =========================== The following is the main result of the section. \[lem:technical-lemma\] Let $\{x_n\}$ be a sequence in a $\nu$-generalized metric space $X$ such that $x_n\ (n\in{\mathbb{N}})$ are all different. Suppose, for every ${\epsilon}>0$, for any two subsequences $\{x_{p_i}\}$ and $\{x_{q_i}\}$, if $\limsup\limits_{i\to\infty}d(x_{p_i},x_{q_i})\leq {\epsilon}$, then, for some $N$, $$\label{eqn:d(xpn+nun,xqn+nun)<=e} d(x_{p_i+1},x_{q_i+1}) \leq {\epsilon}\quad (i\geq N).$$ If $d(x_n,x_{n+1})\to0$, then $\{x_n\}$ is $\nu$-Cauchy. Suppose ${\{x_n\}}$ is not $\nu$-Cauchy. Then \[eqn:k-Cauchy\] fails to hold for $k=\nu$.
Hence, there is ${\epsilon}>0$ such that $$\label{eqn:negation-of-Cauchy} \forall k\in{\mathbb{N}},\ \exists\, n\geq k, \quad \sup{\{d(x_n,x_{n+1+m\nu}):m\in{\mathbb{Z}}^+\}}>{\epsilon}.$$ Since $d(x_n,x_{n+1})\to0$, there exist positive integers $k_1<k_2<\dotsb$ such that $$d(x_n,x_{n+1}) < {\epsilon}/i \quad (n\geq k_i).$$ For each $k_i$, by \[eqn:negation-of-Cauchy\], there exist $n_i\geq k_i+1$ and $m_i\in{\mathbb{Z}}^+$ such that $$d(x_{n_i},x_{n_i+1+m_i\nu})>{\epsilon}.$$ Since $d(x_{n_i},x_{n_i+1})<{\epsilon}$, we have $m_i\geq 1$. We let $m_i$ be the smallest number with this property, so that $d(x_{n_i},x_{n_i+1+m_i\nu-\nu}) \leq {\epsilon}$. Now, let $p_i=n_i-1$ and $q_i=n_i+m_i\nu$. Then $q_i > p_i \geq k_i$, and $$d(x_{p_i+1},x_{q_i+1})>{\epsilon},\quad d(x_{p_i+1},x_{q_i+1-\nu}) \leq {\epsilon}.$$ Using property \[item:nu-angle-inequality\] in Definition \[dfn:nu-generalized-ms\], since all $x_n\ (n\in{\mathbb{N}})$ are different, for every $i\in{\mathbb{N}}$, we have $$\begin{split} d(x_{p_i},x_{q_i}) \leq d(x_{p_i},x_{p_i+1}) & + d(x_{p_i+1},x_{q_i+1-\nu}) \\ & + d(x_{q_i+1-\nu},x_{q_i+2-\nu}) + \dotsb + d(x_{q_i-1},x_{q_i}). \end{split}$$ Therefore, $d(x_{p_i},x_{q_i}) \leq \nu{\epsilon}/i + {\epsilon}$, and thus $\limsup\limits_{i\to\infty} d(x_{p_i},x_{q_i}) \leq {\epsilon}$. By the hypothesis of the lemma, this gives $d(x_{p_i+1},x_{q_i+1}) \leq {\epsilon}$ for all large $i$, which is a contradiction, since $d(x_{p_i+1},x_{q_i+1})>{\epsilon}$, for all $i$. Suppose $\{x_n\}$ satisfies all conditions in Lemma $\ref{lem:technical-lemma}$, and, moreover, $d(x_n,x_{n+2})\to0$. Then $\{x_n\}$ is Cauchy. By Lemma \[lem:technical-lemma\], the sequence $\{x_n\}$ is $\nu$-Cauchy. Since $d(x_n,x_{n+2})\to0$, by Proposition \[prop:nu-Cauchy-is-Cauchy\], the sequence $\{x_n\}$ is Cauchy. \[thm:main\] Let $\{x_n\}$ be a sequence in $X$ such that $x_n\ (n\in{\mathbb{N}})$ are all different and $d(x_n,x_{n+1})+d(x_n,x_{n+2})\to0$.
Assume ${m}(x,y)$ is a nonnegative function on $X\times X$ such that, for any two subsequences $\{x_{p_i}\}$ and $\{x_{q_i}\}$, $$\label{eqn:limsup m <= limsup d} \limsup_{i\to\infty} {m}(x_{p_i},x_{q_i}) \leq \limsup_{i\to\infty} d(x_{p_i},x_{q_i}).$$ The following condition then implies that $\{x_n\}$ is Cauchy: for every ${\epsilon}>0$, for any two subsequences $\{x_{p_i}\}$ and $\{x_{q_i}\}$, if $\limsup {m}(x_{p_i},x_{q_i})\leq {\epsilon}$, then, for some $N$, $$d(x_{p_i+1},x_{q_i+1}) \leq {\epsilon}\quad (i\geq N).$$ Follows directly from Lemma \[lem:technical-lemma\] and Proposition \[prop:nu-Cauchy-is-Cauchy\]. Fixed Point Theorems of Ćirić-Matkowski Type {#sec:fixed-point-theorems} ============================================ Let $(X,d\,)$ be a $\nu$-generalized metric space. A mapping $T:X\to X$ is said to be a *[Ćirić-Matkowski]{} contraction* if $d(Tx,Ty)<d(x,y)$, for every $x,y\in X$, with $x\neq y$, and, for any ${\epsilon}>0$, there exists ${\delta}>0$ such that $$\label{eqn:CM-contraction} \forall x,y\in X, \quad d(x, y) < {\delta}+{\epsilon}\Longrightarrow d(Tx,Ty)\leq {\epsilon}.$$ \[lem:equiv-conditions-m-contractive-sequence\] For a sequence ${\{x_n\}}$ in $X$ and a nonnegative function ${m}(x,y)$ on $X\times X$, the following are equivalent: 1. \[item:it-is-m-contractive-sequence\] for every ${\epsilon}>0$, there exist ${\delta}>0$ and $N\in{\mathbb{Z}}^+$ such that $$\label{eqn:m-contractive-sequence} \forall p,q\geq N, \quad {m}(x_p,x_q) < {\epsilon}+{\delta}\Longrightarrow d(x_{p+1},x_{q+1}) \leq {\epsilon}.$$ 2. \[item:m-contractive-sequence-in-term-of-pnqn\] for every ${\epsilon}>0$, for any two subsequences $\{x_{p_i}\}$ and $\{x_{q_i}\}$, if $\limsup {m}(x_{p_i},x_{q_i})\leq {\epsilon}$ then, for some $N$, $d(x_{p_i+1},x_{q_i+1}) \leq {\epsilon}\ (i\geq N)$. Now, suppose $T$ is a [Ćirić-Matkowski]{} contraction on $X$, take a point $x\in X$, and set $x_n=T^nx$ ($n\in{\mathbb{N}}$).
Then, for every ${\epsilon}>0$, there exists ${\delta}>0$ such that $d(x_p,x_q) < {\epsilon}+{\delta}$ implies $d(x_{p+1},x_{q+1}) \leq {\epsilon}$. By the above lemma, condition \[item:m-contractive-sequence-in-term-of-pnqn\] holds with ${m}=d$. \[lem:the-same-or-different\] Let $T:X\to X$ be a mapping. Suppose $d(T^nx,T^{n+1}x)\to0$, for some $x\in X$. Then, for some $k\in{\mathbb{N}}$, either the Picard iterates $T^n x$ $(n\geq k)$ are all different or they are all the same. Suppose $T^{k+m}x=T^kx$, for some $k,m\in{\mathbb{N}}$, and let $m$ be the smallest positive integer with this property. If $m=1$, that is $T^{k+1}x=T^kx$, then $T^nx=T^kx$, for $n\geq k$, and there is nothing to prove. If $m\geq2$, then any two successive elements in the following sequence are different: $$T^kx,T^{k+1}x,\dotsc,T^{k+m-1}x,T^{k+m}x,T^{k+m+1}x,\dotsc$$ \[thm:m-contractive-T-produces-Cauchy\] Let $T$ be a self-map of $X$ and ${m}(x,y)$ be a nonnegative function on $X\times X$. Suppose, for some point $x\in X$, the following conditions hold: 1. for any ${\epsilon}>0$, there exist ${\delta}>0$ and $N\in{\mathbb{Z}}^+$ such that $$\label{eqn:m-contractive-orbits} \forall p,q\geq N, \quad {m}(T^px, T^qx) < {\delta}+{\epsilon}\Longrightarrow d(T^{p+1} x, T^{q+1}x)\leq {\epsilon},$$ 2. condition \[eqn:limsup m <= limsup d\] holds for any two subsequences $\{T^{p_i}x\}$ and $\{T^{q_i}x\}$ of $\{T^nx\}$, 3. $d(T^nx,T^{n+1}x)+d(T^nx,T^{n+2}x)\to0$. Then ${\{T^nx\}}$ is a Cauchy sequence. Using Lemma \[lem:equiv-conditions-m-contractive-sequence\], condition \[eqn:m-contractive-orbits\] implies that, for every ${\epsilon}>0$, for any two subsequences $\{T^{p_i}x\}$ and $\{T^{q_i}x\}$ of $\{T^nx\}$, if $\limsup {m}(T^{p_i}x,T^{q_i}x)\leq {\epsilon}$ then, for some $N$, $d(T^{p_i+1}x,T^{q_i+1}x) \leq {\epsilon}$ $(i\geq N)$. By Lemma \[lem:the-same-or-different\], either the Picard iterates $T^nx$ are eventually all the same, in which case $\{T^nx\}$ is obviously a Cauchy sequence, or they are eventually all different. In the latter case, Theorem \[thm:main\] shows that $\{T^nx\}$ is Cauchy.
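A one-dimensional illustration of my own (not from the paper; $X=[0,\infty)$ with the ordinary metric, so $\nu=1$): the map $T(x)=x/(1+x)$ satisfies $d(Tx,Ty)<d(x,y)$ for $x\neq y$ and, one can check, the Ćirić-Matkowski condition, while failing to be a Banach contraction, and its Picard iterates converge to the unique fixed point $0$.

```python
def T(x):
    # T(x) = x/(1+x) on [0, oo): |Tx - Ty| = |x - y| / ((1+x)(1+y)),
    # so d(Tx, Ty) < d(x, y) for x != y, but there is no uniform
    # Lipschitz constant r < 1 (T'(0) = 1): not a Banach contraction.
    return x / (1.0 + x)

# Picard iterates: from x_0 = 1, x_n = 1/(n+1) in closed form,
# converging to the unique fixed point z = 0.
x = 1.0
for _ in range(1000):
    x = T(x)
```

The closed form $x_n = 1/(n+1)$ follows by induction: $T(1/k) = (1/k)/(1+1/k) = 1/(k+1)$.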
Let $T$ be a [Ćirić-Matkowski]{} contraction on $X$. Then $T$ has a unique fixed point $z$, and, moreover, for any $x\in X$, the sequence $\{T^nx\}$ converges to $z$ in the strong sense. First, we show that $T$ has at most one fixed point. Suppose $Tz=z$ and $y\neq z$. Then $d(Ty,Tz)=d(Ty,z)<d(y,z)$. Hence $Ty\neq y$. Given $x\in X$, we consider the following two cases. 1. There exist $k,m\in{\mathbb{N}}$ such that $T^{k+m}x=T^kx$. 2. $T^nx$ $(n\in{\mathbb{N}})$ are all different. In case (a), where $T^{k+m}x=T^kx$, for some $k,m\in{\mathbb{N}}$, we let $m$ be the smallest positive integer with this property. If $m=1$, that is $T^{k+1}x=T^kx$, then $T^nx=T^kx$, for $n\geq k$, and there is nothing to prove. If $m\geq2$, then any two successive elements in the following sequence are different: $$T^kx,T^{k+1}x,\dotsc,T^{k+m-1}x,T^{k+m}x,T^{k+m+1}x,\dotsc$$ Recall that $x\neq y$ implies $d(Tx,Ty)<d(x,y)$. Hence $$\begin{aligned} d(T^kx,T^{k+1}x) & = d(T^{k+m}x,T^{k+m+1}x) < d(T^{k+m-1}x,T^{k+m}x) \\ & < \dotsb < d(T^{k+1}x,T^{k+2}x) < d(T^kx,T^{k+1}x). \end{aligned}$$ This is absurd. In case (b), we let $x_n=T^nx$, and show that $d(x_n,x_{n+i})\to0$, for $i=1,2$. Since $x_n$ $(n\in{\mathbb{N}})$ are all different, we have $d(x_{n+1},x_{n+i+1})<d(x_{n},x_{n+i})$, for every $n$, that is, the sequence ${\epsilon}_n=d(x_n,x_{n+i})$ is decreasing and thus ${\epsilon}_n \downarrow{\epsilon}$ for some ${\epsilon}\geq0$. If ${\epsilon}>0$, there is ${\delta}>0$ such that ${\epsilon}_n = d(x_n,x_{n+i}) < {\epsilon}+{\delta}$ implies that ${\epsilon}_{n+1} = d(x_{n+1},x_{n+i+1})\leq {\epsilon}$. Since ${\epsilon}_n\downarrow{\epsilon}$, we eventually have ${\epsilon}_n<{\epsilon}+{\delta}$, and hence ${\epsilon}_{n+1}\leq{\epsilon}$. This is a contradiction since we have ${\epsilon}<{\epsilon}_n$, for all $n$. Hence, $d(x_n,x_{n+i})\to0$ $(i=1,2)$. Now, by Theorem \[thm:m-contractive-T-produces-Cauchy\], the sequence $\{T^nx\}$ is Cauchy. Since $X$ is complete, $\{T^nx\}$ converges to some $z\in X$.
By Proposition \[prop:d-is-continuous\], we have $$d(z,Tz) = \lim_{n\to\infty} d(T^nx,Tz) \leq \lim_{n\to\infty} d(T^{n-1}x,z) = 0.$$ Hence $Tz=z$, i.e., $z$ is a fixed point of $T$. Let $\{x_n\}$ be a sequence in a $\nu$-generalized metric space $X$ such that $x_n$ $(n\in{\mathbb{N}})$ are all different. If $d(x_n,x_{n+1})+d(x_n,x_{n+2})\to0$, then $$d(x_n,x_{n+m})\to0, \quad (m\geq 3).$$ A self-mapping $T$ of a $\nu$-generalized metric space $X$ is said to be *sequentially continuous* if $\{Tx_n\}$ converges to $Tx$ whenever $\{x_n\}$ converges to $x$. The mapping $T$ is called *asymptotically regular* if $$d(T^nx,T^{n+1}x)+d(T^nx,T^{n+2}x)\to0\quad (x\in X).$$ We are now in a position to state and prove a version of Proinov’s theorem, [@Proinov-2006 Theorem 4.2], for $\nu$-generalized metric spaces. \[thm:Generalized-Proinov\] Let $X$ be a complete $\nu$-generalized metric space, and $T$ be a sequentially continuous and asymptotically regular self-map of $X$. For ${\gamma}>0$, define ${m}$ on $X\times X$ by $$\label{eqn:m(x,y)-in-generalized-Proinov} m(x,y)=d(x,y)+{\gamma}\bigl(d(x,Tx)+d(y,Ty)\bigr).$$ Suppose $d(Tx,Ty)<{m}(x,y)$, for every $x,y\in X$, with $x\neq y$, and, for any ${\epsilon}>0$, there exist ${\delta}>0$ and $N\in{\mathbb{Z}}^+$ such that $$\label{eqn:m-contractive-in-generalized-Proinov} \forall x,y\in X, \quad {m}(T^Nx, T^Ny) < {\delta}+{\epsilon}\Longrightarrow d(T^{N+1}x,T^{N+1}y)\leq {\epsilon}.$$ Then $T$ has a unique fixed point $z$, and, for any $x\in X$, the Picard iterates $T^nx$ $(n\in{\mathbb{N}})$ converge to $z$ in the strong sense. First, let us prove that $T$ has at most one fixed point. Suppose $Ty=y$ and $Tz=z$ with $y\neq z$. Then ${m}(y,z)=d(y,z)$, while $d(Ty,Tz)<{m}(y,z)$ gives $d(y,z)<d(y,z)$, a contradiction. Hence $y=z$. Now, choose $x\in X$ and set $x_n=T^nx$ $(n\in{\mathbb{N}})$. Since $T$ is assumed to be asymptotically regular, we have $d(x_n,Tx_n)=d(x_n,x_{n+1})\to0$. Hence, \[eqn:limsup m <= limsup d\] holds, for any two subsequences $\{x_{p_i}\}$ and $\{x_{q_i}\}$.
By Theorem \[thm:main\], the sequence $\{T^nx\}$ is Cauchy and, since $X$ is complete, it converges to some point $z\in X$. Since $T$ is sequentially continuous, we have $Tz=z$. [99]{} M. Abtahi, *Fixed point theorems for Meir-Keeler type contractions in metric spaces*, Fixed Point Theory (to appear). B. Alamri, T. Suzuki and L. A. Khan, *Caristi’s Fixed Point Theorem and Subrahmanyam’s Fixed Point Theorem in $\nu$-Generalized Metric Spaces*, Journal of Function Spaces, 2015, Article ID 709391. A. Branciari, *A fixed point theorem of Banach-Caccioppoli type on a class of generalized metric spaces*, Publicationes Mathematicae Debrecen, 57 (2000), 31–37. Lj. B. Ćirić, *A new fixed-point theorem for contractive mappings*, Publ. Inst. Math. (N.S.) **30** (44) (1981), 25–27. Z. Kadelburg and S. Radenović, *On generalized metric spaces: A survey*, TWMS J. Pure Appl. Math., 5 (2014), 3–13. Z. Kadelburg and S. Radenović, *Fixed point results in generalized metric spaces without Hausdorff property*, Mathematical Sciences, vol. 8, article 125, 2014. R. Kannan, *Some results on fixed points-II*, Amer. Math. Monthly, 76 (1969), 405–408. W. A. Kirk and N. Shahzad, *Generalized metrics and Caristi’s theorem*, Fixed Point Theory Appl., 2013, 2013:129. M. Kuczma, B. Choczewski, R. Ger, *Iterative Functional Equations*, Encyclopedia of Mathematics and its Applications, vol. 32, Cambridge University Press, Cambridge, 1990. A. Meir, E. Keeler, *A theorem on contraction mappings*, J. Math. Anal. Appl., **28** (1969), 326–329. Petko D. Proinov, *Fixed point theorems in metric spaces*, Nonlinear Anal. 64 (2006), 546–557. B. Samet, *Discussion on ‘a fixed point theorem of Banach-Caccioppoli type on a class of generalized metric spaces’ by A. Branciari*, Publicationes Mathematicae, vol. 76, no. 4, pp. 493–494, 2010. I. R. Sarma, J. M. Rao, S. S. Rao, *Contractions over generalized metric spaces*, Journal of Nonlinear Science and its Applications, vol. 2, no. 3, pp. 180–182, 2009. T. Suzuki, *Generalized metric spaces do not have the compatible topology*, Abstr. Appl. Anal., 2014, Art. ID 458098, 5 pp. T. Suzuki, B. Alamri and L. A. Khan, *Some notes on fixed point theorems in $\nu$-generalized metric spaces*, Bull. Kyushu Inst. Tech. Pure Appl. Math. 62 (2015), 15–23. M. Turinici, *Functional contractions in local Branciari metric spaces*, ROMAI Journal, vol. 8, no. 2, pp. 189–2012.
--- abstract: | We consider the class of the topologically locally finite (in short TLF) planar vertex-transitive graphs, a class containing in particular all the one-ended planar Cayley graphs and the normal transitive tilings. We characterize these graphs with a finite local representation and a special kind of finite state automaton named *labeling scheme*. As a result, we are able to enumerate and describe all TLF-planar vertex-transitive graphs of any given degree. Also, we are able decide to whether any TLF-planar transitive graph is Cayley or not.\ **Keywords:** vertex-transitive, planar graph, tiling, topologically locally finite, labeling scheme author: - | David [Renault]{}\ [renault@labri.fr](renault@labri.fr)\ LaBRI – Université Bordeaux I\ 351, cours de la Libération\ 33400 Talence, <span style="font-variant:small-caps;">France</span> bibliography: - 'prethese.bib' title: | The vertex-transitive\ TLF-planar graphs --- Introduction {#introduction .unnumbered} ============ Vertex-transitive graphs – or transitive graphs in short – are graphs whose group of automorphisms acts transitively on their sets of vertices. These graphs possess a regular structure, being structurally the same from any vertex. When such a graph is planar, this regular structure confers symmetry properties to the embedding of the graph : the action of automorphisms on the graph can locally be represented as the action of an isometry of the geometry it is embedded in. The class $\mathcal{T}$ of the topologically locally finite (in short TLF) planar transitive graphs is a subclass of the class of the transitive planar graphs. These graphs possess a planar embedding such that the set of vertices in this embedding is a locally finite subset of the plane. For example, $\mathcal{T}$ contains the planar Cayley graphs of the discrete groups of isometries of the plane, be it hyperbolic or Euclidean. 
We find in $\mathcal{T}$ both tree-like graphs of finite treewidth and one-ended graphs such as the Euclidean grid. The TLF-planar graphs are related to several fields: first, such graphs represent adapted models of computation for parallel algorithms such as cellular automata [@Mazoyer; @Garzon], and provide examples of structures for interconnection networks [@Heydemann]. The class $\mathcal{T}$ is also connected to the vertex-transitive tilings of the plane whose set of vertices is topologically locally finite [@Tilings]. Second, while many problems in combinatorial group theory are undecidable, the planar vertex-transitive graphs possess structural as well as geometrical properties that allow for a more specific approach. For example, the unique embedding property in the case of the 3-connected planar graphs [@Whitney] constrains the structure of the automorphisms of the graph. Finally, putting aside the finite graphs in $\mathcal{T}$, the infinite graphs in $\mathcal{T}$ may factor into finite transitive graphs embedded into compact manifolds, as is the case for Euclidean graphs [@Bieberbach]. The characterization of the finite planar transitive graphs is a result of Fleischner and Imrich [@Fleischner]. These graphs turn out to be the complexes associated to the uniform convex polyhedra. The finite non-planar case happens to be much more difficult, and has been thoroughly studied up to 26 vertices by McKay [*et al.*]{} [@McKayRoyle; @RoyleURL], but few general results are known. Cayley graphs are natural examples of vertex-transitive graphs, and McKay also studied the problem of determining which graphs are transitive but not Cayley [@McKayPraegerI; @McKayPraegerII]. The problem of enumerating the normal Cayley graphs has been solved by Chaboud [@Chaboud] and then extended to the TLF-planar Cayley graphs [@DavidCayley].
In this paper, we give an exhaustive description of the class $\mathcal{T}$ of the TLF-planar vertex-transitive graphs. Our description of the class includes both finite and infinite graphs. This extends the Cayley case [@DavidCayley] in several ways. First, the Cayley case is mainly dedicated to the description of groups which happen to have a planar Cayley graph. This article focuses on the graphs themselves, by describing all their possible groups of automorphisms, and as a consequence all their possible embeddings. Second, we can highlight properties of the transitive graphs that do not hold when we restrict ourselves to Cayley graphs. For example, TLF-planar Cayley graphs can always be represented by the Cayley graph of a discrete group of isometries of the plane. There exist transitive graphs for which this is impossible, independently of the embedding. Finally, $\mathcal{T}$ contains a strictly larger class of graphs than the Cayley case. Such a simple graph as the complex associated to the dodecahedron is an example of a transitive but non-Cayley planar graph. More precisely, there exist infinite families of graphs having this property. In this article, we refine the description of the groups of automorphisms of the graphs and the geometrical properties of their possible planar embeddings given in [@DavidCayley]. We represent these graphs by their geometrical invariants in a structure called a [*labeling scheme*]{}, along with a special kind of finite state automaton called a [*border automaton*]{}. We show that there exists a bijection between this representation and the class of the TLF-planar transitive graphs.
Our main result is: **Theorem 15 (Enumeration)** *Given a number $d\geq 2$, it is possible to enumerate all the TLF-planar transitive graphs having internal degree $d$, along with their labeling schemes.* Each vertex-transitive graph belonging to the class $\mathcal{T}$ is effectively computable ([*i.e.*]{} there exists an algorithm able to build every finite ball of the graph). Associated to our results on Cayley graphs, this allows us to determine which of these graphs are Cayley, and more precisely: **Corollary 16 (Cayley checking)** *If $\Gamma$ is a TLF transitive graph, then it is decidable whether $\Gamma$ is the Cayley graph of a group or not, and one can obtain an enumeration and a description of the groups having $\Gamma$ as a Cayley graph.* Finally, thanks to the characterization of the embeddings of the graphs in $\mathcal{T}$, it is possible to compute their connectivity and approximate their growth rate, which can be either linear, quadratic or exponential, depending on their local geometrical properties. TLF-planar transitive graphs ============================ A *graph* $\Gamma$ consists of a pair $(V,E)$, $V$ being a countable set of *vertices* and $E$ a set of [*edges*]{}, where $E$ is a subset of the set of pairs of elements of $V$. Each edge corresponds to a pair of vertices $(v_1,v_2)$ called its extremities. An edge with the same extremities $(v,v)$ is called a loop. The graphs that we consider are loopless. An edge $(u,v)$ is said to be *incident* to the vertices $u$ and $v$. A [*labeling*]{} of the graph is an application from the set of edges into a finite set $L$ of labels or colors. The [*degree*]{} of a vertex is the number of edges incident to this vertex. A [*path*]{} of $\Gamma$ is a sequence of vertices $(v_n)$ in $\Gamma$ such that for all $n$, there exists an edge between $v_n$ and $v_{n+1}$. A [*cycle*]{} is a finite path whose initial and terminal vertices are the same. A *simple* cycle is a cycle where no vertex appears twice.
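A minimal sketch of this vocabulary in code (Python, with an adjacency-set representation; all names are illustrative, not from the paper):

```python
def degrees(adj):
    """Degree of each vertex = number of edges incident to it."""
    return {v: len(adj[v]) for v in adj}

def is_regular(adj):
    """A graph is regular when all its vertices have the same degree."""
    return len(set(degrees(adj).values())) <= 1

def is_simple_cycle(adj, path):
    """path = (v_0, ..., v_k) with v_k = v_0 and no other repeated vertex,
    every consecutive pair joined by an edge."""
    closed = path[0] == path[-1]
    no_repeats = len(set(path[:-1])) == len(path) - 1
    edges_ok = all(b in adj[a] for a, b in zip(path, path[1:]))
    return closed and no_repeats and edges_ok

# the cyclic graph C_4: connected and 2-regular (a cycle C_n,
# hence vertex-transitive)
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

With this representation, $(0,1,2,3,0)$ is a simple cycle of $C_4$, while $(0,1,2,1,0)$ is a cycle that is not simple.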
[Considerations on the construction of subgraphs]{} We occasionally build subgraphs of $\Gamma$ by considering a certain subset of vertices and edges $(V',E')$ where $V'\subset V$ and $E'\subset E$, or equivalently by removing from the graph a subset of its vertices and edges. Then, we can consider the remaining set of vertices and edges as a subspace of the graph seen as a metric space, and the connected components of this subspace. These components may not be graphs themselves, since some edges will not have vertices as their extremities. We can resolve this problem and consider these components as new graphs by adding new vertices to the extremities of these edges. A graph $\Gamma$ is [*connected*]{} if, for every pair of vertices $(s_1,s_2)$ of the graph, there exists a finite path in the graph with extremities $s_1$ and $s_2$. A *connected component* is an equivalence class of vertices for the relation “to be connected”. Notice that both definitions are coherent whether $\Gamma$ is seen as a graph or as a metric space. An [*n-separation*]{} is a set of $n$ vertices whose removal separates the graph in two or more connected components not reduced to a single edge. A [*cut-vertex*]{} of $\Gamma$ is a $1$-separation of $\Gamma$. A graph is $n$-[*separable*]{} if it contains an $n$-separation. If it contains no $n$-separation, it is [*(n+1)-connected*]{}. A graph is [*regular*]{} when all its vertices have the same degree $d$. The graphs we will be dealing with are connected and regular. A [*morphism*]{} from the graph $\Gamma_1 = (V_1,E_1)$ into $\Gamma_2 = (V_2,E_2)$ is an application $\sigma :V_1\rightarrow V_2$ that preserves the edges of the graph. When both graphs are labeled, we impose that the morphisms also preserve the labels of the edges. A graph is said to be [*vertex-transitive*]{} – or [*transitive*]{} in short – if and only if, given any two vertices $(s_1,s_2) \in \Gamma$, there exists an automorphism of $\Gamma$ mapping $s_1$ onto $s_2$.
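For small finite graphs, vertex-transitivity can be checked directly from this definition by enumerating all automorphisms. A brute-force sketch (Python; feasible only for a handful of vertices, since it runs over all permutations of the vertex set):

```python
from itertools import permutations

def automorphisms(adj):
    """All bijections of the vertex set preserving adjacency (brute force)."""
    vs = sorted(adj)
    for perm in permutations(vs):
        f = dict(zip(vs, perm))
        if all((f[u] in adj[f[v]]) == (u in adj[v])
               for u in vs for v in vs if u != v):
            yield f

def is_vertex_transitive(adj):
    """Some automorphism maps a fixed base vertex onto every vertex,
    i.e. the orbit of the base vertex is the whole vertex set."""
    vs = sorted(adj)
    orbit = {f[vs[0]] for f in automorphisms(adj)}
    return orbit == set(vs)

C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # cycle: transitive
P3 = {0: {1}, 1: {0, 2}, 2: {1}}                    # path: not transitive
```

The path $P_3$ fails because its only automorphisms are the identity and the end-swap, so the orbit of an endpoint misses the middle vertex.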
If $\Gamma$ is the Cayley graph of a group, then it is vertex-transitive. [About the lower degree transitive graphs]{} There exists only one non-trivial connected transitive graph of degree $1$, which corresponds to $K_2$, the graph reduced to a single edge. Transitive graphs of degree $2$ correspond to cyclic graphs $C_n$ where $n$ may be infinite. These graphs possess exactly one labeling when $n$ is odd and two labelings when $n$ is even, these labelings corresponding to the planar Cayley graphs of degree $2$ associated to the dihedral groups and cyclic groups. In the following, we shall only be interested in connected transitive graphs of degree $d \geq 3$. A graph $\Gamma$ is said to be [*planar*]{} if it can be embedded in the plane, such that no two edges meet in a point other than a common end. By the plane, we mean a simply connected Riemannian surface, homogeneous and isotropic. For our embeddings, we will only consider the three usual geometries: the sphere, the Euclidean and the hyperbolic plane. Our embeddings will be considered [*tame*]{}, meaning that all edges are $\mathcal{C}^1$ images of $[0;1]$. Such an embedding is said to be [*topologically locally finite*]{} – in short TLF-planar – if its vertices have no accumulation point in the plane. Equivalently, every compact subset of the plane intersects a finite number of vertices of the embedding. Symmetrically, an embedding is said to be TLF in terms of edges if and only if every compact subset of the plane intersects a finite number of edges of the embedding. The following theorem asserts that a TLF-planar graph always possesses such an embedding: \[thm:embedding\] If the graph $\Gamma$ is TLF-planar, there exists a [*tame*]{} embedding of the same graph that is TLF in terms of vertices but also of edges. We will always suppose that the TLF-planar graphs are embedded in the plane such that their embedding follows Theorem \[thm:embedding\].
Given a specific embedding of a TLF-planar graph $\Gamma$, a *face* $\mathcal{F}$ is defined as an arc-connected component of the complement of the graph in the plane. $\mathcal{F}$ is said to be *finite* when it is incident to finitely many vertices of the graph, otherwise it is said to be [*infinite*]{}. For TLF-planar graphs, infinite faces are necessarily topologically unbounded in the plane. The [*border*]{} of the face $\mathcal{F}$, noted $\partial\mathcal{F}$, is its boundary in topological terms. A face $\mathcal{F}$ is said to be *incident* to a vertex or an edge of the graph if and only if this vertex or edge intersects $\partial\mathcal{F}$. In such an embedding, every edge incident to a face is entirely included into the border of this face. Then every edge is incident to exactly two faces of $\Gamma$, which it separates. Considering the previous definitions of the faces, the transitivity property of $\Gamma$ stands out with the following lemma taken from [@DavidCayley]: \[lem:intersectionfaces\] Let $\Gamma$ be a vertex-transitive TLF-planar graph. Given two distinct faces of $\Gamma$, the intersection of their border, when non-empty, is either a vertex or an edge of $\Gamma$. \[lem:finitefaces\] The automorphisms of $\Gamma$ map the border of every finite face onto the border of another finite face. [On the choice of the embedding]{} The previous statements hold for a particular embedding of $\Gamma$ that is locally finite in terms of edges and vertices. This embedding may not be unique. For example, if $\Gamma$ is finite, there exists an embedding of $\Gamma$ in the sphere, where all faces are topologically bounded. If we select a point inside a face and send this point to infinity, we obtain another embedding of the graph on a non-compact surface homeomorphic to the Euclidean plane. With the previous definitions, the faces of both embeddings are all finite, and the validity of Corollary \[lem:finitefaces\] is the same for both embeddings.
The [*size*]{} of a face of an embedding of $\Gamma$ corresponds to the number (possibly infinite) of vertices it is incident to. The [*type vector*]{} of a vertex of $\Gamma$ is the sequence of sizes of the faces appearing consecutively around this vertex. It is defined up to rotation and symmetry of the graph. If $\Gamma$ is transitive and Corollary \[lem:finitefaces\] holds, the type vector is independent of the choice of the vertex, up to permutation of its elements. For example, the type vector of the Euclidean infinite grid is $[4;4;4;4]$ and the type vector of a cyclic graph $C_n$ with $n$ vertices is $[n;n]$. For a given graph $\Gamma$, ${\textrm{Aut}(\Gamma)}$ denotes its group of automorphisms, and ${\textrm{Trans}(\Gamma)}$ stands for the set of subgroups of ${\textrm{Aut}(\Gamma)}$ acting transitively on the set of vertices of $\Gamma$. Let $G$ belong to ${\textrm{Trans}(\Gamma)}$. $G$ acts on the set of edges of $\Gamma$ and the set of orbits of edges is finite. Thus a *class* or a *color* of edges is defined as an orbit under the action of $G$, the set of colors being called $\mathsf{E}_G$. In the same manner, we define classes or colors of finite faces, corresponding to the finite set $\mathsf{F}_G$. This coloring defines a partition of the set of finite faces of the embedding. Infinite faces have a special status since these faces may not be stable by automorphism. Therefore, we request by convention that all of them correspond to a special color in $\mathsf{F}_G$. In the following, $\Gamma$ will be a TLF-planar, connected vertex-transitive graph of finite degree $d\geq 3$. Thus we will speak of vertices, edges and faces of $\Gamma$, as defined above. $G$ is a group belonging to ${\textrm{Trans}(\Gamma)}$. We will always suppose that the embedding of $\Gamma$ follows Theorem \[thm:embedding\] and that the automorphisms preserve the borders of the finite faces.
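When a finite graph is given with a combinatorial embedding (a rotation system: the cyclic order of the neighbors around each vertex), the faces and hence the type vector can be computed by standard face tracing. A sketch (Python; function names are mine, not from the paper), illustrated on the tetrahedron, whose type vector is $[3;3;3]$:

```python
def trace_faces(rot):
    """Face tracing on a rotation system rot: rot[v] lists the
    neighbors of v in cyclic order.  From the dart (u, v), the next
    dart along the same face is (v, w), where w is the successor
    of u in the cyclic order at v."""
    darts = [(u, v) for u in rot for v in rot[u]]
    face_of, faces = {}, []
    for d in darts:
        if d in face_of:
            continue
        u, v = d
        cycle = []
        while (u, v) not in face_of:
            face_of[(u, v)] = len(faces)
            cycle.append((u, v))
            w = rot[v][(rot[v].index(u) + 1) % len(rot[v])]
            u, v = v, w
        faces.append(cycle)
    return faces, face_of

def type_vector(rot, x):
    """Sizes of the faces appearing consecutively around vertex x
    (for simple face boundaries, #darts of a face = #incident vertices)."""
    faces, face_of = trace_faces(rot)
    return [len(faces[face_of[(x, v)]]) for v in rot[x]]

# rotation system of the tetrahedron K4 drawn with vertex 0 inside
# the outer triangle 1, 2, 3: four triangular faces
rot = {0: [3, 1, 2], 1: [2, 0, 3], 2: [3, 0, 1], 3: [1, 0, 2]}
```

Tracing this rotation system yields four faces of size $3$, and every vertex has type vector $[3;3;3]$, as expected for the complex of the tetrahedron.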
In Section \[sec:localinvariant\], we analyze these local invariants, in order to obtain a characterization of the graph by its local geometrical properties in Section \[sec:labelingscheme\]. The last section presents some applications of these characterizations. Local geometrical invariants {#sec:localinvariant} ============================ Infinite faces and connectivity ------------------------------- Let us give some intuition on the general structure of the graphs in this class. We prove that the finite vertex-transitive planar graphs of degree $\geq 3$ are all $3$-connected graphs, and that in the infinite case, the connectivity of the graphs depends only on the number of infinite faces appearing around each vertex: \[lem:connectivity\] If $\Gamma$ is a TLF-planar transitive graph of degree $\geq 3$, let $n$ be the number of infinite faces appearing around a given vertex of $\Gamma$. Then, depending on the value of $n$: - ([*$n\geq 2 \Leftrightarrow\Gamma$ is $1$-separable*]{})\ Given a vertex $v$ of $\Gamma$, we consider the set of faces incident to that vertex, and the union of the borders of those faces that are finite. If $n\geq 2$, the union of $v$ and two infinite faces incident to $v$ separates the graph into at least two non-trivial components. Then every vertex of the graph is a cut-vertex and the graph is $1$-separable. Conversely, if $\Gamma$ is $1$-separable, every vertex must meet at least $2$ infinite faces. - ([*$n= 1 \Rightarrow\Gamma$ is $2$-connected and $2$-separable*]{})\ If $n=1$, then consider an edge of $\Gamma$ that does not belong to the border of an infinite face. Such an edge must exist: otherwise, since $\Gamma$ is of degree at least $3$, the fact that $n=1$ would be contradicted. The extremities of this edge both meet an infinite face, and these faces are distinct. The removal of these extremities separates $\Gamma$, therefore $\Gamma$ is $2$-separable. It is $2$-connected because $n<2$.
- ([*$\Gamma$ is $2$-connected and $2$-separable $\Rightarrow n=1$*]{})\ Suppose now that $\Gamma$ is $2$-separable and $2$-connected, and consider $\{s,t\}$ a $2$-separation of $\Gamma$. Let $\Lambda$ be a subgraph separated by $\{s,t\}$. Suppose that we remove $\Lambda$ from the embedding. The remaining TLF-planar graph possesses a face $\mathcal{F}$, inside which $\Lambda$ was embedded. Moreover, $s$ and $t$ both belong to the border of $\mathcal{F}$. If $\mathcal{F}$ is finite, embedding $\Lambda$ inside $\mathcal{F}$ separates the face into at least two subfaces meeting at $s$ and $t$, therefore contradicting Lemma \[lem:intersectionfaces\]. Therefore $\mathcal{F}$ is infinite. When embedding $\Lambda$ inside $\mathcal{F}$, there will remain an infinite face in the embedding of $\Gamma$. Therefore $n\geq 1$ and since $\Gamma$ is $2$-separable, $n=1$. If $\Gamma$ is 1-separable, then every vertex is a cut-vertex. If we cut the graph along its cut-vertices, the remaining components are [*2-connected components*]{}. Since the graph is vertex-transitive, the set of components incident to a vertex is independent of the vertex, and finite, because the degree of the graph is finite. These components are TLF-planar graphs, but not necessarily vertex-transitive themselves. They may be finite or infinite. They may be reduced to a single edge. If $\Gamma$ is at least 2-connected, it is composed of a unique 2-connected component equal to $\Gamma$. A simple invariant ------------------ Consider more closely the implications of Corollary \[lem:finitefaces\]. The group $G$ of automorphisms of the graph acts on the set of the finite faces of the embedding. Let us focus on the invariants under this action. The classes or colors of the edges and faces are simple examples of geometrical invariants. 
For the sake of clarity, we always mark classes (or colors) of edges with gothic letters $\mathsf{E}_G=\{\mathfrak{a}, \mathfrak{b}, \mathfrak{c} \dots\}$ and classes (or colors) of faces with greek letters $\mathsf{F}_G=\{ \alpha, \beta, \gamma \dots\}$. In the remainder of the article, we will suppose that the group $G$ is fixed and therefore drop the letter $G$. Suppose $e$ is mapped by an automorphism onto $f\in \mathfrak{e}$. As a result of Corollary \[lem:finitefaces\], the finite faces incident to the edge $e$ are mapped by this automorphism onto the finite faces incident to $f$. Since the automorphism is invertible, the finite faces incident to the two edges are in bijection. In turn, there is an equal number of infinite faces incident to both edges. This concludes the proof. According to Lemma \[lem:separation\], for any class of edges $\mathfrak{e} \in \mathsf{E}$, it is possible to define the [*separator*]{} of $\mathfrak{e}$, namely $\mathsf{sep}(\mathfrak{e})$, as the pair of classes of faces separated by $\mathfrak{e}$. This separator is a geometrical invariant under the action of $G$. Edge and Face vectors --------------------- Given a vertex $v\in\Gamma$, consider the finite subgraph $\Lambda$ of $\Gamma$ composed of all edges incident to $v$ and its planar embedding induced by the embedding of $\Gamma$. Select a particular edge $e$ incident to $v$. An [*edge vector*]{} $\xi$ of $\Gamma$ around $v$ is the vector whose elements describe the classes of edges appearing around $v$ in $\Lambda$ in the positive direction, starting from $e$. Similarly, a [*face vector*]{} $\phi$ around $v$ is the vector whose elements describe the classes of faces appearing around $v$ in $\Lambda$, starting from the face next to $e$ in the positive direction. As a convention, we always choose the pair $(\xi,\phi)$ composed of an edge vector and a face vector to be [*locked*]{} as follows: the edge $\xi_i$ separates the faces $\phi_i$ and $\phi_{i-1}$.
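The locking convention can be made concrete with a small sketch (our own 0-indexed encoding, with color names as stand-ins; the sample vectors are those of Example \[exm:edgefacevector\]) that recovers the separator of each edge class, in the spirit of Lemma \[lem:separation\]:

```python
def separators(xi, phi):
    """Locked pair: edge xi[i] separates faces phi[i] and phi[i-1]
    (indices cyclic).  Return, for each edge color, the set of pairs of
    face colors it separates; for a transitive graph this should be a
    single pair per color -- the separator of that class."""
    sep = {}
    for i, e in enumerate(xi):
        sep.setdefault(e, set()).add(frozenset({phi[i], phi[i - 1]}))
    return sep

# Edges r, r, b, g, r and faces beta, gamma, alpha, beta, gamma around
# a vertex of degree 5 (the data of Example [exm:edgefacevector]).
xi = ["r", "r", "b", "g", "r"]
phi = ["beta", "gamma", "alpha", "beta", "gamma"]
sep = separators(xi, phi)
assert sep["r"] == {frozenset({"beta", "gamma"})}   # sep(r)
assert sep["b"] == {frozenset({"alpha", "gamma"})}  # sep(b)
assert sep["g"] == {frozenset({"alpha", "beta"})}   # sep(g)
```

Each edge color indeed sees a single pair of face classes here, as the lemma requires.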
We decompose these vectors into blocks separated by infinite faces:

$$\begin{aligned}
\xi &: [\,\overbrace{\xi_{k_1^s},\dots,\xi_{k_1^e-1},\xi_{k_1^e}}^{\textrm{Block 1}},\ \overbrace{\xi_{k_2^s},\dots,\xi_{k_2^e-1},\xi_{k_2^e}}^{\textrm{Block 2}},\ \dots,\ \overbrace{\xi_{k_t^s},\dots,\xi_{k_t^e-1},\xi_{k_t^e}}^{\textrm{Block t}}\,]\\
\phi &: [\,\underbrace{\phi_{k_1^s},\dots,\phi_{k_1^e-1}}_{\textrm{Block 1}},\infty,\ \underbrace{\phi_{k_2^s},\dots,\phi_{k_2^e-1}}_{\textrm{Block 2}},\infty,\ \dots,\ \underbrace{\phi_{k_t^s},\dots,\phi_{k_t^e-1}}_{\textrm{Block t}},\infty\,]\end{aligned}$$

Here $\xi_k$ and $\phi_k$ represent respectively the $k$-th elements of the vectors $\xi$ and $\phi$. The $i$-th block starts at index $k_i^s$ and ends at index $k_i^e$. If the graph does not contain any infinite face, then the decomposition contains a single block. Consider the set of all possible edge and face vectors in the embedding of $\Gamma$. Define the following operations on this set: - <span style="font-variant:small-caps;">Rotation and Symmetry:</span> These operations correspond to the usual isometries of the plane acting on the face and edge vectors. - <span style="font-variant:small-caps;">Rearrangement:</span> This operation consists in rearranging the blocks while preserving the fact that these blocks are separated by infinite faces.
More graphically, given a permutation $\sigma$ of the blocks:

$$\begin{aligned}
\xi &: [\,\overbrace{\xi_{k_{\sigma(1)}^s},\dots,\xi_{k_{\sigma(1)}^e}}^{\textrm{Block }\sigma(1)},\ \overbrace{\xi_{k_{\sigma(2)}^s},\dots,\xi_{k_{\sigma(2)}^e}}^{\textrm{Block }\sigma(2)},\ \dots,\ \overbrace{\xi_{k_{\sigma(t)}^s},\dots,\xi_{k_{\sigma(t)}^e}}^{\textrm{Block }\sigma(t)}\,]\\
\phi &: [\,\underbrace{\phi_{k_{\sigma(1)}^s},\dots,\phi_{k_{\sigma(1)}^e-1}}_{\textrm{Block }\sigma(1)},\infty,\ \underbrace{\phi_{k_{\sigma(2)}^s},\dots,\phi_{k_{\sigma(2)}^e-1}}_{\textrm{Block }\sigma(2)},\infty,\ \dots,\ \underbrace{\phi_{k_{\sigma(t)}^s},\dots,\phi_{k_{\sigma(t)}^e-1}}_{\textrm{Block }\sigma(t)},\infty\,]\end{aligned}$$

- <span style="font-variant:small-caps;">Twist:</span> The twist operation describes a symmetry applied on a single 2-connected component around a vertex.
For example, the twist applied onto the first component corresponds to the following transformation:

$$\begin{aligned}
\xi &: [\,\overbrace{\xi_{k_1^e},\dots,\xi_{k_1^s+1},\xi_{k_1^s}}^{\textrm{Block 1 reversed}},\ \overbrace{\xi_{k_2^s},\dots,\xi_{k_2^e-1},\xi_{k_2^e}}^{\textrm{Block 2}},\ \dots,\ \overbrace{\xi_{k_t^s},\dots,\xi_{k_t^e-1},\xi_{k_t^e}}^{\textrm{Block t}}\,]\\
\phi &: [\,\underbrace{\phi_{k_1^e-1},\dots,\phi_{k_1^s}}_{\textrm{Block 1 reversed}},\infty,\ \underbrace{\phi_{k_2^s},\dots,\phi_{k_2^e-1}}_{\textrm{Block 2}},\infty,\ \dots,\ \underbrace{\phi_{k_t^s},\dots,\phi_{k_t^e-1}}_{\textrm{Block t}},\infty\,]\end{aligned}$$

Two pairs of locked vectors $(\xi_1,\phi_1)$ and $(\xi_2,\phi_2)$ around $v$ are said to be [*isomorphic*]{} if and only if it is possible to transform the first into the second by a sequence of rotations, symmetries, rearrangements and twists. The following lemma states that these operations describe all the possible edge and face vectors in the embedding: \[lem:edgefacevect\] The edge vector and the face vector of $\Gamma$ are independent of the choice of the embedding of $\Gamma$ and of the vertex around which they are chosen, up to isomorphism. This is a direct consequence of Corollary \[lem:finitefaces\]. Since finite faces are mapped onto finite faces by the automorphisms of $\Gamma$, the 2-connected components of $\Gamma$ are mapped onto 2-connected components.
Therefore, the only operations that we may apply on the set of edge and face vectors of $\Gamma$ are rotations and symmetries in the case of 2-connected graphs, and rearrangements of the $2$-connected components for 1-separable graphs. These operations are exactly those described by the twists and rearrangements. Given an edge vector $\xi$ and a face vector $\phi$ of $\Gamma$ that are locked together, it is therefore possible to determine all possible vectors in the class of isomorphism. Therefore we will only consider [*the*]{} edge and face vectors of $\Gamma$ by choosing a representative in this class. \[exm:edgefacevector\] Suppose that the graph $\Gamma$ possesses the set of colors defined by $\mathsf{E}=\{\mathfrak{b},\mathfrak{r},\mathfrak{g}\}$ for its edges and $\mathsf{F}=\{\alpha,\beta,\gamma\}$ for its faces. An example of such a graph appears in Figure \[fig:example2\]. We represent the edge and face vectors $(\xi,\phi)$ of this graph by a picture of a vertex of degree $5$: its incident edges, read in the positive direction, are $1(\mathfrak{r})$, $2(\mathfrak{r})$, $3(\mathfrak{b})$, $4(\mathfrak{g})$, $5(\mathfrak{r})$, and the faces between them are $\beta$, $\gamma$, $\alpha$, $\beta$, $\gamma$. Suppose for our example that the faces colored by $\beta$ are infinite. Therefore, in the aforementioned decomposition, there are two blocks, one containing the edges numbered $\{1;5\}$ and the other containing the edges numbered $\{2;3;4\}$. Under these hypotheses, we can operate the following transformations onto the pair $(\xi,\phi)$: With only two blocks, a rearrangement is the same as a rotation. Moreover, since one of the blocks is stable by symmetry, a twist of this component leaves the pair unchanged.
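The effect of rotations and symmetries on a locked pair can be sketched concretely (our own 0-indexed encoding, on the vectors of this example); the invariant checked is the multiset of edge colors together with the pair of faces each edge separates:

```python
def rot(xi, phi, k):
    """Rotate a locked pair: both vectors shift together."""
    n = len(xi)
    return ([xi[(i + k) % n] for i in range(n)],
            [phi[(i + k) % n] for i in range(n)])

def sym(xi, phi):
    """Reverse the cyclic order; the face vector shifts by one extra
    position so that xi[i] still separates phi[i] and phi[i-1]."""
    n = len(xi)
    return ([xi[(-i) % n] for i in range(n)],
            [phi[(-i - 1) % n] for i in range(n)])

def lock_profile(xi, phi):
    """Multiset of (edge color, incident face pair): preserved by
    rotation and symmetry."""
    return sorted((xi[i], tuple(sorted((phi[i], phi[i - 1]))))
                  for i in range(len(xi)))

xi = ["r", "r", "b", "g", "r"]
phi = ["beta", "gamma", "alpha", "beta", "gamma"]
assert lock_profile(*rot(xi, phi, 2)) == lock_profile(xi, phi)
assert lock_profile(*sym(xi, phi)) == lock_profile(xi, phi)
```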
A twist of the other component is the same as a symmetry of the pair. Edge vectors and face vectors in general do not determine a vertex-transitive graph in a unique way. It is quite possible to obtain non-isomorphic graphs possessing the same edge and face vectors. For example, consider the graphs on Figure \[fig:edgeface\]. Both graphs correspond to the planar tiling of the hyperbolic plane with decagons; both are vertex-transitive and face-transitive graphs, and they have the same edge vectors. Nevertheless, the borders of the faces differ: for the graph on the left, it corresponds to $(\mathfrak{rgrbrbgbrb})$, and to $(\mathfrak{rgbrb})^2$ on the right, where $\mathfrak{r}$, $\mathfrak{g}$ and $\mathfrak{b}$ respectively stand for the three different classes of edges. (Figure \[fig:edgeface\]: non-isomorphic transitive graphs with the same edge and type vectors. The extremities of the lighter edge are non-isomorphic (see detail), which, considering the fact that both graphs are 3-connected, implies the non-isomorphism.) For an accurate description of the graph, some complementary information is therefore needed. Following the intuitions in the case of TLF-planar Cayley graphs [@DavidCayley], this information is likely to come from local invariants linked to the classes of edges of $\Gamma$. Edge neighborhoods {#sec:edgeneighb} ------------------ The [*edge neighborhood*]{} $\eta_e$ of an edge $e$ is given by the edge and face vectors around each of its extremities, such that each corresponding pair of edge and face vectors is locked together. Technically, we represent an edge neighborhood $\eta_e$ by the following structure: $$\eta_e = \bigg\{ \overbrace{ \overbrace{\bigstrut [\xi_1,\dots,\xi_d]}^{\textrm{Edge vector}}, \overbrace{\bigstrut [\phi_1,\dots,\phi_d]}^{\textrm{Face vector}} }^{\textrm{1st extremity}}, \overbrace{ \overbrace{\bigstrut [\xi'_1,\dots,\xi'_d]}^{\textrm{Edge vector}}, \overbrace{\bigstrut [\phi'_1,\dots \phi'_d]}^{\textrm{Face vector}} }^{\textrm{2nd extremity}} \bigg\}$$ $$\textrm{for}~\{\xi_i,\xi'_i\} \subset \mathsf{E}, \quad \textrm{and}~ \{\phi_i,\phi'_i\} \subset \mathsf{F}$$ where $\xi_1=\xi'_1=e$, $\phi_1=\phi'_d$ and $\phi_d=\phi'_1$. The vectors $[\xi_1,\dots,\xi_d]$ and $[\phi_1,\dots,\phi_d]$ are respectively the edge and face vectors of the first extremity, and the vectors $[\xi'_1,\dots,\xi'_d]$ and $[\phi'_1,\dots \phi'_d]$ correspond to the second extremity. The [*color*]{} of an edge neighborhood corresponds to the class of edges of $e$. The [*separator*]{} of $\eta_e$, denoted $\textsf{sep}(\eta_e)$, corresponds to the pair of classes of faces separated by $e$, here $(\phi_1,\phi'_1)$.
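A hedged sketch of the edge-neighborhood structure $\eta_e$ in code (our own 0-indexed encoding, so that $\phi_1=\phi'_d$ becomes `phi1[0] == phi2[-1]`); the sample extremities are two locked rotations of the vectors of Example \[exm:edgefacevector\]:

```python
def edge_neighborhood(xi1, phi1, xi2, phi2):
    """Build the structure {edge/face vectors of both extremities} for a
    central edge e = xi1[0] = xi2[0], checking the matching conditions
    xi_1 = xi'_1 = e, phi_1 = phi'_d and phi_d = phi'_1."""
    assert xi1[0] == xi2[0], "both extremities share the central edge"
    assert phi1[0] == phi2[-1] and phi1[-1] == phi2[0], \
        "faces along the central edge must match across it"
    return {"color": xi1[0],
            "extremities": ((tuple(xi1), tuple(phi1)),
                            (tuple(xi2), tuple(phi2))),
            # the separator: the pair of face classes separated by e
            "sep": frozenset({phi1[0], phi2[0]})}

xi1, phi1 = ["r", "r", "b", "g", "r"], ["beta", "gamma", "alpha", "beta", "gamma"]
xi2, phi2 = ["r", "b", "g", "r", "r"], ["gamma", "alpha", "beta", "gamma", "beta"]
eta_r = edge_neighborhood(xi1, phi1, xi2, phi2)
assert eta_r["color"] == "r"
assert eta_r["sep"] == frozenset({"beta", "gamma"})
```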
An edge neighborhood $\eta_e$ colored by $\mathfrak{e}$ is said to be [*coherent*]{} with a pair of vectors $(\xi,\phi)$ if and only if both the edge vectors and face vectors at each extremity of $e$ are isomorphic to $(\xi,\phi)$. Consider the set of edge neighborhoods of the same color of $\Gamma$. As for edge and face vectors, it is possible to define operations on this set: - <span style="font-variant:small-caps;">Inversion and Symmetry:</span> Inversion is the operation exchanging both extremities of the edge neighborhood, and corresponds to an exchange of the edge and face vectors of the two extremities. Symmetry corresponds to the operation of symmetry (defined on the edge and face vectors) applied to each extremity of the edge neighborhood, while preserving the central edge. - <span style="font-variant:small-caps;">Twist or Rearrangement of an extremity:</span> Let $(\xi,\phi)$ be the edge and face vectors associated to one extremity of an edge neighborhood $\eta_e$. Any twist or rearrangement of $(\xi,\phi)$ that stabilizes the central edge leaves the faces separated by $\eta_e$ unchanged and extends naturally to the edge neighborhood. Two edge neighborhoods $\eta_1$ and $\eta_2$ of the same color $\mathfrak{e}\in\mathsf{E}$ are said to be [*isomorphic*]{} if and only if it is possible to transform $\eta_1$ into $\eta_2$ by a sequence of inversions, symmetries, twists and rearrangements of any extremity. As was the case for the edge and face vectors, these operations describe all the possible edge neighborhoods in the embedding. We therefore select a single representative for each class of edges in $\Gamma$. \[lem:edgeneigh\] The edge neighborhood colored by $\mathfrak{e}\in\mathsf{E}$ of $\Gamma$ is independent of the choice of the embedding of $\Gamma$ and of the edge it refers to, up to isomorphism. The separator of a class of edges is independent of the edge (Lemma \[lem:separation\]).
Since finite faces are mapped onto finite faces by automorphisms of $\Gamma$, the $2$-connected components attached to the extremities of a class of edges are mapped onto $2$-connected components by automorphism. Therefore, the automorphisms mapping an edge onto another edge correspond to a composition of either natural transformations of the plane preserving this edge but exchanging its extremities (inversion and symmetry) or automorphisms leaving the edge and its extremities stable (rearrangements and twists). Let $(\xi,\phi)$ be an edge and face vector. Two edges $e,f\in\xi$ labeled by the color $\mathfrak{e} \in \mathsf{E}$ are said to be equivalent, namely $e\sim f$, if and only if there exists an isomorphism of $(\xi,\phi)$ mapping $e$ onto $f$. Consider the set of edges in $\xi$ colored by $\mathfrak{e}$. Then $\textsf{eq}_{\xi,\phi}(\mathfrak{e})$ is the number of classes of equivalence inside this set with regard to $\sim$. The uniqueness of the edge neighborhood implies that $\textsf{eq}_{\xi,\phi}(\mathfrak{e})$ is at least one and at most two. As a matter of fact, each class of equivalence must correspond to an extremity of the edge neighborhood colored by $\mathfrak{e}$, and this neighborhood only has two extremities. \[exm:edgeneighborhoods\] Let us define edge neighborhoods coherent with the pair of edge and face vectors described in Example \[exm:edgefacevector\]. We represent the edge neighborhood associated to the edge colored by $\mathfrak{r}$ by a picture showing the locked edge and face vectors at both extremities of this edge. The fact that the faces colored by $\beta$ are infinite allows for the transformation of this labeling scheme by twists of either extremity. While it could seem that the face vector is superfluous, [*i.e.*]{} that the graph could be described simply by its edge neighborhoods without any face vector, consider the particular case of graphs that are both vertex-transitive and edge-transitive, as in Figure \[fig:edgeneigh\].
The $[5;5;5;5]$ and $[3;7;3;7]$ graphs – denoted by their type vector – both belong to this class. Both possess the same edge neighborhoods and edge vectors. Yet these two graphs are obviously not isomorphic and their groups of automorphisms are distinct, because the first is face-transitive, while the second possesses two different classes of faces. (Figure \[fig:edgeneigh\]: non-isomorphic transitive graphs with the same edge neighborhoods, assuming that we remove the face vectors from the edge neighborhoods.) Whitney’s Theorem [@Whitney] states that finite planar 3-connected graphs have a unique embedding property, [*i.e.*]{} their dual is uniquely defined. This property was extended to infinite graphs by Imrich [@Imrich]. As a matter of fact, when $\Gamma$ is $3$-connected (while remaining transitive and TLF-planar), all faces of the graph are finite, and the graph is obviously composed of a unique $2$-connected component.
Therefore the classes of isomorphisms of the geometrical invariants described in this section contain neither twists nor rearrangements. This is coherent with the unique embedding property. Notice that this property holds when $\Gamma$ is at least $2$-connected, in the case of transitive TLF-planar graphs. On the other hand, when $\Gamma$ is $1$-separable, these invariants provide an accurate description of the possible embeddings of the graph in the plane. Labeling schemes {#sec:labelingscheme} ================ Our purpose in this section is to consider the geometrical invariants of $\Gamma$ and to prove that they are sufficient to give an exact description of the graph. The resulting description of the graph is called a labeling scheme, and extends the notion of labeling scheme for Cayley graphs, detailed in [@Chaboud; @DavidCayley]. Border automaton ---------------- Let $\mathsf{E}$ and $\mathsf{F}$ be two non-intersecting finite sets of colors. A [*labeling scheme*]{} of degree $d$ is a 3-tuple $(\xi,\phi,\eta)$ possessing the following properties: 1. $\xi \in \mathsf{E}^d$ is an edge vector and $\phi \in \mathsf{F}^d$ is a face vector ; 2. for each color $\mathfrak{e} \in \mathsf{E}$, $\textsf{eq}_{\xi,\phi}(\mathfrak{e})$ does not exceed two; 3. for each color $\mathfrak{e}$ in $\xi$, there exists a unique edge neighborhood $\eta_\mathfrak{e} \in \eta$ of the same color; all edge neighborhoods in $\eta$ must be coherent with $(\xi, \phi)$ and if $\textsf{eq}_{\xi,\phi}(\mathfrak{e}) = 2$, then each class of equivalence must appear on an extremity of $\eta_\mathfrak{e}$. Given a graph $\Gamma$, $\xi$ and $\phi$ stand for $\Gamma$’s edge and face vectors, locked together. The set $\eta$ stands for the set of edge neighborhoods of $\Gamma$. In the following, $\eta_{\mathfrak{e}}$ stands for the edge neighborhood in $\eta$ colored by $\mathfrak{e}$. The isomorphisms of labeling schemes are defined as the isomorphisms of the elements of the scheme.
The results in the previous sections ensure that for any pair $(\Gamma,G)$, there exists a labeling scheme $(\xi,\phi,\eta)$ corresponding to the coloring of the edges and faces associated to the group $G$ of the graph $\Gamma$, up to isomorphism. Notice that condition $(ii)$, along with Lemma \[lem:edgeneigh\], constrains the number of possible labeling schemes. Consider a labeling scheme $(\xi,\phi,\eta)$. Let $e$ be an element of $\xi$ and $\eta_e\in\eta$ be the edge neighborhood labeled by the color of $e$. The operation of [*gluing $\eta_e$ with $e$*]{} is possible if and only if there exists an edge neighborhood $\kappa$ isomorphic to $\eta_e$ such that one extremity of $\kappa$ is exactly equal to the pair $(\xi,\phi)$, with $e\in\xi$ as the central edge of the neighborhood $\kappa$. \[lem:reconstruction\] Let $(\xi,\phi,\eta)$ be a labeling scheme, $e \in \xi$ and $\eta_e \in \eta$ be colored by the color of $e$; then there exists a unique way to glue $\eta_e$ with $e$, up to isomorphism of the other extremity of $\eta_e$. Suppose that there exist two different ways to glue $\eta_e$ onto $e$. If each extremity of $\eta_e$ may be glued onto $e$, this means that all edges of the color of $e$ are equivalent with regard to $\sim$. If only one extremity of $\eta_e$ may be glued onto $e$, then there exist two classes of equivalence with regard to $\sim$ for the color of $e$, each of them at one extremity of $\eta_e$. In either case, since the number of classes of equivalence for the color of $e$ does not exceed two, this leads to a unique possibility to glue $\eta_e$ onto $e$, up to isomorphism. The existence follows from the fact that every class of equivalence of every color appears on an extremity of the associated edge neighborhood. The gluing of edge neighborhoods allows the reconstruction of the graph. Let us start from an initial graph $\Lambda$ composed of all edges incident to a central vertex $v$. Suppose that these edges are labeled according to the edge vector $\xi$.
Obviously, this planar graph does not include faces for the moment; nevertheless, we expect to build a face $\phi_i$ between the edges $\xi_i$ and $\xi_{i+1}$. By gluing the appropriate edge neighborhood $\eta_\mathfrak{e}$ onto the edge $\xi_i$, we create $d-1$ new edges incident to the other extremity of $\xi_i$, labeled such that the obtained edge neighborhood is isomorphic to $\eta_\mathfrak{e}$. The other extremities of these new edges correspond to new vertices of the graph. We will see later how it is possible to close the borders of the faces. \[exm:reconstruction\] Consider the labeling scheme defined in Figure \[fig:labscheme1\], where the sets of colors are $\mathsf{E}=\{\mathfrak{b},\mathfrak{r},\mathfrak{g}\}$ and $\mathsf{F}=\{\alpha,\beta,\gamma\}$. It is based on Examples \[exm:edgefacevector\] and \[exm:edgeneighborhoods\]. We associate to each color in $\mathsf{E}$ a unique edge neighborhood. The face $\beta$ is supposed to be infinite; a twist exchanging the faces colored by $\beta$ is thus allowed, which gives $\textsf{eq}_{\xi,\phi}(\mathfrak{r}) = 2$.
(Figure \[fig:labscheme1\]: the labeling scheme, drawn around an infinite face $\beta$; each vertex carries its edges numbered $1$ to $5$.) Let us try to glue the edge neighborhoods consecutively, while following the edges constituting the border of the orange face (cf. Figure \[fig:successive\]). We begin by gluing the golden edge neighborhood $\eta_\mathfrak{g}$ onto the unique golden edge $\mathfrak{g}$ belonging to the edge vector. Having glued two red edge neighborhoods $\eta_\mathfrak{r}$, we can then continue the process indefinitely, by gluing red edge neighborhoods along the border of the face. If this labeling scheme corresponds to an existing graph $\Gamma$, then this process describes the border of a face colored by $\beta$ in $\Gamma$. Consider an automaton built over the alphabet $A$. A language $L \subset A^\star$ acts naturally over the set of states of the automaton.
Therefore, we consider the following partition of the states of the automaton: two states $u$ and $v$ are equivalent if and only if there exists $l\in L$ such that it is possible to start from the state $u$ and reach the state $v$ by reading the word $l$ on the automaton. Now we will build an automaton by describing its set of states and the language acting on these states. A [*configuration*]{} of a labeling scheme $(\xi,\phi,\eta)$ is a pair $(\mathcal{C},b)$ where $\mathcal{C}$ is a class of equivalence for $\sim$ in $(\xi,\phi)$ and $b\in\{+,-\}$ is a direction of rotation. A configuration represents a block of edges (separated by infinite faces) attached to a vertex, an edge inside this block and a direction expressing in which way the block is embedded in the plane. Two configurations are said to be *equivalent* if $(i)$ they correspond to two blocks, $(ii)$ it is possible to map the first block onto the second by an isomorphism of edge and face vectors and $(iii)$ this map sends the edge of the first configuration onto the edge of the second configuration, and maps the direction of rotation accordingly. For a given labeling scheme, the number of configurations is finite, bounded by $2d$. These configurations define the states of our automaton. Let us define the following relations between configurations: - The relation $\overset{\textsf{\tiny next}}{\longrightarrow}$ describes whether a configuration comes next to another one in the edge vector, given a direction of rotation: $(\mathcal{C},b) \overset{\textsf{\tiny next}}{\longrightarrow} (\mathcal{C}',b)$ if and only if there exists a pair $(\bar{\xi},\bar{\phi})$ isomorphic to $(\xi,\phi)$ such that if $\bar{\xi}_i$ (the $i$-th element of the vector $\bar{\xi}$) belongs to $\mathcal{C}$, the edge next to $\bar{\xi}_i$ according to the direction $b$ (if $b$ is positive, that is $\bar{\xi}_{i+1}$, otherwise that is $\bar{\xi}_{i-1}$) belongs to $\mathcal{C}'$.
- The relation $\overset{\textsf{\tiny inv}}{\longrightarrow}$ describes how the configuration is modified when we cross the corresponding edge: $(\mathcal{C},b) \overset{\textsf{\tiny inv}}{\longrightarrow} (\mathcal{C}',b')$ if and only if both configurations correspond to the same color of edge $\mathfrak{e}$, and an edge neighborhood isomorphic to $\eta_{\mathfrak{e}}\in\eta$ has $(\mathcal{C},b)$ and $(\mathcal{C}',b')$ at its extremities. Given these two relations, it is possible to build a finite state automaton on the set of configurations. This automaton is called the [*border automaton*]{} associated to this labeling scheme. By Lemma \[lem:reconstruction\], it is connected. The relation $\textsf{inv}$ is involutive, while the relation $\textsf{next}$ may have more than one successor; therefore this automaton is non-deterministic in general. Determining the border of a face $\mathcal{F}$ is only a matter of reading the infinite word $(\textsf{next}\cdot \textsf{inv})^\omega$, with starting state a configuration such that the face $\mathcal{F}$ appears between this configuration and the configuration $\textsf{next}$ to it in the automaton. Such a word is an orbit of the automaton under the action of $(\textsf{next}\cdot \textsf{inv})$. For a given labeling scheme, we can associate to an element of the face vector – or a face – an orbit of the automaton. The colors of the faces are supposed to distinguish the classes of the faces under the group of automorphisms. Two finite faces are said to be [*equivalent*]{} if they have the same orbit, or orbits that correspond to the same border read in opposite directions. If two faces of the same color are not equivalent, then the labeling scheme is said to be [*invalid*]{}. Labeling schemes resulting from TLF-planar graphs are always valid. This property ensures that given two faces of the same color, there exists an automorphism mapping the first onto the second. The orbits of the automaton can be classified into two categories: the cyclic orbits and the acyclic ones.
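The reading of $(\textsf{next}\cdot\textsf{inv})^\omega$ can be sketched on a toy deterministic case. Here we take the square grid and let position–direction pairs around a vertex of degree $4$ stand in for configurations (in the construction above, the states are equivalence classes of such pairs); crossing an edge shifts the position by two and preserves the direction of rotation:

```python
def orbit_lengths(states, step):
    """Cycle lengths of the map step = next followed by inv, assumed to
    be a permutation of the states (deterministic toy automaton); the
    length of a cycle is the period of the border word read from it."""
    lengths, seen = {}, set()
    for s in states:
        if s in seen:
            continue
        cycle, cur = [], s
        while cur not in seen:
            seen.add(cur)
            cycle.append(cur)
            cur = step(cur)
        for t in cycle:
            lengths[t] = len(cycle)
    return lengths

# Square grid: four edges 0..3 around each vertex, direction + or -.
def nxt(s):
    i, b = s
    return ((i + 1) % 4 if b == "+" else (i - 1) % 4, b)

def inv(s):
    i, b = s
    return ((i + 2) % 4, b)  # the crossed edge seen from the far side

states = [(i, b) for i in range(4) for b in ("+", "-")]
lengths = orbit_lengths(states, lambda s: inv(nxt(s)))
# every orbit is cyclic with period 4: each face closes into a square
assert all(v == 4 for v in lengths.values())
```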
Cyclic orbits correspond to faces that can be “closed”, meaning that their border is periodic. Consider the orbit containing the $i$-th edge of $\xi$, with positive direction of rotation. If this orbit is cyclic, then we define $k_i$ as the size of this orbit; otherwise $k_i=\infty$. Let $(a_i), i\in[1;d]$ be a set of formal letters such that $a_i=a_j$ if and only if the $i$-th and $j$-th faces of $\phi$ are of the same color. The vector whose elements are the $k_i a_i$ (or simply $\infty$ if $k_i$ is infinite) is called the [*primitive type vector*]{} of $(\xi,\phi,\eta)$. A type vector $[l_1,\dots,l_d]$ is said to be [*valid*]{} with regard to that labeling scheme if and only if there exists a valuation of the $(a_i)$ in ${\mathbb{N}}$ such that $\forall i, l_i = k_i a_i$ and all values in the vector are at least three. Examples of construction ------------------------ \[exm:threeconnex\] Consider the labeling scheme defined in Figure \[fig:labscheme2\], with degree $5$, $\textsf{E} = \{\mathfrak{b},\mathfrak{r},\mathfrak{g}\}$ and $\textsf{F} = \{ \alpha,\beta, \gamma\}$. With each color in $\textsf{E}$ we associate a unique edge neighborhood. Let us compute the associated border automaton. Although $|\textsf{E}|=3$, there are $10$ different configurations, one for each edge and direction of rotation. Since the $\textsf{next}$ operation corresponds to a rotation of the edge vector, we represent the border automaton with two cycles of length $5$.
On Figure \[fig:bordaut2\], $\textsf{next}$ edges appear in black, while $\textsf{inv}$ edges appear in dashed gray lines. [Figure \[fig:bordaut2\]: the border automaton, drawn as two cycles of length $5$ over the edges $1(\mathfrak{r})$, $2(\mathfrak{b})$, $3(\mathfrak{b})$, $4(\mathfrak{g})$, $5(\mathfrak{r})$, together with its three orbits.] The possible orbits are therefore $[1]$, $[2,4,5]$, and $[3]$. The first one corresponds to faces with red borders $\mathfrak{r}^\star$, and the third one to faces with blue borders $\mathfrak{b}^\star$. The others have border $(\mathfrak{bgr})^\star$. We have three possible classes of borders of the faces, corresponding to each orbit of the automaton and to each color in $\mathsf{F}$. The corresponding primitive type vector is $[3n,m,3n,3n,p]$ for $(n,m,p)\in {\mathbb{N}}^3$. Figure \[fig:example1\] shows a planar embedding in the hyperbolic plane of a graph possessing this labeling scheme and type vector $[3,4,3,3,5]$. The faces [$\alpha$]{} correspond to squares and the faces [$\gamma$]{} to pentagons, while the faces with 3-colored borders are triangles. Notice that this graph possesses a trivial stabilizer and is therefore a Cayley graph.
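To make the orbit computation concrete, here is a minimal Python sketch. The states and the two relations of a border automaton are given as explicit maps; the data is a small hypothetical example of ours (not the scheme of the figure), with a deterministic, involutive $\textsf{inv}$, so that every orbit of $\textsf{next}\cdot\textsf{inv}$ is cyclic.

```python
# Toy border automaton (hypothetical data of ours, not the scheme of
# the figure): states are configurations, NEXT is the rotation of the
# edge vector, and INV crosses an edge.  Here INV is deterministic and
# involutive, so every orbit of next.inv is cyclic.
NEXT = {"a": "b", "b": "c", "c": "a", "d": "e", "e": "d"}
INV = {"a": "d", "d": "a", "b": "b", "c": "e", "e": "c"}

def border_orbit(start):
    """Orbit of `start` under next.inv; its length is the k_i of a face."""
    orbit, s = [], start
    while s not in orbit:
        orbit.append(s)
        s = INV[NEXT[s]]
    return orbit

# Group the states into orbits: each orbit is the border of one face.
orbits = []
for s in NEXT:
    if not any(s in o for o in orbits):
        orbits.append(border_orbit(s))
print([len(o) for o in orbits])  # orbit sizes, i.e. the primitive k_i's
```

With a non-deterministic $\textsf{inv}$ (several successors for one state), `border_orbit` would have to explore a tree of words, which is how the aperiodic borders of the next example arise.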
[Figure \[fig:example1\]: hyperbolic embedding of a graph with this labeling scheme and type vector $[3,4,3,3,5]$.] \[exm:aperiodic\] Consider now the case of Example \[exm:reconstruction\], where $\mathsf{E}=\{\mathfrak{b},\mathfrak{r},\mathfrak{g}\}$ and $\mathsf{F}=\{\alpha,\beta,\gamma\}$. As in the previous example, we associate to each color in $\mathsf{E}$ a unique edge neighborhood. The labeling scheme is valid with regard to our definitions. When we assume that the face $\beta$ is infinite, we obtain the border automaton appearing on Figure \[fig:bordaut1\]. [Figure \[fig:bordaut1\]: the border automaton and its three orbits, of sizes $3$, $2$ and $\infty$.] The first orbit corresponds to the face $\gamma$, which is bordered by $(\mathfrak{rrb})^\star$. The second orbit corresponds to the face $\alpha$, bordered by $(\mathfrak{bg})^\star$. More interesting is the case of the face $\beta$. Notice that due to the possible twists, the $\textsf{inv}$ operation is not deterministic. This leads to faces bordered by aperiodic words from $\mathfrak{r}^\omega \mathfrak{g}\mathfrak{r}^\omega$. There does not exist any embedding of the graph that can avoid these aperiodic faces (cf. Fig. \[fig:example2\]), and all infinite faces either have a unique $\mathfrak{g}$ edge in their border or none at all.
The associated primitive type vector is given by $[\infty,3n,2m,\infty,3n]$ for values of $(m,n) \in {\mathbb{N}}^2$, which leads to 1-separable graphs such as the one of Figure \[fig:example2\]. For Cayley graphs, it is always possible to find an embedding where the borders of the faces are periodic [@DavidCayley]. Hence, this set of graphs contains no Cayley graphs, because of this aperiodic border property. [Figure \[fig:example2\]: Example of a 1-separable TLF-planar vertex-transitive graph, with type vector $[3;\infty;3;4;\infty]$, associated to the labeling scheme described in Example 5.] Realization ----------- Every TLF-planar vertex-transitive graph possesses a valid labeling scheme and a type vector that is valid with regard to that scheme. Consider now the converse of this result: given a general valid labeling scheme and a valid type vector, is it possible to produce a vertex-transitive graph that has the same labeling scheme and type vector? Our goal in this section is to build specific embeddings of vertex-transitive planar graphs with particular geometrical properties. Depending on the graph, we select an appropriate geometry: Euclidean, spherical or hyperbolic. Since our graphs are TLF-planar, embeddings in spherical geometry lead to finite graphs. Infinite faces can thus only occur in Euclidean and hyperbolic geometry. The following theorem constructs embeddings that are tilings of the plane by regular polygons. \[thm:existence\] Given a labeling scheme $(\xi,\phi,\eta)$ and a valid type vector $[k_1,\dots,k_d]$, there exists a TLF-planar vertex-transitive graph possessing this scheme and type vector. Moreover, all faces of the embedding are regular polygons. Consider any point in the plane.
We evaluate the interior angle of a regular polygon with $k_i$ sides and side length $l$; the total angle of all the polygons in the type vector around a point must then be equal to $2\pi$. The following values result from simple trigonometry in the different geometries: $$\theta_i(l) = \left\{ \begin{array}{cl} \textrm{Spherical :}& \strut 2\arcsin \left( \frac{\cos(\pi/k_i)}{\cos(l/2)}\right) \\ \textrm{Euclidean :}& \strut \frac{(k_i-2)}{k_i}\pi\\ \textrm{Hyperbolic :}& \strut 2\arcsin \left( \frac{\cos(\pi/k_i)}{\cosh(l/2)}\right) \end{array} \right. \quad \textrm{and} \quad \sum_{i=1}^d \theta_i(l) = 2\pi\quad(1)$$ The equation $(1)$ determines the choice of the geometry: let $\Sigma$ be the sum of the angles in the Euclidean plane. If $\Sigma=2\pi$, we choose the Euclidean geometry, and any value is possible for $l$ (the corresponding graphs will be homothetic). If $\Sigma<2\pi$, we choose the spherical geometry, whereas if $\Sigma>2\pi$ we select the hyperbolic geometry. In either case, there exists a unique solution of the equation $(1)$, which determines a unique value for the length $l$, and consequently for the angles $\theta_i$[^1]. These values allow us to draw all the regular polygons that correspond to our type vector around a given point of the plane. Let $\epsilon$ be a point in the plane. By induction, we build a planar locally finite graph $\Gamma_n$, with a central vertex $\epsilon$, such that all vertices at distance $< n$ from $\epsilon$ have degree $d$. The graph $\Gamma_1$ is composed of the polygons glued around $\epsilon$ as above, restricted to the edges incident to $\epsilon$. The labeling of $\Gamma_1$ is chosen isomorphic to $(\xi,\phi)$, such that the face $\phi_i$ corresponds to a regular polygon with $k_i$ sides. Suppose now that we have built $\Gamma_n$. Consider the finite set of vertices of $\Gamma_n$ at distance $n$ from $\epsilon$ whose degree is less than $d$.
We build the remaining edges with geodesics of length $l$, such that the angle at the vertex corresponding to the face $\phi_i$ is equal to $\theta_i(l)$. The labeling scheme describes how the edges must be labeled: take a vertex $s$ and an edge $e$ colored by $\mathfrak{e}$ incident to $s$. According to Lemma \[lem:reconstruction\], there exists a unique way to glue the edge neighborhood $\eta_\mathfrak{e}$ onto $e$, up to isomorphism. The choice of the edge neighborhood is not crucial, because the isomorphisms leave the angles of the faces constant. The construction of $\Gamma_{n+1}$ is just a matter of gluing the edge neighborhoods on the edges possessing an extremity at distance $n+1$ from $\epsilon$, by choosing an adequate edge neighborhood isomorphic to $\eta_\mathfrak{e}$. This construction cannot lead to intersecting edges: indeed, all edges have the same length, and inside a given face, all angles are the same. Therefore the construction of a face ultimately leads to a regular polygon in the plane. This answers the problem of the closure of the faces.
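As a numerical illustration of equation $(1)$ (a sketch of ours, not the paper's code): one can select the geometry from the Euclidean angle sum $\Sigma$ and, in the hyperbolic case, solve for the edge length $l$ by bisection, since each $\theta_i(l)$ is decreasing in $l$.

```python
# Numerical sketch (ours) of equation (1): pick the geometry from the
# Euclidean angle sum Sigma, then, in the hyperbolic case, solve for
# the edge length l by bisection -- each theta_i(l) decreases in l.
from math import pi, asin, cos, cosh, isclose

def euclidean_angle(k):
    return (k - 2) * pi / k

def hyperbolic_angle(k, l):
    return 2 * asin(cos(pi / k) / cosh(l / 2))

def solve_edge_length(ks, tol=1e-12):
    sigma = sum(euclidean_angle(k) for k in ks)
    if isclose(sigma, 2 * pi):
        return "euclidean", None   # any l works (homothetic graphs)
    if sigma < 2 * pi:
        return "spherical", None   # finite graph; omitted in this sketch
    lo, hi = 1e-9, 50.0            # angle sum decreases from Sigma to 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(hyperbolic_angle(k, mid) for k in ks) > 2 * pi:
            lo = mid
        else:
            hi = mid
    return "hyperbolic", (lo + hi) / 2

# Type vector [3,4,3,3,5] of the first example: Sigma = 2.1*pi > 2*pi.
geom, l = solve_edge_length([3, 4, 3, 3, 5])
print(geom)
```

At the returned length $l$, the hyperbolic angles sum to $2\pi$, which is exactly the closure condition used in the proof above.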
[Figure: closure of a face $\mathcal{F}$; all edges have length $l$ and all angles inside $\mathcal{F}$ are equal to $\theta_l$.] Hence the limit graph $\Gamma$ is well defined. This graph is vertex-transitive: the construction is independent of the starting vertex, and applying this construction with two different starting vertices creates an automorphism mapping the first vertex onto the second. The labeling of the edges entails that the automorphism group is transitive. The automorphisms stabilizing a vertex exactly correspond to the classes of equivalence of the configurations.
Therefore, for any vertex-transitive graph obeying our conditions, it is possible to embed the graph in a particular geometry of the plane with regular polygons. Moreover, automorphisms of the graph map finite faces onto isometric faces. Consequently, in the case of the 2-connected graphs, every automorphism of $\Gamma$ extends into an isometry of the plane. This result appears in Babai [@BabaiGrowth] in the case of 3-connected graphs, and is here extended to the larger case of the TLF-planar graphs. Notice that the case of $1$-separable graphs is more complex: even if the faces of the graph are regular polygons, Example \[exm:aperiodic\] shows that there exist graphs for which it is impossible to find an embedding where automorphisms may be realized by isometries of the plane. A direct consequence of this result pertains to the growth-rate of the TLF-planar graphs. When $\Gamma$ is $2$-separable, its structure is arborescent and its growth-rate is either linear or exponential. On the other hand, when $\Gamma$ is at least $3$-connected, the embedding of $\Gamma$ given by Theorem \[thm:existence\] is quasi-isometric to the plane inside which it is defined. Therefore the growth-rate of the graph is either quadratic or exponential. Combinatorial approach ---------------------- Consider the problem of finding all the valid labeling schemes for graphs of degree $d$. The number of colors in $\textsf{E}$ and $\textsf{F}$ is bounded by $d$. Therefore the number of possible edge and face vectors is bounded by $d^{d-1}$, up to rotation, and the number of edge neighborhoods for a given color by $2d$. It is therefore possible to enumerate all possible labeling schemes, and then to extract the valid ones by computing the border automaton. The particular properties of labeling schemes make it possible to drastically reduce this theoretical upper bound.
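The first step of such an enumeration — generating candidate edge vectors up to rotation — can be sketched as follows. This is our simplification: face vectors, edge neighborhoods and the validity filter of the border automaton are all omitted.

```python
# Sketch (ours) of the first enumeration step: all candidate edge
# vectors of length d over at most d colors, keeping one canonical
# representative per rotation class.  Face vectors, edge neighborhoods
# and the validity filter of the border automaton are omitted.
from itertools import product

def canonical(vec):
    """Lexicographically smallest rotation of vec."""
    return min(tuple(vec[i:] + vec[:i]) for i in range(len(vec)))

def edge_vectors(d):
    return {canonical(list(v)) for v in product(range(d), repeat=d)}

print(len(edge_vectors(3)))  # rotation classes of edge vectors for d = 3
```

Each surviving representative would then be combined with a face vector and edge neighborhoods, and kept only if the resulting border automaton declares the scheme valid.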
\[thm:enum\] Given a number $d\geq 2$, it is possible to enumerate all TLF-planar transitive graphs having internal degree $d$, along with their labeling scheme and primitive type vector. A natural question lies in determining all possible type vectors for TLF-planar vertex-transitive graphs. We enumerate the graphs of a given degree, and eliminate the redundant ones by testing whether they belong to the same isomorphism class. For example, the possible type vectors for vertex-transitive graphs of degree $3$ are: \[page:tv\] $$\bigg\{ [n,n,n], [n,2m,2m], [2n,2m,2p] \bigg\}$$ …for values of $n,m,p$ such that the faces of the graphs are at least triangles. The set of planar graphs obtained is presented in the appendix, and extends the vertex-transitive graphs presented in [@Tilings]. Notice also that this set is strictly larger than the set of possible Cayley graphs of the same degree, containing in particular the graph associated to the dodecahedron, with type vector $[5;5;5]$, and the icosidodecahedron, with type vector $[3;5;3;5]$, neither of which is a Cayley graph. More generally, this is the case of all tilings with type vector $\{[6n\pm 1,6n\pm 1,6n\pm 1]\}$. This is an example of an infinite family of TLF-planar transitive graphs which are not Cayley. More precisely, for a given labeling scheme $(\xi,\phi,\eta)$ and valid type vector, we can check whether the associated unlabeled graph is Cayley or not, by enumerating all Cayley graphs having the same type vector [@DavidCayley], computing the labeling schemes of these transitive graphs and comparing them to $(\xi,\phi,\eta)$: if $\Gamma$ is a TLF-planar transitive graph, then it is decidable whether $\Gamma$ is the Cayley graph of a group or not, and one can obtain an enumeration and a description of the groups having $\Gamma$ as a Cayley graph.
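The validity test of a type vector with regard to a labeling scheme, used implicitly throughout this enumeration, can be sketched as follows. The encoding of a primitive entry as a `(multiplicity, letter)` pair is ours, with `INF` standing for an acyclic orbit.

```python
# Sketch (ours) of the validity test for a type vector: a primitive
# entry is a (multiplicity k_i, formal letter a_i) pair, INF standing
# for an acyclic orbit; the same letter must receive the same value,
# and every face must have at least three sides.
INF = float("inf")

def is_valid(type_vector, primitive):
    valuation = {}
    for l, (k, letter) in zip(type_vector, primitive):
        if k == INF:
            if l != INF:
                return False
            continue
        if l == INF or l < 3 or l % k != 0:
            return False
        a = l // k
        if valuation.setdefault(letter, a) != a:
            return False  # inconsistent valuation of the letter
    return True

# Primitive type vector [3n, m, 3n, 3n, p] of the first example.
primitive = [(3, "n"), (1, "m"), (3, "n"), (3, "n"), (1, "p")]
print(is_valid([3, 4, 3, 3, 5], primitive))  # the hyperbolic tiling above
print(is_valid([3, 4, 6, 3, 5], primitive))  # n cannot be both 1 and 2
```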
Another side result is the possibility to enumerate, for a given graph $\Gamma$, all the groups of automorphisms acting transitively on the set of vertices of $\Gamma$. This raises the question of finding graphs which are vertex-transitive but not Cayley, [*i.e.*]{} which do not possess a group of automorphisms acting simply transitively on their vertices. Consider the stabilizer of a vertex of $\Gamma$: it must be a finite group of isometries of the plane fixing the vertex, hence it is either cyclic or dihedral. If this stabilizer does not contain any rotation, then the subgroup of direct isometries of ${\textrm{Aut}(\Gamma)}$ acts simply transitively on the vertices of $\Gamma$; therefore such graphs must possess rotations in their stabilizers. The $\textsf{multiply}(k)$ operation on a labeling scheme $(\xi,\phi,\eta)$ of degree $d$ consists in building another labeling scheme $(\xi',\phi',\eta')$ of degree $k\times d$ defined by: - $\phi'$ and $\xi'$ consist of $k$ copies of $\phi$ and $\xi$, [*i.e.*]{} $\phi'_i = \phi_{(i {\textrm{ \small mod }}d)}$, and $\xi'_i = \xi_{(i {\textrm{ \small mod }}d)}$; - each edge neighborhood $\eta'_{\mathfrak{e}}$ is a copy of $\eta_{\mathfrak{e}}$ where the edge and face vectors at each extremity have been replaced by $(\xi',\phi')$. The set of configurations of the automaton remains unchanged by this operation, as is the construction of the border automaton. This operation allows us to build labeling schemes with stabilizers containing rotations. Moreover, this operation is invertible: for a labeling scheme whose stabilizer possesses rotations, it is possible to extract a labeling scheme without rotations by an operation of division. Let $\Gamma$ be a 3-connected planar vertex-transitive graph of prime degree $d$. Either $\Gamma$ is a face- and edge-transitive graph with type vector $[k;\dots;k]$, or it is a Cayley graph. Consider the group $G$ of automorphisms of $\Gamma$ and the stabilizer $G_{s}$ of a vertex $s$ of $\Gamma$.
Since the graph can be embedded in the plane by regular polygons, so that the automorphisms map the finite faces onto finite faces, $G$ is a group of isometries of the plane and $G_{s}$ is a finite group of isometries of the plane leaving $s$ stable. If $G_{s}$ contains a non-trivial rotation, then, since the degree of $\Gamma$ is prime, the graph is edge- and face-transitive. On the contrary, if $G_{s}$ contains no rotation, then it has a single non-trivial element, which is a reflection. Then the subgroup of the direct isometries in $G$ acts simply and transitively on the vertices of $\Gamma$. Therefore $\Gamma$ is a Cayley graph. This analysis points out the classes of graphs that are likely not to be Cayley graphs. For small degrees, these graphs are either face-transitive $[k;\dots;k]$, or edge-transitive $[k_1,k_2,\dots,k_1,k_2]$. There exists a graph of degree $9$ belonging to this class that is neither face-transitive nor edge-transitive. Discussion {#discussion .unnumbered} ========== The geometrical invariants of TLF-planar graphs have allowed us to describe these graphs by their local properties, in the case of vertex-transitive graphs. Nevertheless, there exist different directions in which we could extend this study. First, the dual graph of a transitive planar graph is not transitive, but cofinite. That is to say, there exists only a finite number of orbits of the vertices under the action of the group of automorphisms. Even if these graphs in general do not respect the same geometrical properties as the transitive graphs, it would be interesting to study this family and their embeddings. We could then question the possibility of extending the representation by labeling schemes to cofinite graphs. Second, the key property of local finiteness is rather restrictive: our graphs possess at most one accumulation point. Yet Levinson showed that the number of accumulation points, if greater than one, is either two or infinite.
The analysis of the embeddings of planar graphs with two or more accumulation points could lead to a geometrical representation for a larger class of vertex-transitive planar graphs. [**Appendix**]{} Enumerations of vertex-transitive graphs ======================================== In the following sections, we enumerate all locally finite planar vertex-transitive graphs of small inner degree (from 3 to 4). Transitive graphs of degree $2$ correspond to cyclic graphs. For the larger degrees, we enumerate all possible labeling schemes, each one corresponding to an enumerable family of planar vertex-transitive graphs. Some of these families may have the same representative in terms of unlabeled graphs, but the groups of automorphisms are distinct. These families are classified depending on whether their borders are periodic (P) or aperiodic (A). The following table displays the number of such classes: $$\begin{array}{|ccc||ccc||} \hline \textrm{Degree} & \textrm{~~P~~} & \textrm{~~A~~} & \textrm{Degree} & \textrm{~~P~~} & \textrm{~~A~~} \\ \hline 1 & 0 & 0 & 4 & 52 & 1 \\ \hline 2 & 1 & 0 & 5 & 174 & >1 \\ \hline 3 & 16 & 0 & 6 & 775 & >1 \\ \hline \end{array}$$ For each labeling scheme, we give the primitive type vector, and a description of the graph as a list containing the classes of edges and the borders of the faces. Each class of edges is numbered $a_i^k$, where $k$ is a boolean indicating whether the associated edge neighborhood reverses the direction of rotation of the edge vector or not. Each border of a face is a word on the edge classes corresponding to an orbit in the border automaton, repeated a given number of times. [^1]: Except for the following type vectors: $[3,3,p\geq 5]$, $[3,4,p\geq 6]$ and $[3,5,p\geq 9]$. There does not exist a labeling scheme validating any of these type vectors.
--- author: - Dimbihery Rabenoro title: A Conditional limit theorem for independent random variables --- Introduction ============ Context and Scope, Importance Sampling Framework ------------------------------------------------ Let $(X_{j})_{j \geq 1}$ be a sequence of independent, not necessarily identically distributed (i.d.), random variables (r.v.) valued in $\mathbb{R}$, such that (s.t.) the $(X_{j})$ have a common support $\mathcal{S}_{X}$. In this chapter, we restrict ourselves to the one-dimensional case, for technical reasons. Indeed, the proof of the Edgeworth expansion theorem which we use here (see [@Petrov; @1975]) is specific to the case $d=1$ and can be extended to our framework (see Section $\ref{Petrov}$ below). We keep the notations of the preceding chapter. For $a \in \mathcal{S}_{X}$ and $n \geq 1$, we denote by $Q_{nak}$ a regular version of the conditional distribution of $X_{1}^{k} := \left( X_{1}, ..., X_{k} \right)$ given $\left\{ S_{1,n} = na \right\}$. \ We have obtained in the preceding chapter an approximation of $Q_{nak}$ when $k = o(n)$. A natural question arises: what can be said about the distribution of the $n - k$ other r.v.’s, that is of $(X_{j})_{ k+1 \leq j \leq n}$, given $\left\{ S_{1,n} = na \right\}$? In terms of Statistical Mechanics, the question would be: what can be said about the distribution of energy for the large component? Set $$k' := n - k, \quad \textrm{so that} \enskip \frac{k'}{n} \rightarrow 1 \enskip \textrm{as} \enskip n \rightarrow \infty.$$ Therefore, we study $Q_{nak}$ when $\frac{k}{n}$ is allowed to converge to $1$ as $n \rightarrow \infty$. In [@Dembo; @and; @Zeitouni; @1996], it is explained that the condition $k=o(n)$ is necessary to get a Gibbs Conditioning Principle. In this paper, as expected, we do not obtain a Gibbs-type measure as an approximation of $Q_{nak}$ if $\frac{k}{n}$ does not converge to $0$.
\ Now, we describe an Importance Sampling (IS) framework within which it is natural to consider $Q_{nak}$ for large $k$. Consider a sequence $(X_{j})_{j \geq 1}$ of r.v.’s. For a large but *fixed* $n$, we intend to estimate $$\Pi_{n} := P(X_{1}^{n} \in \mathcal{E}_{n}), \quad \textrm{for some event} \enskip \mathcal{E}_{n}.$$ A classical IS estimator of $\Pi_{n}$ is the following: $$\label{ISestimator} \widehat{\Pi}_{n}(N) := \frac{1}{N} \sum\limits_{i=1}^{N} \frac{p_{1}^{n}(Y_{1}^{n}(i))}{q_{1}^{n}(Y_{1}^{n}(i))} 1_{\mathcal{E}_{n}} (Y_{1}^{n}(i)),$$ where $p_{1}^{n}$ is the density of $X_{1}^{n}$ and the $(Y_{1}^{n}(i))$ are i.i.d. copies of a random vector $Y_{1}^{n}$ with density $q_{1}^{n}$. Then, the law of large numbers ensures that $\widehat{\Pi}_{n}(N)$ converges almost surely to $\Pi_{n}$ as $N \rightarrow \infty$. The interest of this resampling procedure is to reduce the variance of the resulting estimator, compared to the usual Monte Carlo method. It is well known that the optimal density from the point of view of the variance is the conditional density $p(X_{1}^{n} | \mathcal{E}_{n})$. Therefore, it is natural to seek an approximation of $p(X_{1}^{n} | \mathcal{E}_{n})$. This approach has been developed in [@Broniatowski; @and; @Ritov], for an i.i.d. sequence $(X_{j})_{j \geq 1}$ of centered r.v.’s, with $${\mathcal{E}_{n}} = \left\{ (x_{i})_{1 \leq i \leq n} \in \mathbb{R}^{n} : \sum\limits_{i=1}^{n} x_{i} \geq na_{n} \right\},$$ for some sequence $(a_{n})$ converging slowly to $0$. Therefore, $\widehat{\Pi}_{n}(N)$ estimates the moderate deviation probability of $S_{1,n}/n$. In [@Broniatowski; @and; @Ritov], the authors get an approximation of $p(X_{1}^{k} | \mathcal{E}_{n})$, which should be close to $p(X_{1}^{n} | \mathcal{E}_{n})$ if $k$ is large. For a r.v. $X$, denote by $\mathcal{L}(X)$ its probability distribution. They obtain that, for some density $g_{k}$ on $\mathbb{R}^{k}$, $$\label{theoRitov} p\left( \left.
X_{1}^{k} = Y_{1}^{k} \right| S_{1,n} \geq na_{n} \right) \approx g_{k}(Y_{1}^{k}), \quad \textrm{where} \quad Y_{1}^{k} \sim \mathcal{L} \left( \left. X_{1}^{k} \right| S_{1,n} \geq na_{n} \right).$$ The precise sense of $\approx$ is given in Section $\ref{landauNotations}$ below. They deduce from an elementary lemma that $$g_{k}(Z_{1}^{k}) \approx p\left( \left. X_{1}^{k} = Z_{1}^{k} \right| S_{1,n} \geq na_{n} \right), \quad \textrm{where } Z_{1}^{k} \textrm{ has density } g_{k}.$$ Then, the approximation density $g_{k}$ has a computable expression, which makes it possible to simulate $Z_{1}^{k}$. A density $\overline{g}_{n}$ on $\mathbb{R}^{n}$ is constructed from $g_{k}$. In $(\ref{ISestimator})$, $q_{1}^{n}$ and $(Y_{1}^{n}(i))$ are replaced respectively by $\overline{g}_{n}$ and copies of a r.v. with density $\overline{g}_{n}$. The IS estimator obtained performs better than the existing estimators of $\Pi_{n}$. \ Now, it is reasonable to expect that $(\ref{theoRitov})$ implies that the distribution of $X_{1}^{k}$ given $\left\{S_{1,n} \geq na_{n} \right\}$ is close to the distribution associated to $g_{k}$. We can use this idea to get an approximation of $Q_{nak}$ for some $k$ such that $\frac{k}{n} \rightarrow 1$ (see Theorem $\ref{largekTheorem}$), but also for a class of $k$ which are $o(n)$ (see Theorem $\ref{smallkTheorem}$). However, in both cases, the condition $n-k \rightarrow \infty$ is required for the Edgeworth expansions. \ We consider a sequence $(X_{j})_{j \geq 1}$ of *independent* r.v.’s. For any $a \in \mathcal{S}_{X}$, let $p\left( \left. X_{1}^{k} = \cdot \right| S_{1,n} = na \right)$ be the density of $X_{1}^{k}$ given $\left\{ S_{1,n} = na \right\}$. In this paper, we obtain that, for some density $g_{k}$ on $\mathbb{R}^{k}$, $$p\left( \left. X_{1}^{k} = Y_{1}^{k} \right| S_{1,n} = na \right) \approx g_{k}(Y_{1}^{k}), \quad \textrm{where} \quad Y_{1}^{k} \sim \mathcal{L} \left( \left.
X_{1}^{k} \right| S_{1,n} = na \right).$$ We deduce (see Section 2.4) that $$\left\| Q_{nak} - G_{k} \right\|_{TV} \longrightarrow 0 \quad \textrm{as} \enskip n \rightarrow \infty,$$ where $G_{k}$ is the distribution associated to $g_{k}$. More precisely, when $k$ is small ($k = o(n^{\rho})$ with $0 < \rho < 1/2$), $G_{k}$ is the same Gibbs type measure as in the preceding chapter, while for large $k$ (see the assumptions of Theorem $\ref{largekTheorem}$), $G_{k}$ is a slight modification of this measure. \ Kolmogorov’s extension theorem does not apply to the sequence $(Q_{nan})_{n \geq 1}$ of probability measures. Therefore, we need to consider a sequence $((\Omega_{n},\mathcal{A}_{n}, \mathcal{P}_{n}))_{n \geq 1}$ of probability spaces s.t. for any $n \geq 1$, $Y_{1}^{n}$ is a random vector defined on $(\Omega_{n},\mathcal{A}_{n}, \mathcal{P}_{n})$ and the distribution of $Y_{1}^{n}$ is $Q_{nan}$. Then, for $k \leq n$, $Q_{nak}$ is the distribution of $Y_{1}^{k}$. The properties of $(Y_{1}^{n})_{n \geq 1}$ are studied in Section 3, after some elementary results and statement of the Assumptions in Section 2, while Section 4 is devoted to our main Results and their proofs. Assumptions and elementary results ================================== All the r.v.’s considered are a.c. w.r.t. the Lebesgue measure on $\mathbb{R}$. For any r.v. $X$, let $P_{X}$ be its distribution, $p_{X}$ its density and $\Phi_{X}$ its moment generating function (mgf). For any $j \geq 1$, set $$P_{j} := P_{X_{j}} \quad ; \quad p_{j} := p_{X_{j}} \quad ; \quad \Phi_{j} := \Phi_{X_{j}}.$$ Conditional density ------------------- Let $U$ and $V$ be r.v.’s having respective densities $p_{U}$ and $p_{V}$ and a joint density denoted by $p_{(U,V)}$. Then, there exists a conditional density of $U$ given $V$, denoted as follows. $$p\left( \left. U = u \right| V = v \right) = \frac{ p_{(U,V)} \left(u, v \right)}{p_{V}(v)}.$$ \[densCond\] Let $(X_{j})_{j \geq 1}$ be a sequence of independent r.v.’s. 
For any $n \geq 1$ and $1 \leq i \leq n$, let $J_{n}$ be a subset of $\left\{ i, ..., n \right\}$ s.t. $\alpha_{n} := \left| J_{n} \right| < n-i+1$. Let $L_{n}$ be the complement of $J_{n}$ in $\left\{ i, ..., n \right\}$. Set $S_{L_{n}} := \sum\limits_{j \in L_{n}} X_{j}$. Then, there exists a conditional density of $(X_{j})_{j \in J_{n}}$ given $S_{i,n}$, defined by $$\label{densCondFormule} p\left( \left. (X_{j})_{j \in J_{n}} = (x_{j}) \right| S_{i,n} = s \right) = \frac{ \left\{ \prod\limits_{j \in J_{n}} p_{j}(x_{j}) \right\} p_{S_{L_{n}}} \left( s - \sum\limits_{j \in J_{n}} x_{j} \right) } {p_{S_{i,n}}\left( s \right)}.$$ The tilted density ------------------ For a r.v. $X$, let $\Phi_{X}$ be its mgf and let $\Theta_{X} := \left\{ \theta \in \mathbb{R} : \Phi_{X}(\theta ) < \infty \right\}$. For any $\theta \in \Theta_{X}$, denote by $\widetilde{X}^{\theta }$ a random variable having the tilted density, defined by $$p_{\widetilde{X}^{\theta}}(x) := \frac{(\exp \theta x) p_{X}(x)}{\Phi_{X}(\theta)}.$$ \ For any $j \geq 1$, set $\Phi_{j} := \Phi_{X_{j}}$. We suppose throughout the text that the functions $(\Phi_{j})_{j \geq 1}$ have the same domain of finiteness, denoted by $\Theta$, which is assumed to have non-empty interior. We write, for any $j \geq 1$, $$\Theta := \left\{ \theta \in \mathbb{R} : \Phi_{j} ( \theta) < \infty \right\}.$$ For any $\theta \in \Theta$, there exists a probability space $(\Omega^{\theta}, \mathcal{A}^{\theta}, \mathcal{P}^{\theta})$ such that for every finite subset $J \subset \mathbb{N}$ and all $(B_{j})_{j \in J} \in \mathcal{B}(\mathbb{R})^{|J|}$, $$\mathcal{P}^{\theta} \left( \left(\widetilde{X}_{j}^{\theta}\right)_{j \in J} \in (B_{j})_{j \in J} \right) = \prod_{j \in J} \widetilde{P}_{j}^{\theta}(B_{j}) = \prod_{j \in J} \int\limits_{B_{j}} \widetilde{p}_{j}^{\theta}(x)dx,$$ where $\widetilde{P}_{j}^{\theta} := P_{\widetilde{X}_{j}^{\theta}}$ and $\widetilde{p}_{j}^{\theta} := p_{\widetilde{X}_{j}^{\theta}}$.
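As a concrete sanity check of the tilting construction (our numerical example with a Gaussian, not taken from the text): for $X \sim N(\mu,\sigma^2)$ one has $\kappa(\theta)=\mu\theta+\sigma^2\theta^2/2$, hence $m(\theta)=\kappa'(\theta)=\mu+\sigma^2\theta$, which integrating $x$ against the tilted density recovers.

```python
# Numerical check (our Gaussian example, not from the text): for
# X ~ N(mu, sigma^2), kappa(theta) = mu*theta + sigma^2*theta^2/2, so
# the tilted variable has mean m(theta) = kappa'(theta) = mu + sigma^2*theta.
from math import exp, sqrt, pi

mu, sigma, theta = 0.5, 2.0, 0.3

def p(x):  # density of X
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def Phi(t):  # mgf of a Gaussian
    return exp(mu * t + sigma ** 2 * t ** 2 / 2)

def p_tilted(x):  # tilted density: e^{theta x} p(x) / Phi(theta)
    return exp(theta * x) * p(x) / Phi(theta)

# Mean of the tilted density by a Riemann sum on a wide grid.
h = 0.001
mean = sum(x * p_tilted(x) * h for x in (-30 + h * i for i in range(60001)))
print(round(mean, 4), mu + sigma ** 2 * theta)  # both close to 1.7
```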
In other words, $\left(\widetilde{X}_{j}^{\theta}\right)_{j \geq 1}$ is a sequence of independent r.v.’s defined on $(\Omega^{\theta}, \mathcal{A}^{\theta}, \mathcal{P}^{\theta})$. For any $j \geq 1$, and $\theta \in \Theta$, we have that $$\mathbb{E} \left[ \widetilde{X}_{j}^{\theta} \right] = m_{j}(\theta) \quad \textrm{where} \quad m_{j}(\theta) := \frac{d\kappa_{j}}{d\theta}(\theta) \enskip \textrm{and} \enskip \kappa_{j}(\theta) := \log \Phi_{j}(\theta).$$ For any $\theta \in \Theta$, $j \geq 1$ and $j' \geq 1$, $$\mathbb{E}\left[ \widetilde{X_{j}+X_{j'}}^{\theta} \right] = \mathbb{E}\left[ \widetilde{X}_{j}^{\theta}+\widetilde{X}_{j'}^{\theta} \right].$$ For any $n \geq 1$ and $1 \leq \ell \leq n$, for any $\theta \in \Theta$, $$\mathbb{E}\left[ \widetilde{S_{\ell,n}}^{\theta} \right] = \sum\limits_{j=\ell}^{n} m_{j}(\theta).$$ \ For any $j \geq 1$ and $\theta \in \Theta$, set $$\overline{X}_{j}^{\theta} := \widetilde{X}_{j}^{\theta} - \mathbb{E}[\widetilde{X}_{j}^{\theta}] = \widetilde{X}_{j}^{\theta} - m_{j}(\theta)$$ and for any $\ell \geq 3$, $$s_{j}^{2}(\theta) := Var \left( \widetilde{X}_{j}^{\theta} \right) \quad ; \quad \sigma_{j}(\theta) := \sqrt{s_{j}^{2}(\theta)} \quad ; \quad \mu_{j}^{\ell}(\theta) := \mathbb{E} \left[ \left(\overline{X}_{j}^{\theta} \right)^{\ell} \right] \quad ; \quad |\mu|_{j}^{\ell}(\theta) := \mathbb{E} \left[ \left|\overline{X}_{j}^{\theta} \right|^{\ell} \right].$$ Then, $$s_{j}^{2}(\theta) = \frac{d^{2} \kappa_{j}}{d\theta^{2}}(\theta) \quad \textrm{and} \quad \mu_{j}^{\ell}(\theta) = \frac{d^{\ell} \kappa_{j}}{d \theta^{\ell}}(\theta).$$ Landau Notations {#landauNotations} ---------------- Let $(X_{n})_{n \geq 1}$ be a sequence of r.v.’s such that for any $n \geq 1$, $X_{n}$ is defined on a probability space $(\Omega_{n},\mathcal{A}_{n}, \mathcal{P}_{n})$. Let $(u_n)$ be a sequence of real numbers. 
We say that \ $(X_{n})_{n \geq 1}$ is a $\mathcal{O}_{\mathcal{P}_{n}}(u_n)$ if for all $\epsilon > 0$, there exist $A \geq 0$ and $ N_{\epsilon} \in \mathbb{N}$ s.t. for all $n \geqslant N_{\epsilon}$, $$\mathcal{P}_{n} \left(\left| \frac{X_{n}} {u_{n}} \right| \leqslant A \right) \geqslant 1 - \epsilon.$$ $(X_{n})_{n \geq 1}$ is a $o_{\mathcal{P}_{n}}(u_n)$ if for all $\epsilon > 0$ and $\delta > 0$, there exists $N_{\epsilon, \delta} \in \mathbb{N}$ s.t. for all $n \geqslant N_{\epsilon, \delta}$, $$\mathcal{P}_{n} \left(\left| \frac{X_{n}} {u_{n}} \right| \leqslant \delta \right) \geqslant 1 - \epsilon.$$ $(X_{n})_{n \geq 1}$ converges to $\ell \in \mathbb{R}$ in $\mathcal{P}_{n}$-probability, and we write $X_{n} \enskip {\underset{\mathcal{P}_{n}} {\longrightarrow}} \enskip \ell $, if $$X_{n} = \ell + o_{\mathcal{P}_{n}}(1).$$ These notations differ from the classical Landau notations in probability in that here, the r.v.’s $(X_{n})$ are not defined on the same probability space. However, they satisfy similar properties, which we will use implicitly in the proofs. A criterion for convergence in Total Variation Distance ------------------------------------------------------- Set $$\mathcal{A}_{\rightarrow 1} := \left\{ (B_{n})_{n \geq 1} \in \prod\limits_{n \geq 1} \mathcal{A}_{n} \enskip : \enskip \mathcal{P}_{n}(B_{n}) \enskip {\underset{n \rightarrow \infty} {\longrightarrow}} \enskip 1 \right\}.$$ \[lienISdvt\] For every integer $n \geq 1$, let $Y_{1}^{n} : (\Omega_{n},\mathcal{A}_{n}, \mathcal{P}_{n}) \longrightarrow (\mathbb{R}^n, \mathcal{B}(\mathbb{R}^{n}))$ be a random vector. For any $1 \leq k \leq n$, the distribution of $Y_{1}^k$ is denoted by $P_{k}$. Let $G_{k}$ be a probability measure on $\mathbb{R}^{k}$. Assume that $P_{k}$ and $G_{k}$ have positive densities $p_{k}$ and $g_{k}$, and that $k \rightarrow \infty$ as $n \rightarrow \infty$. If there exists $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$ s.t.
for any $n \geq 1$, we have on $B_{n}$ that $$\label{errel} p_{k}(Y_{1}^k) = g_{k}(Y_{1}^k) \left[1 + T_{n} \right] \enskip \textrm{where} \enskip T_{n} = o_{\mathcal{P}_{n}}(1),$$ then, $$\left\| P_k - G_k \right\|_{TV} \enskip {\underset{n \rightarrow \infty} {\longrightarrow}} \enskip 0.$$ For any $\delta >0$, set $$E(n,\delta):= \left\{ (y_{1}^{k}) \in \mathbb{R}^{k} : \left| \frac{p_k(y_{1}^{k})}{g_k(y_{1}^{k})} - 1 \right| \leqslant \delta \right\}.$$ Then, $$\begin{aligned} \mathcal{P}_{n} \left( \left\{ \left| T_{n} \right| \leqslant \delta \right\} \cap B_{n} \right) &\leqslant \mathcal{P}_{n} \left( \left| \frac{p_k(Y_{1}^{k})}{g_k(Y_{1}^{k})} - 1 \right| \leqslant \delta \right) \\ &= P_k(E(n,\delta)) \\ &= \int\limits_{E(n,\delta)} \frac{p_k(y_{1}^{k})}{g_k(y_{1}^{k})}g_k(y_{1}^{k})dy_{1}^{k} \\ &\leqslant (1+\delta)G_k(E(n,\delta)). \end{aligned}$$ By $(\ref{errel})$, for all $n$ large enough, $$\begin{aligned} \mathcal{P}_{n} \left( \left\{ \left| T_{n} \right| \leqslant \delta \right\} \cap B_{n} \right) &\geq 1 - \mathcal{P}_{n}\left( \left\{ \left| T_{n} \right| > \delta \right\} \right) - \mathcal{P}_{n}(B_{n}^{c}) \\ &\geq 1-2\delta.\end{aligned}$$ Combining the preceding inequalities, we obtain that for all $n$ large enough, $$1-2\delta \leq P_k(E(n,\delta)) \leq (1 + \delta) G_k(E(n,\delta)).$$ Therefore, $$\sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} | P_k(C) - P_k(C \cap E(n,\delta))| \leqslant P_k(E(n,\delta)^{c}) \leq 2\delta$$ and $$\begin{aligned} \sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} | G_k(C) - G_k(C \cap E(n,\delta)) | &\leq 1 - G_k(E(n,\delta)) \\ &\leq 1 - \frac{1-2\delta}{1+ \delta} \\ &= \frac{3\delta}{1+ \delta}.
\end{aligned}$$ Now, we have that $$\sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} | P_k(C \cap E(n,\delta)) - G_k(C \cap E(n,\delta)) | \leqslant \sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} \int\limits_{C \cap E(n,\delta)} | p_k(y_{1}^{k}) - g_k(y_{1}^{k}) | dy_{1}^{k}$$ From the definition of $E(n,\delta)$, we deduce that $$\begin{aligned} \sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} | P_k(C \cap E(n,\delta)) - G_k(C \cap E(n,\delta)) | &\leqslant \delta \sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} \int\limits_{C \cap E(n,\delta)} g_k(y_{1}^{k}) dy_{1}^{k} \\ &\leqslant \delta.\end{aligned}$$ Finally, applying the triangle inequality, we have that for all $n$ large enough, $$\begin{aligned} \sup\limits_{C \in \mathcal{B}(\mathbb{R}^k)} | P_k(C) - G_k(C)| &\leq 2\delta + \delta + \frac{3\delta}{1+ \delta} \\ &= 3\delta \left( \frac{2+\delta}{1+\delta}\right), \end{aligned}$$ which converges to 0 as $\delta \rightarrow 0$. A rate of convergence is not obtainable by this method. A first computation {#firstCalculus} ---------------- Set $$p_{k}\left(Y_{1}^k\right) := p\left( \left. X_{1}^k = Y_{1}^k \right| S_{1,n} = na\right).$$ \ First, we have that $$p\left( \left.X_{1}^k = Y_{1}^k \right| S_{1,n} = na\right) = p\left( \left. X_k=Y_k \right| X_1^{k-1}=Y_1^{k-1};S_{1,n}=na\right) p\left( \left. X_1^{k-1}=Y_1^{k-1} \right| S_{1,n}=na\right).$$ Then, we deduce by induction on $k$ that $$\label{pkFirstStep} p_{k}\left(Y_1^k\right) = \left\{ \prod_{i=1}^{k-1} p\left( \left. X_{i+1}=Y_{i+1} \right| X_1^i= Y_1^i ; S_{1,n}=na \right) \right\} p\left( \left.X_{1}=Y_{1} \right| S_{1,n}=na\right).$$ For $1 \leq i_{1} \leq i_{2} \leq n$, set $\Sigma_{i_{1},i_{2}} := \sum\limits_{j=i_{1}}^{i_{2}} Y_{j}$. We deduce from $(\ref{pkFirstStep})$ that $$p_{k}\left(Y_1^k\right) = \left\{ \prod_{i=1}^{k-1} p\left(\left.
X_{i+1}=Y_{i+1} \right| S_{i+1,n}=na-\Sigma_{1,i}\right) \right\} p\left(X_{1}=Y_{1} | S_{1,n}=na\right).$$ Let $\Sigma_{1,0}=0$. Then, $$p_{k}\left(Y_1^k\right) = \prod_{i=0}^{k-1} \pi_i, \enskip \textrm{where} \enskip \pi_i := p\left(X_{i+1}=Y_{i+1} | S_{i+1,n}=na-\Sigma_{1,i}\right).$$ The conditioning event being $\{ S_{i+1,n}=na-\Sigma_{1,i} \}$, we look for $\theta$ s.t. $$\mathbb{E}\left[\widetilde{S_{i+1,n}}^{\theta}\right] = \sum\limits_{j=i+1}^{n} m_{j}(\theta) = na - \Sigma_{1,i}.$$ Since $\mathcal{P}_{n}$-a.s., $\Sigma_{1,i} + \Sigma_{i+1,n} = na$, this is equivalent to solving the following equation in the unknown $\theta$: $$\label{equaTilting} \overline{m}_{i+1,n}(\theta) := \frac{\sum\limits_{j=i+1}^{n} m_{j}(\theta)}{n-i} = \frac{\Sigma_{i+1,n}}{n-i}.$$ *We will see below (see Definition $\ref{deftin}$) that, under suitable assumptions, equation $(\ref{equaTilting})$ has a unique solution $t_{i,n}$. In the following lines, the tilted densities pertain to $\theta = t_{i,n}$*. \ For $e=1,2$, let $\overline{q}_{i+e,n}$ be the density of $\overline{S}_{i+e,n}$, where $$\label{SienBarre} \overline{S}_{i+e,n} := \frac{\widetilde{S_{i+e,n}} - \mathbb{E}\left[ \widetilde{S_{i+e,n}} \right]} {\sqrt{Var\left(\widetilde{S_{i+e,n}}\right)}} = \frac{\widetilde{S_{i+e,n}} - \sum\limits_{j=i+e}^{n} m_{j}(t_{i,n})} {\sqrt{\sum\limits_{j=i+e}^{n} s_{j}^{2}(t_{i,n})}}.$$ Using the invariance of the conditional density under the tilting operation, Fact $\ref{densCond}$ and then renormalizing, we obtain that $$\pi_{i} = p\left(\widetilde{X_{i+1}}=Y_{i+1} | \widetilde{S_{i+1,n}}=na-\Sigma_{1,i} \right) = \widetilde{p}_{i+1}(Y_{i+1}) \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \frac{\overline{q}_{i+2,n}(Z_{i+1})}{\overline{q}_{i+1,n}(0)},$$ where $$Z_{i+1} := \frac{m_{i+1} - Y_{i+1}}{\sigma_{i+2,n}}.$$ Assumptions ----------- Let $f : (\alpha, \beta) \longrightarrow (A,B)$ be a function, where $\alpha$, $\beta$, $A$ and $B$ may be finite or not.
Consider the following condition $(\mathcal{H})$. \ $(\mathcal{H})$ : $f$ is strictly increasing and $\lim\limits_{\theta \rightarrow \alpha} f(\theta) = A$ ; $\lim\limits_{\theta \rightarrow \beta} f(\theta) = B$. ### Statements We suppose *throughout the text* that the following assumptions hold, so, in the statements of the results, we will not always specify which of them are required. \ $\left( \mathcal{S}upp \right)$ : The r.v.’s $(X_{j})_{j \geq 1}$ have a common support $\mathcal{S}_{X}=(A,B)$, where $A$ and $B$ may be finite or not. \ $\left( \mathcal{M}gf \right)$ : The mgf’s $(\Phi_{j})_{j \geq 1}$ have the same domain of finiteness $\Theta=(\alpha, \beta)$, where $\alpha$ and $\beta$ may be finite or not. \ $(\mathcal{H}\kappa)$ : For all $j \geq 1$, $m_{j} := \frac{d\kappa_{j}}{d\theta}$ satisfies $(\mathcal{H})$. \ $(\mathcal{U}f)$ : There exist functions $f_{+}$ and $f_{-}$ which satisfy $(\mathcal{H})$ and such that $$\forall j \geq 1, \enskip \forall \theta \in \Theta, \enskip f_{-}(\theta) \leq m_j(\theta) \leq f_{+}(\theta).$$ \ $\left( \mathcal{C}v \right)$ : For any compact $K \subset \Theta$, $$0 < \inf\limits_{j \geq 1} \enskip \inf\limits_{\theta \in K} s_{j}^{2}(\theta) \enskip \leq \enskip \sup\limits_{j \geq 1} \enskip \sup\limits_{\theta \in K} \enskip s_{j}^{2}(\theta) < \infty.$$ \ $\left( \mathcal{AM}6 \right)$ : For any compact $K \subset \Theta$, $$\sup\limits_{j \geq 1} \enskip \sup\limits_{\theta \in K} \enskip |\mu|_{j}^{6} (\theta) < \infty.$$ \ $(\mathcal{C}f)$ : For any $j \geq 1$, $p_{j}$ is a function of class $\mathcal{C}^{1}$ and for any compact $K \subset \Theta$, $$\sup\limits_{j \geq 1} \enskip \sup\limits_{\theta \in K} \enskip \left\| \frac{d \widetilde{p}_{j}^{\theta}}{d x} \right\|_{L^{1}} < \infty.$$ ### Elementary Facts If a function $f$ satisfies $(\mathcal{H})$, then $f$ is a homeomorphism from $(\alpha,\beta)$ to $(A,B)$.
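Since any mean of functions satisfying $(\mathcal{H})$ is again strictly increasing and surjective, the tilting equation $\overline{m}(\theta) = s$ can always be solved numerically by bisection. A minimal sketch (our illustration, not from the text) uses the Gaussian family $X_{j} \sim N(\mu_{j}, 1)$, for which $\Theta = \mathbb{R}$, $m_{j}(\theta) = \mu_{j} + \theta$, and the solution is explicit, so the bisection output can be checked.

```python
def tilted_mean_avg(theta, mus):
    """Average tilted mean (1/n) * sum_j m_j(theta), with m_j(theta) = mu_j + theta
    for X_j ~ N(mu_j, 1)."""
    return sum(mu + theta for mu in mus) / len(mus)

def solve_tilting(target, mus, lo=-50.0, hi=50.0, tol=1e-12):
    """Bisection for the tilting equation avg m_j(theta) = target; valid because
    the average of strictly increasing functions is strictly increasing."""
    assert tilted_mean_avg(lo, mus) < target < tilted_mean_avg(hi, mus)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_mean_avg(mid, mus) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mus = [0.3, -1.2, 2.0, 0.5]
theta = solve_tilting(1.0, mus)
# Gaussian case: the unique solution is target - mean(mus) = 1.0 - 0.4 = 0.6
assert abs(theta - 0.6) < 1e-9
```

For non-Gaussian families one would replace `tilted_mean_avg` by a numerical evaluation of $\kappa_{j}'(\theta)$; the monotonicity argument, and hence the bisection, is unchanged.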
\[moyenneHomeo\] If a function $f$ is defined as the mean of functions satisfying $(\mathcal{H})$, then $f$ satisfies $(\mathcal{H})$. In particular, $f$ is a homeomorphism from $(\alpha,\beta)$ to $(A,B)$. \[homeo\] Let $\ell, n$ be integers with $1 \leq \ell \leq n$. Set $$\overline{m}_{\ell,n} := \frac{1}{n-\ell+1} \sum\limits_{j=\ell}^{n} m_{j}.$$ Then, we deduce from $(\mathcal{H}\kappa)$ and Fact $\ref{moyenneHomeo}$ that $\overline{m}_{\ell,n}$ is a homeomorphism from $(\alpha,\beta)$ to $(A,B)$. Consequently, for any $s \in \mathcal{S}_{X}$, the equation $$\overline{m}_{\ell,n}(\theta) = s$$ has a unique solution in $\Theta = (\alpha,\beta)$. We deduce from Corollary $\ref{homeo}$ that for any $a \in \mathcal{S}_{X}$, for any $n \geq 1$, there exists a unique $\theta^{a}_{n} \in \Theta$ s.t. $$\overline{m}_{1,n}(\theta^{a}_{n}) = a.$$ We deduce from $(\mathcal{H}\kappa)$ and $(\mathcal{U}f)$ that for any $a \in \mathcal{S}_{X}$, there exists a compact set $K_{a}$ of $\mathbb{R}$ s.t. $$\left\{ \theta^{a}_{n} : n \geq 1 \right\} \subset K_{a} \subset \Theta.$$ \[mjThetan\] We deduce from the preceding Fact and the Assumptions that, for any $a \in \mathcal{S}_{X}$, $$\sup\limits_{n \geq 1} \enskip \sup\limits_{j \geq 1} \enskip |m_{j}(\theta^{a}_{n})| < \infty,$$ $$0 < \inf\limits_{n \geq 1} \enskip \inf\limits_{j \geq 1} \enskip \Phi_{j}(\theta^{a}_{n}) \leq \sup\limits_{n \geq 1} \enskip \sup\limits_{j \geq 1} \enskip \Phi_{j}(\theta^{a}_{n}) < \infty,$$ $$0 < \inf\limits_{n \geq 1} \enskip \inf\limits_{j \geq 1} \enskip s_{j}^{2}(\theta^{a}_{n}) \leq \sup\limits_{n \geq 1} \enskip \sup\limits_{j \geq 1} \enskip s_{j}^{2}(\theta^{a}_{n}) < \infty,$$ and for any $3 \leq \ell \leq 6$, $$\sup\limits_{n \geq 1} \enskip \sup\limits_{j \geq 1} \enskip |\mu_{j}^{\ell} (\theta^{a}_{n})| \leq \sup\limits_{n \geq 1} \enskip \sup\limits_{j \geq 1} \enskip |\mu|_{j}^{\ell} (\theta^{a}_{n}) < \infty.$$ \[deftin\] We deduce from Corollary $\ref{homeo}$ that for any $n \geq 1$ and $0 \leq i
\leq k-1$, there exists a unique $t_{i,n} \in \Theta$ s.t. $$\overline{m}_{i+1,n}(t_{i,n}) = \frac{\sum\limits_{j=i+1}^{n} Y_{j}}{n-i}.$$ Since $\overline{m}_{i+1,n}$ is a homeomorphism from $\Theta$ to $\mathcal{S}_{X}$, its inverse is continuous, so $t_{i,n}$ is a r.v. defined on $(\Omega_{n}, \mathcal{A}_{n})$. \[ppteMomentsTin\] Assume that $$\label{thetaIEqua} \max\limits_{0 \leq i \leq k-1} |t_{i,n}| = \mathcal{O}_{\mathcal{P}_{n}}(1).$$ Then, under the Assumptions, we have that $$\max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip |m_{j}(t_{i,n})| = \mathcal{O}_{\mathcal{P}_{n}}(1),$$ $$\max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip \max \left\{ \frac{1}{\Phi_{j}(t_{i,n})} ; \Phi_{j}(t_{i,n}) \right\} = \mathcal{O}_{\mathcal{P}_{n}}(1),$$ $$\label{unExemple} \max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip \max \left\{ \frac{1}{s_{j}^{2}(t_{i,n})} ; s_{j}^{2}(t_{i,n}) \right\} = \mathcal{O}_{\mathcal{P}_{n}}(1),$$ and for any $3 \leq \ell \leq 6$, $$\max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip |\mu_{j}^{\ell} (t_{i,n})| \leq \max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip |\mu|_{j}^{\ell} (t_{i,n}) = \mathcal{O}_{\mathcal{P}_{n}}(1).$$ We prove only $(\ref{unExemple})$, the other proofs being similar. Let $\epsilon > 0$. Then, $(\ref{thetaIEqua})$ implies that there exists $A_{\epsilon} > 0$ s.t. for all $n$ large enough, $$\mathcal{P}_{n} \left( \max\limits_{0 \leq i \leq k-1} |t_{i,n}| \leq A_{\epsilon} \right) \geq 1 - \epsilon.$$ Now, $\left( \mathcal{C}v \right)$ implies that $$s_{A_{\epsilon}}^{2} := \sup\limits_{j \geq 1} \enskip \sup\limits_{\theta \in [ - A_{\epsilon} ; A_{\epsilon}]} \enskip s_{j}^{2} (\theta) < \infty.$$ Therefore, $$\mathcal{P}_{n} \left( \max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip s_{j}^{2}(t_{i,n}) \leq s_{A_{\epsilon}}^{2} \right) \geq 1 - \epsilon.$$ We will prove in Section 3.4
that, under the Assumptions, $(\ref{thetaIEqua})$ holds. Properties of $(Y_{1}^{n})_{n \geq 1}$ ====================================== Edgeworth expansion {#Petrov} ------------------- In this subsection, $(X_{j})_{j \geq 1}$ denotes a generic sequence of independent r.v.’s with zero means and finite variances. For any $j \geq 1$ and $\ell \geq 3$, set $$s_{j}^{2} := \mathbb{E}[X_{j}^{2}] = Var(X_{j}) \quad ; \quad \sigma_{j} := \sqrt{s_{j}^{2}} \quad ; \quad \mu_{j}^{\ell} := \mathbb{E}[X_{j}^{\ell}] \quad ; \quad |\mu|_{j}^{\ell} := \mathbb{E} \left[ \left|X_{j} \right|^{\ell} \right].$$ For any $p,q$ with $1 \leq p \leq q$ and $\ell >2$, set $$s_{p,q}^{2} := \sum\limits_{j=p}^{q} s_{j}^{2} \quad ; \quad \sigma_{p,q} := \sqrt{s_{p,q}^{2}} \quad ; \quad \mu_{p,q}^{\ell} := \sum\limits_{j=p}^{q} \mu_{j}^{\ell}.$$ For any $j \geq 1$, if $p_{j}$ is of class $\mathcal{C}^{1}$, set $$d_{j} := \left\| \frac{dp_{j}}{dx} \right\|_{L^{1}}.$$ For $\nu \geq 3$, let $H_{\nu}$ be the Hermite polynomial of degree $\nu$. For example, $$H_{3}(x)=x^{3} - 3x \quad ; \quad H_{4}(x)=x^{4} - 6x^{2} + 3 \quad ; \quad H_{5}(x)=x^{5} - 10x^{3} + 15x.$$ \[classicEdge\] Let $m$ be an integer with $m \geq 3$. Assume that $$\label{covClassique} \sup\limits_{j \geq 1} \enskip \frac{1}{s_{j}^{2}} < \infty,$$ $$\label{AMClassique} \sup\limits_{j \geq 1} \enskip |\mu|_{j}^{m+1} < \infty,$$ $$\label{condCF2} \sup\limits_{j \geq 1} \enskip d_{j} < \infty.$$ \ Let $\mathfrak{n}$ be the density of the standard normal distribution. For any $n \geq 1$, let $q_{n}$ be the density of $(s_{1,n}^{2})^{-1/2} S_{1,n}$.
Then, for all $n$ large enough, we have that $$\label{concluPetrov} \sup\limits_{x \in \mathbb{R}} \enskip \left| q_{n}(x) - \mathfrak{n}(x) \left( 1 + \sum\limits_{\nu=3}^{m} P_{\nu,n}(x) \right) \right| = \frac{o(1)}{n^{(m-2)/2}},$$ where, for example, $$P_{3,n}(x) = \frac{\mu_{1,n}^{3}}{6(s_{1,n}^{2})^{3/2}}H_{3}(x)$$ $$P_{4,n}(x) = \frac{(\mu_{1,n}^{3})^2}{72(s_{1,n}^{2})^{3}} H_{6}(x) + \frac{ \mu_{1,n}^{4} - 3\sum\limits_{j=1}^{n} (s_{j}^{2})^2}{24(s_{1,n}^{2})^{2}} H_{4}(x)$$ $$P_{5,n}(x) = \frac{(\mu_{1,n}^{3})^3}{1296(s_{1,n}^{2})^{9/2}} H_{9}(x) + \frac{\mu_{1,n}^{3} \left( \mu_{1,n}^{4} - 3\sum\limits_{j=1}^{n} (s_{j}^{2})^2 \right)} {144(s_{1,n}^{2})^{7/2}} H_{7}(x) + \frac{ \mu_{1,n}^{5} - 10\sum\limits_{j=1}^{n} \mu_{j}^{3} s_{j}^{2}}{120(s_{1,n}^{2})^{5/2}} H_{5}(x)$$ \[remPnu\] We obtain from $(\ref{covClassique})$ and $(\ref{AMClassique})$ that $$P_{3,n}(x) = \mathcal{O} \left( \frac{1}{n^{1/2}} \right)H_{3}(x)$$ $$P_{4,n}(x) = \mathcal{O} \left( \frac{1}{n} \right)H_{6}(x) + \mathcal{O} \left( \frac{1}{n} \right)H_{4}(x)$$ $$P_{5,n}(x) = \mathcal{O} \left( \frac{1}{n^{3/2}} \right)H_{9}(x) + \mathcal{O} \left( \frac{1}{n^{3/2}} \right)H_{7}(x) + \mathcal{O} \left( \frac{1}{n^{3/2}} \right)H_{5}(x)$$ Extensions of the Edgeworth expansion ------------------------------------- For any integers $p,q$ with $1 \leq p \leq q$ and $\theta \in \Theta$, set $$s_{p,q}^{2}(\theta) := \sum\limits_{j=p}^{q} s_{j}^{2}(\theta) \quad ; \quad \sigma_{p,q}(\theta) := \sqrt{s_{p,q}^{2}(\theta)} \quad ; \quad \mu_{p,q}^{\ell}(\theta) := \sum\limits_{j=p}^{q} \mu_{j}^{\ell}(\theta).$$ For any $j \geq 1$ and $\theta \in \Theta$, set $$d_{j}(\theta) := \left\| \frac{d \widetilde{p}_{j}^{\theta}}{d x} \right\|_{L^{1}}.$$ ### First Extension For any $n \geq 1$, let $J_{n}$ be a subset of $\left\{ 1, ..., n \right\}$ s.t. $\alpha_{n} := \left| J_{n} \right| < n$. Let $L_{n}$ be the complement of $J_{n}$ in $\left\{ 1, ..., n \right\}$.
Set $$\overline{S}_{L_{n}} := \sum\limits_{j \in L_{n}} \left( \widetilde{X}_{j}^{\theta_{n}^{a}} - \mathbb{E}\left[ \widetilde{X}_{j}^{\theta_{n}^{a}} \right] \right) = \sum\limits_{j \in L_{n}} \left( \widetilde{X}_{j}^{\theta_{n}^{a}} - m_{j}(\theta_{n}^{a}) \right).$$ For any $\theta \in \Theta$ and $\ell \geq 3$, set $$s_{L_{n}}^{2}(\theta) := \sum\limits_{j \in L_{n}} s_{j}^{2}(\theta) \quad ; \quad \sigma_{L_{n}}(\theta) := \sqrt{s_{L_{n}}^{2}(\theta)} \quad ; \quad \mu_{L_{n}}^{\ell}(\theta) := \sum\limits_{j \in L_{n}} \mu_{j}^{\ell}(\theta).$$ \[EdgeTT\] Let $m$ be an integer with $m \geq 3$. Assume that $$\label{covThetan} \sup\limits_{j \geq 1} \enskip \frac{1}{s_{j}^{2}(\theta_{n}^{a})} = \mathcal{O}(1),$$ $$\label{AM6Thetan} \sup\limits_{j \geq 1} \enskip |\mu|_{j}^{m+1}(\theta_{n}^{a}) = \mathcal{O}(1),$$ $$\label{cf2Thetan} \sup\limits_{j \geq 1} \enskip d_{j}(\theta_{n}^{a}) = \mathcal{O}(1).$$ \ For any $n \geq 1$, let $\overline{q}_{L_{n}}$ be the density of $(s_{L_{n}}^{2})^{-1/2} \overline{S}_{L_{n}}$. Then, for all $n$ large enough, we have that $$\label{concluTTEdge} \sup\limits_{x \in \mathbb{R}} \enskip \left| \overline{q}_{L_{n}}(x) - \mathfrak{n}(x) \left(1 + \sum\limits_{\nu=3}^{m} \overline{P}_{\nu,L_{n}}(x) \right)\right| = \frac{o\left( 1 \right)}{(n-\alpha_{n})^{(m-2)/2}},$$ where the $\overline{P}_{\nu,L_{n}}$ are defined as the $P_{\nu,n}$, except that the $s_{1,n}^{2}$ and the $\mu_{1,n}^{\ell}$ are replaced respectively by $s_{L_{n}}^{2}(\theta_{n}^{a})$ and $\mu_{L_{n}}^{\ell}(\theta_{n}^{a})$. \[coroEdgeTT\] Assume that $\left( \mathcal{C}v \right)$, $\left( \mathcal{AM}(m+1) \right)$, $(\mathcal{C}f)$ and $(\mathcal{U}f)$ hold. Then, $(\ref{concluTTEdge})$ holds. By Remark $\ref{remPnu}$, for $\nu = 3, 4, 5$, some $\mathcal{O} \left( \frac{1}{n^{(\nu-2)/2}} \right)$ appear in $P_{\nu,n}$. They are replaced by some $\frac{\mathcal{O}(1)} {(n-\alpha_{n})^{(\nu-2)/2}}$ in $\overline{P}_{\nu,L_{n}}$. ### Second Extension Let $m$ be an integer with $m \geq 3$.
Assume that $$\label{covTI} \max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip \frac{1}{s_{j}^{2}(t_{i,n})} = \mathcal{O}_{\mathcal{P}_{n}}(1),$$ $$\label{AM6TI} \max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip |\mu|_{j}^{m+1}(t_{i,n}) = \mathcal{O}_{\mathcal{P}_{n}}(1),$$ $$\label{cf2TI} \max\limits_{0 \leq i \leq k-1} \enskip \sup\limits_{j \geq 1} \enskip d_{j}(t_{i,n}) = \mathcal{O}_{\mathcal{P}_{n}}(1).$$ \ Let $e \in \left\{1, 2 \right\}$. We recall that $\overline{q}_{i+e,n}$ is the density of $\overline{S}_{i+e,n}$, defined by $(\ref{SienBarre})$. Then, $$\label{extensionEdgeTI} \sup\limits_{x \in \mathbb{R}} \enskip \left| \overline{q}_{i+e,n}(x) - \mathfrak{n}(x) \left( 1+\sum\limits_{\nu=3}^{m} \overline{P}_{\nu,n}^{(i,e)}(x) \right) \right| = \frac{o_{\mathcal{P}_n}(1)}{(n-i-e+1)^{(m-2)/2}},$$ where the $\overline{P}_{\nu,n}^{(i,e)}$ are defined as the $P_{\nu,n}$, except that the $s_{1,n}^{2}$ and the $\mu_{1,n}^{\ell}$ are replaced respectively by $s_{i+e,n}^{2}(t_{i,n})$ and $\mu_{i+e,n}^{\ell}(t_{i,n})$. We follow the lines of the proof of Theorem $\ref{classicEdge}$, given in [@Petrov1975]. For $j \geq 1$, let $\widetilde{\xi}_{j}$ be the characteristic function of the centered tilted variable $\overline{X}_{j}^{t_{i,n}}$. Then, for any $\tau \in \mathbb{R}$, $$\widetilde{\xi}_{j}(\tau) = \int \exp\left(i\tau \left( x - m_{j}(t_{i,n}) \right)\right) \frac{\exp(t_{i,n}x)p_{j}(x)}{\Phi_{j}(t_{i,n})}dx$$ is a r.v. defined on $(\Omega_{n}, \mathcal{A}_{n})$.
Performing a Taylor expansion of $\exp(i\tau x)$, we obtain that $$\label{devXij} \widetilde{\xi}_{j}(\tau) = 1 + \frac{s_{j}^{2}(t_{i,n})}{2} (i\tau)^{2} + \sum\limits_{\nu=3}^{m} \frac{\mu_{j}^{\nu}(t_{i,n})}{\nu!} (i\tau)^{\nu} + r_{j}(\tau).$$ Then, we deduce from Fact $\ref{ppteMomentsTin}$ that $$\label{resteTin} \left| \sum\limits_{j=i+e}^{n} r_{j} \left( \frac{\tau}{\sigma_{i+e,n}} \right) \right| \leq \frac{\delta_{i,n}}{(n-i-e+1)^{(m-2)/2}} |\tau|^{m}, \quad \textrm{where} \enskip \max\limits_{0 \leq i \leq k-1} |\delta_{i,n}| = o_{\mathcal{P}_{n}}(1).$$ For any $n \geq 1$, and $\omega \in \Omega_{n}$, we consider a triangular array whose row of index $n$ is composed of the $n-i-e+1$ *independent* r.v.’s $$\left( \overline{X}_{j}^{t_{i,n}(\omega)} \right)_{i+e \leq j \leq n}$$ Let $\overline{\xi}_{i+e,n}$ be the characteristic function of $\overline{S}_{i+e,n}^{t_{i,n}}$, given by $\overline{\xi}_{i+e,n}(\tau) = \int \exp(i\tau x) \overline{q}_{i+e,n}(x) dx$. By independence of the $\left( \overline{X}_{j}^{t_{i,n}(\omega)} \right)_{i+e \leq j \leq n}$ and $(\ref{devXij})$ combined with $(\ref{resteTin})$, we obtain that for some suitable constant $\rho > 0$, for $|\tau| \leq n^{\rho}$, $$\label{controlePetrov} \left| \overline{\xi}_{i+e,n}(\tau) - u_{m,n}(\tau) \right| \leq \frac{\delta_{i,n}}{(n-i-e+1)^{(m-2)/2}} \left(|\tau|^{m} + |\tau|^{3(m-1)} \right) \exp\left(- \frac{\tau^{2}}{2}\right),$$ where $u_{m,n}$ is the Fourier transform of $\mathfrak{n}(x) \left( 1+\sum\limits_{\nu=3}^{m} \overline{P}_{\nu,n}^{(i,e)}(x) \right)$ and $\max\limits_{0 \leq i \leq k-1} |\delta_{i,n}| = o_{\mathcal{P}_{n}}(1)$.
\ Now, we have that $$\begin{aligned} I &:= \int\limits_{-\infty}^{\infty} \left| \overline{\xi}_{i+e,n}(\tau) - u_{m,n}(\tau) \right| d\tau \\ &\leq \label{integralFourier} \int\limits_{|\tau| \leq n^{\rho}} \left| \overline{\xi}_{i+e,n}(\tau) - u_{m,n}(\tau) \right| d\tau + \int\limits_{|\tau| > n^{\rho}} \left| u_{m,n}(\tau) \right| d\tau + \int\limits_{|\tau| > n^{\rho}} \left| \overline{\xi}_{i+e,n}(\tau) \right| d\tau. \end{aligned}$$ Then, we obtain from $(\ref{controlePetrov})$ that $$\int\limits_{|\tau| \leq n^{\rho}} \left| \overline{\xi}_{i+e,n}(\tau) - u_{m,n}(\tau) \right| d\tau = \frac{o_{\mathcal{P}_{n}}(1)}{(n-i-e+1)^{(m-2)/2}}.$$ Then, using general results on characteristic functions (see Lemma 12 in [@Petrov1975]), we prove that $$\int\limits_{|\tau| > n^{\rho}} \left| u_{m,n}(\tau) \right| d\tau = \frac{o_{\mathcal{P}_{n}}(1)}{(n-i-e+1)^{(m-2)/2}}.$$ Now, $(\ref{cf2TI})$ implies that for any $\alpha > 0$ and $\eta > 0$, $$\max\limits_{0 \leq i \leq k-1} (n-i-e+1)^{\alpha} \int\limits_{|\tau| > \eta} \prod\limits_{j=i+e}^{n} \left| \widetilde{\xi}_{j}(\tau) \right| d\tau = o_{\mathcal{P}_{n}}(1),$$ which implies in turn that $$\int\limits_{|\tau| > n^{\rho}} \left| \overline{\xi}_{i+e,n}(\tau) \right| d\tau = \frac{o_{\mathcal{P}_{n}}(1)}{(n-i-e+1)^{(m-2)/2}}.$$ Considering $(\ref{integralFourier})$, we deduce that $$I = \frac{o_{\mathcal{P}_{n}}(1)}{(n-i-e+1)^{(m-2)/2}}.$$ Then, Fourier inversion yields that $$\overline{q}_{i+e,n}(x) - \mathfrak{n}(x) \left( 1+\sum\limits_{\nu=3}^{m} \overline{P}_{\nu,n}^{(i,e)}(x) \right) = \frac{1}{2\pi} \int\limits_{-\infty}^{\infty} \exp(-i\tau x) (\overline{\xi}_{i+e,n}(\tau) - u_{m,n}(\tau)) d\tau.$$ Therefore, $$\sup\limits_{x \in \mathbb{R}} \enskip \left| \overline{q}_{i+e,n}(x) - \mathfrak{n}(x) \left( 1+\sum\limits_{\nu=3}^{m} \overline{P}_{\nu,n}^{(i,e)}(x) \right) \right| \leq \frac{I}{2\pi} = \frac{o_{\mathcal{P}_{n}}(1)}{(n-i-e+1)^{(m-2)/2}}.$$ \[condPetrovTI\] Assume that $\left( \mathcal{C}v
\right)$, $\left( \mathcal{AM}(m+1) \right)$, $(\mathcal{C}f)$ hold, and that $$\max\limits_{0 \leq i \leq k-1} |t_{i,n}| = \mathcal{O}_{\mathcal{P}_{n}}(1).$$ Then, $(\ref{extensionEdgeTI})$ holds. By Remark $\ref{remPnu}$, for $\nu = 3, 4, 5$, some $\mathcal{O} \left( \frac{1}{n^{(\nu-2)/2}} \right)$ appear in $P_{\nu,n}$. They are replaced by some $\frac{\mathcal{O}_{\mathcal{P}_{n}} (1)} {(n-i-e+1)^{(\nu-2)/2}}$ in $\overline{P}_{\nu,n}^{(i,e)}$. Moments of $Y_{j}$ ------------------ *Throughout this Section 3.3, all the tilted densities considered pertain to $\theta = \theta^{a}_{n}$, defined by* $$\overline{m}_{1,n}(\theta^{a}_{n}) = a.$$ \ The moments of the $Y_{j}$’s are obtained by integration of the conditional density. As expected, their first-order approximations are the moments of $\widetilde{X}_{j}^{\theta^{a}_{n}}$. \[momentsYj\] $$\label{espYj} \max\limits_{1 \leq j \leq n} \left| \mathbb{E}_{\mathcal{P}_{n}}[Y_j] - m_{j}(\theta^{a}_{n}) \right| = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right).$$ For any $n \geq 1$ and $1 \leq j \leq n$, we have that $$\label{espYjInt} \mathbb{E}_{\mathcal{P}_{n}}[Y_{j}] = \int x p(X_{j}=x|S_{1,n}=na) dx = \int x p(\widetilde{X}_{j}=x | \widetilde{S}_{1,n}=na) dx.$$ Let $L_{n}=\left\{1, ..., n\right\}\setminus\lbrace{j}\rbrace$.
Normalizing, we obtain that $$p(\widetilde{X_{j}}=x | \widetilde{S}_{1,n}=na) = \widetilde{p}_{j}(x) \left( \frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{L_{n}}(\theta^{a}_{n})} \right) \frac{p_{\overline{S}_{L_{n}}}(\gamma_{n}^{j}(x))} {p_{\overline{S}_{1,n}}(0)}, \quad \textrm{where} \quad \gamma_{n}^{j}(x) := \frac{m_{j}(\theta^{a}_{n})-x}{\sigma_{L_{n}}(\theta^{a}_{n})}.$$ Since $\left( \mathcal{AM}6 \right)$ implies $\left( \mathcal{AM}4 \right)$, we get from Corollary $\ref{coroEdgeTT}$ with $m=3$ that $$p_{\overline{S}_{L_{n}}}(\gamma_{n}^{j}(x)) = \mathfrak{n}(\gamma_{n}^{j}(x)) \left[ 1 + \frac{\mu_{L_{n}}^{3}(\theta^{a}_{n})}{6(s_{L_{n}}^{2}(\theta^{a}_{n}))^{3/2}} H_{3}(\gamma_{n}^{j}(x)) \right] + \frac{o(1)}{\sqrt{n-1}}$$ and $$p_{\overline{S}_{1,n}}(0) = \mathfrak{n}(0)+ \frac{o(1)}{\sqrt{n}}.$$ \ Now, $\left( \mathcal{C}v \right)$, $\left( \mathcal{AM}6 \right)$ and the boundedness of the sequence $(\theta^{a}_{n})_{n \geq 1}$ imply readily that $$\frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{L_{n}}(\theta^{a}_{n})} = 1+\mathcal{O}\left(\frac{1}{n}\right) \quad \textrm{and} \quad \frac{\mu_{L_{n}}^{3}(\theta^{a}_{n})}{6(s_{L_{n}}^{2}(\theta^{a}_{n}))^{3/2}} = \mathcal{O}\left( \frac{1}{\sqrt{n-1}} \right).$$ Since the functions $x \mapsto \mathfrak{n}(x)$ and $x \mapsto \mathfrak{n}(x)H_{3}(x)$ are bounded, we deduce that $$\begin{aligned} \frac{p_{\overline{S}_{L_{n}}}(\gamma_{n}^{j}(x))} {p_{\overline{S}_{1,n}}(0)} &= \left\{\mathfrak{n}(\gamma_{n}^{j}(x)) \left( 1+\mathcal{O}\left( \frac{1}{\sqrt{n}}\right)H_{3}(\gamma_{n}^{j}(x)) \right) + \frac{o(1)}{\sqrt{n-1}} \right\} \left\{ \frac{1}{\mathfrak{n}(0)} + \frac{o(1)}{\sqrt{n}} \right\}\\ &= \frac{\mathfrak{n}(\gamma_{n}^{j}(x))}{\mathfrak{n}(0)} + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right) = \exp \left( -\frac{\gamma_{n}^{j}(x)^{2}}{2} \right) + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right).
\end{aligned}$$ Consequently, $$\label{densCondThetan} p(\widetilde{X}_{j}=x|\widetilde{S}_{1,n}=na) = \widetilde{p}_{j}(x) \left( 1+\mathcal{O}\left(\frac{1}{n}\right) \right) \left\{ \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right) \right\}.$$ Recalling that $\int x \widetilde{p}_{j}(x)dx = m_{j}(\theta_{n}^{a})$, we deduce from $(\ref{espYjInt})$ and $(\ref{densCondThetan})$ that $$\mathbb{E}_{\mathcal{P}_{n}}[Y_{j}] = \left\{ \int x \widetilde{p}_{j}(x) \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) dx + m_{j}(\theta_{n}^{a}) \mathcal{O}\left(\frac{1}{\sqrt{n}}\right) \right\} \left( 1+\mathcal{O}\left(\frac{1}{n}\right) \right).$$ Therefore, it is enough to prove that $$\int x \widetilde{p}_{j}(x) \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) dx = m_{j}(\theta_{n}^{a}) + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right)$$ Now, for any $u \in \mathbb{R}$, $$\label{expUdev} 1 - u^{2}/2 \leq \exp \left( - u^{2}/2 \right) \leq 1,$$ from which we deduce that $$\label{xpositif} \int\limits_{0}^{\infty} x\widetilde{p}_{j}(x)dx - \frac{1}{2}\int\limits_{0}^{\infty} x\widetilde{p}_{j}(x)\gamma_{n}^{j}(x)^{2}dx \leq \int\limits_{0}^{\infty} x \widetilde{p}_{j}(x) \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) dx \leq \int\limits_{0}^{\infty} x\widetilde{p}_{j}(x)dx$$ and $$\label{xnegatif} \int\limits_{-\infty}^{0} x\widetilde{p}_{j}(x)dx \leq \int\limits_{-\infty}^{0} x \widetilde{p}_{j}(x) \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) dx \leq \int\limits_{-\infty}^{0} x\widetilde{p}_{j}(x)dx - \frac{1}{2}\int\limits_{-\infty}^{0} x\widetilde{p}_{j}(x)\gamma_{n}^{j}(x)^{2}dx.$$ Adding $(\ref{xpositif})$ and $(\ref{xnegatif})$, we obtain that $$\label{encadrementThetan} m_{j}(\theta_{n}^{a}) - \frac{1}{2}\int\limits_{0}^{\infty} x\widetilde{p}_{j}(x)\gamma_{n}^{j}(x)^{2}dx \leq \int\limits x \widetilde{p}_{j}(x) \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) dx \leq m_{j}(\theta_{n}^{a}) - 
\frac{1}{2}\int\limits_{-\infty}^{0} x\widetilde{p}_{j}(x)\gamma_{n}^{j}(x)^{2}dx.$$ For any $B \in \mathcal{B}(\mathbb{R})$, we have that $$\begin{aligned} \int_{B} x \widetilde{p}_{j}(x) \gamma_{n}^{j}(x)^{2} dx &= \frac{1}{s_{L_{n}}^{2}(\theta^{a}_{n})} \left\{ \int_{B} x \widetilde{p}_{j}(x) \left(m_{j}(\theta^{a}_{n})-x \right)^{2}dx \right\}\\ \label{tauxCvThetan} &= \frac{1}{s_{L_{n}}^{2}(\theta^{a}_{n})} \sum_{i=0}^{2} \binom{2}{i} m_{j}(\theta^{a}_{n})^{2-i} (-1)^{i} \int_{B} x^{1+i} \widetilde{p}_{j}(x)dx. \end{aligned}$$ Let $i \in \left\{ 0, 1, 2 \right\}$. Recalling that $L_{n}=\left\{1, ..., n\right\}\setminus\lbrace{j}\rbrace$, we get from $\left( \mathcal{C}v \right)$ and $(\mathcal{U}f)$ that $$\max\limits_{1 \leq j \leq n} \enskip \frac{1}{s_{L_{n}}^{2}(\theta^{a}_{n})} = \mathcal{O} \left( \frac{1}{n} \right) \qquad \textrm{and} \qquad \max\limits_{1 \leq j \leq n} \enskip \left| m_{j}(\theta^{a}_{n}) \right|^{2-i} = \mathcal{O}(1).$$ Then, $\left( \mathcal{AM}6 \right)$ implies that for all $n \geq 1$, $$\max\limits_{1 \leq j \leq n} \enskip \left| \int_{B} x^{1+i} \widetilde{p}_{j}(x)dx \right| \leq \max\limits_{1 \leq j \leq n} \enskip \int_{\mathbb{R}} |x|^{1+i} \widetilde{p}_{j}(x)dx \leq \sup\limits_{j \geq 1} \left\{ 1 + \sup\limits_{\theta \in K_{a}} \mathbb{E} \left[ \left| \widetilde{X}^{\theta}_{j} \right|^{6} \right] \right\} < \infty.$$ So we deduce from $(\ref{tauxCvThetan})$ that $$\label{intB} \max\limits_{1 \leq j \leq n} \enskip \int_{B} x \widetilde{p}_{j}(x) \gamma_{n}^{j}(x)^{2} dx = \mathcal{O} \left( \frac{1}{n} \right).$$ Taking $B = (-\infty, 0)$ and $B = (0, \infty)$ in $(\ref{intB})$, we conclude the proof by $(\ref{encadrementThetan})$. 
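For the Gaussian family $X_{j} \sim N(\mu_{j},1)$, the $\mathcal{O}(1/\sqrt{n})$ error in Lemma $\ref{momentsYj}$ actually vanishes: the conditional mean and the tilted mean coincide exactly, since both equal $\mu_{j} + a - \overline{\mu}$. A quick numerical check (the Gaussian specialization is our illustration, not from the text):

```python
def cond_mean_gaussian(j, mus, a):
    """E[X_j | S_{1,n} = n*a] for independent X_k ~ N(mu_k, 1); by joint
    Gaussianity this equals mu_j + (n*a - sum(mus)) / n."""
    n = len(mus)
    return mus[j] + (n * a - sum(mus)) / n

def tilted_mean_at_theta_a(j, mus, a):
    """m_j(theta_n^a), where theta_n^a solves mean-bar(theta) = a; here
    m_j(theta) = mu_j + theta, so theta_n^a = a - mean(mus)."""
    return mus[j] + (a - sum(mus) / len(mus))

mus = [0.3, -1.2, 2.0, 0.5, 1.1]
a = 0.8
for j in range(len(mus)):
    assert abs(cond_mean_gaussian(j, mus, a) - tilted_mean_at_theta_a(j, mus, a)) < 1e-12
```

For non-Gaussian families the two quantities differ, and the lemma quantifies the gap as $\mathcal{O}(1/\sqrt{n})$, uniformly in $j$.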
\[VarCovYj\] We have that $$\label{covYj} \max\limits_{1 \leq j < j' \leq n} \left| \mathbb{E}_{\mathcal{P}_{n}}[Y_{j}Y_{j'}] - m_{j}(\theta^{a}_{n})m_{j'}(\theta^{a}_{n}) \right| = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right)$$ and $$\label{varYj} \max\limits_{1 \leq j \leq n} \left| \mathbb{E}_{\mathcal{P}_{n}}[Y_j^2] - \left( s_{j}^{2}(\theta^{a}_{n}) + m_j(\theta^{a}_{n})^2 \right) \right| = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right).$$ For any $1 \leq j < j' \leq n$, we have that $$\begin{aligned} \mathbb{E}_{\mathcal{P}_{n}}[Y_{j}Y_{j'}] = \int x x' p\left( \left. \widetilde{X_{j}}=x ; \widetilde{X_{j'}}=x' \right| \widetilde{S_{1,n}} = na \right) dxdx'. \end{aligned}$$ Let $L_{n}=\left\{1, ..., n\right\}\setminus\lbrace{j,j'}\rbrace$. Normalizing, we obtain that $$p\left( \left. \widetilde{X_{j}}=x ; \widetilde{X_{j'}}=x' \right| \widetilde{S_{1,n}} = na \right) = \widetilde{p}_{j}(x) \widetilde{p}_{j'}(x') \left( \frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{L_{n}}(\theta^{a}_{n})} \right) \frac{p_{\overline{S}_{L_{n}}}\left(\Gamma_{n}^{j,j'}(x,x')\right)} {p_{\overline{S}_{1,n}}(0)},$$ where $$\Gamma_{n}^{j,j'}(x,x') := \frac{m_{j}(\theta^{a}_{n}) + m_{j'}(\theta^{a}_{n}) - x - x'} {\sigma_{L_{n}}(\theta^{a}_{n})}.$$ \ Since $\left( \mathcal{AM}6 \right)$ implies $\left( \mathcal{AM}4 \right)$, we get from Corollary $\ref{coroEdgeTT}$ with $m=3$ that $$p\left( \left.
\widetilde{X_{j}}=x ; \widetilde{X_{j'}}=x' \right| \widetilde{S_{1,n}} = na \right) = \widetilde{p}_{j}(x) \widetilde{p}_{j'}(x') \left( 1+\mathcal{O}\left(\frac{1}{n}\right) \right) \left\{ \exp\left(-\frac{\Gamma_{n}^{j,j'}(x,x')^{2}}{2}\right) + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right) \right\}.$$ As in the preceding proof, we get from $(\ref{expUdev})$ (applied to $\exp\left(-\frac{\Gamma_{n}^{j,j'}(x,x')^{2}}{2}\right)$) that, uniformly in $j$, $$\begin{aligned} \mathbb{E}_{\mathcal{P}_{n}}[Y_{j}Y_{j'}] &= \int xx' \widetilde{p}_{j}(x) \widetilde{p}_{j'}(x') dxdx' + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right) \\ &= m_{j}(\theta^{a}_{n})m_{j'}(\theta^{a}_{n}) + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right). \end{aligned}$$ The proof of $(\ref{varYj})$ is quite similar. \[covEtVarYj\] We have that $$\max\limits_{1 \leq j < j' \leq n} Cov_{\mathcal{P}_{n}}(Y_{j}, Y_{j'}) = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right)$$ and $$\label{varYjCor} \max\limits_{1 \leq j \leq n} \left| Var_{\mathcal{P}_{n}}(Y_{j}) - s_{j}^{2}(\theta^{a}_{n}) \right| = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right).$$ We deduce from the preceding Lemmas that for any $1 \leq j < j' \leq n$, $$\begin{aligned} Cov_{\mathcal{P}_{n}}(Y_{j}, Y_{j'}) &= \mathbb{E}_{\mathcal{P}_{n}}[Y_{j}Y_{j'}] - \mathbb{E}_{\mathcal{P}_{n}}[Y_{j}] \mathbb{E}_{\mathcal{P}_{n}}[Y_{j'}] \\ &= \left( m_{j}(\theta^{a}_{n})m_{j'}(\theta^{a}_{n}) + \mathcal{O}\left(\frac{1}{\sqrt{n}} \right) \right) - \left( m_{j}(\theta^{a}_{n})m_{j'}(\theta^{a}_{n}) + \mathcal{O}\left(\frac{1}{\sqrt{n}} \right) \right) \\ &= \mathcal{O}\left(\frac{1}{\sqrt{n}}\right).
\end{aligned}$$ Proof of $\max\limits_{0 \leq i \leq k-1} |t_{i,n}| = \mathcal{O}_{\mathcal{P}_{n}}(1)$ --------------------------------------------------------------------------------------- \ For any $n \geq 1$ and $i=0, ..., k-1$, set $$V_{i+1,n} := \frac{1}{n-i} \sum\limits_{j=i+1}^{n} Z_{j} \quad \textrm{where} \quad Z_{j} := Y_{j} - \mathbb{E}[Y_{j}].$$ \[V1n\] We have that $$\mathbb{E}_{\mathcal{P}_{n}} [V_{1,n}^2] = o(1).$$ We have that $$\mathbb{E}_{\mathcal{P}_{n}} [V_{1,n}^2] = \frac{1}{n^{2}} \left\{ \sum\limits_{j=1}^{n} Var_{\mathcal{P}_{n}}(Y_{j}) + 2 \sum\limits_{1 \leq j < j' \leq n} Cov_{\mathcal{P}_{n}}(Y_{j}, Y_{j'}) \right\}.$$ Then, we get from Corollary $\ref{covEtVarYj}$ that $$\mathbb{E}_{\mathcal{P}_{n}} [V_{1,n}^2] = \frac{1}{n^{2}} \left\{ \sum\limits_{j=1}^{n} \left[ s_{j}^{2}(\theta^{a}_{n}) + \mathcal{O} \left( \frac{1}{\sqrt{n}} \right) \right] + n(n-1) \mathcal{O} \left( \frac{1}{\sqrt{n}} \right) \right\}.$$ We conclude the proof by Corollary $\ref{mjThetan}$ which implies that $$\frac{1}{n^{2}} \sum\limits_{j=1}^{n} \left[ s_{j}^{2}(\theta^{a}_{n}) + \mathcal{O} \left( \frac{1}{\sqrt{n}} \right) \right] = o(1).$$ \[maxVi\] We have that $$\max\limits_{0 \leq i \leq k-1} |V_{i+1,n}| = o_{\mathcal{P}_n}(1).$$ We follow the lines of Kolmogorov’s maximal inequality proof. Let $n \geq 1$ and $i \in \left\{ 0, ..., k - 1 \right\}$. 
For any $\delta>0$, set $$A_{i,n} := \left\{|V_{i+1,n}| \geq \delta \right\} \bigcap \left( \bigcap\limits_{j=0}^{i-1} \left\{ |V_{j+1,n}| < \delta \right\} \right),$$ and $$A_{n} := \left\{ \max\limits_{0 \leq i \leq k-1} |V_{i+1,n}| \geq \delta \right\} = \bigcup_{i=0}^{k-1} A_{i,n}.$$ Since the $(A_{i,n})_{0 \leq i \leq k-1}$ are pairwise disjoint, we have that $$\begin{aligned} \mathbb{E}_{\mathcal{P}_{n}} [V_{1,n}^{2}] & \geq \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} V_{1,n}^{2} d \mathcal{P}_{n} \\ &= \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} \left\{ (V_{1,n} - V_{i+1,n}) + V_{i+1,n} \right\}^{2} d\mathcal{P}_{n} \\ & \geq 2 \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} (V_{1,n} - V_{i+1,n})V_{i+1,n} d\mathcal{P}_{n} + \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} V_{i+1,n}^{2} d\mathcal{P}_{n} \\ & \geq 2 \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} (V_{1,n} - V_{i+1,n})V_{i+1,n} d\mathcal{P}_{n} + \delta^{2} \mathcal{P}_{n} (A_{n}). \end{aligned}$$ By Lemma $\ref{V1n}$, it is enough to prove that $$\label{analogueKolmo} \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} (V_{1,n}-V_{i+1,n})V_{i+1,n} d\mathcal{P}_{n} = o(1).$$ In Kolmogorov's proof, the corresponding term is equal to $0$, by independence of the random variables involved. Similarly, $(\ref{analogueKolmo})$ will follow from Corollary $\ref{VarCovYj}$, which states that the $(Z_{j})$ are asymptotically uncorrelated. Indeed, we have that $$\label{eachSum} \sum_{i=0}^{k-1} \int\limits_{A_{i,n}} (V_{1,n}-V_{i+1,n})V_{i+1,n} d\mathcal{P}_{n} = \sum_{i=0}^{k-1} \mathbb{E}_{\mathcal{P}_{n}} \left[\mathbf{1}_{A_{i,n}} V_{1,n}V_{i+1,n} \right] - \sum_{i=0}^{k-1} \mathbb{E}_{\mathcal{P}_{n}}\left[ \mathbf{1}_{A_{i,n}} V_{i+1,n}^{2} \right].$$ Then, it is enough to prove that each sum on the right-hand side of $(\ref{eachSum})$ is a $o(1)$.
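The independent-case inequality being mimicked here, Kolmogorov's maximal inequality $\mathbb{P}\left(\max_{1\le m\le n}|S_m|\ge t\right)\le \mathrm{Var}(S_n)/t^{2}$, can be illustrated by a short Monte Carlo sketch (the uniform summands, the sample sizes, and the threshold are illustrative assumptions, not objects from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 5000, 200
# independent centered summands X_1, ..., X_n with Var(X_j) = 1/3
X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = np.cumsum(X, axis=1)               # partial sums S_1, ..., S_n per trial
t = 2.0 * np.sqrt(n / 3.0)             # threshold: 2 standard deviations of S_n
# Kolmogorov: P(max_m |S_m| >= t) <= Var(S_n) / t^2
lhs = np.mean(np.max(np.abs(S), axis=1) >= t)
rhs = (n / 3.0) / t**2                 # = 1/4 here
assert lhs <= rhs
```

In the conditioned setting of the proof, exact independence fails, which is why the cross term $(\ref{analogueKolmo})$ has to be shown to vanish asymptotically instead of being identically zero.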
We get readily that $$\label{V1nVin} \mathbb{E}_{\mathcal{P}_{n}} \left[\mathbf{1}_{A_{i,n}} V_{1,n}V_{i+1,n} \right] = \frac{1}{n(n-i)}\left\{\sum_{j=i+1}^{n}\mathbb{E}_{\mathcal{P}_{n}}\left[\mathbf{1}_{A_{i,n}} Z_{j}^{2} \right] + \sum_{\substack{1 \leq j \leq n \\ i+1 \leq j' \leq n \\ j \neq j'}} \mathbb{E}_{\mathcal{P}_{n}}[\mathbf{1}_{A_{i,n}}Z_{j}Z_{j'}]\right\}$$ and $$\label{VinCarre} \mathbb{E}_{\mathcal{P}_{n}} \left[\mathbf{1}_{A_{i,n}} V_{i+1,n}^{2} \right] = \frac{1}{(n-i)^{2}}\left\{\sum_{j=i+1}^{n}\mathbb{E}_{\mathcal{P}_{n}}\left[\mathbf{1}_{A_{i,n}} Z_{j}^{2} \right] + \sum_{\substack{i+1 \leq j,j' \leq n \\ j \neq j'}} \mathbb{E}_{\mathcal{P}_{n}}[\mathbf{1}_{A_{i,n}}Z_{j}Z_{j'}] \right\}.$$ Now, the Cauchy-Schwarz inequality applied twice, first in $\mathcal{L}^{2}$ and then in $\mathbb{R}^{k}$, implies that $$\begin{aligned} \sum_{i=0}^{k-1} \frac{1}{n(n-i)} \sum_{j=i+1}^{n}\mathbb{E}_{\mathcal{P}_{n}}\left[\mathbf{1}_{A_{i,n}} Z_{j}^{2} \right] &\leq \frac{1}{n} \sum\limits_{i=0}^{k-1} \mathcal{P}_{n}(A_{i,n})^{1/2} \left( \frac{\sum\limits_{j=i+1}^{n} \mathbb{E}_{\mathcal{P}_{n}}\left[ Z_{j}^{4}\right]^{1/2}}{n-i} \right) \\ \label{CSRk} & \leq \frac{1}{n} \left\{ \sum\limits_{i=0}^{k-1} \mathcal{P}_{n}(A_{i,n}) \right\}^{1/2} \left\{ \sum_{i=0}^{k-1} \left( \frac{\sum\limits_{j=i+1}^{n} \mathbb{E}_{\mathcal{P}_{n}}\left[ Z_{j}^{4}\right]^{1/2}}{n-i}\right)^{2} \right\}^{1/2}. 
\end{aligned}$$ Then, $\left[ \sum\limits_{i=0}^{k-1} \mathcal{P}_{n}(A_{i,n}) \right]^{1/2} = \mathcal{P}_{n}(A_{n})^{1/2} \leq 1$ and we obtain from Corollary $\ref{VarCovYj}$ and Fact $\ref{mjThetan}$ that, for all $i \in \left\{0, ..., k-1 \right\}$, $$\label{bornitudeCS} \left( \frac{\sum\limits_{j=i+1}^{n} \mathbb{E}_{\mathcal{P}_{n}}\left[ Z_{j}^{4}\right]^{1/2}}{n-i}\right)^{2} = \left( \frac{\sum\limits_{j=i+1}^{n} \left\{ \mu_{j}^{4}(\theta_{n}^{a}) + \mathcal{O}\left( \frac{1}{n} \right) \right\}^{1/2} }{n-i}\right)^{2} = \mathcal{O}(1).$$ Finally, we deduce from $(\ref{CSRk})$ and $(\ref{bornitudeCS})$ that $$\sum_{i=0}^{k-1} \frac{1}{n(n-i)} \sum_{j=i+1}^{n}\mathbb{E}_{\mathcal{P}_{n}}\left[\mathbf{1}_{A_{i,n}} Z_{j}^{2} \right] = \frac{1}{n} \left\{ k \mathcal{O}(1) \right\}^{1/2} = o(1).$$ \ We obtain similarly that $$\begin{aligned} \sum_{i=0}^{k-1} \frac{1}{(n-i)^{2}} \sum_{j=i+1}^{n}\mathbb{E}_{\mathcal{P}_{n}}\left[\mathbf{1}_{A_{i,n}} Z_{j}^{2} \right] &\leq \mathcal{P}_{n}(A_{n})^{1/2} \left\{ \sum\limits_{i=0}^{k-1} \frac{1}{(n-i)^{2}} \left( \frac{\sum\limits_{j=i+1}^{n} \mathbb{E}_{\mathcal{P}_{n}}\left[ Z_{j}^{4}\right]^{1/2}}{n-i}\right)^{2} \right\}^{1/2} \\ &= \mathcal{O}(1) \left\{ \sum\limits_{i=0}^{k-1} \frac{1}{(n-i)^{2}} \right\}^{1/2} = o(1). \end{aligned}$$ To conclude, we consider the sums involving $\mathbb{E}_{\mathcal{P}_{n}}[\mathbf{1}_{A_{i,n}}Z_{j}Z_{j'}]$, for $j \neq j'$, in $(\ref{V1nVin})$ and $(\ref{VinCarre})$. The Cauchy-Schwarz inequality yields terms of the form $\mathbb{E}_{\mathcal{P}_{n}}[Z_{j}^{2}Z_{j'}^{2}]$.
Clearly, $Z_{j}^{2}$ and $Z_{j'}^{2}$ are similarly asymptotically uncorrelated and thereby, we obtain analogously that $$\max \left\{ \sum_{i=0}^{k-1} \frac{1}{n(n-i)} \sum_{\substack{1 \leq j \leq n \\ i+1 \leq j' \leq n \\ j \neq j'}} \mathbb{E}_{\mathcal{P}_{n}}[\mathbf{1}_{A_{i,n}}Z_{j}Z_{j'}] \quad ; \quad \sum_{i=0}^{k-1} \frac{1}{(n-i)^{2}}\sum_{\substack{i+1 \leq j,j' \leq n \\ j \neq j'}} \mathbb{E}_{\mathcal{P}_{n}}[\mathbf{1}_{A_{i,n}}Z_{j}Z_{j'}] \right\} = o(1),$$ which ends the proof. \[maxTi\] We have that $$\label{tiEqua} \max\limits_{0 \leq i \leq k-1} |t_{i,n}| = \mathcal{O}_{\mathcal{P}_{n}}(1).$$ The triangle inequality implies that for any $n \geq 1$, $$\label{inegTriang} \max\limits_{0 \leq i \leq k-1} \left| \overline{m}_{i+1,n}(t_{i,n}) \right| \leq \max\limits_{0 \leq i \leq k-1} \left| V_{i+1,n} \right| + \max\limits_{0 \leq i \leq k-1} \left| \left(\frac{1}{n-i} \sum_{j=i+1}^{n} \mathbb{E}[Y_{j}] \right) - \overline{m}_{i+1,n}(\theta_{n}^{a})\right| + \max\limits_{0 \leq i \leq k-1} \left| \overline{m}_{i+1,n}(\theta_{n}^{a}) \right|.$$ We get from Lemma $\ref{maxVi}$ and assumption (E) that $$\label{ViPetitTau} \max\limits_{0 \leq i \leq k-1} |V_{i+1,n}| = o_{\mathcal{P}_n}(1).$$ Then, Lemma $\ref{momentsYj}$ implies that $$\label{espYjpetitTau} \max\limits_{0 \leq i \leq k-1} \left| \left(\frac{1}{n-i} \sum_{j=i+1}^{n} \mathbb{E}[Y_{j}] \right) - \overline{m}_{i+1,n}(\theta_{n}^{a})\right| \leq \max\limits_{0 \leq i \leq k-1} \left\{ \frac{1}{n-i} \sum_{j=i+1}^{n} \left| \mathbb{E}[Y_{j}] - m_{j}(\theta_{n}^{a}) \right| \right\} = \mathcal{O}\left(\frac{1}{n}\right).$$ Now, Fact $\ref{mjThetan}$ implies that $$\label{thetanPetitTau} \max\limits_{0 \leq i \leq k-1} \left| \overline{m}_{i+1,n}(\theta_{n}^{a}) \right| = \mathcal{O}(1).$$ Combining $(\ref{inegTriang})$, $(\ref{ViPetitTau})$, $(\ref{espYjpetitTau})$, and $(\ref{thetanPetitTau})$, we obtain that $$\label{miGrandTau} \max\limits_{0 \leq i \leq k-1} \left|
\overline{m}_{i+1,n}(t_{i,n}) \right| = \mathcal{O}_{\mathcal{P}_n}(1).$$ Now, $(\mathcal{H}\kappa)$ implies that for all $i = 0, ..., k-1$, $\overline{m}_{i+1,n}$ is a homeomorphism from $\Theta$ to $\mathcal{S}_{X}$. Then, we get from $(\mathcal{U}f)$ that for all $s \in \mathcal{S}_{X}$, $$(f_{+})^{-1}(s) \leq (\overline{m}_{i+1,n})^{-1}(s) \leq (f_{-})^{-1}(s).$$ We deduce that $\mathcal{P}_{n}$ - a.s., $$(f_{+})^{-1}(\overline{m}_{i+1,n}(t_{i,n})) \leq t_{i,n} \leq (f_{-})^{-1}(\overline{m}_{i+1,n}(t_{i,n})),$$ which, combined with $(\ref{miGrandTau})$, concludes the proof. The max of the trajectories --------------------------- *Throughout this Section 3.5, all the tilted densities considered pertain to $\theta = \theta^{a}_{n}$, defined by* $$\overline{m}_{1,n}(\theta^{a}_{n}) = a.$$ \[maxYj\] We have that $$\max\limits_{1 \leq j \leq n} |Y_{j}| = \mathcal{O}_{\mathcal{P}_{n}} (\log n).$$ For any $n \geq 1$, set $M_{n} := \max\limits_{1 \leq j \leq n} |Y_{j}|$. For all $s>0$, we have that $$\begin{aligned} \mathcal{P}_{n} \left(M_{n} \geq s \right) &\leq \sum\limits_{j=1}^{n} \left\{ \mathcal{P}_{n}(Y_j \leq - s) + \mathcal{P}_{n}(Y_j \geq s) \right\}\\ & = \sum\limits_{j=1}^{n} \left\{ \int_{-\infty}^{-s} p(\widetilde{X}_{j} = x | \widetilde{S}_{1,n} = na)dx + \int_{s}^{\infty} p(\widetilde{X}_{j}=x | \widetilde{S}_{1,n} = na)dx \right\}.\end{aligned}$$ Now, we recall from $(\ref{densCondThetan})$ that $$p(\widetilde{X}_{j}=x|\widetilde{S}_{1,n}=na) = \widetilde{p}_{j}(x) \left( 1+\mathcal{O}\left(\frac{1}{n}\right) \right) \left\{ \exp\left(-\frac{\gamma_{n}^{j}(x)^{2}}{2}\right) + \mathcal{O}\left(\frac{1}{\sqrt{n}}\right) \right\} = \widetilde{p}_{j}(x) \mathcal{O}(1).$$ Consequently, there exists an absolute constant $C > 0$ s.t.
for all $n \geq 1$, $$\mathcal{P}_{n} \left(M_{n} \geq s \right) \leq C \left\{ \sum\limits_{j=1}^{n} \left( P\left(\widetilde{X}_{j} \leq -s \right) + P\left(\widetilde{X}_{j} \geq s \right) \right) \right\}.$$ We get from Markov’s inequality that for any $\lambda > 0$, $$P\left( \widetilde{X}_{j} \leq -s \right) = P\left( \exp(-\lambda \widetilde{X}_{j}) \geq \exp(\lambda s) \right) \leq \mathbb{E}\left[ \exp(-\lambda \widetilde{X}_{j}) \right]\exp(-\lambda s)$$ and $$P\left( \widetilde{X}_{j} \geq s \right) \leq \mathbb{E}\left[ \exp(\lambda \widetilde{X}_{j}) \right] \exp(-\lambda s).$$ Then, for any $\lambda \neq 0$, $$\mathbb{E}\left[ \exp( \lambda \widetilde{X}_{j}) \right] = \int \exp(\lambda x) \, \frac{\exp(\theta_{n}^{a} x) p_{j}(x)} {\Phi_{j}(\theta_{n}^{a})} \, dx = \frac{\Phi_{j}(\theta_{n}^{a} + \lambda)}{\Phi_{j}(\theta_{n}^{a})}.$$ Therefore, $$\mathcal{P}_{n} \left(M_{n} \geq s \right) \leq C \left\{ \sum\limits_{j=1}^{n} \left( \frac{\Phi_{j}(\theta_{n}^{a} - \lambda)}{\Phi_{j}(\theta_{n}^{a})} + \frac{\Phi_{j}(\theta_{n}^{a} + \lambda)}{\Phi_{j}(\theta_{n}^{a})} \right) \right\} \exp(-\lambda s).$$ Since the sequence $(\theta_{n}^{a})_{n \geq 1}$ is bounded, we can find $\lambda > 0$ s.t. each of the sequences $(\theta_{n}^{a} - \lambda)_{n \geq 1}$ and $(\theta_{n}^{a} + \lambda)_{n \geq 1}$ is included in a compact subset of $\Theta$. Therefore, we deduce that there exists an absolute constant $D$ s.t. $$\sup\limits_{n \geq 1} \enskip \sup\limits_{j \geq 1} \enskip \max \left\{ \frac{\Phi_{j}(\theta^{a}_{n} - \lambda)}{\Phi_{j}(\theta^{a}_{n})} \enskip ; \enskip \frac{\Phi_{j}(\theta^{a}_{n} + \lambda)}{\Phi_{j}(\theta^{a}_{n})} \right\} \leq D.$$ Therefore, $$\mathcal{P}_{n} \left(M_{n} \geq s \right) \leq CD n \exp(-\lambda s) = CD \exp\left( \log n - \lambda s\right).$$ Consequently, for every sequence $(s_{n})_{n \geq 1}$ s.t.
$\frac{s_{n}}{\log n} \rightarrow \infty$ as $n \rightarrow \infty$, we have that $$\label{covMn} \mathcal{P}_{n} \left(M_{n} \geq s_{n} \right) \rightarrow 0 \enskip \textrm{as} \enskip n \rightarrow \infty.$$ Set $Z_{n} := \frac{M_{n}}{\log n}$. For any sequence $(a_n)_{n \geq 1}$ s.t. $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$, we have that $$\mathcal{P}_{n} \left(Z_{n} \geq a_{n} \right) = \mathcal{P}_{n} \left(M_{n} \geq s_{n} \right) \enskip \textrm{where} \enskip s_{n} := a_{n} \log n, \enskip \textrm{so that} \enskip \frac{s_{n}}{\log n} \rightarrow \infty \enskip \textrm{as} \enskip n \rightarrow \infty.$$ Finally, we conclude the proof by applying the following Fact, since we get from $(\ref{covMn})$ that $$\mathcal{P}_{n} \left(Z_{n} \geq a_{n} \right) \rightarrow 0 \enskip \textrm{as} \enskip n \rightarrow \infty.$$ For all $n \geq 1$, let $Z_{n} : (\Omega_{n},\mathcal{A}_{n}, \mathcal{P}_{n}) \longrightarrow \mathbb{R}$ be a r.v. Assume that for any sequence $(a_n)_{n \geq 1}$ s.t. $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$, we have that $\mathcal{P}_{n} (|Z_{n}| \geq a_{n}) \rightarrow 0$ as $n \rightarrow \infty$. Then, $$Z_{n} = \mathcal{O}_{\mathcal{P}_{n}}(1).$$ Suppose that the sequence $(Z_{n})$ is not a $\mathcal{O}_{\mathcal{P}_{n}}(1)$. This means that there exists $\epsilon > 0$ s.t. for all $k \in \mathbb{N}$, there exists $n(k) \in \mathbb{N}$ s.t. $$\label{nonO1} \mathcal{P}_{n(k)} (|Z_{n(k)}| \geq k) > \epsilon.$$ \ If the sequence $(n(k))_{k}$ is bounded, then there exists a fixed $n_{0} \in \mathbb{N}$ and a subsequence $(n(k_{j}))_{j \geq 1}$ such that for all $j \geq 1$, $n(k_{j}) = n_{0}$. We can clearly assume that $k_{j} \rightarrow \infty$ as $j \rightarrow \infty$, which implies that $$\lim\limits_{j \rightarrow \infty} \mathcal{P}_{n(k_{j})} (|Z_{n(k_{j})}| \geq k_{j}) = \lim\limits_{j \rightarrow \infty} \mathcal{P}_{n_{0}} (|Z_{n_{0}}| \geq k_{j}) = 0,$$ which contradicts $(\ref{nonO1})$. 
\ If the sequence $(n(k))_k$ is not bounded, then there exists a strictly increasing subsequence $(n(k_{j}))_j$ s.t. $n(k_{j}) \rightarrow \infty$ as $j \rightarrow \infty$. Now, we can define a sequence $(a_{n})$ s.t. for all $j \geq 1$, $a_{n(k_{j})} = k_{j}$. We can still assume that $k_{j} \rightarrow \infty$ as $j \rightarrow \infty$. Therefore, we can assume that $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$, which implies that $$\lim\limits_{j \rightarrow \infty} \mathcal{P}_{n(k_{j})} (|Z_{n(k_{j})}| \geq k_{j}) = \lim\limits_{j \rightarrow \infty} \mathcal{P}_{n(k_{j})} (|Z_{n(k_{j})}| \geq a_{n(k_{j})}) = 0,$$ which contradicts $(\ref{nonO1})$. Taylor expansion ---------------- \[devLimLandau\] Let $I$ be an interval of $\mathbb{R}$ containing $0$, with nonempty interior, and $f : I \longrightarrow \mathbb{R}$ a function of class $C^2$. Let $(U_{n})$ be a sequence of random variables $U_{n} : (\Omega_{n},\mathcal{A}_{n}) \longrightarrow (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ s.t. $$U_{n} = o_{\mathcal{P}_{n}}(1).$$ Then, there exists $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$ s.t. for any $n \geq 1$, $$f(U_{n}) = f(0) + U_{n}f'(0) + U_{n}^{2} \mathcal{O}_{\mathcal{P}_{n}}(1) \quad \textrm{on} \enskip B_{n}.$$ Furthermore, if $U_{n}=o_{\mathcal{P}_{n}}(u_n)$, with $u_{n} \enskip {\underset{n \to \infty} {\longrightarrow}} \enskip 0$, then $$f(U_{n}) = f(0) + o_{\mathcal{P}_{n}}(u_n) \quad \textrm{on} \enskip B_{n}.$$ Let $\epsilon >0$. Let $\delta >0$ s.t. $(-\delta,\delta) \subset I$. Set $$B_{n} := \{|U_{n}| < \delta\}.$$ Since $U_{n} = o_{\mathcal{P}_{n}}(1)$, we have that $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$. For any $n \geq 1$, $f(U_{n})$ is well defined on $B_{n}$, and the Taylor-Lagrange formula provides a $C_{n}$ with $|C_{n}| \leq |U_{n}|$, s.t. $$f(U_{n}) = f(0) + U_{n}f'(0) + \frac{U_{n}^{2}}{2} f '' (C_{n}).$$ Now, $C_{n}$ can be obtained from a bisection process, initialized with $U_n$.
This implies that for all $n$, $C_n$ is a measurable mapping from $(\Omega_{n},\mathcal{A}_{n})$ to $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, since $C_n$ is the limit of such mappings. Then, as $|C_{n}| \leq |U_{n}|$ and $f''$ is continuous, we have that $$C_{n} \enskip {\underset{\mathcal{P}_{n}} {\longrightarrow}} \enskip 0 \Longrightarrow f ''(C_{n}) \enskip {\underset{\mathcal{P}_{n}} {\longrightarrow}} \enskip f''(0) \Longrightarrow f ''(C_{n})=\mathcal{O}_{\mathcal{P}_{n}}(1).$$ Furthermore, if $U_{n}=o_{\mathcal{P}_{n}}(u_n)$ with $u_{n} \enskip {\underset{n \to \infty} {\longrightarrow}} \enskip 0$, then $\frac{U_{n}^{2}}{2} f '' (C_{n})$ is also a $o_{\mathcal{P}_{n}}(u_n)$. Main Results ============ Theorem with small $k$ ---------------------- \[smallkTheorem\] Suppose that the Assumptions stated in Section 2.6 hold. Assume that $$\label{conditionK} k \longrightarrow \infty \enskip \textrm{as } \enskip n \longrightarrow \infty \qquad \textrm{and that} \qquad k=o(n^{\rho}), \quad \textrm{with} \enskip 0 < \rho < 1/2.$$ Then, $$\label{myDFavecY} \left\| Q_{nak} - \widetilde{P}_{1}^{k} \right\|_{TV} \enskip {\underset{n \to \infty} {\longrightarrow}} \enskip 0,$$ where $\widetilde{P}_{1}^{k}$ is the joint distribution of independent r.v.’s $\left( \widetilde{X_{j}}^{\theta_{n}^{a}} \right)_{1 \leq j \leq k}$.
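Before turning to the proof, the content of Theorem \ref{smallkTheorem} can be illustrated in the textbook i.i.d. exponential case (an assumption made purely for illustration; the paper's setting is a general independent, non-identically distributed triangular array). For i.i.d. Exp(1) variables, the conditional law of $X_1$ given $S_{1,n}=na$ is known in closed form, and the tilted family is again exponential, so the approximation can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, draws = 400, 2.0, 200_000      # illustrative sizes, not from the paper
# For i.i.d. Exp(1) variables, (X_1,...,X_n)/S_n is Dirichlet(1,...,1),
# so the exact conditional law of X_1 given S_n = n*a is n*a*Beta(1, n-1).
x1_cond = n * a * rng.beta(1, n - 1, size=draws)
# The tilted density exp(theta*x)exp(-x)/Phi(theta) is Exp(1-theta);
# calibrating the mean to a gives rate 1/a, i.e. mean a and variance a^2.
assert abs(x1_cond.mean() - a) < 0.05
assert abs(x1_cond.var() - a**2) < 0.15
```

The first two conditional moments match those of the tilted exponential up to $\mathcal{O}(1/n)$, in line with the total variation convergence asserted by the theorem.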
We have that $$\pi_{k}(Y_1^k):= p\left( X_{1}^k = Y_{1}^k | S_{1,n} = na \right) = \frac{p_{\widetilde{X}_{1}^{k}}\left(Y_{1}^k \right)p_{\widetilde{S}_{k+1,n}}(na-\Sigma_{1,k})}{p_{\widetilde{S}_{1,n}}(na)}.$$ Then we normalize, so that $$\pi_{k}(Y_1^k) = p_{\widetilde{X}_{1}^{k}}\left(Y_{1}^k \right) \frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{k+1,n}(\theta^{a}_{n})} \frac{p_{\overline{S}_{k+1,n}}(Z_{k})}{p_{\overline{S}_{1,n}}(0)} \quad \textrm{where} \quad Z_{k}:=\frac{\sum\limits_{j=1}^{k}\left( m_{j}(\theta^{a}_{n})-Y_{j} \right)}{\sigma_{k+1,n}(\theta^{a}_{n})}.$$ Since $\left( \mathcal{AM}4 \right)$ holds, we get from Corollary $\ref{coroEdgeTT}$ with $m=3$ that $$\pi_{k}(Y_1^k) = p_{\widetilde{X}_{1}^{k}}\left(Y_{1}^k \right) \frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{k+1,n}(\theta^{a}_{n})} \frac{\mathfrak{n}(Z_{k}) \left( 1+ \frac{\mu_{k+1,n}^{3}(\theta^{a}_{n})}{6(s_{k+1,n}^{2}(\theta^{a}_{n}))^{3/2}} H_{3}(Z_{k}) \right) + \frac{o(1)}{(n-k)^{3/2}}} {\mathfrak{n}(0) + \frac{o(1)}{n^{3/2}}}.$$ \ First, we get from Corollary $\ref{mjThetan}$ that $$\frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{k+1,n}(\theta^{a}_{n})} = \left( 1 + \frac{s_{1,k}^{2}(\theta^{a}_{n})}{s_{k+1,n}^{2}(\theta^{a}_{n})} \right)^{1/2} = \left( 1 + \frac{k}{n-k}\mathcal{O}(1) \right)^{1/2} \quad \textrm{and} \quad \frac{\mu_{k+1,n}^{3}(\theta^{a}_{n})}{6(s_{k+1,n}^{2}(\theta^{a}_{n}))^{3/2}} = \frac{\mathcal{O}(1)}{(n-k)^{1/2}}.$$ Then, $(\ref{conditionK})$ implies that $$\frac{\sigma_{1,n}(\theta^{a}_{n})}{\sigma_{k+1,n}(\theta^{a}_{n})} = 1+o(1) \quad \textrm{and} \quad \frac{\mu_{k+1,n}^{3}(\theta^{a}_{n})}{6(s_{k+1,n}^{2}(\theta^{a}_{n}))^{3/2}} = o(1).$$ \ Now, we get from Corollary $\ref{mjThetan}$ and Lemma $\ref{maxYj}$ that $$Z_{k} = \frac{k\log n}{\sqrt{n-k}} \mathcal{O}_{\mathcal{P}_{n}}(1).$$ Then, $(\ref{conditionK})$ implies that $$Z_{k} = o_{\mathcal{P}_{n}}(1), \quad \textrm{so that} \quad \mathfrak{n}(Z_{k}) \enskip {\underset{\mathcal{P}_{n}} {\longrightarrow}} \enskip
\mathfrak{n}(0) \quad \textrm{and} \quad H_{3}(Z_{k}) \enskip {\underset{\mathcal{P}_{n}} {\longrightarrow}} \enskip H_{3}(0) = 0.$$ We obtain from the preceding lines that $$\pi_{k}(Y_1^k) = p_{\widetilde{X}_{1}^{k}}\left(Y_{1}^k \right) (1+o_{\mathcal{P}_{n}}(1)).$$ Finally, we apply Lemma $\ref{lienISdvt}$ to conclude the proof. Theorem with large $k$ ---------------------- ### Statement of the Theorem Let $y_{1}^{n} \in \left( \mathcal{S}_{X} \right)^{n}$. Then, for any $0 \leq i \leq k-1$, there exists a unique $\tau_{i}(y_{1}^{n})$ s.t. $$\overline{m}_{i+1,n}(\tau_{i}(y_{1}^{n})) = \frac{\sum\limits_{j=i+1}^{n} y_{j}}{n-i}.$$ For $0 \leq i \leq k-1$, define a density $g(y_{i+1}|y_{1}^{i})$ by $$g(y_{i+1}|y_{1}^{i}) := C_{i}^{-1} \widetilde{p}_{i+1}(y_{i+1}) \exp \left(-\frac{\left(y_{i+1} - m_{i+1}(\tau_{i}(y_{1}^{n}))\right)^{2}} {2s_{i+2,n}^{2}(\tau_{i}(y_{1}^{n}))} \right) \exp \left( \frac{3\alpha_{i+2,n}^{(3)}(\tau_{i}(y_{1}^{n}))} {\sigma_{i+2,n}(\tau_{i}(y_{1}^{n}))} y_{i+1} \right),$$ where $C_{i}$ is a normalizing constant which ensures that $\int g(y_{i+1}|y_{1}^{i}) dy_{i+1} = 1$ and $$\alpha_{i+e,n}^{(3)}(\tau_{i}(y_{1}^{n})) := \frac{\mu_{i+e,n}^{3}(\tau_{i}(y_{1}^{n}))}{6(s_{i+e,n}^{2}(\tau_{i}(y_{1}^{n})))^{3/2}}.$$ Then, we define the limiting density on $\mathbb{R}^k$ by $$g_{k}(y_{1}^{k}) := \prod_{i=0}^{k-1} g(y_{i+1}|y_{1}^{i}).$$ \[largekTheorem\] Suppose that the Assumptions stated in Section 2.6 hold. Assume that $k$ is of order $n - (\log n)^{\tau}$ with $\tau > 6$. Then, $$\left\| Q_{nak}-G_{k} \right\|_{TV} \enskip {\underset{n \to \infty} {\longrightarrow}} \enskip 0,$$ where $G_{k}$ is the distribution associated with the density $g_{k}$. We get from the criterion for convergence in total variation distance stated in Section 2.4 that it is enough to prove the following Theorem. Suppose that the Assumptions stated in Section 2.6 hold. Assume that $k$ is of order $n - (\log n)^{\tau}$ with $\tau > 6$.
Then, there exists $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$ s.t. for any $n \geq 1$, $$p_{k}(Y_{1}^k) := p(X_{1}^k = Y_{1}^k | S_{1,n} = na) = g_{k}(Y_{1}^k) [1+o_{\mathcal{P}_{n}}(1)] \quad \textrm{on} \enskip B_{n}.$$ *The proof is given hereafter, in three steps. Throughout the proof, all the tilted densities considered pertain to $\theta = t_{i,n}$. We write $s_{j}^{2}$, $\mu_{j}^{\ell}$ instead of $s_{j}^{2}(t_{i,n})$, $\mu_{j}^{\ell}(t_{i,n})$.* ### Identifying $g(Y_{i+1}|Y_{1}^{i})$ When $y_{1}^{n} = Y_{1}^{n}$, we have that $$\sum\limits_{j=i+1}^{n} y_{j} = \sum\limits_{j=i+1}^{n} Y_{j} = na - \sum\limits_{j=1}^{i} Y_{j} \qquad \mathcal{P}_{n} \enskip \textrm{a.s.}$$ and $$\tau_{i}(Y_{1}^{n}) = t_{i,n}.$$ \ We recall from the first computation of Section $\ref{firstCalculus}$ that $$\pi_{i} = \widetilde{p}_{i+1}(Y_{i+1}) \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \frac{\overline{q}_{i+2,n}(Z_{i+1})}{\overline{q}_{i+1,n}(0)}, \enskip \textrm{where } Z_{i+1} := \frac{m_{i+1} - Y_{i+1}}{\sigma_{i+2,n}}.$$ Since $\left( \mathcal{AM}6 \right)$ holds, we get from Corollary $\ref{condPetrovTI}$ with $m=5$ that $$\label{apresEdge} \pi_{i} = \widetilde{p}_{i+1}(Y_{i+1}) \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \left\{ \frac{ \mathfrak{n}(Z_{i+1}) \left[1 + \sum\limits_{\nu = 3}^{5} \overline{P}_{\nu}^{i+2,n} (Z_{i+1}) \right] + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i-1)^{3/2}} } {\mathfrak{n}(0) \left[1 + \sum\limits_{\nu = 3}^{5} \overline{P}_{\nu}^{i+1,n} (0) \right] + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i)^{3/2}} } \right\}.$$ \ For $e \in \left\{ 1, 2 \right\}$, set $$\alpha_{i+e,n}^{(3)} := \frac{\mu_{i+e,n}^{3}}{6(s_{i+e,n}^{2})^{3/2}} = \frac{\mathcal{O}_{\mathcal{P}_{n}}(1)} {(n-i-e+1)^{1/2}},$$ $$\beta_{i+e,n}^{(6)} := \frac{(\mu_{i+e,n}^{3})^2}{72(s_{i+e,n}^{2})^{3}} = \frac{\mathcal{O}_{\mathcal{P}_{n}}(1)} {n-i-e+1} \quad ; \quad \beta_{i+e,n}^{(4)} := \frac{ \mu_{i+e,n}^{4} - 3\sum\limits_{j=i+e}^{n} (s_{j}^{2})^2} {24(s_{i+e,n}^{2})^{2}} =
\frac{\mathcal{O}_{\mathcal{P}_{n}}(1)} {n-i-e+1},$$ $$\gamma_{i+e,n}^{(9)} := \frac{(\mu_{i+e,n}^{3})^3}{1296(s_{i+e,n}^{2})^{9/2}} \quad ; \quad \gamma_{i+e,n}^{(7)} := \frac{\mu_{i+e,n}^{3} \left( \mu_{i+e,n}^{4} - 3\sum\limits_{j=i+e}^{n} (s_{j}^{2})^2 \right)} {144(s_{i+e,n}^{2})^{7/2}} \quad ; \quad \gamma_{i+e,n}^{(5)} := \frac{ \mu_{i+e,n}^{5} - 10\sum\limits_{j=i+e}^{n} \mu_{j}^{3} s_{j}^{2}}{120(s_{i+e,n}^{2})^{5/2}},$$ where, for $\ell \in \left\{ 5, 7, 9 \right\}$, $$\gamma_{i+e,n}^{(\ell)} = \frac{\mathcal{O}_{\mathcal{P}_{n}}(1)} {(n-i-e+1)^{3/2}}.$$ \ For $m \in \left\{3, ..., 9 \right\}$, replacing $H_{m}(Z_{i+1})$ by its expression, we have that $$\overline{P}_{3}^{i+e,n}(Z_{i+1}) = \alpha_{i+e,n}^{(3)} \left[ Z_{i+1}^{3} - 3Z_{i+1} \right],$$ $$\overline{P}_{4}^{i+e,n}(Z_{i+1}) = \beta_{i+e,n}^{(6)} \left[ Z_{i+1}^{6} - 15Z_{i+1}^{4} + 45Z_{i+1}^{2} - 15 \right] + \beta_{i+e,n}^{(4)} \left[Z_{i+1}^{4} - 6Z_{i+1}^{2} + 3 \right],$$ $$\overline{P}_{5}^{i+e,n}(Z_{i+1}) = \gamma_{i+e,n}^{(9)} \left[ Z_{i+1}^{9}+ ... + 945Z_{i+1} \right] + \gamma_{i+e,n}^{(7)} \left[ Z_{i+1}^{7} + ... - 105Z_{i+1} \right] + \gamma_{i+e,n}^{(5)} \left[ Z_{i+1}^{5} + ... 
+ 15Z_{i+1} \right].$$ \ Therefore, $$\sum\limits_{\nu = 3}^{5} \overline{P}_{\nu}^{i+2,n} (Z_{i+1}) = - 3 \alpha_{i+2,n}^{(3)} Z_{i+1} - 15 \beta_{i+2,n}^{(6)} + 3 \beta_{i+2,n}^{(4)} + \mathcal{O}_{\mathcal{P}_{n}}(1) \frac{(\log n)^{3}}{(n-i-1)^{2}}$$ and $$\sum\limits_{\nu = 3}^{5} \overline{P}_{\nu}^{i+1,n} (0) = - 15 \beta_{i+1,n}^{(6)} + 3 \beta_{i+1,n}^{(4)}.$$ \ Since $\mathfrak{n}(Z_{i+1}) = \mathcal{O}_{\mathcal{P}_{n}}(1)$, we can factorize $\mathfrak{n}(Z_{i+1})$ in the numerator of the bracket of $(\ref{apresEdge})$, so that $$\pi_{i} = \widetilde{p}_{i+1}(Y_{i+1}) \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \frac{ \mathfrak{n}(Z_{i+1}) \left[ 1 - 3 \alpha_{i+2,n}^{(3)} Z_{i+1} - 15 \beta_{i+2,n}^{(6)} + 3 \beta_{i+2,n}^{(4)} + \mathcal{O}_{\mathcal{P}_{n}}(1) \frac{(\log n)^{3}}{(n-i-1)^{2}} + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i-1)^{3/2}} \right] } { \mathfrak{n}(0) \left[ 1 - 15 \beta_{i+1,n}^{(6)} + 3 \beta_{i+1,n}^{(4)} + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i)^{3/2}} \right] }.$$ \ Since $n-k$ is of order $(\log n)^{\tau}$ with $\tau > 6$, we have for all $n > 1$ and $i = 0, ..., k-1$, $$0 \leq \frac{(\log n)^{3}}{(n-i-1)^{2}} (n-i-1)^{3/2} \leq \frac{(\log n)^{3}}{(n-k)^{1/2}} \longrightarrow 0 \enskip \textrm{ as } n \rightarrow \infty.$$ Therefore, $$\frac{(\log n)^{3}}{(n-i-1)^{2}} = \frac{o(1)}{(n-i-1)^{3/2}}, \enskip \textrm{so that} \enskip \mathcal{O}_{\mathcal{P}_{n}}(1) \frac{(\log n)^{3}}{(n-i-1)^{2}} = \frac{o_{\mathcal{P}_{n}}(1)} {(n-i-1)^{3/2}}.$$ \ Consequently, $$\pi_{i} = \widetilde{p}_{i+1}(Y_{i+1}) \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left( - \frac{Z_{i+1}^{2}}{2} \right) \left\{ \frac{ 1 + \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} Y_{i+1} - \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} - 15 \beta_{i+2,n}^{(6)} + 3 \beta_{i+2,n}^{(4)} + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i-1)^{3/2}} } { 1 - 15 \beta_{i+1,n}^{(6)} + 3 \beta_{i+1,n}^{(4)} + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i)^{3/2}} } \right\}.$$ \ Now, we need to
extract $Y_{i+1}$ from the numerator of the bracket above. To that end, set $$U_{i,n} := \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} Y_{i+1} + U'_{i,n} \quad \textrm{where} \quad U'_{i,n} := - \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} - 15 \beta_{i+2,n}^{(6)} + 3 \beta_{i+2,n}^{(4)} + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i-1)^{3/2}},$$ and $$V_{i,n} := - 15 \beta_{i+1,n}^{(6)} + 3 \beta_{i+1,n}^{(4)} + \frac{o_{\mathcal{P}_{n}}(1)} {(n-i)^{3/2}}.$$ For any $n \geq 1$, let $(W_{i,n})_{0 \leq i \leq k-1}$ be r.v.’s defined on $(\Omega_{n},\mathcal{A}_{n})$ s.t. $\max\limits_{0 \leq i \leq k-1}|W_{i,n}| = o_{\mathcal{P}_{n}}(1)$. Then, there exists $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$ s.t. for all $n \geq 1$, we have on $B_{n}$ that for all $i=0, ..., k-1$, $$1 + W_{i,n} = \exp(W_{i,n} + W_{i,n}^{2} A_{i,n}) \enskip \textrm{where} \enskip \max\limits_{0 \leq i \leq k-1} |A_{i,n}|=\mathcal{O}_{\mathcal{P}_n}(1).$$ Let $\epsilon > 0$. For any $ n \geq 1$, set $$B_{n} := \left\{ \max\limits_{0 \leq i \leq k-1} |W_{i,n}| < 1/2 \right\}.$$ Since $\max\limits_{0 \leq i \leq k-1}|W_{i,n}| = o_{\mathcal{P}_{n}}(1)$, we have that $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$. Now, set $$f(x):=\log(1+x).$$ Then $f$ satisfies the conditions of Lemma $\ref{devLimLandau}$. Therefore, for all $i=0, ..., k-1$, there exists $C_{i,n}$ with $\max\limits_{0 \leq i \leq k-1} |C_{i,n}| \leq \max\limits_{0 \leq i \leq k-1} |W_{i,n}|$ s.t. $$f(W_{i,n}) = f(0) + W_{i,n}f'(0) + \frac{W_{i,n}^{2}}{2} f''(C_{i,n}).$$ For $n \geq 1$ and $0 \leq i \leq k-1$, set $A_{i,n} := \frac{1}{2} f''(C_{i,n})$. Now, $f''(x)= - \frac{1}{(1+x)^2}$. Clearly, for all $x$ with $|x| < \frac{1}{2}$, $|f''(x)| \leq \frac{1}{(1-|x|)^2}$.
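The remainder bound behind this Fact can be checked numerically: writing $1+w=\exp\left(w+w^{2}A(w)\right)$ with $A(w)=\left(\log(1+w)-w\right)/w^{2}$, the Taylor-Lagrange estimate gives $|A(w)|\le \frac{1}{2(1-|w|)^{2}}$ for $|w|<\tfrac12$. A minimal sketch (the grid of values is an illustrative assumption):

```python
import numpy as np

w = np.linspace(-0.49, 0.49, 999)
w = w[np.abs(w) > 1e-8]                     # avoid the removable singularity at 0
A = (np.log1p(w) - w) / w**2                # remainder: 1 + w = exp(w + w^2 * A)
bound = 1.0 / (2.0 * (1.0 - np.abs(w))**2)  # from |f''(x)| <= 1/(1-|x|)^2, |x| <= |w|
assert np.all(np.abs(A) <= bound)
# consistency check: exp(w + w^2*A) - 1 recovers w up to rounding
assert np.allclose(np.expm1(w + w**2 * A), w)
```

This is exactly the uniform control on the $A_{i,n}$ used on the event $B_{n}$.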
Therefore, for any $n \geq 1$, we have on $B_{n}$ that $$\max\limits_{0 \leq i \leq k-1} |A_{i,n}| \leq \frac{1}{\left( 1-\max\limits_{0 \leq i \leq k-1} |C_{i,n}| \right)^2} \leq \frac{1}{\left( 1-\max\limits_{0 \leq i \leq k-1} |W_{i,n}| \right)^2},$$ which implies that $\max\limits_{0 \leq i \leq k-1} |A_{i,n}| = \mathcal{O}_{\mathcal{P}_n}(1)$. \ Since $$\max\limits_{0 \leq i \leq k-1}|U_{i,n}| = o_{\mathcal{P}_{n}}(1) \quad \textrm{and} \quad \max\limits_{0 \leq i \leq k-1}|V_{i,n}| = o_{\mathcal{P}_{n}}(1),$$ we have that $$\frac{1+U_{i,n}}{1+V_{i,n}} = \exp \left(U_{i,n} + U_{i,n}^{2} A_{i,n} - V_{i,n} - V_{i,n}^{2} B_{i,n} \right),$$ where $$\max\limits_{0 \leq i \leq k-1} |A_{i,n}| = \mathcal{O}_{\mathcal{P}_n}(1) \quad \textrm{and} \quad \max\limits_{0 \leq i \leq k-1} |B_{i,n}| = \mathcal{O}_{\mathcal{P}_n}(1).$$ Consequently, the preceding Fact implies that there exists $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$ s.t. for any $n \geq 1$ and $0 \leq i \leq k-1$, $$\pi_{i} = \Gamma_{i} \quad \textrm{on} \enskip B_{n},$$ where $$\Gamma_{i} = \widetilde{p}_{i+1}(Y_{i+1}) \exp \left(-\frac{(Y_{i+1} - m_{i+1})^{2}}{2s_{i+2,n}^{2}} \right) \exp \left( \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} Y_{i+1} \right) \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left\{ U'_{i,n} + U_{i,n}^{2} A_{i,n} - V_{i,n} - V_{i,n}^{2} B_{i,n} \right\}.$$ \ In order to identify $g(Y_{i+1}|Y_{1}^{i})$, we have grouped the factors containing $Y_{i+1}$. Thereby, we obtain a function of $Y_{i+1}$, which we normalize to get a density. 
Thus, set $$g(Y_{i+1}|Y_{1}^{i}) := C_{i}^{-1} \widetilde{p}_{i+1}(Y_{i+1}) \exp \left(-\frac{(Y_{i+1} - m_{i+1})^{2}}{2s_{i+2,n}^{2}} \right) \exp \left( \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} Y_{i+1} \right),$$ where $C_{i}$ satisfies that $$C_{i} = \int \exp \left(-\frac{(y - m_{i+1})^{2}}{2s_{i+2,n}^{2}} \right) \exp \left( \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} y \right) \widetilde{p}_{i+1}(y) dy.$$ \ Therefore, $$\label{identifierLi} \Gamma_{i} = g(Y_{i+1}|Y_{1}^{i}) \left\{ C_{i} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left( U'_{i,n} - V_{i,n} + U_{i,n}^{2} A_{i,n} - V_{i,n}^{2} B_{i,n} \right) \right\}.$$ \ Our objective is now to prove that $$\label{resteAprouver} \prod\limits_{i=0}^{k-1} C_{i} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left( U'_{i,n} - V_{i,n} + U_{i,n}^{2} A_{i,n} - V_{i,n}^{2} B_{i,n} \right) = 1 + o_{\mathcal{P}_n}(1).$$ \ To that end, we first consider the following result. \[uinZin\] For $n \geq 1$, let $(Z_{i,n})_{0 \leq i \leq k-1}$ be r.v.’s defined on $(\Omega_{n},\mathcal{A}_{n})$ and $(u_{i,n})_{0 \leq i \leq k-1}$ be a sequence of reals. Assume that $\max\limits_{0 \leq i \leq k-1} |Z_{i,n}| = \mathcal{O}_{\mathcal{P}_{n}}(1)$ and $\sum\limits_{i=0}^{k-1} \left| u_{i,n} \right| \longrightarrow 0$ as $n \rightarrow \infty$. Then, $$\prod\limits_{i=0}^{k-1} \exp\left( u_{i,n} Z_{i,n} \right) = 1+o_{\mathcal{P}_n} \left( 1 \right).$$ Consequently, for any $\alpha \geq 0$ and $\beta > 1$, $$\prod\limits_{i=0}^{k-1} \exp \left( \frac{(\log n)^{\alpha}}{(n-i-1)^{\beta}} Z_{i,n} \right) = 1 + o_{\mathcal{P}_n} \left( 1 \right).$$ It is enough to prove that $$\sum\limits_{i=0}^{k-1} u_{i,n} Z_{i,n} = o_{\mathcal{P}_n} \left( 1 \right).$$ Let $\epsilon > 0$ and $\delta > 0$. There exist $A_{\epsilon} > 0$ and $N_{\epsilon} > 0$ s.t.
for all $n \geq N_{\epsilon}$, $$\mathcal{P}_{n} \left( \max\limits_{0 \leq i \leq k-1} |Z_{i,n}| \leq A_{\epsilon} \right) \geq 1-\epsilon.$$ Now, there exists $N_{\epsilon, \delta} > 0$ s.t. for all $n \geq N_{\epsilon, \delta}$, $$\sum\limits_{i=0}^{k-1} \left| u_{i,n} \right| < \frac{\delta}{A_{\epsilon}}.$$ Then, for all $n \geq \max \left\{ N_{\epsilon} ; N_{\epsilon, \delta} \right\}$, $$\begin{aligned} \mathcal{P}_{n} \left( \left| \sum\limits_{i=0}^{k-1} u_{i,n} Z_{i,n} \right| < \delta \right)&\geq \mathcal{P}_{n} \left( \left\{ \sum\limits_{i=0}^{k-1} \left| u_{i,n} \right| \left|Z_{i,n} \right| < \delta \right\} \bigcap \left\{ \max\limits_{0 \leq i \leq k-1} |Z_{i,n}| \leq A_{\epsilon} \right\} \right) \\ &\geq \mathcal{P}_{n} \left( \left\{ \sum\limits_{i=0}^{k-1} \left| u_{i,n} \right| < \frac{\delta}{A_{\epsilon}} \right\} \bigcap \left\{ \max\limits_{0 \leq i \leq k-1} |Z_{i,n}| \leq A_{\epsilon} \right\} \right) \\ &= \mathcal{P}_{n} \left( \max\limits_{0 \leq i \leq k-1} |Z_{i,n}| \leq A_{\epsilon} \right) \\ &\geq 1-\epsilon.\end{aligned}$$ ### The factors estimated by applying Lemma $\ref{uinZin}$ We have that $$\label{corrUinZin} \prod\limits_{i=0}^{k-1} \left\{ \exp \left( U_{i,n}^{2} A_{i,n} - V_{i,n}^{2} B_{i,n} \right) \right\} = 1 + o_{\mathcal{P}_n}(1).$$ We may apply Lemma $\ref{uinZin}$, since $$\label{UinBisVin} \max\limits_{0 \leq i \leq k-1} \left|U_{i,n}\right| = \frac{\log n}{n-i-1} \mathcal{O}_{\mathcal{P}_{n}}(1) \quad \textrm{and} \quad \max\limits_{0 \leq i \leq k-1} \left|V_{i,n}\right| = \frac{\mathcal{O}_{\mathcal{P}_{n}}(1)}{n-i-1}.$$ \ Unfortunately, $(\ref{UinBisVin})$ implies that we cannot apply Lemma $\ref{uinZin}$ to $U'_{i,n}$ and $V_{i,n}$.
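The mechanism of Lemma \ref{uinZin} can be illustrated by simulation: with $u_{i,n}=(\log n)^{\alpha}/(n-i-1)^{\beta}$, $\beta>1$, and bounded $Z_{i,n}$, the product of exponentials collapses to $1$ as $n$ grows. The choices $\alpha=1$, $\beta=3/2$, $k=n/2$ and uniform $Z_{i,n}$ below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def product_of_exponentials(n, alpha=1.0, beta=1.5):
    """prod_{i<k} exp(u_{i,n} Z_{i,n}) with k = n//2 and |Z_{i,n}| <= 1."""
    k = n // 2
    i = np.arange(k)
    u = np.log(n) ** alpha / (n - i - 1) ** beta
    z = rng.uniform(-1.0, 1.0, size=k)     # bounded, hence O_P(1)
    return np.exp(np.sum(u * z))

small_n = product_of_exponentials(100)
large_n = product_of_exponentials(10**6)
```

For $n=10^6$ the deterministic bound $\sum_i u_{i,n}\leq (\log n)/\sqrt{n/2}$ already forces the product to within a few percent of $1$.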
However, we have that $$\label{UinBisMoinsVin} U'_{i,n} - V_{i,n} = - \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} + 3 \left( \beta_{i+2,n}^{(4)} - \beta_{i+1,n}^{(4)} \right) - 15 \left( \beta_{i+2,n}^{(6)} - \beta_{i+1,n}^{(6)} \right) + \frac{o_{\mathcal{P}_n}(1)}{(n-i-1)^{3/2}}.$$ \ Now, $$\begin{aligned} \beta_{i+2,n}^{(4)} - \beta_{i+1,n}^{(4)} &= \frac{\lambda_{i+2,n}(s_{i+1,n}^{2})^{2} - \lambda_{i+1,n}(s_{i+2,n}^{2})^{2}} {24(s_{i+1,n}^{2})^{2}(s_{i+2,n}^{2})^{2}}, \quad \textrm{where} \enskip \lambda_{i+e,n} = \sum\limits_{j=i+e}^{n} \lambda_{j} \enskip \textrm{and} \enskip \lambda_{j} = \mu_{j}^{4} - 3(s_{j}^{2})^{2} \\ &= \label{annulation} \frac{ \lambda_{i+2,n}\left[ (s_{i+2,n}^{2})^{2} + 2s_{i+2,n}^{2}s_{i+1}^{2} + (s_{i+1}^{2})^{2} \right] - (\lambda_{i+2,n}+\lambda_{i+1}) (s_{i+2,n}^{2})^{2}} {24(s_{i+1,n}^{2})^{2}(s_{i+2,n}^{2})^{2}} \\ &= \label{beta4} \frac{ \mathcal{O}_{\mathcal{P}_{n}}(1)}{(n-i-1)^{2}}, \end{aligned}$$ since in the numerator of $(\ref{annulation})$, the terms of order $(n-i-1)^{3}$, that is, the terms $\lambda_{i+2,n}(s_{i+2,n}^{2})^{2}$, vanish. \ Similarly, we obtain that $$\label{beta6} \left( \beta_{i+2,n}^{(6)} - \beta_{i+1,n}^{(6)} \right) = \frac{ \mathcal{O}_{\mathcal{P}_{n}}(1)}{(n-i-1)^{2}}.$$ \ Combining $(\ref{UinBisMoinsVin})$, $(\ref{beta4})$, $(\ref{beta6})$, we obtain that $$\begin{aligned} \prod\limits_{i=0}^{k-1} \exp \left( U'_{i,n} - V_{i,n} \right) &= \left\{ \prod\limits_{i=0}^{k-1} \exp \left(-\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} \right) \right\} \left\{ \prod\limits_{i=0}^{k-1} \exp \left( \frac{ \mathcal{O}_{\mathcal{P}_{n}}(1)}{(n-i-1)^{2}} + \frac{o_{\mathcal{P}_n}(1)}{(n-i-1)^{3/2}} \right) \right\} \\ &= \label{isolerOthers} \left\{ \prod\limits_{i=0}^{k-1} \exp \left(-\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} \right) \right\} \left\{ 1 + o_{\mathcal{P}_n}(1) \right\}, \end{aligned}$$ where the last equality follows from Lemma $\ref{uinZin}$.
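The cancellation behind $(\ref{beta4})$ can be checked numerically on stand-in values: if single summands are $\mathcal{O}(1)$ while the sums over $j$ grow like $M:=n-i-1$, the difference $\beta^{(4)}_{i+2,n}-\beta^{(4)}_{i+1,n}$, naively of order $1/M$, is in fact of order $1/M^{2}$ (all numerical inputs below are arbitrary illustrative choices):

```python
def beta4_diff(M):
    """beta^{(4)}_{i+2,n} - beta^{(4)}_{i+1,n} with M playing the role of n-i-1.

    Stand-in scalings: single summands (s_{i+1}^2, lambda_{i+1}) are O(1);
    sums over j (s_{i+2,n}^2, lambda_{i+2,n}) are O(M)."""
    s_single, lam_single = 1.3, 0.8
    S, lam_sum = 2.0 * M, 0.5 * M
    s_big = S + s_single                      # s_{i+1,n}^2 = s_{i+2,n}^2 + s_{i+1}^2
    num = lam_sum * s_big**2 - (lam_sum + lam_single) * S**2
    return num / (24 * s_big**2 * S**2)

# each term of the difference is O(1/M); after the cancellation of the
# lambda_{i+2,n} (s_{i+2,n}^2)^2 contributions the difference is O(1/M^2):
scaled = [abs(beta4_diff(M)) * M**2 for M in (10**2, 10**4, 10**6)]
```

The rescaled values stabilize to a constant as $M$ grows, confirming the $1/M^{2}$ rate.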
Notice that $\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} = \frac{\mathcal{O}_{\mathcal{P}_{n}}(1)}{n-i-1}$, so that the corresponding factor is outside the scope of Lemma $\ref{uinZin}$. Finally, $(\ref{corrUinZin})$ and $(\ref{isolerOthers})$ imply that $$\prod\limits_{i=0}^{k-1} C_{i} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left( U'_{i,n} - V_{i,n} + U_{i,n}^{2} A_{i,n} - V_{i,n}^{2} B_{i,n} \right) = \left\{ \prod\limits_{i=0}^{k-1} C_{i} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left( - \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} \right) \right\} \left\{ 1 + o_{\mathcal{P}_n}(1) \right\}.$$ ### The other factors Therefore, in order to conclude, it is enough to prove that $$\prod\limits_{i=0}^{k-1} L_{i,n} = 1+o_{\mathcal{P}_n} \left( 1 \right), \quad \textrm{where} \enskip L_{i,n} := C_{i} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left(-\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} \right).$$ We have that $$\label{rrb} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} = 1 + \frac{s_{i+1}^{2}}{2s_{i+2,n}^{2}} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2},$$ and $$\label{rrc} \exp \left(-\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} \right) = 1 - \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2}.$$ We have that $$\frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} = \left(1 + \frac{s_{i+1}^{2}}{s_{i+2,n}^{2}}\right)^{1/2} \quad \textrm{and} \quad \frac{s_{i+1}^{2}}{s_{i+2,n}^{2}} = \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{n-i-1}.$$ Therefore, $(\ref{rrb})$ follows readily from Lemma $\ref{devLimLandau}$, applied with the function $f : x \mapsto (1+x)^{1/2}$. Similarly, we get $(\ref{rrc})$ by applying Lemma $\ref{devLimLandau}$ with the function $f : x \mapsto \exp(x)$.
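Lemma \ref{devLimLandau} is a second-order Taylor bound; for the two functions used here its content is simply that the remainders after the linear term are $\mathcal{O}(x^{2})$ uniformly for small $x$, which is easy to check numerically:

```python
import numpy as np

x = np.concatenate([np.linspace(-0.1, -1e-3, 200), np.linspace(1e-3, 0.1, 200)])

rem_sqrt = np.abs(np.sqrt(1 + x) - (1 + x / 2))   # remainder for f(x) = (1+x)^{1/2}
rem_exp = np.abs(np.exp(x) - (1 + x))             # remainder for f(x) = exp(x)

ratio_sqrt = np.max(rem_sqrt / x**2)   # stays bounded: the Landau O(x^2) constant
ratio_exp = np.max(rem_exp / x**2)
```

The ratios stay near the Taylor coefficients ($1/8$ and $1/2$ respectively), which is what feeding $x=\mathcal{O}_{\mathcal{P}_n}(1)/(n-i-1)$ into the lemma exploits.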
We have that $$\label{CiRecap} C_{i} = 1 + \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}}m_{i+1} - \frac{s_{i+1}^{2}}{2s_{i+2,n}^{2}} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2}.$$ Recall that $$C_{i} = \int\displaystyle \exp \left(v_{i}(y) \right) \widetilde{p}_{i+1}(y) dy \quad \textrm{where} \quad v_{i}(y):=-\frac{(y-m_{i+1})^2}{2s_{i+2,n}^2}+\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}}y.$$ A Taylor expansion implies the existence of $w_{i}(y)$ with $|w_{i}(y)| \leq |v_{i}(y)|$ s.t. $$\label{rrd} \exp(v_{i}(y)) = 1 + v_{i}(y) + \frac{v_{i}(y)^2}{2}\exp(w_{i}(y)).$$ Now, $$\begin{aligned} \int (1+v_{i}(y))\widetilde{p}_{i+1}(y) dy &= \int \left[ 1 - \frac{(y-m_{i+1})^2}{2s_{i+2,n}^2} + \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} y \right] \widetilde{p}_{i+1}(y) dy \\ &= \int \widetilde{p}_{i+1}(y)dy - \frac{1}{2s_{i+2,n}^2} \int (y-m_{i+1})^2 \widetilde{p}_{i+1}(y)dy + \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} \int y \widetilde{p}_{i+1}(y) dy \\ &= 1 - \frac{s_{i+1}^2}{2s_{i+2,n}^2} + \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} m_{i+1}. \end{aligned}$$ \ Consequently, it is enough to prove the following Fact. \[Ji\] We have that $$\label{rrf} J_{i} := \int \frac{v_{i}(y)^2}{2} \exp(w_{i}(y))\widetilde{p}_{i+1}(y) dy = \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2}.$$ We have that $|w_{i}(y)| \leq |v_{i}(y)|$. Moreover, $w_{i}(y)$ and $v_{i}(y)$ are actually of the same sign, so that $\exp(w_{i}(y)) \leq 1 + \exp(v_{i}(y))$. Therefore, $$J_{i} \leq J_{i}^{(1)}+J_{i}^{(2)} \enskip \textrm{where} \enskip J_{i}^{(1)} := \int \frac{v_{i}(y)^2}{2} \widetilde{p}_{i+1}(y) dy \enskip \textrm{and} \enskip J_{i}^{(2)} := \int \frac{v_{i}(y)^2}{2} \exp(v_{i}(y)) \widetilde{p}_{i+1}(y) dy.$$ Now, expanding $v_{i}(y)$, we get readily that $$J_{i}^{(1)} = \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2}.$$ \ Fix $\epsilon > 0$. 
\ Then there exist $\alpha_{\epsilon}$, $\beta_{\epsilon}$, $\gamma_{\epsilon}$ positive and a compact $K_{\epsilon}$ s.t., for all $n$ large enough, $$\mathcal{P}_n \left( B_{n}^{\epsilon} := \bigcap\limits_{i = 0}^{k-1} \left\{ t_{i} \in K_{\epsilon} \enskip ; \enskip |m_{i+1}| \leq \alpha_{\epsilon} \enskip ; \enskip \frac{1}{2s_{i+2,n}^2} \leq \frac{\beta_{\epsilon}}{n-i-1} \enskip ; \enskip \left|\frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} \right| \leq \frac{\gamma_{\epsilon}}{n-i-1} \right\} \right) \geq 1 - \epsilon.$$ \ *The following lines hold on $B_{n}^{\epsilon}$*. \ For all real $y$, we have that $$|v_{i}(y)| \leq \frac{\beta_{\epsilon}(|y|+\alpha_{\epsilon})^2}{n-i-1} + \frac{\gamma_{\epsilon} |y|}{n-i-1}.$$ For $|y| \geq \alpha_{\epsilon}$, we have that $|y - m_{i+1}| \geq |y| - \alpha_{\epsilon}$, so that $$v_{i}(y) \leq - \frac{\beta_{\epsilon} (|y|-\alpha_{\epsilon})^2}{n-i-1} + \frac{\gamma_{\epsilon} |y|}{n-i-1}.$$ Therefore, $$\begin{aligned} J_{i}^{(2)} & \leq \frac{1}{2(n-i-1)^2} \int_{|y| \leq \alpha_{\epsilon}} \left[ \beta_{\epsilon} (|y| + \alpha_{\epsilon})^2 + \gamma_{\epsilon} |y| \right]^{2} \exp(v_{i}(y)) \widetilde{p}_{i+1}(y)dy \\ & + \frac{1}{2(n-i-1)^2} \int_{|y| \geq \alpha_{\epsilon}} \left[ \beta_{\epsilon} (|y| + \alpha_{\epsilon})^2 + \gamma_{\epsilon} |y| \right]^{2} \exp\left( - \frac{\beta_{\epsilon} (|y| - \alpha_{\epsilon})^2}{n-i-1} + \frac{\gamma_{\epsilon} |y|}{n-i-1} \right) \widetilde{p}_{i+1}(y)dy. \end{aligned}$$ Clearly, on $B_{n}^{\epsilon}$, the first integral above is bounded by a constant $I_{\epsilon}$. For the second integral, an integration by parts and Assumption $(\mathcal{C}f)$ imply that, on $B_{n}^{\epsilon}$, it is also bounded by a constant $L_{\epsilon}$. So, $$J_{i}^{(2)} = \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2},$$ which concludes the proof.
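The expansion $(\ref{CiRecap})$ can be probed numerically. Taking, purely for illustration, $\widetilde p_{i+1}$ to be a normal density with mean $m_{i+1}$ and variance $s_{i+1}^{2}$, with $s_{i+2,n}^{2}$ of order $N$ and $3\alpha^{(3)}_{i+2,n}/\sigma_{i+2,n}$ of order $1/N$, the error of the first-order approximation of $C_i$ decays like $1/N^{2}$:

```python
import numpy as np

def C_and_approx(N, m=0.4, s1sq=1.3):
    """Numerical C_i against 1 + kappa*m - s1sq/(2*s2sq), with scales of order N.

    Stand-ins: p~_{i+1} is normal with mean m, variance s1sq; s2sq ~ N; kappa ~ 1/N."""
    s2sq = 2.0 * N
    kappa = 0.7 / N
    y = np.linspace(m - 12 * np.sqrt(s1sq), m + 12 * np.sqrt(s1sq), 400001)
    p = np.exp(-(y - m)**2 / (2 * s1sq)) / np.sqrt(2 * np.pi * s1sq)
    v = -(y - m)**2 / (2 * s2sq) + kappa * y
    C = np.sum(np.exp(v) * p) * (y[1] - y[0])
    return C, 1 + kappa * m - s1sq / (2 * s2sq)

err_small = abs(np.subtract(*C_and_approx(50)))
err_large = abs(np.subtract(*C_and_approx(500)))
```

Multiplying $N$ by $10$ divides the error by roughly $100$, consistent with the $\mathcal{O}_{\mathcal{P}_n}(1)/(n-i-1)^{2}$ remainder.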
\ Combining $(\ref{CiRecap})$, $(\ref{rrb})$ and $(\ref{rrc})$, we obtain that $$\begin{aligned} L_{i,n} &:= C_{i} \frac{\sigma_{i+1,n}}{\sigma_{i+2,n}} \exp \left(-\kappa_{i,n} m_{i+1} \right) \quad \textrm{where} \quad \kappa_{i,n} := \frac{3\alpha_{i+2,n}^{(3)}}{\sigma_{i+2,n}} \\ &= \left[ 1 + \kappa_{i,n} m_{i+1} - \frac{s_{i+1}^{2}}{2s_{i+2,n}^{2}} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2} \right] \left[ 1 + \frac{s_{i+1}^{2}}{2s_{i+2,n}^{2}} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2} \right] \left[ 1 - \kappa_{i,n} m_{i+1} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2} \right] \\ &= \left[ 1 + \kappa_{i,n} m_{i+1} - \frac{s_{i+1}^{2}}{2s_{i+2,n}^{2}} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2} \right] \left[ 1 - \kappa_{i,n} m_{i+1} + \frac{s_{i+1}^{2}}{2s_{i+2,n}^{2}} + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2} \right] \\ &= 1 + \frac{\mathcal{O}_{\mathcal{P}_n}(1)}{(n-i-1)^2}. \end{aligned}$$ \ Therefore, we may write $L_{i,n} = 1 + \frac{W_{i,n}}{(n-i-1)^2}$, where $\max\limits_{0 \leq i \leq k-1} |W_{i,n}| = \mathcal{O}_{\mathcal{P}_n}(1)$. Then, we get from Lemma $\ref{devLimLandau}$ applied with $f : x \mapsto \log(1+x)$ that $$\log(L_{i,n}) = \log \left(1 + \frac{W_{i,n}}{(n-i-1)^2} \right) = \frac{W_{i,n}}{(n-i-1)^2} + \left(\frac{W_{i,n}}{(n-i-1)^2}\right)^{2} \mathcal{O}_{\mathcal{P}_n}(1).$$ Therefore, $$\log \left( \prod\limits_{i=0}^{k-1} L_{i,n} \right) = \sum\limits_{i=0}^{k-1} \log(L_{i,n}) = \mathcal{O}_{\mathcal{P}_n}(1) \sum\limits_{i=0}^{k-1} \frac{1}{(n-i-1)^2} = o_{\mathcal{P}_n}(1).$$ Consequently, $$\prod\limits_{i=0}^{k-1} L_{i,n} = 1 + o_{\mathcal{P}_n}(1).$$ Finally, we have proved that there exists $(B_{n})_{n \geq 1} \in \mathcal{A}_{\rightarrow 1}$ s.t. for any $n \geq 1$, $$p_{k}(Y_{1}^{k}) = \prod\limits_{i=0}^{k-1} \Gamma_{i} \quad \textrm{on} \enskip B_{n}$$ and $$\prod\limits_{i=0}^{k-1} \Gamma_{i} = g_{k}(Y_{1}^{k}) \left[1 + o_{\mathcal{P}_n}(1)\right].$$
--- abstract: 'It is shown how to introduce a geometric description of the algebraic approach to non-relativistic quantum mechanics. It turns out that the GNS representation provides not only a symplectic but also a Hermitian realization of a ‘quantum Poisson algebra’. We discuss alternative Hamiltonian structures emerging out of different GNS representations which provide a natural setting for quantum bi-Hamiltonian systems.' author: - | Dariusz Chruściński\ Institute of Physics, Nicolaus Copernicus University,\ Grudziadzka 5/7, 87–100 Toruń, Poland\ \ Giuseppe Marmo\ Dipartimento di Scienze Fisiche, Università “Federico II” di Napoli\ and Istituto Nazionale di Fisica Nucleare, Sezione di Napoli,\ Complesso Universitario di Monte Sant'Angelo,\ Via Cintia, I-80126 Napoli, Italy title: '**Remarks on the GNS Representation and the Geometry of Quantum States**' --- Introduction ============ The important role played by geometry in the formulation of theories aimed at the descriptions of fundamental interactions cannot be denied. At the moment, classical theories like mechanics, electromagnetism, Einstein’s General Relativity, Yang-Mills gauge theories and thermodynamics have reached a very high degree of geometrization. The same cannot be said for quantum theories, even though the relevance of geometric structures, like the symplectic structure, may be traced back to Segal and Mackey [@Mackey; @Segal], and since then quite a few papers have been written on the subject \[3–16\]. For historical reasons [@BOOK] the geometrical structures are hidden in the standard algebraic setting of quantum mechanics (notably the Dirac formulation) because one starts from the Hilbert space and identifies the space of physical states with the associated complex projective space, which in a natural way calls for a differential geometric treatment [@Kibble; @Cantoni1; @Cirelli; @K-bundle]; however, for simplicity, computations are carried out on the initial Hilbert space.
In this approach the $\mathbb{C}^*$-algebra, which contains observables as real elements, arises as a derived concept: the algebra of complex-valued functions on the complex projective space endowed with an appropriate associative product, albeit a non-commutative and non-local one. In this short note we would like to consider, on the contrary, a different approach, often called an algebraic one, where the Hilbert space loses its primary importance. The primary object one starts with is an abstract $\mathbb{C}^*$-algebra containing an algebra of quantum observables and the Hilbert space is a secondary concept which may be derived by constructing a particular representation of $\mathcal{A}$ in the spirit of the GNS construction (see e.g. [@Bratteli; @Kadison]). The algebraic approach started with the work of Haag and Kastler [@Haag-Kastler] and was then used mainly in the mathematical approach to quantum field theory [@Haag]. This approach is much more flexible than the standard one: a Hilbert space is not a priori given but is derived by using a given state of the system. Different states give rise to different realizations of the original algebra as an algebra of operators, that is, one is able to derive different Hilbert spaces, inner products and multiplication rules in the space of operators acting in the constructed Hilbert space. The analogy that we pursue is the following: in many classical situations one is presented with a Poisson manifold and looks for a symplectic realization of its Poisson algebra. Here, in a similar way, we would like to consider the ‘quantum Poisson algebra’ of the complexification of the space of observables and search for a Hermitian realization of it. We observe that this ‘Hermitian realization’ is the essence of the well-known GNS construction. We start with a $\mathbb{C}^\star$-algebra $\mathcal{A}$.
It may be decomposed into two real vector spaces of real and imaginary elements $$\mathcal{A}_{\rm re} = \{ a+a^*\ | \ a \in \mathcal{A}\}\ , \ \ \ \ \mathcal{A}_{\rm im} = \{ a-a^*\ | \ a \in \mathcal{A}\}\ ,$$ respectively. There is a one-to-one correspondence between $\mathcal{A}_{\rm re}$ and $\mathcal{A}_{\rm im}$ by means of multiplication by ‘$i$’. Consider the space of density states $\mathcal{D}(\mathcal{A})$ over $\mathcal{A}$, which is a convex body spanned by extremal (pure) states $\mathcal{D}^1(\mathcal{A})$. Out of the vector space $\cal A$ we may construct the dual space $L(\mathcal{A})$ by taking real combinations of $\mathcal{D}^1(\mathcal{A})$; we may then immerse $\mathcal{A}_{\rm re}$ into the space of linear functionals on the real vector space $L(\mathcal{A})$. Now, using the commutator product in $\mathcal{A}$, the linear subspace $\mathcal{A}_{\rm re}$ induces a Poisson structure on $\mathcal{D}(\mathcal{A})$. This shows that the real vector space $L(\mathcal{A})$ constructed out of $\mathcal{D}^1(\mathcal{A})$ may be thought of as the dual to $\mathcal{A}_{\rm re}$ (or equivalently to $\mathcal{A}_{\rm im}$). $L(\mathcal{A})$ may be endowed with a Poisson structure and gives rise to the Lie algebra of Hamiltonian vector fields. It turns out that Hamiltonian vector fields associated with linear maps on $L(\mathcal{A})$, i.e. elements of $\mbox{Lin}(L(\mathcal{A}),\mathbb{R})$, may be thought of as derivations of the product available on $\mathcal{A}$ or of the pointwise product that we may construct on the polynomials of $\mbox{Lin}(L(\mathcal{A}),\mathbb{R})$. The pointwise (commutative) product identifies the Poisson bracket as that of a Poisson algebra (a commutative algebra on which derivations act).
Moreover, one may introduce a noncommutative $\star$-product in $\mbox{Lin}(L(\mathcal{A}),\mathbb{C})$, that is, in the $\mathbb{C}^*$-algebra of complex-valued functions on the space of states $L(\mathcal{A}) \supset \mathcal{D}(\mathcal{A})$. In this way the Poisson bracket ‘$f\star g - g \star f$’ with $f,g \in \mbox{Lin}(L(\mathcal{A}),\mathbb{C})$ should be considered as a quantum bracket in the sense of Dirac. Summarizing: the Poisson bracket on the dual space to $\mathcal{A}_{\rm re}$ may be used to generate derivations for the commutative algebra of polynomials and therefore as a ‘classical’ Poisson algebra. The same Poisson bracket when restricted to linear functions defines derivations for the usual (noncommutative) operator product defined in the space of operators but thought of as functions on the dual space. This gives rise to a ‘quantum’ Poisson algebra. Having noticed that on the space of self-adjoint elements of a $C^*$-algebra one has a Lie algebra structure and a Jordan structure, one may “geometrize”, i.e. describe these products in terms of tensorial objects, by using functions and tensors defined on the dual of the $C^*$-algebra (Lie algebra). In this way we obtain a Poisson manifold and a Lie-Jordan product associated with a Jordan tensor. The main idea for our geometrization uses the dual space of the $C^*$-algebra. We recall that in the works of Gelfand and collaborators the study of the dual spaces of Banach algebras has been found extremely useful. In the sixties Fell [@Fell] considered dual spaces of $C^*$-algebras and Banach algebras, providing many interesting results. Here we would like to take a different route and, to present the geometrical ideas more clearly, we restrict to finite dimensional algebras. We shall use, however, an intrinsic formulation, paving the way to an extension to the infinite dimensional situation.
We shall use coordinates only to allow the reader to become more familiar with our construction. The paper is organized as follows: we start with a short review of the geometric formulation of standard non-relativistic Quantum Mechanics. Then we review the GNS construction and provide a simple illustration in the case of a matrix algebra in section \[GNS-IL\]. Section \[SYMP\] shows how the GNS construction gives rise to the symplectic realization of the Poisson algebra of observables: either via a corresponding Hilbert space defining the representation space of GNS or via the associated complex projective space. In section \[Bi\] we discuss alternative Hamiltonian structures emerging out of different GNS realizations of the original $\mathbb{C}^*$-algebra. It turns out that the GNS representation provides a natural arena for quantum bi-Hamiltonian systems. Final conclusions are collected in the last section. Geometric formulation of Quantum Mechanics ========================================== We first review very briefly the geometrical formulation of Quantum Mechanics starting with a standard Hilbert space formulation. The essential steps are the following: The probabilistic interpretation requires that the physical carrier space of our formulation should be identified with the space of rays $$\mathbb{C}_0 \ \longrightarrow\ \mathcal{H}_0\ \longrightarrow\ \mathcal{R}(\mathcal{H})\ ,$$ where $\mathbb{C}_0 = \mathbb{C}- 0$ and $\mathcal{H}_0 = \mathcal{H} - \{0\}$. The [*true*]{} space of quantum states – the space of rays $\mathcal{R}(\mathcal{H})$ – is nothing but the complex projective Hilbert space $\mathbb{P}\mathcal{H}$. Now, to replace vectors and linear transformations by tensor fields we have to replace $\mathcal{H}$ with $T\mathcal{H}$, its tangent bundle, which may be identified with the Cartesian product $T\mathcal{H} \sim \mathcal{H} \times \mathcal{H}$.
Any vector $\varphi \in {\cal H}$ gives rise to a vector field $X_\varphi : {\cal H} \longrightarrow T{\cal H}$ defined by $$\label{} X_\varphi(\psi) := (\psi,\varphi) \in {\cal H}\times {\cal H} \ .$$ Similarly, an endomorphism $A : {\cal H} \longrightarrow {\cal H}$ gives rise to a map $T_A : T{\cal H} \longrightarrow T{\cal H}$ defined as follows $$\label{} T_A(\psi,\varphi):= (\psi,A\varphi)\ .$$ Moreover, one introduces a complex structure ${\cal J} : T{\cal H} \longrightarrow T{\cal H}$ defined by the $(1,1)$-tensor field $$\label{} {\cal J}(\psi,\varphi):= (\psi,i\varphi)\ ,$$ and a linear structure $\Delta : {\cal H} \longrightarrow T{\cal H}$ defined by the Liouville vector field $$\label{} \Delta(\psi):= (\psi,\psi)\ ,$$ and finally the so-called phase vector field $\Gamma : {\cal H} \longrightarrow T{\cal H}$ defined by $\Gamma := {\cal J} \circ \Delta\ $, i.e. $$\label{} \Gamma(\psi) =(\psi,i\psi) \ .$$ In this way the Hermitian product $\< \psi|\varphi\>$ on $\mathcal{H}$ is replaced by a Hermitian tensor field $$\label{} {\cal K}(X_{\varphi_1},X_{\varphi_2})(\psi) := \<\varphi_1|\varphi_2\>\ .$$ On the corresponding real differential manifold $\mathcal{H}^{\rm R}$ the real part of ${\cal K}$ is a Riemannian metric tensor ‘$g$’ while its imaginary part is a symplectic tensor field ‘$\omega$’ $$\label{} \mathcal{K} = g + i\omega\ ,$$ together with $$\label{} \omega(X,Y) = g(\mathcal{J}X,Y)\ .$$ The above tensor fields endow $\mathcal{H}$ with the structure of a Kähler manifold. When written in contravariant form, $G$ and $\Lambda$, respectively, give rise to two bi-differential operators which may be used to define two brackets on the space of one-forms. We should notice that the symmetric tensor may also be associated with a second order differential operator (Laplacian). These tensor fields may be used to define the metric structure and Poisson bracket on the space of rays $\mathcal{R}(\mathcal{H})$.
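The compatibility relations $\mathcal{K}=g+i\omega$ and $\omega(X,Y)=g(\mathcal{J}X,Y)$ can be verified concretely on $\mathcal{H}=\mathbb{C}^{n}$, where $\mathcal{J}$ acts on tangent vectors as multiplication by $i$ (a minimal numerical sketch with randomly chosen vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
phi1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

K = np.vdot(phi1, phi2)     # Hermitian tensor K(X_phi1, X_phi2) = <phi1|phi2>
g = K.real                  # Riemannian part  g(X_phi1, X_phi2)
omega = K.imag              # symplectic part  omega(X_phi1, X_phi2)

# J acts on tangent vectors as multiplication by i: J X_phi = X_{i phi}
g_JX_Y = np.vdot(1j * phi1, phi2).real
```

Here `np.vdot` conjugates its first argument, matching the convention that $\mathcal{K}$ is antilinear in the first slot.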
Note, however, that neither $G$ nor $\Lambda$ can be directly projected from $\mathcal{H}$ to $\mathcal{R}(\mathcal{H})$. It is easy to show that tensor fields which are projectable are given by $$\label{} \widetilde{G} := e^{\sigma} G - \Delta {{\,\otimes\,}}\Delta - \Gamma {{\,\otimes\,}}\Gamma\ ,$$ and $$\label{} \widetilde{\Lambda} := e^\sigma \Lambda - (\Delta {{\,\otimes\,}}\Gamma - \Gamma {{\,\otimes\,}}\Delta) \ ,$$ where the conformal factor $e^\sigma \geq 0$ is defined by $\sigma(\psi) := \ln \<\psi|\psi\>$. Now, the projected tensor fields allow for the definition of two products in the space of functions on $\mathcal{R}(\mathcal{H})$: the symmetric bracket $$\label{} \{f_1,f_2\}_+ := \widetilde{G}(df_1,df_2) + f_1\cdot f_2 \ ,$$ and the antisymmetric Poisson bracket $$\label{} \{f_1,f_2\} := \widetilde{\Lambda}(df_1,df_2) \ .$$ The above operations are defined for arbitrary real valued functions from $\mathcal{F}(\mathcal{R}(\mathcal{H}))$. In this formulation quantum observables are defined to be functions from $\mathcal{F}(\mathcal{R}(\mathcal{H}))$ whose Hamiltonian vector fields are also Killing vector fields, i.e. $$\label{} \mathcal{F}_{\rm K}(\mathcal{R}(\mathcal{H})) := \{\ f\in \mathcal{F}(\mathcal{R}(\mathcal{H})) \ |\ {\bf L}_{X_f} \widetilde{G} = 0 \ \}\ ,$$ where $X_f = \widetilde{\Lambda}(df)$. We call such an ‘$f$’ a [*Kählerian function*]{}. To deal with complex valued functions, we extend these brackets from real valued to complex valued functions by complex linearity. A complex valued function is Kählerian iff both its real and imaginary parts are Kählerian.
On this selected space of Kähler functions we may define an associative bilinear product $f \star g$ corresponding to the Hermitian tensor $\widetilde{\cal K} = \widetilde{G} + i\widetilde{\Lambda}$: $$\label{} f\star g := f \cdot g + \frac 12\, \widetilde{\cal K}(df,dg) \ .$$ One shows that for any two Kähler functions ‘$f$’ and ‘$g$’ the nonlocal product ‘$f \star g$’ defines a Kähler function. Consider now the complexified space $\mathcal{F}^{\mathbb{C}}_{\rm K}(\mathcal{R}(\mathcal{H}))$. Let us observe that any complex valued Kählerian function on $\mathcal{R}(\mathcal{H})$ corresponds to an operator $A \in \mathcal{B}(\mathcal{H})$ $$\label{} A \ \longrightarrow\ f_A([\psi]) := \frac{\<\psi|A\psi\>}{\<\psi|\psi\>}\ ,$$ that is, $f_A$ is an expectation value function. It is easy to show that $$\label{} f_A \star f_B = f_{AB}\ .$$ Quantum observables correspond to real valued Kählerian functions and hence they are represented by Hermitian operators on $\cal H$. The complexified space $\mathcal{F}^{\mathbb{C}}_{\rm K}(\mathcal{R}(\mathcal{H}))$ equipped with the above noncommutative $\star$-product provides a realization of a $\mathbb{C}^*$-algebra corresponding to a $\mathbb{C}^*$-algebra of bounded operators acting on the initial Hilbert space $\mathcal{H}$, i.e. the algebra $\mathcal{B}(\mathcal{H})$ [@K-bundle]. Consider now a general Kählerian manifold $(\mathcal{M},\widetilde{\cal K})$ not necessarily a projective Hilbert space $\mathbb{P}\mathcal{H}$. It is clear that one may define a nonlocal $\star$-product $$\label{} f\star g := f \cdot g + \frac 12\, \widetilde{\cal K}(df,dg) \ ,$$ for arbitrary $f,g \in \mathcal{F}_{\rm K}^{\mathbb{C}}(\mathcal{M})$. Now, for arbitrary $\cal M$ the corresponding space of complex valued Kählerian functions is not closed under $\star$–product. 
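The identity $f_A\star f_B=f_{AB}$ lends itself to a direct numerical test on $\mathcal{R}(\mathbb{C}^{2})=\mathbb{C}P^{1}$. In the affine coordinate $w$ (so $\psi=(1,w)$) the contravariant Hermitian tensor acts on differentials as $\widetilde{\mathcal{K}}(df,dg)=2(1+|w|^{2})^{2}\,\partial_{w}f\,\partial_{\bar w}g$; the factor $2$ is the normalization matched here to the $\star$-product above (contravariant conventions differ by such real-versus-complex factors). The sketch uses numerical Wirtinger derivatives:

```python
import numpy as np

rng = np.random.default_rng(2)

def herm(r):
    m = r.standard_normal((2, 2)) + 1j * r.standard_normal((2, 2))
    return m + m.conj().T          # random Hermitian operator

def f(A, w):
    """Expectation-value function f_A([psi]) with psi = (1, w)."""
    psi = np.array([1.0, w])
    return np.vdot(psi, A @ psi) / np.vdot(psi, psi)

def wirtinger(fun, w, h=1e-6):
    """Numerical (d/dw, d/dwbar) via central differences."""
    fx = (fun(w + h) - fun(w - h)) / (2 * h)
    fy = (fun(w + 1j * h) - fun(w - 1j * h)) / (2 * h)
    return 0.5 * (fx - 1j * fy), 0.5 * (fx + 1j * fy)

A, B = herm(rng), herm(rng)
w = 0.3 - 0.7j

dwA, _ = wirtinger(lambda u: f(A, u), w)
_, dwbarB = wirtinger(lambda u: f(B, u), w)

# f_A * f_B + (1/2) Ktilde(df_A, df_B), Ktilde = 2 (1+|w|^2)^2 d_w (x) d_wbar
star = f(A, w) * f(B, w) + (1 + abs(w) ** 2) ** 2 * dwA * dwbarB
target = f(A @ B, w)
```

Both sides agree to numerical precision, realizing the operator product $AB$ through the nonlocal product of expectation-value functions.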
The Poisson bracket $$\label{} \{f,g\} = \frac i2 (f \star g - g \star f) \ ,$$ does belong to $\mathcal{F}_{\rm K}^{\mathbb{C}}(\mathcal{M})$; however, the symmetric bracket $$\label{} \{f,g\}_+ = \frac 12 (f \star g + g \star f) \ ,$$ is in general not a Kählerian function. The condition that the space of Kählerian functions over $\cal M$ is closed with respect to the symmetric bracket puts strong conditions on the Kähler structure. It turns out that it is equivalent to a very intricate geometric property of $\cal M$, namely, that the holomorphic sectional curvature of $\cal M$ is constant [@CMP]. This in turn implies that $\cal M$ is a projective Hilbert space $\mathbb{P}\mathcal{H}$ or the covering space of the symplectic orbit in $u^*(\mathcal{H})$. Thus only orbits of the unitary group are associated with $\mathbb{C}^*$-algebras – they will be given by the generating functions of the Hamiltonian action of the unitary group. After the GNS construction one should be able to prove that realizations of the $\mathbb{C}^*$-algebras are in one-to-one correspondence with the action of the unitary group on the Kähler manifold. Finite dimensional setting ========================== Let us illustrate the above geometrical formulation for a finite dimensional Hilbert space ${\cal H} = \mathbb{C}^{n+1}$. Denote by $\{|e_j\>\}$, with $j=0,1,\ldots,n$, an orthonormal basis in $\mathbb{C}^{n+1}$.
Then for any vector $\psi \in \mathbb{C}^{n+1}$ one has $$\label{} \<e_j|\psi\> = z^j =: q^j + ip^j \ ,$$ and $$\label{} |d\psi\> = dz^j|e_j\> = (d q^j + idp^j)|e_j\> \ .$$ Using Cartesian coordinate system $(q^j,p^k)$ on ${\cal H}^{\rm R}$ one easily finds $$\label{} \Delta = q^j\frac{\partial}{\partial q^j} + p^j \frac{\partial}{\partial p^j} \ ,$$ and $$\label{} \Gamma = p^j\frac{\partial}{\partial q^j} - q^j \frac{\partial}{\partial p^j} \ .$$ Moreover, the Hermitian tensor field reads as follows $$\begin{aligned} \label{} \mathcal{K}(d\psi,d\psi) &=& \overline{(d q^j + idp^j)} {{\,\otimes\,}}(d q^k + idp^k)\<e_j|e_k\> \nonumber \\ &=& (d q^j {{\,\otimes\,}}dq^k + d p^j {{\,\otimes\,}}dp^k)\<e_j|e_k\> \\ &+& i(d q^j {{\,\otimes\,}}dp^k - d p^j {{\,\otimes\,}}dq^k)\<e_j|e_k\>\ \nonumber .\end{aligned}$$ The corresponding contravariant tensors $G$ and $\Lambda$ are therefore given by $$\label{} G = \left( \frac{\partial}{\partial q_j} {{\,\otimes\,}}\frac{\partial}{\partial q_k} + \frac{\partial}{\partial p_j} {{\,\otimes\,}}\frac{\partial}{\partial p_k} \right) \<e_j|e_k\> \ ,$$ and for the Poisson tensor $$\label{} \Lambda = \left( \frac{\partial}{\partial p_j} {{\,\otimes\,}}\frac{\partial}{\partial q_k} - \frac{\partial}{\partial p_k} {{\,\otimes\,}}\frac{\partial}{\partial q_j} \right) \<e_j|e_k\> \ .$$ Finally, one may introduce the following local coordinates on $\mathbb{C}P^n \equiv \mathbb{P}\mathcal{H}$: $$\label{} w_k = \frac{z_k}{z_0} \ ,$$ for $z_0\neq 0$. Using projective coordinates $(w_1,\ldots,w_n)$ one obtains the following formula for $\widetilde{\mathcal{K}}$ $$\label{} \widetilde{\mathcal{K}} = \sum_{i,j=1}^n \frac{(1+ |w|^2)\delta_{ij} - \overline{w}_i w_j}{(1 + |w|^2)^2} \, dw_i {{\,\otimes\,}}d\overline{w}_j \ ,$$ where $|w|^2 = \sum_{k=1}^n w_k \overline{w}_k$. Interestingly, Kählerian functions on complex projective space are eigenfunctions of the corresponding Laplacian $\Delta_n$. 
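Before turning to the spectrum in general, the eigenfunction property is easy to check numerically for $n=1$: under the Bloch parametrization $\psi(\theta)=(\cos\frac{\theta}{2},\sin\frac{\theta}{2})$ the expectation-value function of $\sigma_z$ is $\cos\theta$, and a finite-difference Laplacian on $S^{2}$ returns $-2$ times it:

```python
import numpy as np

# Bloch parametrization psi(theta) = (cos(theta/2), sin(theta/2)); the
# expectation-value function of sigma_z is then the dipole function cos(theta).
theta = np.linspace(0.1, np.pi - 0.1, 4001)
f_sz = np.cos(theta / 2) ** 2 - np.sin(theta / 2) ** 2   # <psi|sigma_z|psi>

# Laplacian on S^2 for a phi-independent function:
# Delta f = (1/sin t) d/dt ( sin t df/dt ), via second-order differences.
h = theta[1] - theta[0]
df = np.gradient(f_sz, h)
lap = np.gradient(np.sin(theta) * df, h) / np.sin(theta)

# away from the grid edges, Delta f = -2 f (eigenvalue -l(l+1) with l = 1)
max_dev = np.max(np.abs(lap[5:-5] + 2.0 * f_sz[5:-5]))
```

The grid endpoints are excluded from the check because `np.gradient` is only first-order accurate at the boundary.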
It is well known [@Ikeda] that the spectrum of the Laplacian on $\mathbb{C}P^n$ is given by[^1] $$\label{} \lambda_{n,l} = - l(n+l)\ , \ \ \ \ l=0,1,2,\ldots\ ,$$ and the corresponding multiplicity of $\lambda_{n,l}$ reads as follows [@Boucetta] $$\label{} N_{n,l} = n(n+2l)\left(\frac{(n+l-1)!}{n!\,l!} \right)^2\ .$$ Note that for $l=1$ one obtains $$\label{} N_{n,1} = n(n+2) = (n+1)^2-1\ ,$$ that is, it reproduces the dimension of the space of traceless Hermitian operators in $\mathbb{C}^{n+1}$. Now, one may prove that $f$ is Kählerian iff $$\label{} \Delta_n f = 0 \ , \ \ \ \mbox{or} \ \ \ \Delta_n f = \lambda_{n,1} f \ ,$$ that is, $f$ is either a zero mode of $\Delta_n$, or it is an eigenvector of $\Delta_n$ corresponding to the first nonvanishing eigenvalue ‘$-(n+1)$’. Since the zero modes span a 1-dimensional space, one finds that the space of Kählerian functions is $(n+1)^2$–dimensional, i.e. has the same dimension as the space of Hermitian operators in $\mathbb{C}^{n+1}$. To see how this works, let us consider the simplest case $n = 1$. The corresponding projective space $\mathbb{C}P^1$ is given by the Bloch sphere $S^2$ and the eigenvalue problem $\Delta_1 f = \lambda_{1,1} f$ is well known from the theory of angular momentum. One has $$\label{Ylm} \Delta_1 Y_{lm} = -l(l + 1)Y_{lm}\ ,$$ where $Y_{lm}$ are spherical harmonics and the integer $m$ runs from $-l$ to $l$. Note that (\[Ylm\]) implies that $l = 0$ or $l = 1$. In the first case $Y_{00}$ defines a constant function on $S^2$, whereas in the second case we have three independent dipole functions $Y_{11} = x$, $Y_{1-1} = y$, and $Y_{10} = z$. A constant function corresponds to the identity operator $\mathbb{I}$. One easily checks that the dipole functions correspond to the Pauli matrices: $\sigma_x$, $\sigma_y$, and $\sigma_z$. Review of the GNS construction ============================== The geometrization we have presented starts from the Hilbert space formulation of Quantum Mechanics.
Now, we would like to consider directly the $\mathbb{C}^*$-algebra approach and provide a direct geometrization of this approach. According to the algebraic approach to quantum theory [@Haag-Kastler; @Haag; @B-Segal] the basic notion is the space of observables which consists of real elements of a $\mathbb{C}^*$-algebra with unity $\mathcal{A}$. Note that observables carry the structure of a Jordan algebra equipped with the symmetric Jordan product $$\label{} a \circ b := \frac 12 (ab + ba)\ ,$$ and of a Lie algebra with the antisymmetric Lie product $$\label{} [a ,b] := \frac i2 (ab - ba)\ .$$ These two products recover the original product in $\cal A$: $$\label{} ab = a \circ b - i [a,b]\ .$$ In this approach states are represented by positive, normalized linear functionals on $\mathcal{A}$, that is $\omega \in {\cal D}(\mathcal{A})$ (set of states over $\mathcal{A}$) if for any $a \in \mathcal{A}$ one has $\omega(a^*a)\geq 0$ and $\omega(\oper) =1$, where $\oper$ stands for a unit element in $\mathcal{A}$. In particular, the set of states ${\cal D}(\mathcal{A})$ may be embedded, $\mathcal{D}(\mathcal{A}) \hookrightarrow L(\mathcal{A})$, into the dual of ${\cal A}$. The Hilbert space, which in the traditional Schrödinger formalism is considered as a primary object, no longer plays this distinguished role. In the algebraic approach it appears as a secondary object which is constructed out of a selected state of the system under consideration. The construction which associates with each state $\omega$ over $\mathcal{A}$ a particular Hilbert space $\mathcal{H}_\omega$ is known as the GNS-construction: note that $\omega$ defines the following pairing between elements from $\mathcal{A}$ $$\label{} \< a | b\>_\omega = \omega(a^*b)\ .$$ Positivity of $\omega$ guarantees that $\< a|a\>_\omega \geq 0$ but this pairing may be degenerate, that is, one may have $\< a|a\>_\omega = 0$ for $a \neq 0$.
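The interplay of the Jordan and Lie products can be illustrated concretely on matrix algebras; a Python sketch (random Hermitian matrices stand in for the real elements of $\mathcal{A}$):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def hermitian(rng, d):
    # a random Hermitian ("real") element of the C*-algebra B(C^d)
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

a, b = hermitian(rng, d), hermitian(rng, d)

jordan = (a @ b + b @ a) / 2        # a o b  (symmetric)
lie = 1j * (a @ b - b @ a) / 2      # [a, b] (antisymmetric in a and b)

# both products stay Hermitian, and together they recover ab = a o b - i [a, b]
assert np.allclose(jordan, jordan.conj().T)
assert np.allclose(lie, lie.conj().T)
assert np.allclose(a @ b, jordan - 1j * lie)
```

Note that the factor $i$ in the Lie product is what keeps $[a,b]$ inside the real (Hermitian) elements.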
To cure this problem one introduces the so-called Gelfand ideal $\mathcal{J}_\omega$ consisting of all elements $a \in \mathcal{A}$ such that $\omega(a^*a)=0$. The set of classes $\mathcal{A}/\mathcal{J}_\omega$ defines a pre-Hilbert space with the positive definite scalar product $$\label{scalar} \< \Psi_a|\Psi_b\> = \omega(a^*b)\ ,$$ where $\Psi_a$ and $\Psi_b$ stand for the equivalence classes of $a$ and $b$, respectively: $$\label{} \Psi_a = [ a + \mathcal{J}_\omega ] \ , \ \ \ \ \Psi_b = [ b + \mathcal{J}_\omega ] \ .$$ Formula (\[scalar\]) does not depend on the choice of elements $a$ and $b$ from the classes $\Psi_a$ and $\Psi_b$. Finally, completing $\mathcal{A}/\mathcal{J}_\omega$ in the norm topology induced by the scalar product (\[scalar\]) one obtains a Hilbert space $\mathcal{H}_\omega$. This construction gives rise to the following representation of $\mathcal{A}$: for any $a \in \mathcal{A}$ one defines a linear operator $\pi_\omega(a)$ acting on $\mathcal{H}_\omega$ as follows $$\label{} \pi_\omega(a)\Psi_b = \Psi_{ab} \ ,$$ where $b$ is any element from the class $\Psi_b$. Moreover, if $\pi_\omega$ is a faithful representation (that is, $a\neq 0 \Longrightarrow \pi_\omega(a) \neq 0$) then the operator norm of $\pi_\omega(a)$ equals the $\mathbb{C}^*$-norm of $a$ in $\mathcal{A}$. It is clear that the GNS-construction provides a cyclic representation with a cyclic vector $\Omega \in \mathcal{H}_\omega$ corresponding to the class of the unit element in $\mathcal{A}$, i.e. $\Omega = \Psi_\oper$. Moreover, $$\label{} \omega(a) = \< \Omega| \pi_\omega(a) | \Omega\> \ .$$ By duality, $\cal A$ acts on ${\cal D}({\cal A})$ and hence the Hilbert space corresponding to a state $\omega \in {\cal D}({\cal A})$ is nothing but an orbit of $\cal A$ passing through $\omega$, i.e. $\mathcal{H}_\omega \equiv \mathcal{A} \cdot \omega$.
Note that given any element $b \in \mathcal{A}$ one obtains a new vector $\Psi=\pi_\omega(b)\Omega \in\mathcal{H}_\omega$. If $\Psi$ has norm one, this defines a new state $\omega_\Psi$ over $\mathcal{A}$ given by $$\label{vector} \omega_\Psi(a) = \< \Psi| \pi_\omega(a) | \Psi\> \ ,$$ or equivalently $$\label{} \omega_\Psi(a) = \omega(b^*ab)\ ,$$ for all $a \in \mathcal{A}$. States over $\mathcal{A}$ defined by (\[vector\]) are called vector states of the representation $\pi_\omega$. More general states may be defined by density operators $\rho$ in $\mathcal{B}(\mathcal{H}_\omega)$ via $$\label{rho-states} \omega_\rho(a) = \mbox{Tr}( \rho\,\pi_\omega(a))\ .$$ The set of all states (\[rho-states\]) is called the folium of the representation $\pi_\omega$. Let us recall that two representations $\pi_1$ and $\pi_2$ of $\mathcal{A}$ defined on two Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, are equivalent, if there exists a unitary intertwiner $U : \mathcal{H}_1 \longrightarrow \mathcal{H}_2$ such that $$\label{} U\pi_1(a)U^* = \pi_2(a)\ ,$$ for any $a \in \mathcal{A}$. The GNS representation is universal in the following sense: if $\pi$ is a cyclic representation of $\mathcal{A}$ defined on $\mathcal{H}$ with a normalized cyclic vector $\Psi$, then $\pi$ is equivalent to the GNS representation constructed from the vector state $\omega_\Psi(a) = \<\Psi|\pi(a)|\Psi\>$. Now, a state $\omega$ over $\mathcal{A}$ is pure if and only if it cannot be written as a convex combination of other states from $\mathcal{D}(\mathcal{A})$. It is clear that the set of pure states (denoted by $\mathcal{D}^1(\mathcal{A})$) defines the set of extremal points of the convex body $\mathcal{D}(\mathcal{A})$. The importance of pure states follows from the following theorem: \[IRREP\] A GNS representation $\pi_\omega$ of $\mathcal{A}$ is irreducible if and only if $\omega$ is a pure state over $\mathcal{A}$.
Illustration: GNS for matrix algebra {#GNS-IL} ==================================== To illustrate how the Hilbert space emerges out of a $\mathbb{C}^*$-algebra $\mathcal{A}$ consider the following simple example. Let $\mathcal{A} = \mathcal{B}(\mathbb{C}^n)$, i.e. the algebra of $n \times n$ complex matrices. Any positive semi-definite operator $\omega \in \mathcal{B}(\mathbb{C}^n)$ with $\mbox{Tr}\,\omega = 1$ defines a state over $\mathcal{A}$ via $$\label{} \omega(A) = \mbox{Tr}(\omega A)\ ,$$ for $A \in \mathcal{A}$. Now, for any $A,B \in \mathcal{A}$ one defines the inner product $$\label{inner} \<A|B\>_\omega = \omega(A^*B) = \mbox{Tr}(B\omega A^*) \ .$$ Let $\omega$ be a rank-1 projector. Then there is a basis $\{e_k\}$ in $\mathbb{C}^n$ such that $\omega = |e_1\>\<e_1|$. Hence $$\label{} \<A|B\>_\omega = \sum_{k=1}^n \overline{A}_{k1} B_{k1} =: \sum_{k=1}^n \overline{a}_{k} b_{k}\ ,$$ with $a_k := A_{k1}$ and $b_k:=B_{k1}$. Note that the corresponding Gelfand ideal is defined as follows: $$\label{} \mathcal{J}_\omega = \{ \, X \in \mathcal{A}\ | \ X_{k1}=0\ , k=1,\ldots,n\, \}\ ,$$ that is, $$\label{} \<A+X|B+Y\>_\omega = \<A|B\>_\omega\ ,$$ for any $X,Y\in \mathcal{J}_\omega$. This shows that the Hilbert space $\mathcal{H}_\omega \equiv {\cal A}/\mathcal{J}_\omega \subset {\cal A}^*$ emerging out of a rank-1 projector is nothing but $\mathbb{C}^n$. It is, therefore, clear that the GNS representation of $\mathcal{A}$ in $\mathcal{H}_\omega$ reproduces the defining representation of $\mathcal{B}(\mathbb{C}^n)$. To see that the Hilbert space does indeed depend upon the state over $\mathcal{A}$, consider the rank-$m$ density operator in $\mathcal{B}(\mathbb{C}^n)$ given by $\omega = p_1 |e_1\>\<e_1| + \ldots + p_m |e_m\>\<e_m|$, with $p_1,\ldots,p_m >0$ and $p_1 + \ldots + p_m =1$.
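Before turning to the rank-$m$ case, the rank-1 computation can be verified numerically; a Python sketch (with indices starting at 0, so $e_1$ is the first standard basis vector):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
omega = np.zeros((n, n), dtype=complex)
omega[0, 0] = 1.0                      # rank-1 projector |e_1><e_1|

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# GNS inner product <A|B>_omega = Tr(B omega A*) ...
gns = np.trace(B @ omega @ A.conj().T)

# ... reduces to the C^n inner product of the first columns a_k = A_{k1}, b_k = B_{k1}
assert np.isclose(gns, np.vdot(A[:, 0], B[:, 0]))

# pi_omega(A) acts on the class of B through the first column: (AB)e_1 = A(Be_1)
assert np.allclose((A @ B)[:, 0], A @ B[:, 0])
```

Only the first column of a matrix survives the quotient by $\mathcal{J}_\omega$, which is exactly the identification $\mathcal{H}_\omega \cong \mathbb{C}^n$.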
One obtains $$\label{AB-m} \<A|B\>_\omega = \sum_{k=1}^n \left(p_1 \overline{A}_{k1} B_{k1} + \ldots + p_m \overline{A}_{km} B_{km} \right) =: \sum_{k=1}^n \left( \overline{a}^{(1)}_{k} b^{(1)}_{k} + \ldots + \overline{a}^{(m)}_{k} b^{(m)}_{k} \right) \ ,$$ where $$\label{} {a}^{(j)}_{k} = \sqrt{p_j}\, A_{kj} \ , \ \ \ \ \ {b}^{(j)}_{k} = \sqrt{p_j}\, B_{kj}\ .$$ The r.h.s. of (\[AB-m\]) may be called the “normal form” of the Hermitian product. This construction shows very clearly that the Hermitian product on the Hilbert space we have constructed depends on the state. In a sense the “preparation” of the state $\omega$ selects the Hermitian structure in ${\cal H}_\omega$. Note that the corresponding Gelfand ideal is defined as follows: $$\label{} \mathcal{J}_\omega = \{ \, X \in \mathcal{A}\ | \ X_{kj}=0\ , k=1,\ldots,n\ , \ j=1,\ldots,m\, \}\ .$$ If $m=n$, then $\mathcal{J}_\omega$ is trivial. This shows that the resulting Hilbert space reads as $ \mathcal{H}_\omega \cong \mathbb{C}^n \oplus \ldots \oplus \mathbb{C}^n$ ($m$ copies). Now, the corresponding GNS representation $\pi_\omega$ is no longer irreducible in $\mathbb{C}^n \oplus \ldots \oplus \mathbb{C}^n$ but decomposes into the direct sum of $m$ irreducible (defining) representations $$\label{} \pi_\omega = \bigoplus_{k=1}^m \pi_k \ ,$$ that is $\pi_\omega(A) = \mathbb{I}_m {{\,\otimes\,}}A$, where $\mathbb{I}_m$ is an $m \times m$ identity matrix. Let us observe that the form of the inner product (\[inner\]) suggests defining a new multiplication rule in the space of operators in $\mathcal{B}(\mathbb{C}^n)$; indeed, from $$\label{inner-new} \<A|B\>_\omega = \mbox{Tr}(B\omega A^*) \ ,$$ we may set $$\label{} A\cdot_\omega B := A\omega B \ .$$ This defines a new associative product in $\mathcal{B}(\mathbb{C}^n)$. As we shall see in section \[Bi\] this new product turns out to be very useful to define a bi-Hamiltonian structure for quantum evolution [@Dubrovin; @GS1; @CGM; @EIMM-a; @EIMM].
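The associativity of the deformed product $A\cdot_\omega B = A\omega B$, and the fact that $\omega^{-1}$ plays the role of its unit when $\omega$ is invertible, can be checked numerically; a Python sketch with a randomly generated full-rank density matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
omega = m @ m.conj().T               # a full-rank positive matrix ...
omega /= np.trace(omega).real        # ... normalized to a density matrix

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
C = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def dot_omega(X, Y):
    # deformed product X ._omega Y := X omega Y
    return X @ omega @ Y

# the deformed product is associative, with unit omega^{-1}
assert np.allclose(dot_omega(dot_omega(A, B), C), dot_omega(A, dot_omega(B, C)))
assert np.allclose(dot_omega(np.linalg.inv(omega), A), A)
```

Associativity is immediate from $(A\omega B)\omega C = A\omega(B\omega C)$; the point of the sketch is only to make the deformation tangible.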
Geometrization of algebraic structures ====================================== Let $V$ be a vector space and consider its dual $V^*$. One may embed $V$ into its bi-dual $(V^*)^*$ $$\label{} V \ni v \ \longrightarrow \ \widehat{v} \in {\cal F}(V^*) \ ,$$ by $$\label{} \widehat{v}(\alpha) := \alpha(v)\ ,$$ for $\alpha \in V^*$. This embedding allows one to deal with polynomial functions directly associated with multilinear functions on $V^* \times \ldots \times V^*$ by restricting them to the diagonal, i.e. for any multilinear function $$\label{} f \ : \ V^*\times \ldots \times V^*\ \longrightarrow\ \mathbb{R}\ ,$$ its reduction $\widetilde{f}(\alpha) := f(\alpha,\ldots,\alpha)$ is a polynomial function in ${\cal F}(V^*)$. Note that for any $v_1,v_2 \in V$ one defines the product $\widehat{v}_1 \cdot \widehat{v}_2$ by $$\label{V-product} (\widehat{v}_1 \cdot \widehat{v}_2)(\alpha) := \widehat{v}_1(\alpha) \cdot \widehat{v}_2(\alpha)\ ,$$ with $\alpha \in V^*$. Clearly, $\widehat{v}_1 \cdot \widehat{v}_2$ defines a polynomial in ${\cal F}(V^*)$. Suppose now that $V$ carries an additional structure defined by a bilinear operation $$\label{} B \ : \ V \times V \ \longrightarrow \ V\ .$$ Let us observe that we may use $B$ to define a 2-tensor field $\tau_B$ by setting $$\label{product-2} \tau_B(d\widehat{v}_1 ,d\widehat{v}_2)(\alpha) := \alpha(B(v_1,v_2))\ .$$ Using $$\label{} d(\widehat{v}_1 \cdot \widehat{v}_2) = (d\widehat{v}_1 )\cdot \widehat{v}_2 + \widehat{v}_1\cdot (d\widehat{v}_2)\ ,$$ one finds $$\label{} \tau_B(d\widehat{v} ,d(\widehat{v}_1\cdot \widehat{v}_2))= \tau_B(d\widehat{v} ,d\widehat{v}_1)\cdot \widehat{v}_2 + \widehat{v}_1\cdot \tau_B(d\widehat{v},d \widehat{v}_2)\ ,$$ which shows that $\tau_B(d\widehat{v})$ is a derivation of the product (\[V-product\]). In this sense we may speak of the geometrical description of the binary product by introducing the tensor field $\tau_B$ which defines a bi-differential operator.
Now, we shall consider the special cases when $B$ endows $V$ with the structure of Lie algebra or Jordan algebra. Let us start with a Lie algebra $g=(V,B)$, where $B$ is skew-symmetric and satisfies the Jacobi identity $$\label{} B(v_1,B(v_2,v_3)) + {\rm cyclic\ permutations} = 0 \ .$$ It is evident that $\Lambda:=\tau_B$ defines a Poisson tensor on $\mathcal{F}(V^*)$. Moreover, one may prove that in this case $\Lambda(d\widehat{v})$ is also a derivation of (\[product-2\]). [*Example:*]{} as an example consider the 3-dimensional Lie algebra $V=\mathbb{R}^3$ defined by $$\label{} B(v_1,v_2)=a_3 v_3\ , \ \ B(v_2,v_3)=a_1 v_1\ , \ \ B(v_3,v_1)=a_2 v_2\ ,$$ with $a_1,a_2,a_3 \in \mathbb{R}$. Defining 3 coordinate functions $$\label{} x_1 = \widehat{v}_1\ , \ \ x_2 = \widehat{v}_2\ , \ \ x_3 = \widehat{v}_3\ ,$$ together with $$\label{} \mathcal{C}(x_1,x_2,x_3) = \frac 12 (a_1 x_1^2 + a_2 x_2^2 + a_3 x_3^2) \ ,$$ one finds for the Poisson tensor $$\label{} \Lambda = \epsilon_{ijk}\, \frac{\partial \mathcal{C}}{\partial x_i}\, \frac{\partial }{\partial x_j} \wedge \frac{\partial }{\partial x_k}\ .$$ Note, that $\cal C$ is a Casimir function, i.e. $\Lambda({\cal C},f)=0$. By properly choosing $a_1,a_2,a_3$ one obtains all unimodular 3-dimensional Lie algebras. $\Box$ Consider now $V$ equipped with a Jordan product $$\label{product-3} B(v_1,v_2) = v_1 \circ v_2\ .$$ The corresponding Riemann tensor ${\cal R}:= \tau_B$ is defined by $$\label{} {\cal R}(d\widehat{v}_1,d\widehat{v}_2)(\alpha) = \alpha(v_1 \circ v_2)\ .$$ Now, contrary to the Poisson tensor, ${\cal R}(d\widehat{v})$ is a derivation of (\[V-product\]) but no longer a derivation of the Jordan product (\[product-3\]). Finally, let $(V,\cdot)$ be a $\mathbb{C}^*$-algebra. 
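As an aside, the 3-dimensional example above is easy to test numerically; a Python sketch (the values of $a_1,a_2,a_3$ below are arbitrary), verifying the Jacobi identity of $B$ and the Casimir property of $\cal C$:

```python
import numpy as np

# structure constants of the 3d algebra: B(v1,v2) = a3 v3 and cyclic
a = np.array([1.0, -2.0, 3.0])   # arbitrary real parameters a1, a2, a3

def bracket(u, v):
    # on components, B acts as a "rescaled cross product"
    return a * np.cross(u, v)

rng = np.random.default_rng(4)
u, v, w = rng.normal(size=(3, 3))

# Jacobi identity holds for any choice of a1, a2, a3
jac = bracket(u, bracket(v, w)) + bracket(v, bracket(w, u)) + bracket(w, bracket(u, v))
assert np.allclose(jac, 0)

# C = (a1 x1^2 + a2 x2^2 + a3 x3^2)/2 is a Casimir:
# {C, x_k}(x) = sum_i a_i x_i {x_i, x_k}(x) = 0, with {x_i, x_j}(x) = <x, B(e_i, e_j)>
x, e = rng.normal(size=3), np.eye(3)
for k in range(3):
    assert abs(sum(a[i] * x[i] * (x @ bracket(e[i], e[k])) for i in range(3))) < 1e-10
```

The same check with various sign choices of $a_1,a_2,a_3$ runs through the unimodular 3-dimensional Lie algebras mentioned in the text.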
It is equipped both with the antisymmetric Lie product $$\label{} B(v_1,v_2) := \frac i2 (v_1 \cdot v_2 - v_2 \cdot v_1) \ ,$$ and the symmetric Jordan product $$\label{} B'(v_1,v_2) := \frac 12 (v_1 \cdot v_2 + v_2 \cdot v_1) \ .$$ Let $\Lambda := \tau_B$ and ${\cal R}:= \tau_{B'}$ be the corresponding Poisson and Riemann tensors. Note that these two structures endow the real elements of a $\mathbb{C}^*$-algebra with the structure of a Lie-Jordan algebra. A Lie-Jordan algebra $(\mathcal{B},\circ,[\ ,\ ])$ is a real vector space endowed with two bilinear operations ‘$\circ$’ and $[\ ,\ ]$ with the following properties $$\begin{aligned} \label{} a \circ b &=& b \circ a\ , \\ {}[a,b] &=& - [b,a] \ .\end{aligned}$$ Moreover, Lie-Jordan brackets satisfy the Leibniz rule $$\label{} [a,b\circ c] = [a,b]\circ c + b\circ [a,c]\ ,$$ and the Jacobi identity $$\label{} [a,[b,c]] = [[a,b],c] + [b,[a,c]]\ .$$ Finally, $$\label{} (a\circ b)\circ c - a\circ (b \circ c) = \lambda^2 [[a,c],b] \ ,$$ for some real number $\lambda$. Hamiltonian vector fields on $V^*$ constructed with $\Lambda$ define derivations of the Jordan product. This construction completes the ‘geometrization’ of a $\mathbb{C}^*$-algebra. [*Example:*]{} Consider the Lie algebra $u(2)$ in the defining representation on $\mathbb{C}^2$. It is spanned by 4 anti-Hermitian matrices $v_\alpha = i \sigma_\alpha$, with $\alpha=0,1,2,3$, where $$\label{} \sigma_0 = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \ , \ \ \ \sigma_1 = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) \ , \ \ \ \sigma_2 = \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) \ , \ \ \ \sigma_3 = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) \ ,$$ where $\sigma_0$ is the identity and $\sigma_1,\sigma_2,\sigma_3$ are the Pauli matrices. Now, let us define coordinate functions $$\label{} y_\alpha(A) = \frac 12 \, \mbox{Tr}(\sigma_\alpha A) \ ,$$ for $A \in u(2)$.
Using the well-known property $$\label{} \sigma_k \sigma_l = \delta_{kl}\, \sigma_0 + i \epsilon_{klm} \sigma_m \ ,$$ one obtains the following formulae for the Poisson tensor $$\label{} \Lambda = 2 \sum_{k,l,m=1}^3\, \epsilon_{klm} \, y_k \, \frac{\partial}{\partial y_l} \wedge \frac{\partial}{\partial y_m} \ ,$$ and for the Riemann tensor $$\label{} {\cal R} = \frac{\partial}{\partial y_0} {{\,\otimes\,}}_s \sum_{k=1}^3 y_k\, \frac{\partial}{\partial y_k} + y_0 \sum_{k=1}^3\ \frac{\partial}{\partial y_k} {{\,\otimes\,}}\frac{\partial}{\partial y_k}\ ,$$ where ${{\,\otimes\,}}_s$ stands for the symmetrized tensor product, i.e. $a{{\,\otimes\,}}_s b = a{{\,\otimes\,}}b + b {{\,\otimes\,}}a$. Moreover, the Hamiltonian vector fields $H_\alpha$ corresponding to coordinate functions $y_\alpha$, i.e. $H_\alpha = \Lambda(y_\alpha,\cdot)$, are defined as follows $$\label{} H_0 = 0 \ , \ \ \ H_k = \sum_{l,m=1}^3\, \epsilon_{klm} \, y_m \, \frac{\partial}{\partial y_l}\ ,$$ for $k=1,2,3$. Finally the gradient vector fields $X_\alpha$ defined by $X_\alpha := {\cal R}(y_\alpha,\cdot)$ read as follows $$\label{} X_0 = \sum_{\alpha=0}^3 y_\alpha \frac{\partial}{\partial y_\alpha}\ , \ \ \ \ X_k = y_k \frac{\partial}{\partial y_0} + y_0 \frac{\partial}{\partial y_k} \ ,$$ for $k=1,2,3$. Note that $$\label{} [X_k,X_l] = y_k \frac{\partial}{\partial y_l} - y_l \frac{\partial}{\partial y_k}\ ,$$ for $k,l=1,2,3$. Finally, one may show that the union of these two distributions $H_k$ and $X_k$ $(k=1,2,3)$ generates $SL(2,{\mathbb{C}})$. Hermitian and Kählerian realizations via GNS construction {#SYMP} ========================================================= Each pure state $\omega$ over $\mathcal{A}$ gives rise to an irreducible representation $\pi_\omega$ of $\mathcal{A}$ in the Hilbert space $\mathcal{H}_\omega$.
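The Pauli-algebra computations behind the $u(2)$ example are easy to verify directly; a Python sketch, checking the full product identity $\sigma_k\sigma_l = \delta_{kl}\sigma_0 + i\epsilon_{klm}\sigma_m$ and the inversion of the coordinate functions (for a Hermitian test matrix, an assumption made for simplicity):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s0, sx, sy, sz]

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# sigma_k sigma_l = delta_kl s0 + i eps_klm sigma_m
err = max(
    np.abs(sigma[k + 1] @ sigma[l + 1]
           - ((k == l) * s0 + 1j * sum(eps[k, l, m] * sigma[m + 1] for m in range(3)))).max()
    for k in range(3) for l in range(3)
)
assert err < 1e-12

# y_alpha(A) = Tr(sigma_alpha A)/2 is real on Hermitian A and inverts to
# A = sum_alpha y_alpha sigma_alpha
rng = np.random.default_rng(5)
m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (m + m.conj().T) / 2
y = [0.5 * np.trace(s @ A) for s in sigma]
assert all(abs(yi.imag) < 1e-12 for yi in y)
assert np.allclose(A, sum(yi * s for yi, s in zip(y, sigma)))
```

The antisymmetric part of the product identity feeds the Poisson tensor $\Lambda$, the symmetric part the Riemann tensor $\cal R$.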
It is clear that real elements in $\mathcal{A}$ are represented via $\pi_\omega$ by self-adjoint operators in $\mathcal{B}(\mathcal{H}_\omega)$ which are in a one-to-one correspondence with the real Lie algebra $u(\mathcal{H}_\omega)$ of the unitary group $U(\mathcal{H}_\omega)$. The symplectic action of $U(\mathcal{H}_\omega)$ on $\mathcal{H}_\omega$ by $(U,\Psi) \longrightarrow U\Psi$ provides us with the corresponding momentum map $$\label{mu-H} \mu_\omega\ :\ \mathcal{H}_\omega \ \longrightarrow\ u^*(\mathcal{H}_\omega) \ ,$$ where $u^*(\mathcal{H}_\omega)$ denotes the dual of the Lie algebra $u(\mathcal{H}_\omega)$. The map is defined by $$\label{} \mu_\omega(\psi) = |\psi\>\<\psi| \ .$$ Note that $u^*(\mathcal{H}_\omega)$ is a Poisson manifold and hence (\[mu-H\]) provides a symplectic realization. Let us recall that a symplectic realization of a Poisson manifold $(M,\Lambda)$ is a Poisson map $\Phi : S \longrightarrow M$, where $(S,\Omega)$ is a symplectic manifold. When $S$ is a symplectic vector space one calls $\Phi$ a classical Jordan-Schwinger map [@J-S]. When $S$ is a Hilbert space we shall call it [*Hermitian realization*]{}. Now, the action of $U({\cal H}_\omega)$ on ${\cal H}_\omega$ induces the symplectic action of $U(\mathcal{H}_\omega)$ on the space of rays $\mathcal{R}(\mathcal{H}_\omega)$ via $$\label{} (U,[\psi]) \ \longrightarrow [U\psi] \ .$$ The above action provides us with the corresponding momentum map $$\label{mu-RH} \widetilde{\mu}_\omega\ :\ \mathcal{R}(\mathcal{H}_\omega) \ \longrightarrow\ u^*(\mathcal{H}_\omega) \ ,$$ defined by $$\label{} \widetilde{\mu}_\omega([\psi]) = \frac{\mu_\omega(\psi)}{\<\psi|\psi\>}\ .$$ Now, because the above action also preserves the Riemann tensor, the momentum map also relates this tensor on $\mathcal{R}(\mathcal{H}_\omega)$ with the symmetric tensor on $u^*(\mathcal{H}_\omega)$ obtained from the Jordan algebra on $u^*(\mathcal{H}_\omega)$.
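Two properties of the normalized momentum map $\widetilde{\mu}_\omega$ can be made concrete in finite dimensions: its image consists of rank-1 projections (pure states), independent of the representative of the ray, and it intertwines the unitary action on rays with conjugation on $u^*(\mathcal{H}_\omega)$. A Python sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

def mu(psi):
    # normalized momentum map on rays: [psi] -> |psi><psi| / <psi|psi>
    return np.outer(psi, psi.conj()) / np.vdot(psi, psi).real

P = mu(psi)

# the image is a rank-1 projection, insensitive to norm and phase
assert np.allclose(P @ P, P) and np.allclose(P, P.conj().T)
assert np.isclose(np.trace(P).real, 1.0)
assert np.allclose(mu(2.5 * np.exp(0.7j) * psi), P)

# equivariance: the U(H) action on rays goes over to conjugation on u*(H)
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
assert np.allclose(mu(U @ psi), U @ P @ U.conj().T)
```

This is the finite-dimensional shadow of the statement that $\widetilde{\mu}_\omega$ identifies $\mathcal{R}(\mathcal{H}_\omega)$ with a coadjoint orbit of pure states.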
The Hermitian tensor on $\mathcal{R}(\mathcal{H}_\omega)$ will be therefore $\mu_\omega$–related to a corresponding tensor on $u^*(\mathcal{H}_\omega)$. Again (\[mu-RH\]) provides a symplectic realization. We shall call a symplectic realization $\Phi : S \longrightarrow M$ [*Kählerian realization*]{} if $S$ is a submanifold of the complex projective space. Actually, it was proved by Gromov [@Gr1; @Gr2] (see also [@Gr3]) that any compact Kählerian manifold may be immersed into the complex projective space (in the analogy to the Whitney theorem about embedding of a manifold into the Euclidean space $\mathbb{R}^N$). The linear structure of $u^*(\mathcal{H}_\omega)$ allows for convex combinations in $\mu_\omega(\mathcal{R}(\mathcal{H}_\omega))\subset u^*(\mathcal{H}_\omega)$ and hence enables one to consider density operators. Consider now a general mixed state $\varphi$ over $\mathcal{A}$. The corresponding GNS-representation $\pi_\varphi$ is no longer irreducible on $\mathcal{H}_\varphi$. One has therefore the following direct sum decomposition $$\label{} \pi_\varphi\, =\, \bigoplus_\alpha\, \pi_\alpha \ ,$$ where $\pi_\alpha$ are irreducible representations of $\mathcal{A}$ on $\mathcal{H}_\alpha$, and $$\label{} \mathcal{H}_\varphi\, =\, \bigoplus_\alpha\, \mathcal{H}_\alpha\ .$$ It implies that a ‘vacuum’ vector $\Omega \in \mathcal{H}_\varphi$ decomposes as follows $$\label{} \Omega\, =\, \bigoplus_\alpha\,\Omega_\alpha\ , \ \ \ \ \Omega_\alpha \in \mathcal{H}_\alpha\ .$$ It is clear that each irreducible representation $\pi_\alpha$ corresponds to a pure state $\varphi_\alpha$ defined by $$\label{} \varphi_\alpha(a) = \frac{1}{p_\alpha}\, \< \Omega_\alpha|\pi_\alpha(a)|\Omega_\alpha\>_\alpha \ ,$$ where $\<\ |\ \>_\alpha$ denotes the scalar product in $\mathcal{H}_\alpha$, and $$\label{p-alpha} p_\alpha = \< \Omega_\alpha|\Omega_\alpha\>_\alpha\ .$$ Normalization of $\Omega$ implies $\sum_\alpha p_\alpha=1$. 
It shows that a mixed state $\varphi$ decomposes as the following convex combination of pure states $\varphi_\alpha$ $$\label{} \varphi = \sum_\alpha\,p_\alpha \,\varphi_\alpha\ ,$$ that is $$\label{} \varphi(a) = \sum_\alpha\, \< \Omega_\alpha|\pi_\alpha(a)|\Omega_\alpha\>_\alpha\ .$$ Alternative Hamiltonian structures {#Bi} ================================== We stress that different states over $\mathcal{B}(\mathbb{C}^n)$ give rise to different GNS representations and hence to different realizations of the Hilbert spaces. As we already observed a state over $\mathcal{B}(\mathbb{C}^n)$ corresponds to a positive $n \times n$ matrix $K$ (we replaced abstract $\omega$ by $K$) and hence may be used to define an alternative scalar product in $\mathbb{C}^n$ $$\label{} z\cdot_K w = \sum_{k,l=1}^n \overline{z}_k K_{kl} w_l\ ,$$ for any $z,w \in \mathbb{C}^n$. One recovers the standard form if $K=\mathbb{I}$, that is $$\label{} z\cdot w = \sum_{k=1}^n \overline{z}_k w_k\ .$$ Different inner products in $\mathcal{H}$ are associated with different multiplication rules in the space of operators $$\label{} A \cdot_K B = A\cdot K \cdot B\ ,$$ for any $A,B \in \mathcal{B}(\mathbb{C}^n)$. Note that the product ‘$\,\cdot_K\,$’ defined by the above formula is associative, and hence $(\mathcal{B}(\mathbb{C}^n),\cdot_K)$ carries a structure of a $\mathbb{C}^*$-algebra. With these alternative associative products we may associate alternative Lie algebra structures and alternative Jordan algebras. According to what we have said earlier they are similar to the alternative Poisson structures we find in classical dynamics when dealing with bi-Hamiltonian systems and complete integrability. 
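That $z\cdot_K w$ is a genuine scalar product for positive definite $K$, reducing to the standard one for $K=\mathbb{I}$, can be checked numerically; a minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
K = m @ m.conj().T + np.eye(n)       # a positive definite matrix K

def dot_K(z, w):
    # alternative scalar product z ._K w = sum_{k,l} conj(z_k) K_{kl} w_l
    return z.conj() @ K @ w

z = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)

assert np.isclose(dot_K(z, w), np.conj(dot_K(w, z)))            # Hermitian symmetry
assert dot_K(z, z).real > 0 and abs(dot_K(z, z).imag) < 1e-10   # positivity
assert np.isclose(z.conj() @ np.eye(n) @ w, np.vdot(z, w))      # K = I: standard product
```

Positivity of $K$ is exactly what positivity of the corresponding state provides.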
To carry the analogy further, consider now quantum dynamics governed by the Hamiltonian $H$ and suppose that $$\label{} [H,K] = H\cdot K - K \cdot H = 0 \ .$$ Note that $$\label{} [A,H] = A\cdot H - H \cdot A = A\cdot_K H_K - H_K \cdot_K A =: [A,H_K]_K\ ,$$ with $$\label{} H_K = K^{-1} \cdot H\ .$$ This proves that one has two alternative descriptions of quantum evolution: either the standard Heisenberg equation $$\label{} i\hbar\dot{A} = [A,H]\ ,$$ or the equivalent description using the deformed multiplication $$\label{} i\hbar\dot{A} = [A,H_K]_K\ .$$ Consider now a description of quantum systems in terms of the Wigner-Weyl formalism. In this approach an operator $A$ on $\cal H$ is represented by a function $f_A$ on a classical phase space $\cal P$. The commutative product in the space of functions ${\cal F}({\cal P})$ is deformed into the noncommutative $\star$-product such that $f_A \star f_B = f_{AB}$. Moreover, in the classical limit $$\label{} \lim_{\hbar \rightarrow 0} \frac 1\hbar \{\!\{ f,g \}\!\}_{\star } = \{f,g\} \ ,$$ where $\{\!\{f,g\}\!\}_\star = \frac{1}{2i}(f\star g - g \star f)$. As was already found by Rubio [@Rubio], any associative local product in the commutative algebra of functions ${\cal F}({\cal P})$ has the following form $$\label{} f\cdot_k g := fkg\ ,$$ where $f,k,g \in \mathcal{F}(\mathcal{P})$ and $k > 0$.
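Returning to the operator setting, the equivalence of the two Heisenberg-type descriptions, $[A,H] = [A,H_K]_K$ with $H_K = K^{-1}H$ and $[H,K]=0$, can be tested numerically; a Python sketch in which $H$ and $K$ are made to commute by diagonalizing both in one randomly chosen unitary basis:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
# make H and K commute: diagonalize both in the same (random) unitary basis
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
H = U @ np.diag(rng.normal(size=n)) @ U.conj().T
K = U @ np.diag(rng.uniform(0.5, 2.0, size=n)) @ U.conj().T   # positive, invertible
assert np.allclose(H @ K, K @ H)

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H_K = np.linalg.inv(K) @ H               # deformed Hamiltonian H_K = K^{-1} H

def dot_K(X, Y):
    return X @ K @ Y                      # deformed product X ._K Y

# the two commutators generating the Heisenberg evolution agree
assert np.allclose(A @ H - H @ A, dot_K(A, H_K) - dot_K(H_K, A))
```

The identity works because $A\cdot_K H_K = AKK^{-1}H = AH$, while $H_K\cdot_K A = K^{-1}HKA = HA$ precisely when $H$ and $K$ commute.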
Therefore, one may use this new product ‘$\cdot_k$’ to define an alternative $\star_k$-product $$\label{} f_A \star_k f_B := f_A \star k \star f_B\ .$$ It gives rise to the following equation of motion $$\label{} i\hbar\dot{f}_{A} = \{\!\{ f_A,f_H \}\!\}_{\star k} \ ,$$ where the Moyal-like $\star_k$ bracket reads as follows $$\label{} \{\!\{ f_A,f_B \}\!\}_{\star k} = \frac{1}{2i} (f_A \star_k f_B - f_B \star_k f_A)\ .$$ Note that in the ‘classical limit’ $$\label{} \lim_{\hbar\rightarrow 0} \frac 1\hbar \{\!\{ f_A,f_B \}\!\}_{\star k} = k\, \{f_A,f_B\} + f_A X_k(f_B) - f_BX_k(f_A)\ ,$$ where $X_k$ is a Hamiltonian vector field corresponding to $k$. Interestingly, the ‘classical limit’ of the Moyal $\star_k$ bracket is not a Poisson one but a Jacobi bracket. For $k=1$ one has $X_k =0$ and hence one recovers the standard Poisson bracket. Similarly, the ‘classical limit’ of the symmetric Jordan bracket gives $$\label{} \lim_{\hbar\rightarrow 0}\ \frac 12 ( f_A \star_k f_B + f_B \star_k f_A) = f_A \cdot_k f_B\ .$$ It shows that there are alternative deformation quantization schemes depending upon the associative product $f\cdot_k g$ in the original commutative algebra $\mathcal{F}(\mathcal{P})$. The additional function ‘$k$’ has been related to the Kubo-Martin-Schwinger (KMS) state [@Sternheimer; @St2]. Conclusions =========== The contribution of this paper is to start directly from a $\mathbb{C}^*$-algebra, ‘geometrize’ it, and then use the GNS construction to recover the Hilbert space. As a matter of fact, in our geometric version we naturally obtain a Kähler bundle defined on the space of states. Let us recall that a Kähler bundle is a triple $(P,B,p)$, where $P$ (total space) and $B$ (base) are topological spaces and $p : P \longrightarrow B$ is a surjective continuous map. Moreover, for each $b \in B$ the fiber $p^{-1}(b)$ is a Kähler manifold.
Indeed, the space of states over a $\mathbb{C}^*$-algebra $\cal A$ is naturally embedded into the dual $L({\cal A})$ $$\label{e} e \ : \ {\cal D}({\cal A}) \ \longrightarrow\ L({\cal A})\ .$$ For any state $\varphi \in {\cal D}({\cal A})$, the ‘orbit’ of $\cal A$ passing through $\varphi$ defines the Hilbert space ${\cal H}_\varphi$ with $$\label{} \< a\varphi|b\varphi\> = \varphi(a^*b)\ .$$ Now, the embedding (\[e\]) gives rise to the pulled-back bundle $e^*(T^*{\cal A}^*)$. Its reduction by the left Gelfand ideal $\mathcal{J}_\varphi$ at each point provides us with a GNS-bundle which replaces the universal representation of a $\mathbb{C}^*$-algebra (as a direct sum of all its irreducible GNS-representations). When $\varphi$ is a pure state we obtain a Kählerian realization of a $\mathbb{C}^*$-algebra which generalizes to the quantum setting the symplectic realization of a Poisson manifold. This bundle turns out to be related to the one defined by Shultz [@Shultz] (see also [@K-bundle]). We shall come back to some of these bundle aspects in a forthcoming paper. Acknowledgments {#acknowledgments .unnumbered} =============== A preliminary account of these results was presented in a series of conferences: Holbaek Quantum Gravity Workshop (May 2008), MATHQCI 2008 CSIC Madrid (March 2008), XII Jornada SIMUMAT: [*Mathematical Structures of Quantum Mechanics*]{}, [*Geometry and Quanta*]{}, Toruń (June 2008). G.M. thanks the organizers of these conferences for inviting him. D.C. thanks Beppe Marmo for the warm hospitality in Naples where the main part of this paper was prepared. [1]{} G.W. Mackey, [*The mathematical foundations of Quantum Mechanics*]{}, Benjamin, 1962. I.E. Segal, [*Postulates for general quantum mechanics*]{}, Ann. Math. [**48**]{} (1947) 930-948. F. Strocchi, [*Complex coordinates and Quantum Mechanics*]{}, Rev. Mod. Phys. [**38**]{} (1966) 36-40. T.W. Kibble, [*Geometrization of quantum mechanics*]{}, Comm. Math. Phys. [**65**]{} (1979) 189-201. V.
Cantoni, [*Geometric aspects of Quantum Systems*]{}, Rend. sem. Mat. Fis. Milano [**48**]{} (1980) 35–42. V. Cantoni, [*Generalized “transition probability”*]{}, Comm. Math. Phys. [**44**]{} (1975) 125– 128. D.J. Rowe, A. Ryman and G. Rosensteel, [*Many body Quantum Mechanics as a symplectic dynamical system*]{}, Phys. Rev. A [**22**]{} (1980) 2362-2372. R. Cirelli, A. Maniá and L. Pizzocchero, [*Quantum Mechanics as an infinite dimensional Hamiltonian system with uncertainty structure*]{}, J. Math. Phys. [**31**]{} (1984) 2891-2903 (part I and II). R. Cirelli, A. Maniá and L. Pizzocchero, [*A functional representation for non-commutative $\mathbb{C}^*$-algebras*]{}, Rev. Math. Phys. [**6**]{} (1994) 675-697. M.C. Abati, R. Cirelli, P. Lanzavecchia and A. Maniá, [*Pure states of general quantum mechanical systems as a Kähler bundles*]{}, Nuovo Cimento B [**83**]{} (1984) 43-60. A. Heslot, [*Quantum Mechanics as a classical theory*]{}, Phys. Rev. D [**31**]{} (1985) 1341-1348. J.S. Anandan, [*A Geometric approach to Quantum Mechanics*]{}, Found. Phys. [**21**]{} (1991) 1265-1284. A. Ashtekar and T.A. Schilling, [*Geometrical formulation of Quantum Mechanics*]{}, on Einstein’s path, pp. 23–65, New York: Springer, 1999. D. Brody and L.P. Hughston, [*Geometric Quantum Mechanics*]{}, J. Geom. Phys. [**38**]{} (2001) 19–53. V.I. Manko, G. Marmo, E.C.G. Sudarshan and F. Zaccaria, [*The geometry of density states*]{}, Rep. Math. Phys. [**55**]{} (2005) 405-422. J. Grabowski, M. Kuś and G. Marmo, [*Geometry of quantum systems: density states and entanglement*]{}, J. Phys. A: Math. Gen [**38**]{} (2005) 10217-10244. J. Grabowski, M. Kuś and G. Marmo, [*Symmetry, group actions and entanglement*]{}, Open sys. & Inform. Dyn. [**13**]{} (2006) 343-362. G. Esposito, G. Marmo, and E.C.G. Sudarshan, [*From Classical to Quantum Mechanics: An Introduction to the Formalism, Foundations and Applications*]{}, Cambridge University Press, 2004. O. Bratteli and D.W. 
Robinson, [*Operator algebras and quantum statistical mechanics*]{}, Springer-Verlag, Berlin, 1987. R.V. Kadison and J.R. Ringrose, [*Fundamentals of the theory of operators algebras*]{}, Vol. I & II, Academic Press, Orlando, 1986. R. Haag and D. Kastler, [*An algebraic approach to quantum field theory*]{}, J. Math. Phys. [**5**]{} (1964) 848-861. R. Haag, [*Local quantum physics: fields, particles, algebras*]{}, Springer-Verlag, 1992. J.M.G. Fell, [*The dual space of $\mathbb{C}^*$-algebras*]{}, Trans. Amer. Math. Soc. [**94**]{} (1960) 365-403; [*The dual space of Banach algebras*]{}, Trans. Amer. Math. Soc. [**114**]{} (1965) 227-250; [*$\mathbb{C}^*$-algebras with smooth dual*]{}, Illinois J. Math. [**4**]{} (1960) 221-230; [*Algebras and fibre bundles*]{}, Pacific J. Math. [**16**]{} (1966) 497-503. A. Ikeda, Y. Taniguchi, [*Spectra and eigenforms of the Laplacian on $S^n$ and $\mathbb{C}P^n$*]{}, Osaka J. Math. [**15**]{} (1978) 515-546. M. Boucetta, [*Spectra and symmetric eigentensors of the Lichnerowicz Laplacian on $\mathbb{C}P^n$*]{}, arXiv:0712.2830. J.C. Baez, I.E. Segal and Z. Zhou, [*Introduction to algebraic and constructive quantum field theory*]{}, Princeton University Press, Princeton, 1992. B. A. Dubrovin, G. Marmo and A. Simoni, [*Alternative Hamiltonian descriptions for quantum systems*]{}, Mod. Phys. Lett. A [**5**]{} (1990) 1229-1234. V.I. Manko, G. Marmo, E.C.G. Sudarshan and F. Zaccaria, [*Wigner’s Problem and Alternative Commutation Relations for Quantum Mechanics*]{}, Int. J. Mod. Phys. B [**11**]{} (1996) 1281-1296. J. Cariñena, J. Grabowski and G. Marmo, [*Quantum Bi-Hamiltonian Systems*]{}, Int. J. Mod. Phys. A [**15**]{} (2000) 4797-4810. E. Ercolessi, A. Ibort, G. Marmo and G. Morandi, [*Alternative Linear Structures Associated with Regular Lagrangians. Weyl quantization and the von Neumann Uniqueness Theorem*]{}, math-ph/0602011. E. Ercolessi, A. Ibort, G. Marmo and G.
Morandi, [*Alternative linear structures for classical and quantum systems*]{}, to appear in Int. J. Mod. Phys. A. V.I. Manko, G. Marmo, P. Vitale and F. Zaccaria, [*A generalization of the Jordan-Schwinger map: classical version and its q-deformation*]{}, Int. J. Mod. Phys. A [**9**]{} (1994) 5541-5561. S. Kobayashi and K. Nomizu, [*Foundations of Differential Geometry*]{}, Interscience, New York, 1969, Vol II. N.S. Hawley, [*Constant holomorphic curvature*]{}, Can. J. Math. [**5**]{} (1953) 53-56. J. Igusa, [*On the structure of a certain class of Kähler varieties*]{}, Amer. J. Math. [**76**]{} (1954) 669-678. D. Chruściński and A. Jamio[ł]{}kowski, [*Geometric Phases in Classical and Quantum Mechanics*]{}, Birkhäuser, Boston, 2004. B. Simon, [*Holonomy, the quantum adiabatic theorem, and Berry’s phase*]{}, Phys. Rev. Lett. [**51**]{} (1983) 2167-2170. Y. Aharonov and J. Anandan, [*Phase change during a cyclic quantum evolution*]{}, Phys. Rev. Lett. [**58**]{} (1987) 1593-1596. Y. Aharonov and J. Anandan, [*Geometry of quantum evolution*]{}, Phys. Rev. Lett. [**65**]{} (1990) 1697-1700. P.A.M. Dirac, [*The Principles of Quantum Mechanics*]{}, Oxford University Press, Oxford 1958. J. Grabowski and G. Marmo, [*Binary operations in classical and quantum mechanics*]{}, arXiv: math/0201089. K. McCrimmon, [*A taste of Jordan algebras*]{}, Springer-Verlag, Berlin, 2004. F.W. Shultz, [*Pure states as a dual object of $C^*$-algebras*]{}, Comm. Math. Phys. [**82**]{} (1982) 497-502. E.M. Alfsen and F.W. Shultz, [*Geometry of state spaces of operator algebras*]{}, Birkhäuser, Boston, 2003. R. Rubio, C.R. Acad. Sc. Paris, [**299**]{}, Série 1, no. 14 (1984) 699. M. Gromov, [*A topological technique for the construction of solutions of differential equations and inequalities*]{}, Actes Congrés Inter. Math., (Nice, 1970), Gauthier-Villars, Paris, No 2, 1971, 221-225. M. Gromov, [*Partial differential relations*]{}, Springer, Berlin, 1986. D.
Tischler, [*Closed 2-forms and an embedding theorem for symplectic manifolds*]{}, J. Diff. Geom. [**12**]{} (1977) 229-235. H. Basart, M. Flato, A. Lichnerowicz and D. Sternheimer, [*Deformation theory applied to quantization and statistical mechanics*]{}, Lett. Math. Phys. [**8**]{} (1984) 483-494. H. Basart and A. Lichnerowicz, [*Conformal symplectic geometry, deformations, rigidity and geometrical (KMS) conditions*]{}, Lett. Math. Phys. [**10**]{} (1985) 167-177. [^1]: Actually, in [@Ikeda] the eigenvalues differ by a factor ‘4’. It corresponds to different normalization of $\Delta_n$. Our convention reproduces quantum mechanical result ‘$l(l+1)$’ on $\mathbb{C}P^1 \cong S^2$.